2308.10107
Bayes Risk Transducer: Transducer with Controllable Alignment Prediction
Automatic speech recognition (ASR) based on transducers is widely used. In training, a transducer maximizes the summed posteriors of all paths. The path with the highest posterior is commonly defined as the predicted alignment between the speech and the transcription. While the vanilla transducer does not have a prior preference for any of the valid paths, this work intends to enforce the preferred paths and achieve controllable alignment prediction. Specifically, this work proposes Bayes Risk Transducer (BRT), which uses a Bayes risk function to set lower risk values to the preferred paths so that the predicted alignment is more likely to satisfy specific desired properties. We further demonstrate that these predicted alignments with intentionally designed properties can provide practical advantages over the vanilla transducer. Experimentally, the proposed BRT saves inference cost by up to 46% for non-streaming ASR and reduces overall system latency by 41% for streaming ASR.
Jinchuan Tian, Jianwei Yu, Hangting Chen, Brian Yan, Chao Weng, Dong Yu, Shinji Watanabe
2023-08-19T20:48:16Z
http://arxiv.org/abs/2308.10107v1
# Bayes Risk Transducer: Transducer with Controllable Alignment Prediction ###### Abstract Automatic speech recognition (ASR) based on transducers is widely used. In training, a transducer maximizes the summed posteriors of all paths. The path with the highest posterior is commonly defined as the predicted alignment between the speech and the transcription. While the vanilla transducer does not have a prior preference for any of the valid paths, this work intends to enforce the preferred paths and achieve controllable alignment prediction. Specifically, this work proposes Bayes Risk Transducer (BRT), which uses a Bayes risk function to set lower risk values to the preferred paths so that the predicted alignment is more likely to satisfy specific desired properties. We further demonstrate that these predicted alignments with intentionally designed properties can provide practical advantages over the vanilla transducer. Experimentally, the proposed BRT saves inference cost by up to 46% for non-streaming ASR and reduces overall system latency by 41% for streaming ASR. 1 Footnote 1: We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPUs used for this research. Jinchuan Tian\({}^{1,3}\), Jianwei Yu\({}^{1,2}\), Hangting Chen\({}^{1}\), Brian Yan\({}^{3}\), Chao Weng\({}^{1,2}\), Dong Yu\({}^{1}\), Shinji Watanabe\({}^{3}\)\({}^{1}\)Tencent AI LAB, \({}^{2}\)Tencent ASR Oteam, \({}^{3}\)Language Technologies Institute, Carnegie Mellon University [email protected], [email protected] **Index Terms**: speech recognition, transducer, alignment ## 1 Introduction Automatic speech recognition (ASR) based on transducers [1] is one of the most popular frameworks [2, 3, 4]. In the past few years, a series of approaches have been proposed as extensions of the transducer with the goals of optimizing its recognition accuracy [5, 6], language model integration [7], flexibility [8], decoding efficiency [9] and simplicity [10, 11], memory efficiency during training [12, 13, 14] and overall system latency during streaming decoding [15, 16, 17, 18, 19, 20]. During training, the vanilla transducer, as well as its extensions [9, 12, 13, 15, 16, 17, 20], maximizes the summed posterior of all potential aligning sequences (a.k.a., _paths_) between the speech and the transcription. In particular, these extensions achieve their goals by manipulating the transducer paths, such as allowing multi-frame big skips [9], pruning paths with minor posteriors with [13, 17, 20] or without [12] reference alignment labels, discouraging blank emissions [15] and encouraging non-blank emissions [16]. This work provides another extension of the transducer, which also conducts manipulation over paths. Specifically, as a follow-up of the previous work [21] which attempts to achieve controllable alignment prediction in the CTC criterion [22], this work extends this controllability to the transducer model by taking its distinctive forward-backward process into consideration. The alignment prediction of the transducer is commonly defined as the path with the highest posterior. In the vanilla transducer formulation, there is no prior preference among the paths since predicting each valid path will yield the correct textual transcription. Currently, the alignment selection among the paths (i.e., which path will become the predicted alignment) can hardly be affected by human intervention during training. 
Achieving controllable alignment prediction in the transducer means intentionally choosing paths with specific desired properties as the alignment prediction. With this motivation, this work proposes an extension of the transducer called Bayes Risk Transducer (BRT), which adopts a Bayes risk function to intentionally enforce a preference for paths with the desired properties, so that the predicted alignments are more likely to be characterized by these properties. Particularly, the original forward-backward algorithm of the transducer is revised into a divide-and-conquer approach: all paths are first divided into multiple exclusive groups and the groups with more favored properties are enforced by receiving lower risk values than the others. This work further demonstrates that BRT with controllable alignment prediction has practical advantages over vanilla transducers. By designing various Bayes risk functions, we can obtain alignment predictions with desired properties that are specific to different task setups, which subsequently helps to offer novel solutions for two practical challenges in ASR: inference cost for non-streaming ASR and overall system latency for streaming ASR. In the non-streaming setup, a Bayes risk function is designed to enforce the paths that emit the last non-blank prediction earlier. As a benefit, the last non-blank prediction occurs at an early time stamp so the inference cost can be reduced by terminating the decoding loop early without exploring all frames. In the streaming setup, another Bayes risk function is designed to encourage early emissions for all non-blank tokens. Thus, the model emits before waiting for too much context and the latency for each non-blank token is reduced. Experimentally, the former case accelerates non-streaming inference by up to 46% and the latter case reduces the overall system latency of the streaming ASR system by 41%. ## 2 Bayes Risk Transducer ### Vanilla Transducer In training, the vanilla transducer maximizes the posterior of the transcription \(\mathbf{l}=[l_{1},...,l_{U}]\) given the acoustic feature sequence \(\mathbf{x}=[\mathbf{x}_{1},...,\mathbf{x}_{T}]\). Instead of maximizing \(P(\mathbf{l}|\mathbf{x})\) directly, the transducer maximizes the summed posterior of all _paths_ in the transducer _lattice_ (see Fig.1.a). Denote \(\varnothing\) as the blank symbol and extend the vocabulary to \(\hat{\mathcal{V}}=\mathcal{V}\cup\{\varnothing\}\); each symbol sequence \(\pi=[\pi_{1},...,\pi_{T+U}]\) is a valid path if all entries of \(\pi\) are in \(\hat{\mathcal{V}}\) and \(\mathcal{B}(\pi)=\mathbf{l}\). Here \(\mathcal{B}\) is a mapping that removes all \(\varnothing\). So the vanilla transducer objective to minimize is defined as: \[\mathbf{J}_{\text{transducer}}(\mathbf{l},\mathbf{x})\triangleq-\log P( \mathbf{l}|\mathbf{x})=-\log\sum_{\pi\in\mathcal{B}^{-1}(\mathbf{l})}P(\pi| \mathbf{x}) \tag{1}\] where \(\mathcal{B}^{-1}(\mathbf{l})\) is the set of all valid paths. Next, the posterior of each path \(P(\pi|\mathbf{x})\) is computed as: \[P(\pi|\mathbf{x})=\prod_{i=1}^{T+U}p(\pi_{i}|\mathbf{x}_{1:t},\mathbf{l}_{1:u}) \tag{2}\] where the condition \((\mathbf{x}_{1:t},\mathbf{l}_{1:u})\) specifies the node \((t,u)\) on the transducer lattice s.t. \(\mathcal{B}(\pi_{1:i-1})=\mathbf{l}_{1:u}\) and \(t+u=i-1\). 
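For toy sizes, Eqs. (1)-(2) can be checked by brute-force enumeration. The sketch below is illustrative only: it assumes a hypothetical `probs[t, u, k]` array holding the per-node emission distributions \(p(k|\mathbf{x}_{1:t+1},\mathbf{l}_{1:u})\) (0-indexed, blank at index 0), a layout not specified in the paper.

```python
import itertools
import numpy as np

def collapse(pi, blank=0):
    """The mapping B: drop every blank symbol from a path."""
    return tuple(s for s in pi if s != blank)

def path_posterior(pi, probs, blank=0):
    """Eq. (2): walk the lattice; a blank advances t, a label advances u.
    Returns 0 for walks that fall off the lattice (a label after the last frame)."""
    T = probs.shape[0]
    t = u = 0
    p = 1.0
    for sym in pi:
        if t == T:            # no frames left to emit from
            return 0.0
        p *= probs[t, u, sym]
        t, u = (t + 1, u) if sym == blank else (t, u + 1)
    return p

def transducer_loss_bruteforce(probs, labels):
    """Eq. (1): sum the posteriors of all length-(T+U) paths collapsing to l."""
    T, _, V = probs.shape
    U = len(labels)
    total = sum(path_posterior(pi, probs)
                for pi in itertools.product(range(V), repeat=T + U)
                if collapse(pi) == tuple(labels))
    return -np.log(total)
```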
Instead of enumerating all paths and summing their posteriors, the transducer objective is computed efficiently by the _forward-backward algorithm_ [23], which recursively computes the forward-backward variables \(\alpha(t,u)\) and \(\beta(t,u)\) for each node \((t,u)\) in the transducer lattice: \[\alpha(t,u)=\sum_{\pi\in\hat{\mathcal{V}}^{T+U}:\,\mathcal{B}(\pi_{1:t+u})=\mathbf{l}_{1:u}}P(\pi_{1:t+u}|\mathbf{x}) \tag{3}\] \[\beta(t,u)=\sum_{\pi\in\hat{\mathcal{V}}^{T+U}:\,\mathcal{B}(\pi_{t+u+1:T+U})=\mathbf{l}_{u+1:U}}P(\pi_{t+u+1:T+U}|\pi_{1:t+u},\mathbf{x}) \tag{4}\] Subsequently, by decomposing each path \(\pi\) into partial paths \(\pi_{1:t+u}\) and \(\pi_{t+u+1:T+U}\) and using Eqs. (3)-(4), the transducer objective is derived as3: Footnote 3: Note that, for a path that goes through a known node \((t,u)\) in the transducer lattice, the partial path \(\pi_{t+u+1:T+U}\) is independent of the partial path \(\pi_{1:t+u}\), so the factorization of the path posterior is \(P(\pi|\mathbf{x})=P(\pi_{1:t+u}|\mathbf{x})\cdot P(\pi_{t+u+1:T+U}|\pi_{1:t+u},\mathbf{x})\). \[\mathbf{J}_{\text{transducer}}(\mathbf{l},\mathbf{x}) =-\log\sum_{\pi\in\mathcal{B}^{-1}(\mathbf{l})}P(\pi|\mathbf{x})\] \[=-\log\sum_{(t,u):t+u=n}\sum_{\begin{subarray}{c}\mathcal{B}(\pi _{1:t+u})=\mathbf{l}_{1:u}\\ \mathcal{B}(\pi_{t+u+1:T+U})=\mathbf{l}_{u+1:U}\end{subarray}}P(\pi|\mathbf{x})\] \[=-\log\sum_{(t,u):t+u=n}\alpha(t,u)\cdot\beta(t,u) \tag{5}\] where \(n\) is any known integer s.t. \(n\in[0,T+U]\). Finally, the path with the highest posterior is usually considered as the alignment prediction between \(\mathbf{x}\) and \(\mathbf{l}\): \(\text{ali}(\mathbf{x},\mathbf{l})=\arg\max_{\pi\in\mathcal{B}^{-1}(\mathbf{l} )}P(\pi|\mathbf{x})\). 
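The recursion behind Eqs. (3)-(5) can be sketched as follows; this is a minimal, non-vectorized illustration under the same hypothetical `probs[t, u, k]` layout as above, not the paper's implementation.

```python
import numpy as np

def forward_backward(probs, labels, blank=0):
    """Forward-backward variables on the transducer lattice (Eqs. 3-4).
    alpha[t, u]: summed posterior of partial paths reaching node (t, u);
    beta[t, u]:  summed posterior of partial paths from (t, u) to the end."""
    T, _, _ = probs.shape
    U = len(labels)
    alpha = np.zeros((T + 1, U + 1))
    alpha[0, 0] = 1.0
    for t in range(T + 1):
        for u in range(U + 1):
            if t > 0:                      # blank transition from (t-1, u)
                alpha[t, u] += alpha[t - 1, u] * probs[t - 1, u, blank]
            if u > 0 and t < T:            # emit l_u from (t, u-1)
                alpha[t, u] += alpha[t, u - 1] * probs[t, u - 1, labels[u - 1]]
    beta = np.zeros((T + 1, U + 1))
    beta[T, U] = 1.0
    for t in range(T, -1, -1):
        for u in range(U, -1, -1):
            if t < T:
                beta[t, u] += probs[t, u, blank] * beta[t + 1, u]
                if u < U:
                    beta[t, u] += probs[t, u, labels[u]] * beta[t, u + 1]
    return alpha, beta

def transducer_loss(probs, labels):
    """Eq. (5): alpha * beta summed over any anti-diagonal t + u = n yields
    the total path posterior; n = 0 reduces to beta[0, 0]."""
    alpha, beta = forward_backward(probs, labels)
    return -np.log(beta[0, 0])
```

On toy inputs this agrees with the brute-force enumeration above, which is a convenient sanity check when re-deriving the recursion.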
### Bayes Risk Transducer As suggested in Eq.1, the formulation of the vanilla transducer has no prior preference among paths. This work intentionally selects the predicted alignment among the paths and thus attempts to achieve controllable alignment prediction. With this motivation, this work proposes Bayes Risk Transducer (BRT), which adopts a customizable Bayes risk function to express the preference for specific paths with desired properties. To preserve a format similar to Eq.1, we set the risk function for each path as \(-r(\pi)\) so that minimizing the expected risk is equivalent to minimizing the BRT objective 4: Footnote 4: In the remainder of this work, \(r(\pi)\) is termed the Bayes risk function even though the real Bayes risk function is \(-r(\pi)\). A higher \(r(\pi)\) value represents a lower risk, i.e., a preference. \[\mathbf{J}_{\text{BRT}}(\mathbf{l},\mathbf{x})\triangleq-\log\sum_{\pi\in \mathcal{B}^{-1}(\mathbf{l})}[P(\pi|\mathbf{x})\cdot r(\pi)] \tag{6}\] The computation of the proposed BRT objective still adopts the forward-backward variables and works in a divide-and-conquer manner. To express the preference for a specific property of the paths, the Bayes risk function specifies 1) which property is concerned and 2) which values of the concerned property are preferred. To answer these questions, the concerned property of each path is defined by \(f(\pi)\). Then, all paths are divided into multiple exclusive groups s.t. paths with the identical concerned property value \(\tau\) are in the same group. For paths in one group, it is reasonable to assign the same risk value since the concerned property is the same. Thus, the Bayes risk function \(r(\pi)\) is replaced by a group-level risk function \(r_{g}(\tau)\), which only depends on the group-level concerned property \(\tau\) rather than the path \(\pi\). Formally, this process is written as: \[\mathbf{J}_{\text{BRT}}(\mathbf{l},\mathbf{x}) =-\log\sum_{\tau}\sum_{\pi\in\mathcal{B}^{-1}(\mathbf{l}),f(\pi)= \tau}[P(\pi|\mathbf{x})\cdot r(\pi)]\] \[=-\log\sum_{\tau}\sum_{\pi\in\mathcal{B}^{-1}(\mathbf{l}),f(\pi )=\tau}[P(\pi|\mathbf{x})\cdot r_{g}(\tau)]\] \[=-\log\sum_{\tau}[r_{g}(\tau)\cdot\sum_{\pi\in\mathcal{B}^{-1}( \mathbf{l}),f(\pi)=\tau}P(\pi|\mathbf{x})] \tag{7}\] Note that: 1) when splitting the paths into groups, the groups are supposed to be mutually exclusive so that each path is considered once and only once; 2) by adopting the group-level risk function, we can avoid a complex weighted summation over all paths within each group; 3) the summed posterior of each path group should be fully tractable from the forward-backward variables so the computation remains efficient; 4) by pursuing the desired properties in the predicted alignment between \(\mathbf{x}\) and \(\mathbf{l}\) during training, these properties are expected to be preserved in the predicted alignments between the test speech and the textual hypotheses during decoding. Provided the general formulation of BRT in Eq.7, a naive example is Eq.5, where the concerned property \(\tau\) represents the pair \((t,u)\) and \(r_{g}(\tau)=1\). Under this setting, the vanilla transducer is a special case of the proposed BRT. Alternatively, given the \(u\)-th non-blank token \(l_{u}\) in the transcription, another useful example is to set the concerned property \(\tau\) as the time stamp when \(l_{u}\) is emitted, i.e., \(\pi_{\tau+u}=l_{u}\). With a factorization similar to Eq.5 and considering Eqs. (2)-(4), the BRT objective is further revised as: \[\mathbf{J}_{\text{BRT}}(\mathbf{l},\mathbf{x},u)=-\log\sum_{\tau}[ r_{g}(\tau)\cdot\sum_{\pi\in\mathcal{B}^{-1}(\mathbf{l}),\pi_{\tau+u}=l_{u}}P( \pi|\mathbf{x})]\] \[=-\log\sum_{\tau}[r_{g}(\tau)\cdot\sum_{\begin{subarray}{c} \mathcal{B}(\pi_{1:\tau+u-1})=\mathbf{l}_{1:u-1}\\ \mathcal{B}(\pi_{\tau+u+1:T+U})=\mathbf{l}_{u+1:U}\end{subarray}}P(\pi|\mathbf{ x})]\] \[=-\log\sum_{\tau}r_{g}(\tau)\cdot\underbrace{[\alpha(\tau,u-1)\cdot p (l_{u}|\mathbf{x}_{1:\tau},\mathbf{l}_{1:u-1})\cdot\beta(\tau,u)]}_{\triangleq \mathcal{G}(\tau,u)} \tag{8}\]

Figure 1: _(a): Transducer lattice. \(\mathcal{G}(4,2)\) is the summed posterior of all paths that go through the vertical arrow (in green) from node \((4,1)\) to node \((4,2)\) and emit the 2nd token at the 4th frame. The red path ends all non-blank predictions at \(\tau=3\) while the blue path ends at \(\tau=6\). The red path is preferred in Sec.3.2. (b) & (c): The heat maps for \(\log\mathcal{G}(t,u)\)._

Here \(\mathcal{G}(\tau,u)\) is the summed posterior of all paths that go through the vertical arrow from node \((\tau,u-1)\) to node \((\tau,u)\). Fig.1.a gives a demonstration of \(\mathcal{G}(\tau,u)\) in the lattice and Fig.1.{b, c} provides numerical examples of \(\mathcal{G}(\tau,u)\). \(\mathcal{G}(\tau,u)\) measures the summed probability of all valid paths in which \(l_{u}\) is emitted at the \(\tau\)-th frame, which indicates the alignment prediction. So far the group-level risk function \(r_{g}(\tau)\) has not been defined. Below we show two applications of Eq.8 with different \(r_{g}(\tau)\) designs. 
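As a sketch under the same assumptions (and reusing `forward_backward` from the sketch above), the grouped posterior \(\mathcal{G}(\tau,u)\) and the BRT objective of Eq. (8) can be written as:

```python
import numpy as np

def brt_loss(probs, labels, u, r_g, blank=0):
    """Eq. (8): weight each group G(tau, u) by the group-level risk r_g(tau).
    G(tau, u) = alpha(tau, u-1) * p(l_u | x_{1:tau+1}, l_{1:u-1}) * beta(tau, u),
    i.e. the summed posterior of all paths emitting l_u at frame tau."""
    alpha, beta = forward_backward(probs, labels, blank)  # from the sketch above
    T = probs.shape[0]
    total = 0.0
    for tau in range(T):
        g = alpha[tau, u - 1] * probs[tau, u - 1, labels[u - 1]] * beta[tau, u]
        total += r_g(tau) * g
    return -np.log(total)

# Sanity check: with r_g(tau) = 1 the groups partition all valid paths, so the
# BRT loss reduces to the vanilla transducer loss (the special case noted above):
# assert np.isclose(brt_loss(probs, labels, u=1, r_g=lambda tau: 1.0),
#                   transducer_loss(probs, labels))
```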
### Non-streaming Application: Efficient Decoding For frame-synchronized decoding algorithms of the transducer [1, 24], the inference cost highly depends on \(T\) as the decoding loop is conducted frame-by-frame. In Fig.1.b, the whole sequence \(\mathbf{l}\) cannot be predicted until all frames are explored. By contrast, in Fig.1.c, all non-blank tokens are emitted before reaching the red line, which allows us to stop decoding at an early time (e.g., the red line) to save computation. To achieve a heat map like Fig.1.c, the concerned property is exactly the time stamp \(\tau\) when the last token \(l_{U}\), as well as the whole sequence, is emitted: \(\pi_{\tau+U}=l_{U}\). In addition, paths with smaller \(\tau\) are preferred (see Fig.1.a) since fewer frames are consumed to predict all tokens. Setting \(u=U\) in Eq.8, the objective to minimize is: \[\mathbf{J}_{\text{BRT}}(\textbf{l},\textbf{x},U)=-\log\sum_{\tau}\text{min}(e ^{-\lambda\cdot(\tau-m\cdot U)/T},1)\cdot\mathcal{G}(\tau,U) \tag{9}\] where the risk function \(r_{g}(\tau)=\text{min}(e^{-\lambda\cdot(\tau-m\cdot U)/T},1)\) expresses the preference for \(\tau\in[1,m\cdot U]\) and shows exponentially decayed interest in \(\tau>m\cdot U\)5. \(\lambda\) and \(m\) are hyper-parameters. \(m\) is empirically set to 2 and \(\lambda\) varies according to datasets. Footnote 5: We do not express an extra preference for very small \(\tau\) so it is less likely that multiple non-blank tokens are emitted at a single frame. This work further provides an early-stop mechanism to reduce the number of decoding frames of BRT (a minimal sketch follows below). First, assume we obtain a hypothesis \(\hat{\mathbf{l}}=[\hat{l}_{1},...,\hat{l}_{u}]\) at the \(\tau\)-th frame during decoding. The hypothesis is considered complete if no additional non-blank tokens are expected to be emitted in the search process over the remaining frames after \(\tau\). In other words, for any possible path of the complete \(\hat{\mathbf{l}}\), its sub-path after the \(\tau\)-th frame only consists of continuous \(\varnothing\) (see the blue line in Fig.1.c). So \(\hat{\mathbf{l}}\) is considered complete only if the accumulated log-probability of the continuous \(\varnothing\) since the \(\tau\)-th frame is high: \(\sum_{t=\tau}^{T}\log p(\varnothing|\mathbf{x}_{1:t},\hat{\mathbf{l}}_{1:u})>D\), where \(D=-10\) is a threshold value6. Secondly, for a search beam that contains multiple hypotheses, we terminate the search when 1) the top \(k=3\) blank-free hypotheses do not change for \(f=5\) frames and 2) all top \(k=3\) hypotheses are considered complete. Footnote 6: The computation for this condition cannot be considered as a search over the frames after \(\tau\) since the series \(\{p(\varnothing|\mathbf{x}_{1:t},\hat{\mathbf{l}}_{1:u})\}\) can be computed in a parallel fashion and requires no loop. 
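The risk function of Eq. (9) and the completeness test above can be sketched as follows; the names are illustrative, and the per-frame blank log-probabilities after \(\tau\) are assumed to be available as an array (they require no decoding loop):

```python
import numpy as np

def r_g_last_token(tau, U, T, lam, m=2):
    """Eq. (9) risk: flat for tau <= m*U, exponential decay beyond it, so
    paths that finish all non-blank emissions early receive lower risk."""
    return min(np.exp(-lam * (tau - m * U) / T), 1.0)

def is_complete(log_blank_tail, D=-10.0):
    """Early-stop test: log_blank_tail[i] = log p(blank | x_{1:tau+i}, l_hat).
    The hypothesis is deemed complete if emitting only blanks up to the last
    frame is sufficiently probable; the terms can be computed in parallel."""
    return float(np.sum(log_blank_tail)) > D
```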
### Streaming Application: Early Emission A streaming ASR system is expected to emit each token accurately and in a timely manner. The accuracy and latency, however, usually form a trade-off: better recognition accuracy requires a longer context, which results in higher latency. For streaming ASR, BRT is designed to encourage all tokens to emit at early time stamps, even at the cost of slight performance degradation. By doing so, BRT achieves a better accuracy-latency trade-off than the vanilla transducer, which is further demonstrated in Sec.3.3. The vanilla transducer only attempts to transcribe the speech correctly but poses no constraint on when tokens would be emitted. By contrast, the proposed BRT can reduce the latency by enforcing the paths that emit each token at a smaller time stamp. Formally, with any non-blank token \(l_{u}\) and the exponentially decayed risk function \(r_{g}(\tau)=e^{-\lambda\cdot(\tau-\tau^{\prime})/T}\), a BRT objective is derived from Eq.8 with the goal of encouraging \(l_{u}\) to be emitted earlier: \[\mathbf{J}_{\text{BRT}}(\textbf{l},\textbf{x},u)=-\log\sum_{\tau}e^{-\lambda \cdot(\tau-\tau^{\prime})/T}\cdot\mathcal{G}(\tau,u) \tag{10}\] where \(\tau\) is the concerned property that specifies the time stamp when \(l_{u}\) is emitted and is enforced to be smaller; \(\tau^{\prime}=\arg\max_{\tau}\mathcal{G}(\tau,u)\) is a bias term to ensure that the path group with the highest summed posterior \(\mathcal{G}(\tau,u)\) always receives the risk value \(r_{g}(\tau)=1\), so that the absolute value of \(\mathbf{J}_{\text{BRT}}(\textbf{l},\textbf{x},u)\) does not vary significantly with \(u\). Here \(\lambda\) is still an adjustable hyper-parameter varying with datasets. Subsequently, guiding every token \(l_{u}\) to be emitted earlier requires the consideration of all tokens. So we simply attempt to minimize the mean of \(\mathbf{J}_{\text{BRT}}(\textbf{l},\textbf{x},u)\) in Eq.10 over every \(u\): \[\mathbf{J}(\textbf{l},\textbf{x})=\frac{1}{U}\cdot\sum_{u=1}^{U}\mathbf{J}_{ \text{BRT}}(\textbf{l},\textbf{x},u) \tag{11}\]
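As a minimal illustration of Eqs. (10)-(11), assuming the grouped posteriors \(\mathcal{G}(\tau,u)\) have been collected into a hypothetical `(T, U)` array `G` (not the paper's implementation):

```python
import numpy as np

def brt_streaming_loss(G_u, lam, T):
    """Eq. (10): G_u[tau] = G(tau, u) for one token l_u. The bias tau' is the
    mode of G_u, so the most probable group always gets risk weight 1."""
    tau_prime = int(np.argmax(G_u))
    taus = np.arange(len(G_u))
    return -np.log(np.sum(np.exp(-lam * (taus - tau_prime) / T) * G_u))

def brt_streaming_objective(G, lam):
    """Eq. (11): average the per-token losses over all U non-blank tokens."""
    T, U = G.shape
    return np.mean([brt_streaming_loss(G[:, u], lam, T) for u in range(U)])
```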
## 3 Experiments ### Experimental Setup **Datasets:** Experiments are conducted on the Aishell-1 [25], Aishell-2 [26] and Librispeech-100 [27] datasets. The volumes of these datasets range from 100 hours to 1k hours. Librispeech-100 is in English and the others are in Mandarin. All data is augmented by SpecAugment [28] and speed perturbation. For English, tokens are 500 BPE units. **Evaluation Metrics:** CER / WER% is adopted to show the recognition accuracy. To compare the decoding efficiency of non-streaming ASR, the average number of decoding frames (DF) before the decoding termination and the real-time factor (RTF) over CPU7 are reported. For streaming ASR, the overall latency is defined as the sum of data collecting latency (DCL) and drift latency (DL) [21]8. DCL is the time to wait before the input speech forms a chunk (a.k.a., the latency caused by chunk size and look-ahead length). DL is exemplified in Fig.3.a and its reference9 is obtained by standard GMM-HMM systems10.

Figure 2: _Results of non-streaming ASR. (a) Recognition accuracy in CER / WER %; (b) Average decoding frames (DF); (c) Real time factor (RTF). With comparable recognition accuracy, the proposed BRT achieves more efficient decoding by decoding fewer frames._

Footnote 9: For an English word that consists of multiple BPE tokens, we only count the last BPE unit of that word. Footnote 10: Models are trained by Kaldi: [https://github.com/kaldi-asr/kaldi](https://github.com/kaldi-asr/kaldi) **Features & Models:** 80-dim FBank features with the window size of 10ms are down-sampled by 4x using a CNN before being fed into the encoder. The acoustic encoder is Conformer [29] for non-streaming ASR and Emformer [30] for streaming ASR. For streaming ASR, DCL is set to {160, 320, 480, 640}ms. The prediction network is a standard LSTM and the joint network is linear. For English tasks, an auxiliary CTC criterion is adopted on top of the encoder to stabilize the training. Model sizes for non-streaming and streaming experiments are 95M and 57M respectively. **Training & Decoding:** For {Aishell-1, Aishell-2, Librispeech-100}, models are trained for {100, 100, 300} epochs with \(\lambda\) of {5, 5, 20} and {10, 10, 50} for non-streaming and streaming ASR respectively. The original decoding algorithm proposed in [1] is adopted with a beam size of 10. No language model is adopted in decoding. ### Results on non-streaming ASR This part evaluates the effectiveness of the proposed BRT method on non-streaming ASR. Our results are shown in Fig.2. Firstly, as shown in Fig.2.a, with datasets of varying scales and languages, the recognition accuracy achieved by the proposed BRT method and the vanilla transducer is comparable. Secondly, Fig.2.b demonstrates the effectiveness of the proposed BRT in reducing the decoding frames. E.g., by introducing the BRT criterion, the DF for the Aishell-2 dataset is reduced from 71 to 16, which is a 77% reduction. Finally, for models trained by BRT, the overall inference cost (a.k.a., RTF) is reduced since the proposed early-stop mechanism allows the model not to explore all the frames. By adopting the BRT criterion and the early-stop mechanism, the RTF of Aishell-1 is reduced from 0.25 to 0.14, which is a 46% relative reduction. The reduction in Fig.2.c is not as considerable as that in Fig.2.b since the encoder inference accounts for a large part of the computation cost. ### Results on streaming ASR This part evaluates the effectiveness of the proposed BRT method on streaming ASR. Our results are shown in Fig.3. As discussed in Sec.2.4, streaming ASR has a trade-off between the recognition accuracy and the overall system latency. As shown in Fig.3.{b,c,d}, on the three datasets, the curve of the proposed BRT (the red one) consistently lies in the lower-left direction of its baseline (vanilla transducer, the blue one), which suggests that the proposed BRT criterion achieves a better accuracy-latency trade-off than the vanilla transducer. In addition, BRT can build systems with extremely low latency that cannot be achieved by the vanilla transducer, even at the cost of recognition performance degradation. E.g., on the Aishell-2 dataset, the lowest overall latency achieved by the vanilla transducer and BRT is 430ms and 251ms respectively, which is a 41% relative reduction in latency, even with accuracy degradation. A further ablation study is conducted on the Aishell-1 dataset. As shown in Fig.4.a, a transducer system with extremely low latency cannot be built by simply reducing the chunk size (a.k.a., small DCL) since the model is allowed to wait for a very long context before emitting (a.k.a., larger DL). In addition, the adoption of BRT can effectively reduce the DL, which is aligned with our motivation in Sec.2.4. The BRT model has this strength of early emission since the paths that emit non-blank predictions earlier are enforced during training. Next, Fig.4.b shows that the vanilla transducer outperforms the proposed BRT in accuracy for all DCL settings, which is reasonable since the accessible right context is reduced if the non-blank tokens are emitted earlier. Combining Fig.4.{a,b} yields Fig.3.b, which demonstrates that BRT provides an alternative solution for the streaming transducer, i.e., increasing DCL with a larger chunk size and reducing DL by using BRT to meet the latency budget, so that a better overall accuracy-latency trade-off is achieved. 
Footnote 11: The DL can be negative due to the look-ahead of the model. ## 4 Conclusion To achieve controllable alignment prediction in the transducer, this work proposes an extension of the transducer called Bayes Risk Transducer (BRT), which adopts a Bayes risk function to enforce specific paths with the desired properties. By designing different Bayes risk functions, the predicted alignment is enriched with task-specific properties, which provides practical benefits besides recognizing the speech accurately: efficient decoding for non-streaming ASR and early emission for streaming ASR. The two claimed applications are experimentally validated on multiple datasets and in multiple languages.

Figure 3: _Results of streaming ASR. (a) A demonstration of drift latency (DL). Blue arrows stand for the reference duration of each token. The alignment of the hypothesis is represented by the red path. Token \(l_{2}\) starts at the 1st frame but is predicted at the 4th frame, which is a 3-frame drift latency. (b) & (c) & (d): the accuracy-latency trade-off achieved with varying data collecting latency (DCL)._

Figure 4: _Ablation study for streaming ASR (on Aishell-1)._
2303.12462
Scalable Bayesian bi-level variable selection in generalized linear models
Motivated by a real-world application in cardiology, we develop an algorithm to perform Bayesian bi-level variable selection in a generalized linear model, for datasets that may be large both in terms of the number of individuals and the number of predictors. Our algorithm relies on the waste-free SMC (Sequential Monte Carlo) methodology of Dau and Chopin (2022), a new proposal mechanism to deal with the constraints specific to bi-level selection (which forbid selecting an individual predictor if its group is not selected), and the ALA (approximate Laplace approximation) approach of Rossell et al. (2021). We show in our numerical study that the algorithm may offer reliable performance on large datasets within a few minutes, on both simulated data and real data related to the aforementioned cardiology application.
Younès Youssfi, Nicolas Chopin
2023-03-22T11:17:16Z
http://arxiv.org/abs/2303.12462v1
# Scalable Bayesian Bi-level variable selection in generalized linear models ###### Abstract Motivated by a real-world application in cardiology, we develop an algorithm to perform Bayesian bi-level variable selection in a generalized linear model, for datasets that may be large both in terms of the number of individuals and the number of predictors. Our algorithm relies on the waste-free SMC (Sequential Monte Carlo) methodology of Dau and Chopin (2022), a new proposal mechanism to deal with the constraints specific to bi-level selection (which forbid selecting an individual predictor if its group is not selected), and the ALA (approximate Laplace approximation) approach of Rossell et al. (2021). We show in our numerical study that the algorithm may offer reliable performance on large datasets within a few minutes, on both simulated data and real data related to the aforementioned cardiology application. **Keywords:** Approximate Laplace approximation; Bi-level variable selection; Sequential Monte Carlo; waste-free Sequential Monte Carlo ## 1 Introduction ### Motivation While useful more generally, the approach developed in this paper was initially motivated by a public health dataset recording the medical history of a large number of individuals that may or may not have suffered from sudden cardiac death (SCD); this dataset will be described more fully later. One may use this data to determine whether consumption of medical drugs or hospitalization may increase the odds of an SCD event. Unfortunately, the number of potential drugs and diseases is very large, and their incidence in the studied population varies a lot. This makes it difficult to assess the impact of drugs and diseases that are rarely prescribed or observed. On the other hand, there are official nomenclatures for drugs and diseases, which can be classified into groups with similar properties. Hospital diagnoses are coded according to the International Classification of Diseases and drugs are coded according to the Anatomical Therapeutic Chemical system, which classifies them according to the organ or system on which they act and their therapeutic, pharmacological, and chemical properties. Therefore, there is clear medical interest in determining automatically
2307.00591
Observation of open Fermi surface in bismuth
Bismuth is a candidate material for three-dimensional higher-order topological insulators. We performed electronic transport experiments on small diameter crystalline bismuth nanowires to clarify the role of the proposed hinge channels in the interlayer coupling. The magnetoresistance presents a sequence of peaks at Yamaji magic angles for which the interlayer group velocity is zero, pointing to flat bands due to the layered structure. Furthermore, we observe a peak for high magnetic fields applied parallel to the layers, a definitive signature of interlayer coherence that enables deduction of the interlayer transfer integral of 8 meV. We demonstrate transport by a corrugated open Fermi surface of holes that is extended, with a high angular precision of around 0.016 rad in the interlayer direction. The observations indicate that coherence and the crystallographic direction set by hinges play a key role in the interlayer coupling of bismuth.
Tito E. Huber, Leonid Konopko, Albina Nikolaeva
2023-07-02T15:20:02Z
http://arxiv.org/abs/2307.00591v1
# Observation of open Fermi surface in bismuth ###### Abstract Bismuth is a candidate material for three-dimensional higher-order topological insulators. We performed electronic transport experiments on small diameter (\(\sim\)50 nm) crystalline bismuth nanowires to clarify the role of the proposed hinge channels in the interlayer coupling. The magnetoresistance presents a sequence of peaks at Yamaji magic angles for which the interlayer group velocity is zero, pointing to flat bands due to the layered structure. Furthermore, we observe a peak at high fields (10 T \(<B<\) 14 T) for \(B\) applied parallel to the layers, a definitive signature of interlayer coherence that enables deduction of the interlayer transfer integral (\(t\approx 8\) meV). We demonstrate transport by a corrugated open Fermi surface of holes that is extended, with a high angular precision of \(\sim 0.016\) rad in the interlayer direction. The observations indicate that the coherent and directional interlayer coupling established by hinges plays a key role in the coherent interlayer coupling of bismuth. ## I Introduction Topological insulators (TIs) have attracted considerable interest because of their unique condensed matter properties and their applications in electronics. The counter-intuitive behaviors of TIs are rooted in the fact that they exhibit bulk insulator properties, while their surfaces are metallic and topologically protected [1]. Bismuth consists of a stack of bismuth bilayers, which, in isolation, are two-dimensional (2D) TIs [2, 3, 4]. It has been proposed that bismuth is a higher-order TI (HOTI), in which the crystal hinges host topologically protected channels [5, 6]. Such modes appear when the facets of a bismuth crystal having gaps of opposite signs intersect. Previously, experimental scanning-tunneling microscope (STM) studies of bismuth (111) surfaces that have nanoscale islands addressed the role of hinges in electronic transport [1, 7, 8]. Hinge channels lend themselves to an unequivocal interpretation of the complex pattern of the currents observed around the islands. These studies were supplemented with innovative Superconducting Quantum Interference Device (SQUID) measurements that exploit proximity-induced superconductivity to reveal the hinge channels on crystalline bismuth nanowire surfaces. Bismuth thin films, nanowires, and ribbons are low-dimensional forms of bismuth in which a large surface-to-volume ratio and quantum confinement favor surface electronic transport over bulk transport and that demonstrate extraordinarily high mobility [9, 10, 11, 12, 13] and even display quantized conductance [9, 12]. They are well suited for the evaluation of the effects of topological protection. The aim of this paper is to provide experimental characterization of interlayer transport in high quality, small diameter bismuth nanowires by employing a traditional electronic transport method. In our work, we apply the method of angle-dependent magnetoresistance (AMR) to bismuth nanowires. AMR employs high magnetic fields and is very often used in the study of layered materials such as quasi two-dimensional (2D) organic conductors [14]; however, this method has never been applied in the study of bismuth nanowires. Our studies reveal highly directional and coherent interlayer coupling in bismuth. We find that the AMR shows Yamaji [14] oscillations that are periodic in \(\tan\alpha\), where \(\alpha\) is the angle between the magnetic field and the perpendicular to the bilayer, or tilt angle. 
This oscillatory phenomenon was first recognized in layered quasi 2D organic conductors [14, 15, 16]; it has also been observed in layered metals such as PdCoO\({}_{2}\)[17, 18], and explained in terms of a corrugated tube open FS. The FS corrugation is evident in the experiments because at Yamaji magic angles \(\alpha_{\rm n}\) between the (111) plane and the magnetic field, the AMR has a sequence of maxima when the electron is trapped in orbits, defined by the corrugation, that have no energy dispersion, \(v_{\rm z}={\rm d}E/{\rm d}k_{\rm z}=0\), or exist in a flat band. Here, \(\mathbf{v}=\nabla_{\mathbf{k}}E/\hbar\) is the electron's group velocity. As the conductivity is proportional to \(v_{\rm z}\), the MR exhibits maxima at the magic angles [16]. Also, in addition to Yamaji oscillations, the Bi nanowires exhibit an AMR feature consisting of peaks for angles corresponding to \(B\) along the layers. This is proof of coherent interlayer coupling [19, 20, 21] where the interlayer transfer integral \(t\) is less than the intralayer transfer integral \(t_{//}\). The identification of the FS topology, closed FS versus open FS, is key to our understanding of the metallic state [22, 23] of bismuth surface states. Furthermore, the identification of flat bands in bismuth due to the FS corrugation is intriguing because flat bands promote correlated electron behavior such as superconductivity [24, 25] in bilayer graphene, and superconductivity has also been observed in low-dimensional bismuth [26]. The observation of an open Fermi surface, which can be interpreted in terms of hinge channels, was not anticipated, as bulk bismuth crystals are characterized by electrons and holes in a closed FS with ellipsoidal pockets. These phenomena are discussed in the paper. ## II Experiment The nanowires were fabricated using a fiber-drawing process [12, 13]. This is a significant advance in the fabrication of nanowires. Briefly, in a first step, the Ulitovsky technique was used to prepare 200-nm wires. This technique of casting at the nanoscale involved using a high-frequency induction coil to melt a 99.999% Bi boule within a borosilicate glass capsule while simultaneously softening the glass. Glass fibers containing the 200-nm nanowires were then pulled from the capsule with a mechanical puller and annealed. X-ray and AMR show that these bismuth wires are single crystal [12]. In the second step, fibers containing 200-nm wires were stretched with a capillary puller via the Taylor method to reduce their diameter. Subsequently, the nanowires were annealed at 200 °C. The resultant Bi filament was not continuous, yet sections that are a fraction of a millimeter in length could be selected using an optical microscope. Electrical connections to the nanowires were made using In\({}_{0.5}\)Ga\({}_{0.5}\) eutectic. This type of solder consistently makes good contacts, as compared to other low-melting-point solders, but it has the disadvantage that it diffuses into the Bi nanowire rather quickly at room temperature. Consequently, the nanowire has to be used in the low temperature experiment immediately after the solder is applied; otherwise the sequence of magnetoresistance peaks discussed here is not observed. This can be attributed to room temperature diffusion of Ga in the nanowire. The crystalline nanowires are believed to be highly faceted because they grow at an angle of 70\({}^{\rm o}\) from the (111) plane. The length of the nanowires is 0.3 mm. 
The temperature-dependent resistance R(\(T\)) saturates at an intermediate temperature because of quantum confinement and surface scattering. The diameter could be precisely characterized because the nanowires, even the very small diameter ones, exhibit Aharonov-Bohm (AB) oscillation at 4 K with an applied magnetic field along the wire length, where the period in magnetic field is inversely proportional to the square of the diameter. The magnetoresistance (MR) measured when \(B\) is parallel to the length of the wire (LMR) and thermopower results for Bi nanowires [13] have been examined. Results for the 50 nm sample A96 have been presented [13]. It was observed that with the increase in magnetic fields along the wire axis, the wires exhibited a stepwise increase in conductance and oscillatory thermopower. AB oscillations are caused by an increased number of high-mobility spiral (helical) surface modes for increasing magnetic fields along the wire length and show that the dissipation is extremely low. The mean free path of spiral modes was estimated to exceed 10 \(\mu\)m at 10 T. The diameter \(d\) of the single-crystal Bi nanowires in the primary batch used in our experiment ranged from 45 to 55 nm. Additional batches of large diameter wires in the range of 75 nm to 340 nm were also employed in this investigation. Several experimental runs with four samples drawn from the primary batch were performed. We examined the dependency of the magnetoresistance (MR) upon the orientation of the nanowire with respect to the magnetic field (AMR) as a function of the tilt angle \(\alpha\), where \(\alpha\) is the angle between the magnetic field and the perpendicular to the (111) atomic plane. In these experiments the magnetic field \(B\), in the range of 0\(-\)14 T, is applied perpendicular to the wire length. The geometry is shown in the inset of Fig. 1, where C3 is the perpendicular to the bilayer and C2 is the binary axis. Hereafter we shall present the AMR results for a sample (A49) of the batch. Results of our experiments with the larger diameter nanowires are presented in Appendix B. We observed an angular dependence that is symmetric around the AMR local minimum at \(\alpha=90^{\circ}\). Fig. 1(a) shows the AMR in the range of \(\alpha=20\)-\(100^{\circ}\). The AMR has a smooth component of the form \((1-\cos^{2}\alpha)\), which is caused by Lorentz forces. This form of magnetoresistance has been observed in BiSb films [27] also.

Figure 1: _Yamaji oscillations. (a) Angular dependence of the magnetoresistance (AMR) versus \(\alpha\), for different magnetic fields \(B\) of our 50-nm Bi nanowire sample (A49). The experimental geometry is illustrated in the inset. The spacing between bilayers is \(c\). The magnetic field is perpendicular to the wire axis. The angle dependent magnetoresistance was observed experimentally by rotating \(B\) around the wire axis. If \(\theta\) is the angular position of \(B\), the angle between the normal to the bilayer, C3, and the magnetic field is \(\alpha=\theta+20^{\circ}(90^{\circ}-\theta)/90^{\circ}\). The resistance shows maxima at the magic angles indexed as \(n\) = 1, 2, 3 and 4. The angles indicated as \(J\), \(K\) and \(L\) are MR minima. For \(J\), \(\alpha\) = 54\({}^{\circ}\); for \(K\), \(\alpha\) = 57.4\({}^{\circ}\); and for \(L\), \(\alpha\) = 72\({}^{\circ}\). Side peaks are indicated with red arrows. (b) Illustration of the corrugated tube showing bellies._
_The behavior of \(v_{z}=(2\pi/h)(\mathrm{d}E/\mathrm{d}k_{z})\) for the first Yamaji angle, as indicated. The orbit is represented with a red ellipse and the velocities are black arrows. In the latter case, the average of \(v_{z}\) in the orbit, called the drift velocity, is zero, which leads to zero conductance and causes the MR peaks. (c) Observation of the linear dependence of \(\tan\alpha\) versus index in accordance with Eq. 1; \(k_{F}=7.1\times 10^{8}\ \mathrm{m}^{-1}\)._

In addition, the AMR has an oscillatory component. The oscillations are also present in all the other samples of the batch. AMR oscillations that are periodic in \(\tan\alpha\) were observed in a wide range of quasi-two dimensional materials and were successfully explained on the basis of a corrugated tube model of interlayer coupling, where the electronic states of layered materials obey the dispersion relation [14]: \[E=E(k_{x},k_{y})-t\cos(ck_{z}) \tag{1}\] The first term represents the energy dispersion in the conducting layer, where \(k_{x}\) and \(k_{y}\) are the in-plane components of \(k\), and \(k_{z}\) is the component perpendicular to the plane. The second term is the interlayer coupling, where \(t\) is the transfer integral and \(c\) is the interlayer distance. Here, the first term in Eq. 1 represents the 2D surface states of the Bi bilayer. These states are observed via angle-resolved photoemission spectroscopy (ARPES) [26] of the (111) surface of Bi crystals as well as thin films. It was found that the surface electrons are in a \(7\times 10^{-2}\) Å\(^{-1}\)-radius ring centered on the bilayer normal axis (trigonal axis) of the 2D Fermi surface, and surface holes are in six-fold ellipsoidal hole pockets with axes measured to be between 0.4 and 0.7 Å\(^{-1}\). Electron and hole charges are located in a small fraction of the cross-section of the Brillouin zone perpendicular to the (111) plane. The bilayer electronic charge density \(\Sigma_{3}\) was measured in the ARPES experiments and was estimated to be \(8\times 10^{12}\)/cm\({}^{2}\). In discussing our data, we are concerned specifically with an effect of a 3D open FS that appears in a layered material such as bismuth. According to Eq. 1 the Fermi surface (FS), with \(E=E_{\text{F}}\), is a corrugated tube, where the tube diameter has an oscillatory component with a period of \(2\pi/c\) and amplitude \(t\). It was shown by Yamaji [14], on the basis of a semiclassical theory, that in an open FS there are closed orbits that satisfy \(ck_{\text{F}}\tan\alpha_{\text{n}}=(n-1/4)\pi\), where \(k_{\text{F}}\) is the projection of the Fermi wave number on the layer conduction plane, \(c\) is the interlayer distance and \(n\) is an integer. The angle \(\alpha_{\rm n}\) predicted by this equation is independent of the magnetic field, in contrast to Shubnikov-de Haas (SdH) oscillations [11] that show a \(1/B\) dependence. The closed orbits predicted by Yamaji cause peaks of the AMR [16]. Such peaks are observed in our experiments, and the phenomenon is evident in the linear relationship between \(\tan\alpha_{\rm n}\) and the peak order \(n\) shown in Fig. 1(c). From the observed proportionality constant, which is indicated with a dashed line, and taking \(c=0.38\) nm [28], we find that the Fermi surface tube radius is \(k_{\rm F}=0.67\) Å\(^{-1}\). Therefore, these Yamaji orbits must involve holes, since electrons could not contribute because their momenta are limited to about 0.1 Å\(^{-1}\).
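As a quick numerical illustration (not from the paper's analysis code), the magic angles implied by the Yamaji condition can be evaluated directly; the values below are the ones quoted in the text, noting that the caption of Fig. 1(c) lists a different \(k_F\):

```python
import numpy as np

c = 0.38e-9     # interlayer distance (m), Ref. [28]
k_F = 0.67e10   # hole Fermi wave number (1/m) = 0.67 per Angstrom

# Yamaji condition: c * k_F * tan(alpha_n) = (n - 1/4) * pi
for n in range(1, 5):
    alpha_n = np.degrees(np.arctan((n - 0.25) * np.pi / (c * k_F)))
    print(f"n = {n}: alpha_n ~ {alpha_n:.1f} deg")
```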
In light of the simulations by Yagi _et al._ [16], the width of the \(n=2\) peak of the AMR at 10 T, that is \(\Delta\alpha\sim 10^{\circ}\), indicates that \(\omega_{o}\tau\approx 1\)-\(3\), where \(\omega_{o}=eB/m\) and \(\tau\) is the relaxation time. Therefore the scattering rate \(1/\tau\) is estimated to be approximately \(3\times 10^{11}\) s\(^{-1}\). Comparable values for the Bi surface states have been reported in the past [12, 13]. Bi shows unusual interlayer coupling. For \(\alpha\sim 90^{\circ}\) the sample AMR is negative. A similar negative transverse MR has been observed in other layered materials. The phenomenon can be interpreted in terms of the axial anomaly [29] where \(E\) is aligned with \(B\). The emergence of the axial anomaly can be tied to the topological properties of the surface states. Pippard reviewed similar non-saturating effects associated with an open FS, including the "whiskers" of the AMR in some metals like copper, where the MR grows without saturation [30]. In close correspondence, bismuth nanowires also show non-saturating MR peaks for the tilt angles that we identify as Yamaji's. This data is presented in Appendix A. Further information about the characteristics of Bi interlayer coupling was obtained by examining the AMR of the A49 sample for \(\alpha\) between 70\({}^{\circ}\) and 110\({}^{\circ}\) (Fig. 2(a)). This sample is 50 nm in diameter. Other experiments with larger diameter nanowires are discussed in Appendix B. The MR is a local minimum for \(\alpha=90^{\circ}\) because the Lorentz forces are minimal for that tilt angle. However, at high magnetic fields, we observe a feature that includes a sharp MR peak at \(\alpha=90^{\circ}\) and side peaks, located at 87\({}^{\circ}\) and 92\({}^{\circ}\). The pattern of a 90\({}^{\circ}\) peak and side peaks is well known in the field of layered materials [19, 20]. The side peaks, which are termed kinks by Hanasaki _et al._ [20], are more robust than the peak at 90\({}^{\circ}\) in our measurements.

Figure 2: _Coherence peak. (a) AMR of the 50-nm nanowire showing the coherence peak and sidebands for \(\alpha\sim 90^{\circ}\). The nanowire shows negative magnetoresistance in a range from 75\({}^{\circ}\) to 115\({}^{\circ}\) for intermediate magnetic fields (3 T \(<B<\) 8 T). (b) Schematic diagram of the corrugated tube Fermi surface of the bismuth surface in \(k\)-space. The radius of the cylinder is \(k_{\rm F}\). Vertical dashed lines show the Brillouin zone boundaries at \(\pm\pi/c\). Open orbits are represented with solid black lines and the closed orbits that give rise to the 90\({}^{\circ}\) peak (Ref. [20]) are represented with dashed red lines. Proposed schematic of the layer structure. The bilayers are illustrated as gray hexagons. Edge and hinge modes are illustrated with black and green lines, respectively. \(\Sigma_{3}\) is the surface charge on the bilayer and \(\Sigma_{2}\) represents the excess surface charge that was observed via SdH, Huber et al. (Ref. [13])._

The coherence peak and side peaks are attributed to coherent electron transport along small closed orbits on the bulges of a corrugated cylindrical FS and also to open trajectories lying near self-crossing orbits [21]. Such orbits, which are illustrated in Fig. 2(b), are effective at increasing dissipation and lead to an increase in the MR for \(\alpha\sim 90^{\circ}\). 
If \(B\) is tilted away from the in-plane direction by an angle \(\Delta\alpha\sim tck_{F}/E_{\rm F}\), such that the small closed orbits about the bulges cease to be possible, then according to these results the magnetoresistance should decrease. In our nanowires, the full-width at half maximum (FWHM) of the peak was 6\({}^{\circ}\). Assuming the value of the mass \(m_{3}=0.22\) from ARPES [26] and using \(E_{\rm F}=\hbar^{2}k_{\rm F}^{2}/2m_{3}\), we find \(E_{\rm F}=76\) meV. Therefore, we estimate that \(t=8\) meV. In comparison, the in-plane nearest-neighbour hopping energy \(E_{\rm i}\) is 1.4 eV [28]. The inequality \(t\ll E_{\rm i}\) indicates that the electrons scatter more frequently than they tunnel between layers, and therefore it makes sense to consider a FS that extends in the interlayer direction. Also, the surface of bismuth is found to be very anisotropic (\(t\ll E_{\rm i}\)), and the small value of \(t\) is the reason that STM measurements [3, 4] show that the bilayer is isolated from the bulk bismuth substrate. Clearly, the FS is found to be aligned with the (111) direction, perpendicular to the bilayers, parallel to the hinges, with a \(\Delta\alpha\) that is only a few degrees. The side peaks, a common occurrence in the observation of the 90\({}^{\circ}\) peak, are attributed to dissipative processes in the closed orbit [21] and their observation supports the observation of the coherence peak. It is important to note that the value of \(\Delta\alpha\) that we observe is typically a factor of four larger than in other cases of coherence peaks. For example, in the case of PdCoO\({}_{2}\), Kikukawa reports that \(\Delta\alpha\sim 1.5^{\circ}\). This is because \(k_{\rm F}\) is especially small for bismuth compared with the other layered materials that have been studied in the past. Now, we will discuss further effects in the data presented in Fig. 2(a). The oscillations observed between 80\({}^{\circ}\) and 100\({}^{\circ}\) for increasing \(B\) are periodic in \(1/B\), and can be interpreted as SdH oscillations owing to the filling and emptying of Landau levels (LL); therefore, they cannot be interpreted in terms of the semiclassical model presented by Yamaji [14] and Yagi [16]. We have observed these oscillations in other small diameter samples and reported on them in a prior publication [13]. We assigned these SdH oscillations to the 2D LL of the surface carriers present on the nanowire surface, which have orbits perpendicular to the binary axis. Analysis of the temperature and magnetic field dependence of the SdH oscillations in our prior work showed that \(m_{z}=0.25\pm 0.03\). The charge density per unit area \(\Sigma_{2}\) was estimated from the SdH period (\(P=0.060\) T\(^{-1}\)) using \(\Sigma_{2}=f/(P\,h/e)\), where \(f\) is the 2D Landau level degeneracy, which is two on account of the two-fold spin degeneracy, and where \(h/e\) is the flux quantum. We found that \(\Sigma_{2}=8.1\times 10^{11}\)/cm\({}^{2}\), which is substantial, just an order of magnitude smaller than \(\Sigma_{3}\), the bilayer charge density [28]. Also, the SdH method has been applied to the measurement of surface states by Ning _et al._ [11]. They report a low value of surface charge, smaller than \(\Sigma_{3}\), and negative MR. Observation of \(\Sigma_{2}\) is very significant since a three-dimensional TI or strong TI (STI) would manifest itself with protected surface states at all surfaces of bismuth, including those perpendicular to the layers. 
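The quoted sheet density follows directly from the SdH period; a short numeric check (constants in SI units):

```python
e, h = 1.602e-19, 6.626e-34   # electron charge (C), Planck constant (J s)
P, f = 0.060, 2               # SdH period (1/T); two-fold spin degeneracy
sigma_2 = f * e / (P * h)     # Sigma_2 = f / (P * h / e), carriers per m^2
print(f"Sigma_2 ~ {sigma_2 * 1e-4:.1e} per cm^2")   # ~ 8.1e11 / cm^2
```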
Ning _et al._ [11] also report on the non-trivial properties of the surface states. However, in our experiment, analysis of the data leads to a determination of the Berry phase \(\gamma\) that has significant errors (\(\Delta\gamma\approx 0.2\pi\)) and we cannot arrive at a conclusion regarding the topological nature of \(\Sigma_{2}\). ## III Discussion Our observations of an open Fermi surface suggest a commentary about the hinges. It is encouraging that HOTI predicts a highly directional transmission of longitudinal modes at the surface and that we observe a tubular open Fermi surface that corresponds to this expectation. It is observed that the tube has a diameter determined by \(k_{\rm F}\) and therefore there is back-and-forth exchange of momentum and energy, or strong hybridization, tending to an equilibration between hinge modes and the intralayer modes, that is, the 2D modes. This is a mechanism that should be included in HOTI in order to interpret the FS tube diameter. In sharp contrast to the observation of an open FS, the bulk bismuth Fermi surface consists of ellipsoidal, closed, electron and hole pockets [31]. AMR studies have been performed and it has been observed that the MR of bulk bismuth exhibits the angular dependency that semi-classical transport theory predicts for a multi-valley system with anisotropic mobility [32]. In these experiments, Yamaji oscillations are not observed. This confirms that the origin of the Yamaji angles and the corresponding flat bands is the bismuth hole surface states that populate the nanowires, with the underlying cause being the layered structure of bismuth. In comparison, the well-known magic angles in twisted bilayer graphene appear because of the moire pattern in the atomic positions of carbon atoms, a geometric property that is engineered by twisting and that leads to flat bands [35]. To our knowledge, there is no analog in bismuth nanowires. However, since higher-order topology has been recently extended to include new families of layered compounds, such as BiI, MoTe\({}_{2}\) and WTe\({}_{2}\) [36-38], our discovery of an open Fermi surface in bismuth creates new approaches for the observation of flat bands (Fig. 1), an axial anomaly (Fig. 2), and superconductivity in the novel higher-order topological candidates. Hofmann has discussed the observation of superconductivity in Bi nanoclusters [26]. New properties and new applications may follow. In a recent and exciting proposal, crystalline Bi nanowires provide a platform for realizing Majorana modes for quantum computing [39]. In conclusion, recent theoretical approaches, HOTI and TCI, consider bismuth to be a stack of bilayers joined by van der Waals forces, and predict that bismuth interlayer electrical coupling in the stack involves topologically protected one-dimensional edge and hinge states. We investigated electronic transport in small-diameter single-crystal bismuth nanowires. We found strong evidence of transport between bilayers, indicating coherent electronic transport via an open Fermi surface, a corrugated tube. The FS is found to be aligned with the (111) direction, perpendicular to the bilayers, parallel to the hinges. Therefore, we postulate that the strong coherent and directional interlayer coupling established by hinges plays a major role in the coherent interlayer coupling of bismuth. 
Also, our study reveals that bismuth presents a unique opportunity for studying open Fermi surfaces since it has flat bands that give rise to a sequence of magnetoresistance peaks at Yamaji magic angles. Also, we uncovered remarkable similarities between bismuth and the traditional layered materials. **ACKNOWLEDGEMENTS** This work was supported by the project ANCD 20.80009.5007.02 in Moldova. In the U.S., the work was sponsored by the U.S. National Science Foundation STC Center for Integrated Quantum Materials, Grant 1231319, The Boeing Company, and the Keck Foundation. ## Appendix A More information on the AMR near the Yamaji angles In this section we discuss further evidence of Yamaji angles, consisting of the observation of an interplay of saturated and unsaturated MR, shown in Fig. 3. Unsaturated MR is a result of open electron trajectories, which require a description in terms of an extended open FS. Figure 3 shows the nanowire resistance for the various Yamaji magic angle maxima as well as three minima labeled \(J\), \(K\) and \(L\), also seen in Fig. 1. It is noted that the magnetoresistance does not saturate for the tilt angles corresponding to the Yamaji angles, \(n=1\) to 4, that correspond to MR maxima. In contrast, for the minima \(J\), \(F\), \(K\), the MR saturates at low magnetic fields (2 T). This property is reminiscent of the effect of unsaturated MR, or "whiskers", in metals, which has been reviewed by Ziman [23] and by Pippard [30]. These effects in bulk copper have been reviewed by Klauder _et al._ [41]. It has been shown by Yagi _et al._ [16] that this effect is a property of a conductor with a tube Fermi surface. They presented a study of the MR of a conductor with such a corrugated FS, based on a calculation, within the semiclassical approximation, of the Boltzmann equation using the Shockley tube integral. They found that this effect is reproduced in the calculations for various \(\omega\tau\), including those in the range of \(1-3\) that is relevant here.

Figure 3: _Nanowire resistance as a function of \(B\) for several angles, that is, Yamaji angles of order \(n=1\) to \(n=4\), and for AMR minima \(J\), \(F\), and \(K\)._

## Appendix B Coherence in Large Diameter Bi Nanowires In this appendix, we present data on large diameter nanowires. The fiber-drawing process [12, 13] is a significant advance in the fabrication of Bi nanowires. It allowed us to fabricate nanowires of various diameters. The diameter is estimated using the Aharonov-Bohm oscillation of the MR under an applied field \(B\) along the wire length. Nanowires in the range of diameters \(d\) between 90 nm and 210 nm show the coherence feature, consisting of a coherence peak and side peaks at 90\({}^{\mathrm{o}}\). These cases are in addition to the case of the 50-nm nanowires shown in Figs. 1 and 2. In contrast, nanowires of diameter 340 nm and above show a minimum at 90\({}^{\mathrm{o}}\) and no coherence feature. This is because the bulk electrons and holes dominate the transport and their FS is closed. The identification of the FS topology, closed FS versus open FS, is key to our understanding of the metallic state in the surface states of Bi. It has been shown that a three-dimensional Fermi surface, such as shown in Fig. 1(b), is not necessary for the observation of the Yamaji oscillations shown in Fig. 1(a). Perez Moses _et al._ present a review and discussion of the experimental proof of an open FS [40]. 
Based on this discussion, we focus on the observation of the 90\({}^{\mathrm{o}}\) coherence peak and side peaks. The data are presented in Fig. 4. The side peaks at 84\({}^{\mathrm{o}}\) and 95\({}^{\mathrm{o}}\) are especially prominent in the 140-nm nanowire. In contrast, the 340-nm nanowire shows a parabolic minimum and no coherence maximum or side peaks. The trend towards a 90\({}^{\mathrm{o}}\) minimum is also observed in the 260-nm nanowires. From these observations we infer that the local maximum at 90\({}^{\mathrm{o}}\) can be linked to the dominance of surface states in the electronic transport of small-diameter nanowires. We also find that the width of the coherence maximum is independent of the wire diameter, indicating that it is an intrinsic property of bismuth.

Figure 4: Coherence feature in Bi nanowires. AMR of Bi nanowires in the range 80-100 degrees of samples with four representative diameters from 90 nm to 340 nm. The coherence peak is the 90\({}^{\mathrm{o}}\) maximum that is observed in the 90-nm and 140-nm nanowires. Among the various nanowires, the side peaks are most prominent in the 140-nm bismuth nanowires.
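For orientation, the positions of the Yamaji magic angles invoked throughout this work can be summarized by a standard semiclassical relation that is not spelled out in the text above (quoted here as an assumption from the general theory of quasi-two-dimensional metals, not derived in this paper): for a weakly corrugated cylindrical FS with interlayer period \(c\) and in-plane Fermi wave vector \(k_{\rm F}\), the AMR maxima occur at tilt angles \(\theta_{n}\) satisfying \[ck_{\rm F}\tan\theta_{n}=\pi\left(n-\frac{1}{4}\right),\qquad n=1,2,3,\ldots,\] at which the interlayer dispersion averaged over a cyclotron orbit vanishes, producing the flat bands discussed above.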
2307.01857
Resonance $X(7300)$: excited $2S$ tetraquark or hadronic molecule $χ_{c1}χ_{c1}$?
We explore the first radial excitation $X_{\mathrm{4c}}^{\ast}$ of the fully charmed diquark-antidiquark state $X_{\mathrm{4c}}=cc\overline{c}\overline{c} $ built of axial-vector components, and the hadronic molecule $\mathcal{M} =\chi_{c1}\chi_{c1}$. The masses and current couplings of these scalar states are calculated in the context of the QCD two-point sum rule approach. The full widths of $X_{\mathrm{4c}}^{\ast}$ and $\mathcal{M}$ are evaluated by taking into account their kinematically allowed decay channels. We find partial widths of these processes using the strong couplings $g_i^{\ast}$ and $G_i^{(\ast)}$ at the $X_{\mathrm{4c}}^{\ast}$($\mathcal{M}$ )-conventional mesons vertices computed by means of the QCD three-point sum rule method. The predictions obtained for the parameters $m=(7235 \pm 75)~ \mathrm{MeV}$, $\Gamma=(144 \pm 18)~\mathrm{MeV}$ and $\widetilde{m}=(7180 \pm 120)~\mathrm{MeV}$, $\widetilde{\Gamma}=(169 \pm 21)~\mathrm{MeV}$ of these structures are compared with the experimental data of the CMS and ATLAS Collaborations. In accordance with these results, within existing errors of measurements and uncertainties of the theoretical calculations, both the excited tetraquark and hadronic molecule may be considered as candidates to the resonance $X(7300)$. Detailed analysis, however, demonstrates that the preferable model for $X(7300)$ is an admixture of the molecule $\mathcal{M}$ and sizeable part of $X_{\mathrm{4c}}^{\ast}$.
S. S. Agaev, K. Azizi, B. Barsbay, H. Sundu
2023-07-04T18:00:59Z
http://arxiv.org/abs/2307.01857v2
Resonance \(X(7300)\): excited \(2S\) tetraquark or hadronic molecule \(\chi_{\rm c1}\chi_{\rm c1}\)?

###### Abstract

We explore the first radial excitation \(X_{4\rm c}^{*}\) of the fully charmed diquark-antidiquark state \(X_{4\rm c}=cc\overline{c}\overline{c}\) built of axial-vector components, and the hadronic molecule \({\cal M}=\chi_{c1}\chi_{c1}\). The masses and current couplings of these scalar states are calculated in the context of the QCD two-point sum rule approach. The full widths of \(X_{4\rm c}^{*}\) and \({\cal M}\) are evaluated by taking into account their kinematically allowed decay channels. We find partial widths of these processes using the strong couplings \(g_{i}^{*}\) and \(G_{i}^{(*)}\) at the \(X_{4\rm c}^{*}({\cal M})\)-conventional mesons vertices computed by means of the QCD three-point sum rule method. The predictions obtained for the parameters \(m=(7235\pm 75)\) MeV, \(\Gamma=(144\pm 18)\) MeV and \(\widetilde{m}=(7180\pm 120)\) MeV, \(\widetilde{\Gamma}=(169\pm 21)\) MeV of these structures are compared with the experimental data of the CMS and ATLAS Collaborations. In accordance with this analysis, the radially excited tetraquark \(X_{4\rm c}^{*}\) is a promising candidate for the resonance \(X(7300)\), though we do not exclude the molecule or mixed tetraquark-molecule model for this state.

## I Introduction

Multiquark hadrons composed exclusively of heavy quarks have been on the agenda of research since the first years of the parton model and QCD. During the past decades much has been done to investigate the features of such particles, calculate their parameters in the context of different models, and study the production and decay mechanisms of these hadrons. Reports of the LHCb, ATLAS and CMS Collaborations on the scalar \(X\) resonances in the 6.2-7.3 GeV mass range became one of the important experimental achievements in the physics of fully charmed four-quark mesons [1; 2; 3]. The structures \(X(6200)\), \(X(6600)\), \(X(6900)\) and \(X(7300)\) observed by these experiments in the di-\(J/\psi\) and \(J/\psi\psi^{\prime}\) mass distributions provide useful information and allow one to compare numerous theoretical models and predictions with the masses and widths of these states. These discoveries generated new theoretical activities to explain the observed states and reveal their internal structures [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. The fully heavy \(X\) resonances were considered as scalar four-quark mesons with diquark-antidiquark or hadronic molecule organizations [4; 5; 6; 7; 8]. For example, the resonance \(X(6900)\) may be a diquark-antidiquark state with pseudoscalar ingredients, or a hadronic molecule \(\chi_{c0}\chi_{c0}\) [5]. The structure \(X(6200)\) was interpreted as a ground-level tetraquark with the spin-parities \(J^{\rm PC}=0^{++}\) or \(1^{+-}\), whereas \(X(6600)\) as its first radial excitation [6]. The four structures \(X(6200)-X(7300)\) were assigned to be different excited tetraquark states [7; 8]. Alternative scenarios explain the appearance of the \(X\) resonances by coupled-channel effects. Thus, using this approach the authors of Ref. [11] predicted the existence of the near-threshold state \(X(6200)\) with \(J^{\rm PC}=0^{++}\) or \(2^{++}\) in the di-\(J/\psi\) system. Coupled-channel effects may also generate a pole structure identified in Ref. [13] with \(X(6900)\), and lead to the emergence of a bound state \(X(6200)\) and resonances \(X(6680)\) and \(X(7200)\), which can be classified as broad and narrow structures, respectively.
Production mechanisms of fully heavy tetraquarks in different processes have become topics of interesting investigations [17; 18]. Thus, inclusive production of fully charmed \(S\)-wave four-quark mesons at the LHC energies was studied in the nonrelativistic QCD factorization framework in Ref. [17]. Production of fully heavy tetraquark states in \(pp\) and \(pA\) collisions through the double parton scattering mechanism was considered in Ref. [18], in which it was shown that a search for such states is feasible in future runs of the LHC and at the Future Circular Collider. The fully heavy four-quark mesons were also studied in our articles [19; 20; 21]. The scalar tetraquarks \(X_{4\rm c}=cc\overline{c}\overline{c}\) and \(X_{4\rm b}=bb\overline{b}\overline{b}\) built of axial-vector diquarks were explored in Ref. [19]. It was demonstrated that \(X_{4\rm c}\), with the mass \((6570\pm 55)\) MeV and full width \((110\pm 21)\) MeV, is a nice candidate for the resonance \(X(6600)\). The fully beauty state \(X_{4\rm b}\) has the mass \((18540\pm 50)\) MeV, which is smaller than the \(\eta_{b}\eta_{b}\) threshold; therefore, it cannot be seen in the \(\eta_{b}\eta_{b}\) or \(\Upsilon(1S)\Upsilon(1S)\) mass distributions. The \(X_{4\rm b}\) can decay to open-beauty mesons through \(b\overline{b}\) annihilation to gluon(s), which triggers \(X_{4\rm b}\to B^{+}B^{-}\) and other decays [9]. Another route of transformation to conventional mesons is through leptonic and nonleptonic decays of \(X_{4\rm b}\). The scalar tetraquarks \(T_{4\rm c}\) and \(T_{4\rm b}\) composed of pseudoscalar diquarks were explored in Ref. [20], in which we computed their masses and widths. The parameters \(m=(6928\pm 50)\) MeV and \(\widetilde{\Gamma}_{4\rm c}=(128\pm 22)\) MeV of \(T_{4\rm c}\) are in excellent agreement with the relevant CMS data; therefore, we interpreted it as the resonance \(X(6900)\). The exotic meson \(T_{4\rm b}\) decays to \(\eta_{b}\eta_{b}\) pairs and can be detected in the mass distribution of these mesons. It is interesting that the hadronic molecule \(\chi_{c0}\chi_{c0}\) (a brief form of \(\chi_{c0}(1P)\chi_{c0}(1P)\)) has similar parameters and is another candidate for \(X(6900)\) [21]. Hence, \(X(6900)\) may be considered as a linear superposition of the molecule \(\chi_{c0}\chi_{c0}\) and the diquark-antidiquark state \(T_{\rm 4c}\). The lowest lying structure among the \(X\) states is the resonance \(X(6200)\), which may be interpreted as the molecule \(\eta_{c}\eta_{c}\). In fact, the mass \((6264\pm 50)\) MeV and full width \((320\pm 72)\) MeV of the molecule \(\eta_{c}\eta_{c}\) agree with the LHCb-ATLAS-CMS data [21]. The last position in the list of new \(X\) structures is held by the resonance \(X(7300)\). This state was detected in both the di-\(J/\psi\) and \(J/\psi\psi^{\prime}\) mass distributions. In Ref. [19], we used this fact to make assumptions about its nature, and argued that \(X(7300)\) may be the \(2S\) radial excitation of the exotic meson \(X(6600)\). Another option for \(X(7300)\) is the hadronic molecule model \(\chi_{c1}(1P)\chi_{c1}(1P)\) (in what follows \(\chi_{c1}\chi_{c1}\)), which may have close parameters. In the present article, we address problems connected with the resonance \(X(7300)\) in an attempt to describe its parameters in the four-quark model. To this end, we calculate the mass and width of the first radial excitation \(X_{\rm 4c}^{*}\) of the diquark-antidiquark state \(X_{\rm 4c}\).
The full width of \(X_{\rm 4c}^{*}\) is evaluated using its kinematically allowed decays to \(J/\psi J/\psi\), \(J/\psi\psi^{\prime}\), \(\eta_{c}\eta_{c}\), \(\eta_{c}\eta_{c}(2S)\), \(\eta_{c}\chi_{c1}\), \(\chi_{c0}\chi_{c0}\), and \(\chi_{c1}\chi_{c1}\) mesons. We are also going to perform a similar analysis in the case of the molecule \(\mathcal{M}=\chi_{c1}\chi_{c1}\). We will compare the predictions for the parameters of \(X_{\rm 4c}^{*}\) and \(\mathcal{M}\) with the experimental data, and with each other, to make a decision about the nature of \(X(7300)\).

This article is organized in the following form: In Sec. II, we explore the excited tetraquark \(X_{\rm 4c}^{*}\) and compute its mass and full width. The same analysis for the molecule \(\mathcal{M}\) is carried out in Sec. III. In the last Section IV, we present our brief conclusions. The Appendix contains expressions of some of the correlation functions used in the present analysis.

## II Radially excited state \(X_{\rm 4c}^{*}\)

In this section, we explore the first radial excitation \(X_{\rm 4c}^{*}\) of the scalar tetraquark \(X_{\rm 4c}\) built of axial-vector diquarks. The mass and current coupling of this state are computed by means of the QCD two-point sum rule (SR) approach [22; 23]. To evaluate the partial widths of the kinematically allowed decay channels of \(X_{\rm 4c}^{*}\), we are going to employ the three-point sum rule method, which is necessary to find the strong couplings at the corresponding three-particle vertices.

### Mass \(m\) and coupling \(f\) of \(X_{\rm 4c}^{*}\)

The sum rules for the mass \(m\) and current coupling \(f\) of the tetraquark \(X_{\rm 4c}^{*}\) can be extracted from analysis of the correlation function \[\Pi(p)=i\int d^{4}xe^{ipx}\langle 0|\mathcal{T}\{J(x)J^{\dagger}(0)\}|0\rangle, \tag{1}\] where \(\mathcal{T}\) is the time-ordered product of two currents, and \(J(x)\) is the interpolating current for the states \(X_{\rm 4c}\) and \(X_{\rm 4c}^{*}\). We model \(X_{\rm 4c}\) and \(X_{\rm 4c}^{*}\) as tetraquarks built of the axial-vector diquark \(c^{T}C\gamma_{\mu}c\) and axial-vector antidiquark \(\overline{c}\gamma_{\mu}C\overline{c}^{T}\). Then, the interpolating current is determined by the expression \[J(x)=c_{a}^{T}(x)C\gamma_{\mu}c_{b}(x)\overline{c}_{a}(x)\gamma^{\mu}C\overline{c}_{b}^{T}(x), \tag{2}\] with \(a\) and \(b\) being color indices. In Eq. (2), \(c(x)\) is the \(c\)-quark field, and \(C\) is the charge conjugation matrix. The current \(J(x)\) describes the diquark-antidiquark states with spin-parities \(J^{\rm PC}=0^{++}\). The ground-level particle with this quark content and quantum numbers is the tetraquark \(X_{\rm 4c}\), which was investigated in our paper [19]. We computed its mass \(m_{0}\) and coupling \(f_{0}\) by employing the two-point SR approach. We took into account explicitly only the ground-state term and included all other contributions in a class of "higher resonances and continuum states". We refer to this standard treatment as the "ground-state+continuum" approximation. To derive the sum rules for \(m\) and \(f\), we express the correlation function \(\Pi(p)\) in terms of the masses and couplings of the \(X_{\rm 4c}\) and \(X_{\rm 4c}^{*}\) tetraquarks.
Having inserted a complete set of intermediate states with the same content and quantum numbers as these tetraquarks, and carried out the integration over \(x\), we get \[\Pi^{\rm Phys}(p)=\frac{\langle 0|J|X_{\rm 4c}(p)\rangle\langle X_{\rm 4c}(p)|J^{\dagger}|0\rangle}{m_{0}^{2}-p^{2}}\] \[+\frac{\langle 0|J|X_{\rm 4c}^{*}(p)\rangle\langle X_{\rm 4c}^{*}(p)|J^{\dagger}|0\rangle}{m^{2}-p^{2}}+\cdots\,. \tag{3}\] This expression contains two terms corresponding to the ground-state particle \(X_{\rm 4c}\) with the mass \(m_{0}\) and a contribution coming from the first radially excited state, i.e., from the \(2S\)-level tetraquark \(X_{\rm 4c}^{*}\). Here, the ellipses stand for the effects of higher resonances and continuum states. This approach is the "ground-level + first excited state + continuum" approximation. The function \(\Pi^{\rm Phys}(p)\) can be simplified using the matrix elements \[\langle 0|J|X_{\rm 4c}(p)\rangle=f_{0}m_{0},\ \langle 0|J|X_{\rm 4c}^{*}(p)\rangle=fm, \tag{4}\] where \(f_{0}\) and \(f\) are the current couplings of \(X_{\rm 4c}\) and \(X_{\rm 4c}^{*}\), respectively. Then, we get \[\Pi^{\rm Phys}(p)=\frac{f_{0}^{2}m_{0}^{2}}{m_{0}^{2}-p^{2}}+\frac{f^{2}m^{2}}{m^{2}-p^{2}}+\cdots\,. \tag{5}\] This function contains only the Lorentz structure proportional to I, hence the invariant amplitude \(\Pi^{\rm Phys}(p^{2})\) necessary for our analysis is defined by the rhs of Eq. (5).

The QCD side of the sum rules is formed by the correlation function \(\Pi(p)\) expressed using \(c\)-quark propagators and calculated in the operator product expansion (OPE) with some accuracy. In the case under discussion, \(\Pi^{\rm OPE}(p)\) and the corresponding amplitude \(\Pi^{\rm OPE}(p^{2})\) were computed in Ref. [19]. There, we also found the parameters \(m_{0}\) and \(f_{0}\) of the ground-state particle \(X_{4{\rm c}}\), which appear in the present analysis as input quantities. After the Borel transformation and continuum subtraction, the SR equality takes the form \[f^{2}m^{2}e^{-m^{2}/M^{2}}=\Pi(M^{2},s_{0})-f_{0}^{2}m_{0}^{2}e^{-m_{0}^{2}/M^{2}}, \tag{6}\] which, in conjunction with the derivative of Eq. (6) over \(d(-1/M^{2})\), can be utilized to find the sum rules for \(m\) and \(f\). Here, \(\Pi(M^{2},s_{0})\) is the amplitude \(\Pi^{\rm OPE}(p^{2})\) after the Borel transformation and subtraction operations, and \(M^{2}\) and \(s_{0}\) are the corresponding parameters. The function \(\Pi(M^{2},s_{0})\) is given by the formula \[\Pi(M^{2},s_{0})=\int_{16m_{c}^{2}}^{s_{0}}ds\rho^{\rm OPE}(s)e^{-s/M^{2}}, \tag{7}\] where \(\rho^{\rm OPE}(s)\) is a two-point spectral density. It consists of the perturbative contribution \(\rho^{\rm pert.}(s)\) and the dimension-4 nonperturbative term \(\sim\langle\alpha_{s}G^{2}/\pi\rangle\). The explicit expression of \(\rho^{\rm pert.}(s)\) can be found in Ref. [19]. To carry out numerical computations, one needs the gluon vacuum condensate \(\langle\alpha_{s}G^{2}/\pi\rangle=(0.012\pm 0.004)\) GeV\({}^{4}\) and the \(c\)-quark mass \(m_{c}=(1.27\pm 0.02)\) GeV. Another important problem to be clarified is the choice of the parameters \(M^{2}\) and \(s_{0}\). The regions in which they can be changed should meet known restrictions of SR computations. Stated differently, \(M^{2}\) and \(s_{0}\) have to be fixed in such a way as to ensure the dominance of the pole contribution (PC) and of the perturbative term over the nonperturbative one. Other important constraints are the convergence of the OPE and the stability of the extracted observables against variations of the Borel parameter \(M^{2}\).
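For completeness, the explicit two-term sum rules implied by Eq. (6) can be written out (a straightforward algebraic step, added here for the reader). Denoting \(\Pi_{1}(M^{2},s_{0})=\Pi(M^{2},s_{0})-f_{0}^{2}m_{0}^{2}e^{-m_{0}^{2}/M^{2}}\), the derivative of Eq. (6) over \(d(-1/M^{2})\) brings down a factor of \(m^{2}\) on the left-hand side, and the ratio of the two equalities gives \[m^{2}=\frac{d\Pi_{1}(M^{2},s_{0})/d(-1/M^{2})}{\Pi_{1}(M^{2},s_{0})},\qquad f^{2}=\frac{e^{m^{2}/M^{2}}}{m^{2}}\Pi_{1}(M^{2},s_{0}).\]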
Because \(\Pi(M^{2},s_{0})\) does not contain quark and mixed condensates, the dominance of PC and the stability of the extracted quantities play a key role in choosing the parameters \(M^{2}\) and \(s_{0}\). In the first phase of computations, we fix the regions for \(M^{2}\) and \(s_{0}\) in such a manner as to consider only the ground-state term in Eq. (3). This task was fulfilled in Ref. [19], where \(M^{2}\) and \(s_{0}\) were varied inside the regions \[M^{2}\in[5.5,7]~{}{\rm GeV}^{2},~{}s_{0}\in[49,50]~{}{\rm GeV}^{2}. \tag{8}\] As a result, we evaluated the mass \(m_{0}\) and coupling \(f_{0}\) of the ground-state tetraquark \(X_{4{\rm c}}\) \[m_{0} = (6570\pm 55)~{}{\rm MeV},\] \[f_{0} = (5.61\pm 0.39)\times 10^{-2}~{}{\rm GeV}^{4}. \tag{9}\] At the second stage of the studies, we use \(m_{0}\) and \(f_{0}\) in Eq. (6) as input parameters and calculate the mass \(m\) and coupling \(f\) of the excited state \[m = (7235\pm 75)~{}{\rm MeV},\] \[f = (8.0\pm 0.9)\times 10^{-2}~{}{\rm GeV}^{4}. \tag{10}\] To compute Eq. (10), we use the working regions \[M^{2}\in[5.5,7]~{}{\rm GeV}^{2},~{}s_{0}^{*}\in[55,56]~{}{\rm GeV}^{2}, \tag{11}\] which obey all constraints imposed on \(\Pi(M^{2},s_{0})\) by the SR analysis. In fact, the pole contribution changes inside the limits \(0.93\geq{\rm PC}\geq 0.71\), and at the minimum of \(M^{2}=5.5~{}{\rm GeV}^{2}\) the nonperturbative term is negative and constitutes only 1.4% of the correlation function. The extracted mass \(m\) and coupling \(f\) bear a residual dependence on the parameters \(M^{2}\) and \(s_{0}^{*}\), which is the main source of theoretical uncertainties. These effects are equal to \(\pm 1\%\) in the case of \(m\), and to \(\pm 11\%\) for \(f\), staying within limits acceptable for SR computations. The behavior of the mass \(m\) under variations of \(M^{2}\) and \(s_{0}^{*}\) is shown in Fig. 1.

Because we consider two terms in Eq. (5), and find the parameters of the ground-level and radially excited tetraquarks, it is necessary to check the self-consistency of the performed studies. Indeed, the parameters \(s_{0}\) and \(s_{0}^{*}\) separate the contributions of interest from the ones which are modeled using the assumption of quark-hadron duality. Therefore, in these studies the inequalities \(m_{0}^{2}<s_{0}\) and \(s_{0}<m^{2}<s_{0}^{*}\) should hold. With the results of the numerical analysis at hand, it is not difficult to verify these relations. The prediction for the mass \(m=7235\) MeV of the \(2S\) excited tetraquark \(X_{4{\rm c}}^{*}\), within the uncertainties of calculations and the errors of experiments, is consistent with the values \(m^{\rm ATL}=7220\pm 30^{+20}_{-30}\) MeV and \(m^{\rm CMS}=7287^{+20}_{-18}\pm 5~{}{\rm MeV}\), respectively. In our article [19] we supposed that the resonance \(X(7300)\) is the \(2S\) excited state of \(X(6600)\). This assumption is based on the fact that the ATLAS Collaboration detected the resonances \(X(6600)\) and \(X(7300)\) in the \(J/\psi J/\psi\) and \(J/\psi\psi^{\prime}\) mass distributions, respectively. Because the mass difference between the mesons \(\psi^{\prime}\) and \(J/\psi\) is around 590 MeV, and a comparable mass splitting \((600-735)\) MeV exists in the \(X(7300)-X(6600)\) system, it is natural to assume that \(X(7300)\) is an excitation of \(X(6600)\). Our results for the masses of \(X_{4{\rm c}}\) and \(X_{4{\rm c}}^{*}\) differ by 665 MeV and seem to support this scenario.
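The self-consistency inequalities and the mass splittings quoted in this subsection amount to simple arithmetic; the following minimal sketch (our illustration, not part of the original analysis) verifies them:

```python
# Consistency check of the windows in Eqs. (8) and (11) and the quoted masses.
m0, m = 6.570, 7.235              # masses of X(4c) and X*(4c), GeV
s0 = (49.0, 50.0)                 # s0 window, GeV^2, Eq. (8)
s0_star = (55.0, 56.0)            # s0* window, GeV^2, Eq. (11)

assert m0**2 < s0[0]              # m0^2 ~ 43.2 GeV^2 < 49 GeV^2
assert s0[1] < m**2 < s0_star[0]  # 50 < m^2 ~ 52.3 < 55 GeV^2

print(f"psi' - J/psi splitting  : {3686.10 - 3096.90:.1f} MeV")  # ~589 MeV
print(f"X*(4c) - X(4c) splitting: {(m - m0) * 1e3:.0f} MeV")     # 665 MeV
```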
### The full width of \(X_{4{\rm c}}^{*}\)

The mass \(m\) of the excited tetraquark \(X_{4{\rm c}}^{*}\) allows us to determine its decay channels and to evaluate the full width of this state. It is clear that decays to \(J/\psi J/\psi\), \(J/\psi\psi^{\prime}\), \(\eta_{c}\eta_{c}\), \(\eta_{c}\eta_{c}(2S)\), \(\eta_{c}\chi_{c1}\), \(\chi_{c0}\chi_{c0}\), and \(\chi_{c1}\chi_{c1}\) mesons are among such allowed channels. It is worth noting that the decay \(X_{4c}^{*}\to\eta_{c}\chi_{c1}\) is a \(P\)-wave process, whereas the remaining ones are \(S\)-wave decays. We are going to explain in detail only the processes \(X_{4c}^{*}\to J/\psi J/\psi\) and \(X_{4c}^{*}\to J/\psi\psi^{\prime}\), and provide final results for the other channels.

The partial widths of these decays are governed by the strong couplings \(g_{i}^{*}\) at the vertices \(X_{4c}^{*}J/\psi J/\psi\) and \(X_{4c}^{*}J/\psi\psi^{\prime}\). These couplings can be evaluated using the following three-point correlation function \[\Pi_{\mu\nu}(p,p^{\prime})=i^{2}\int d^{4}xd^{4}ye^{ip^{\prime}y}e^{-ipx}\langle 0|{\cal T}\{J_{\mu}^{\psi}(y)\] \[\times J_{\nu}^{\psi}(0)J^{\dagger}(x)\}|0\rangle, \tag{12}\] where \(J_{\mu}^{\psi}(x)\) is the interpolating current for the mesons \(J/\psi\) and \(\psi^{\prime}\) \[J_{\mu}^{\psi}(x)=\overline{c}_{i}(x)\gamma_{\mu}c_{i}(x), \tag{13}\] with \(i=1,2,3\) being the color indices. We apply the usual recipes of the sum rule method and express the correlation function \(\Pi_{\mu\nu}(p,p^{\prime})\) in terms of the physical parameters of the particles. Because the tetraquark \(X_{4c}^{*}\) decays both to \(J/\psi J/\psi\) and \(J/\psi\psi^{\prime}\) pairs, we isolate in \(\Pi_{\mu\nu}(p,p^{\prime})\) the contributions of the mesons \(J/\psi\) and \(\psi^{\prime}\) from those of higher resonances and continuum states. But the current \(J(x)\) also couples to the ground-state tetraquark \(X_{4c}\). Therefore, for the physical side of the sum rule \(\Pi_{\mu\nu}^{\rm Phys}(p,p^{\prime})\), we get \[\Pi_{\mu\nu}^{\rm Phys}(p,p^{\prime})=\sum_{I=1,2}\frac{\langle 0|J_{\mu}^{\psi}|J/\psi(p^{\prime})\rangle}{p^{\prime 2}-m_{J}^{2}}\frac{\langle 0|J_{\nu}^{\psi}|J/\psi(q)\rangle}{q^{2}-m_{J}^{2}}\] \[\times\langle J/\psi(p^{\prime})J/\psi(q)|X_{4c}^{I}(p)\rangle\frac{\langle X_{4c}^{I}(p)|J^{\dagger}|0\rangle}{p^{2}-m_{I}^{2}}\] \[+\sum_{I=1,2}\frac{\langle 0|J_{\mu}^{\psi}|\psi(p^{\prime})\rangle}{p^{\prime 2}-m_{\psi}^{2}}\frac{\langle 0|J_{\nu}^{\psi}|J/\psi(q)\rangle}{q^{2}-m_{J}^{2}}\] \[\times\langle\psi(p^{\prime})J/\psi(q)|X_{4c}^{I}(p)\rangle\frac{\langle X_{4c}^{I}(p)|J^{\dagger}|0\rangle}{p^{2}-m_{I}^{2}}+\cdots, \tag{14}\] where \(m_{J}=(3096.900\pm 0.006)\) MeV and \(m_{\psi}=(3686.10\pm 0.06)\) MeV are the masses of the \(J/\psi\) and \(\psi^{\prime}\) mesons [24]. To write down \(\Pi_{\mu\nu}^{\rm Phys}(p,p^{\prime})\) in a compact form, we use in Eq. (14) the notations \(X_{4c}^{1}=X_{4c}\), \(X_{4c}^{2}=X_{4c}^{*}\) and \(m_{1}^{2}=m_{0}^{2}\), \(m_{2}^{2}=m^{2}\). The function \(\Pi_{\mu\nu}^{\rm Phys}(p,p^{\prime})\) can be expressed in terms of the masses and decay constants (couplings) of the mesons and tetraquarks. To this end, one should use the matrix elements of the tetraquarks, Eq.
(4), as well as the matrix elements \[\langle 0|J_{\mu}^{\psi}|J/\psi(p)\rangle = f_{J}m_{J}\varepsilon_{\mu}(p),\] \[\langle 0|J_{\mu}^{\psi}|\psi^{\prime}(p)\rangle = f_{\psi}m_{\psi}\widetilde{\varepsilon}_{\mu}(p), \tag{15}\] and \[\langle J/\psi(p^{\prime})J/\psi(q)|X_{4c}(p)\rangle=g_{1}(q^{2})\left[q\cdot p^{\prime}\varepsilon^{*}(p^{\prime})\cdot\varepsilon^{*}(q)\right.\] \[\left.-q\cdot\varepsilon^{*}(p^{\prime})p^{\prime}\cdot\varepsilon^{*}(q)\right],\] \[\langle\psi(p^{\prime})J/\psi(q)|X_{4c}(p)\rangle=g_{2}(q^{2})\left[q\cdot p^{\prime}\widetilde{\varepsilon}^{*}(p^{\prime})\cdot\varepsilon^{*}(q)\right.\] \[\left.-q\cdot\widetilde{\varepsilon}^{*}(p^{\prime})p^{\prime}\cdot\varepsilon^{*}(q)\right]. \tag{16}\] Here, \(f_{J}=(409\pm 15)\) MeV, \(f_{\psi}=(279\pm 8)\) MeV and \(\varepsilon_{\mu}\), \(\widetilde{\varepsilon}_{\mu}\) are the decay constants and polarization vectors of the mesons \(J/\psi\) and \(\psi^{\prime}\) [24; 25], respectively.

Figure 1: Mass of the tetraquark \(X_{4{\rm c}}^{*}\) as a function of the Borel \(M^{2}\) (left), and the continuum threshold \(s_{0}^{*}\) parameters (right).

In the vertices with the excited tetraquark \(X_{4{\rm c}}^{*}(p)\) one should write the form factors \(g_{1}^{*}(q^{2})\) and \(g_{2}^{*}(q^{2})\). Having used these matrix elements and carried out simple calculations, we find for \(\Pi_{\mu\nu}^{\rm Phys}(p,p^{\prime})\) \[\Pi_{\mu\nu}^{\rm Phys}(p,p^{\prime})=g_{1}(q^{2})f_{0}m_{0}f_{J}^{2}m_{J}^{2}F_{\mu\nu}(m_{0},m_{J})\] \[+g_{1}^{*}(q^{2})fmf_{J}^{2}m_{J}^{2}F_{\mu\nu}(m,m_{J})\] \[+g_{2}(q^{2})f_{0}m_{0}f_{J}m_{J}f_{\psi}m_{\psi}F_{\mu\nu}(m_{0},m_{\psi})\] \[+g_{2}^{*}(q^{2})fmf_{J}m_{J}f_{\psi}m_{\psi}F_{\mu\nu}(m,m_{\psi})+\cdots, \tag{17}\] where \[F_{\mu\nu}(a,b)=\frac{\left[\left(a^{2}-b^{2}-q^{2}\right)g_{\mu\nu}-2q_{\mu}p_{\nu}^{\prime}\right]}{2\left(p^{2}-a^{2}\right)\left(p^{\prime 2}-b^{2}\right)\left(q^{2}-m_{J}^{2}\right)}. \tag{18}\] As is seen, there are two structures in \(\Pi_{\mu\nu}^{\rm Phys}(p,p^{\prime})\) which can be used for the SR analysis. To derive the sum rules for the form factors \(g_{i}^{(*)}(q^{2})\), we work with the Lorentz structure \(g_{\mu\nu}\) and the corresponding invariant amplitude \(\Pi^{\rm Phys}(p^{2},p^{\prime 2},q^{2})\). After the double Borel transformation of the function \(\Pi^{\rm Phys}(p^{2},p^{\prime 2},q^{2})\) over the variables \(-p^{2}\) and \(-p^{\prime 2}\), we get \[{\cal B}\Pi^{\rm Phys}(p^{2},p^{\prime 2},q^{2})=g_{1}(q^{2})f_{0}m_{0}f_{J}^{2}m_{J}^{2}F(m_{0},m_{J})\] \[+g_{1}^{*}(q^{2})fmf_{J}^{2}m_{J}^{2}F(m,m_{J})\] \[+g_{2}(q^{2})f_{0}m_{0}f_{J}m_{J}f_{\psi}m_{\psi}F(m_{0},m_{\psi})\] \[+g_{2}^{*}(q^{2})fmf_{J}m_{J}f_{\psi}m_{\psi}F(m,m_{\psi})+\cdots, \tag{19}\] with \(F(a,b)\) being equal to \[F(a,b)=\frac{\left(a^{2}-b^{2}-q^{2}\right)}{2(q^{2}-m_{J}^{2})}e^{-a^{2}/M_{1}^{2}}e^{-b^{2}/M_{2}^{2}}. \tag{20}\] The second component of the sum rules is the same correlation function \(\Pi_{\mu\nu}^{\rm OPE}(p,p^{\prime})\), but calculated using the \(c\)-quark propagators. The function \(\Pi_{\mu\nu}^{\rm OPE}(p,p^{\prime})\) and the invariant amplitude \(\Pi^{\rm OPE}(p^{2},p^{\prime 2},q^{2})\) were computed in Ref. [19].
Having equated \({\cal B}\Pi^{\rm Phys}(p^{2},p^{\prime 2},q^{2})\) and the double Borel transformation of the amplitude \(\Pi^{\rm OPE}(p^{2},p^{\prime 2},q^{2})\), and performed the continuum subtractions, we find the sum rule equality, the right-hand side of which is determined by the function \[\Pi({\bf M}^{2},{\bf s}_{0},q^{2})=\int_{16m_{c}^{2}}^{s_{0}}ds\int_{4m_{c}^{2}}^{s_{0}^{\prime}}ds^{\prime}\rho(s,s^{\prime},q^{2})\] \[\times e^{-s/M_{1}^{2}}e^{-s^{\prime}/M_{2}^{2}}, \tag{21}\] where \({\bf M}^{2}=(M_{1}^{2},M_{2}^{2})\) and \({\bf s}_{0}=(s_{0},s_{0}^{\prime})\) are the Borel and continuum threshold parameters, respectively. The spectral density \(\rho(s,s^{\prime},q^{2})\) is found as the imaginary part of \(\Pi^{\rm OPE}(p^{2},p^{\prime 2},q^{2})\). Let us note that the parameters \((M_{1}^{2},s_{0})\) and \((M_{2}^{2},s_{0}^{\prime})\) correspond to the \(X_{4{\rm c}}-X_{4{\rm c}}^{*}\) and \(J/\psi-\psi^{\prime}\) channels, respectively.

The equality Eq. (19) obtained in this way contains four unknown form factors \(g_{1(2)}^{(*)}(q^{2})\). One possible method to extract them from this equality is to calculate its derivatives over \(-1/M_{1}^{2}\) and \(-1/M_{2}^{2}\). But then the final expressions for \(g_{1(2)}^{(*)}(q^{2})\) become rather complicated, which may reduce the accuracy of numerical analyses. Here, we pursue an alternative strategy: by choosing appropriate subtraction parameters in the \(X_{4{\rm c}}-X_{4{\rm c}}^{*}\) and \(J/\psi-\psi^{\prime}\) channels, we include the terms from Eq. (19) in the analysis one by one. These operations change the number of components in \({\cal B}\Pi^{\rm Phys}\) and the integration limits in \(\Pi({\bf M}^{2},{\bf s}_{0},q^{2})\). At each new stage, we take into account the results obtained in the previous steps, and solve subsequent equations with only one unknown form factor.

First of all, let us note that the form factor \(g_{1}(q^{2})\) was evaluated in Ref. [19]. It corresponds to the vertex \(X_{4{\rm c}}J/\psi J/\psi\) and is necessary to compute the partial width of the decay \(X_{4{\rm c}}\to J/\psi J/\psi\). To calculate \(g_{1}(q^{2})\), we fixed the parameters \((M_{1}^{2},s_{0})\) as in Eq. (8), whereas for \((M_{2}^{2},s_{0}^{\prime})\) we used \[M_{2}^{2}\in[4,5]~{}{\rm GeV}^{2},\,s_{0}^{\prime}\in[12,13]~{}{\rm GeV}^{2}, \tag{22}\] where \(s_{0}^{\prime}\) is limited by the squared mass \(m_{\psi}^{2}\) of the next state in the \(J/\psi-\psi^{\prime}\) channel, i.e., \(s_{0}^{\prime}<m_{\psi}^{2}\). Afterwards, we choose \((M_{1}^{2},s_{0})\) in accordance with Eq. (11), but do not modify \((M_{2}^{2},s_{0}^{\prime})\). In this way, we include \(g_{1}^{*}(q^{2})\) into consideration and obtain the equation containing \(g_{1}(q^{2})\) and \(g_{1}^{*}(q^{2})\). This means that the remaining terms in Eq. (19) are included in "higher resonances and continuum states" and their effects are implicitly taken into account in \(\Pi({\bf M}^{2},{\bf s}_{0},q^{2})\) through the quark-hadron duality. Then, using the results for \(g_{1}(q^{2})\), we calculate the form factor \(g_{1}^{*}(q^{2})\) that determines the width of the process \(X_{4{\rm c}}^{*}\to J/\psi J/\psi\). At the next stage of the studies, we consider the equation for the form factors \(g_{1}(q^{2})\) and \(g_{2}(q^{2})\). The latter corresponds to the vertex \(X_{4{\rm c}}J/\psi\psi^{\prime}\), and formally describes the channel \(X_{4{\rm c}}\to J/\psi\psi^{\prime}\).
This decay mode of \(X_{4{\rm c}}\) is kinematically forbidden, because the threshold 6783 MeV for production of the \(J/\psi\psi^{\prime}\) pair exceeds the mass of the tetraquark \(X_{4{\rm c}}\). But \(g_{2}(q^{2})\) is required to determine the form factor \(g_{2}^{*}(q^{2})\) of interest. To extract \(g_{2}(q^{2})\), we fix \((M_{1}^{2},s_{0})\) by means of Eq. (8), but choose \((M_{2}^{2},s_{0}^{\prime\prime})\) in the form \[M_{2}^{2}\in[4,5]~{}{\rm GeV}^{2},~{}s_{0}^{\prime\prime}\in[15,16]~{}{\rm GeV}^{2}, \tag{23}\] where \(s_{0}^{\prime\prime}<m_{\psi(3S)}^{2}\). Finally, using Eqs. (11) and (23) for the \(X_{4{\rm c}}-X_{4{\rm c}}^{*}\) and \(J/\psi-\psi^{\prime}\) channels, we calculate the last form factor \(g_{2}^{*}(q^{2})\).

The SR method allows one to calculate the form factors in the deep-Euclidean region \(q^{2}<0\). All functions \(g_{i}^{(*)}(q^{2})\) in the present work are calculated in the region \(q^{2}=-(1-10)~{}{\rm GeV}^{2}\). But the partial widths of the decays under consideration are determined by the values of these form factors at the mass shell \(q^{2}=m_{J}^{2}\). To solve this problem, we introduce a new variable \(Q^{2}=-q^{2}\) and denote the obtained functions by \(g_{i}^{(*)}(Q^{2})\). One can then employ fit functions which, at momenta \(Q^{2}>0\), are equal to the SR's results, but can be extrapolated to the domain of \(Q^{2}<0\). In the present article, we use the functions \({\cal G}_{i}(Q^{2})\) \[{\cal G}_{i}(Q^{2})={\cal G}_{i}^{0}\mbox{exp}\left[c_{i}^{1}\frac{Q^{2}}{m^{2}}+c_{i}^{2}\left(\frac{Q^{2}}{m^{2}}\right)^{2}\right], \tag{24}\] with the parameters \({\cal G}_{i}^{0}\), \(c_{i}^{1}\) and \(c_{i}^{2}\). It is worth noting that in the case of \(g_{1}^{*}(q^{2})\) and \(g_{2}^{*}(q^{2})\) the parameter \(m\) in Eq. (24) is the mass of the tetraquark \(X_{4c}^{*}\), whereas for the intermediate functions \(g_{1}(q^{2})\) and \(g_{2}(q^{2})\), we use the mass \(m_{0}\) of \(X_{4c}\). The results obtained for \(g_{1}^{*}(q^{2})\) and \(g_{2}^{*}(q^{2})\) are plotted in Fig. 2. Computations demonstrate that \({\cal G}_{1}^{0*}=0.68\) GeV\({}^{-1}\), \(c_{1}^{1*}=3.93\), and \(c_{1}^{2*}=-4.33\) lead to good agreement with the sum rule data for \(g_{1}^{*}(Q^{2})\). At the mass shell \(q^{2}=m_{J}^{2}\) the function \({\cal G}_{1}^{*}(Q^{2})\) is equal to \[g_{1}^{*}\equiv{\cal G}_{1}^{*}(-m_{J}^{2})=(3.1\pm 0.5)\times 10^{-1}\mbox{ GeV}^{-1}. \tag{25}\] The width of the decay \(X_{4c}^{*}\to J/\psi J/\psi\) can be obtained by employing the expression \[\Gamma\left[X_{4c}^{*}\to J/\psi J/\psi\right]=g_{1}^{*2}\frac{\lambda_{1}}{8\pi}\left(\frac{m_{J}^{4}}{m^{2}}+\frac{2\lambda_{1}^{2}}{3}\right), \tag{26}\] where \(\lambda_{1}=\lambda(m,m_{J},m_{J})\) and \[\lambda(m_{1},m_{2},m_{3})=\frac{\left[m_{1}^{4}+m_{2}^{4}+m_{3}^{4}-2\left(m_{1}^{2}m_{2}^{2}+m_{1}^{2}m_{3}^{2}+m_{2}^{2}m_{3}^{2}\right)\right]^{1/2}}{2m_{1}}. \tag{27}\] Then it is not difficult to find that \[\Gamma\left[X_{4c}^{*}\to J/\psi J/\psi\right]=(30.1\pm 8.3)\mbox{ MeV}. \tag{28}\] In the case of \(g_{2}^{*}(Q^{2})\), similar investigations give the following results for the parameters of the function \({\cal G}_{2}^{*}(Q^{2})\): \({\cal G}_{2}^{0*}=0.54\) GeV\({}^{-1}\), \(c_{2}^{1*}=3.28\), and \(c_{2}^{2*}=-4.26\). The strong coupling \(g_{2}^{*}\) is equal to \[g_{2}^{*}\equiv{\cal G}_{2}^{*}(-m_{J}^{2})=(2.5\pm 0.5)\times 10^{-1}\mbox{ GeV}^{-1}. \tag{29}\]
The partial width of the process \(X_{4c}^{*}\to J/\psi\psi^{\prime}\) is given by the formula \[\Gamma\left[X_{4c}^{*}\to J/\psi\psi^{\prime}\right]=g_{2}^{*2}\frac{\lambda_{2}}{8\pi}\left(\frac{m_{\psi}^{2}m_{J}^{2}}{m^{2}}+\frac{2\lambda_{2}^{2}}{3}\right), \tag{30}\] where \(\lambda_{2}=\lambda(m,m_{\psi},m_{J})\). This leads to the prediction \[\Gamma\left[X_{4c}^{*}\to J/\psi\psi^{\prime}\right]=(11.5\pm 3.3)\mbox{ MeV}. \tag{31}\] The results obtained for these two decay channels are collected in Table 1.

The decays \(X_{4c}^{*}\to\eta_{c}\eta_{c}\) and \(X_{4c}^{*}\to\eta_{c}\eta_{c}(2S)\) can be explored in the context of this scheme as well. In this case, the double Borel transformation of the amplitude \(\Pi_{\eta_{c}}^{\rm Phys}(p^{2},p^{\prime 2},q^{2})\) is equal to \[{\cal B}\Pi_{\eta_{c}}^{\rm Phys}(p^{2},p^{\prime 2},q^{2})=g_{3}(q^{2})\frac{f_{0}m_{0}f_{\eta_{c}}^{2}m_{\eta_{c}}^{4}}{4m_{c}^{2}}R(m_{0},m_{\eta_{c}})\] \[+g_{3}^{*}(q^{2})\frac{fmf_{\eta_{c}}^{2}m_{\eta_{c}}^{4}}{4m_{c}^{2}}R(m,m_{\eta_{c}})+g_{4}(q^{2})\frac{f_{0}m_{0}f_{\eta_{c}}m_{\eta_{c}}^{2}}{4m_{c}^{2}}\] \[\times f_{\eta_{c}(2S)}m_{\eta_{c}(2S)}^{2}R\left(m_{0},m_{\eta_{c}(2S)}\right)+g_{4}^{*}(q^{2})\frac{fmf_{\eta_{c}}m_{\eta_{c}}^{2}}{4m_{c}^{2}}\] \[\times f_{\eta_{c}(2S)}m_{\eta_{c}(2S)}^{2}R(m,m_{\eta_{c}(2S)})+\cdots, \tag{32}\] where \(m_{\eta_{c}}=(2983.9\pm 0.4)\) MeV, \(f_{\eta_{c}}=(398.1\pm 1.0)\) MeV and \(m_{\eta_{c}(2S)}=(3637.5\pm 1.1)\) MeV, \(f_{\eta_{c}(2S)}=331\) MeV are the spectroscopic parameters of the \(\eta_{c}\) and \(\eta_{c}(2S)\) mesons [24; 26]. The function \(R(a,b)\) is defined by the formula \[R(a,b)=\frac{\left(a^{2}+b^{2}-q^{2}\right)}{2(q^{2}-m_{\eta_{c}}^{2})}e^{-a^{2}/M_{1}^{2}}e^{-b^{2}/M_{2}^{2}}. \tag{33}\] The invariant amplitude \(\Pi_{\eta_{c}}^{\rm OPE}(p^{2},p^{\prime 2},q^{2})\) was calculated in our article [19]. Here, one should take into account that the regions \((M_{2}^{2},s_{0}^{\prime})\) and \((M_{2}^{2},s_{0}^{\prime\prime})\) for the \(\eta_{c}-\eta_{c}(2S)\) channel are given by the expressions \[M_{2}^{2}\in[3.5,4.5]\mbox{ GeV}^{2},\ s_{0}^{\prime}\in[11,12]\mbox{ GeV}^{2}, \tag{34}\] and \[M_{2}^{2}\in[3.5,4.5]\mbox{ GeV}^{2},\ s_{0}^{\prime\prime}\in[13,14]\mbox{ GeV}^{2}, \tag{35}\] respectively. In the case of \(g_{3}^{*}(Q^{2})\), our studies lead to the following predictions for the parameters of the function \({\cal G}_{3}^{*}(Q^{2})\): \({\cal G}_{3}^{0*}=0.39\mbox{ GeV}^{-1}\), \(c_{3}^{1*}=4.01\), and \(c_{3}^{2*}=-4.99\). Then the coupling \(g_{3}^{*}\) is equal to \[g_{3}^{*}\equiv{\cal G}_{3}^{*}(-m_{\eta_{c}}^{2})=(1.7\pm 0.4)\times 10^{-1}\mbox{ GeV}^{-1}. \tag{36}\]

Figure 2: The QCD results and fit functions for the form factors \(g_{1}^{*}(Q^{2})\) (dashed curve) and \(g_{2}^{*}(Q^{2})\) (solid curve). The red diamond and green star denote the point \(Q^{2}=-m_{J}^{2}\), where the strong couplings \(g_{1}^{*}\) and \(g_{2}^{*}\) are evaluated.

The width of the decay \(X_{4c}^{*}\to\eta_{c}\eta_{c}\) can be found by means of the formula \[\Gamma\left[X_{4c}^{*}\to\eta_{c}\eta_{c}\right]=g_{3}^{*2}\frac{m_{\eta_{c}}^{2}\lambda_{3}}{8\pi}\left(1+\frac{\lambda_{3}^{2}}{m_{\eta_{c}}^{2}}\right), \tag{37}\] where \(\lambda_{3}=\lambda(m,m_{\eta_{c}},m_{\eta_{c}})\). Numerical computations yield \[\Gamma\left[X_{4c}^{*}\to\eta_{c}\eta_{c}\right]=(30.6\pm 10.5)\ \mathrm{MeV}. \tag{38}\]
For the second decay \(X_{4c}^{*}\to\eta_{c}\eta_{c}(2S)\), we get \[g_{4}^{*}\equiv\mathcal{G}_{4}^{*}(-m_{\eta_{c}}^{2})=(1.4\pm 0.3)\times 10^{-1}\ \mathrm{GeV}^{-1},\] \[\Gamma\left[X_{4c}^{*}\to\eta_{c}\eta_{c}(2S)\right]=(16.6\pm 5.5)\ \mathrm{MeV}, \tag{39}\] where \(\mathcal{G}_{4}^{*}(Q^{2})\) is the function with the parameters \(\mathcal{G}_{4}^{0*}=0.32\ \mathrm{GeV}^{-1}\), \(c_{4}^{1*}=4.06\), and \(c_{4}^{2*}=-5.02\).

Treatment of the channels \(X_{4c}^{*}\to\eta_{c}\chi_{c1}\), \(\chi_{c0}\chi_{c0}\), and \(\chi_{c1}\chi_{c1}\) is done by taking into account the vertices of the tetraquarks \(X_{4c}\) and \(X_{4c}^{*}\) with these meson pairs. Therefore, the physical side of the sum rules consists of two terms. In the case of the \(\eta_{c}\chi_{c1}\) mesons, both the ground-level tetraquark \(X_{4c}\) and its excited state \(X_{4c}^{*}\) decay to this meson pair. Therefore, to find the partial decay width of the process \(X_{4c}^{*}\to\eta_{c}\chi_{c1}\), we use the form factor \(g_{5}(q^{2})\) studied in Ref. [19], and extract \(g_{5}^{*}(q^{2})\), necessary to compute the coupling \(g_{5}^{*}\) at the mass shell \(q^{2}=m_{\eta_{c}}^{2}\). The corresponding fit function \(\mathcal{G}_{5}^{*}(Q^{2})\) has the parameters: \(\mathcal{G}_{5}^{0*}=3.46\), \(c_{5}^{1*}=3.59\), and \(c_{5}^{2*}=-4.72\). The remaining processes \(X_{4c}^{*}\to\chi_{c0}\chi_{c0}\) and \(\chi_{c1}\chi_{c1}\) are investigated in the same manner, the difference being that the decays of \(X_{4c}\) to the mesons \(\chi_{c0}\chi_{c0}\) and \(\chi_{c1}\chi_{c1}\) are not kinematically allowed channels, but we compute the relevant form factors to find the strong couplings \(g_{6}^{*}\) and \(g_{7}^{*}\) of interest. The related correlation functions are calculated in the present work for the first time and are given by the expressions (A.1) and (A.2). The final results of the analysis are collected in Table 1. Let us note only that in numerical computations, we employ the SR predictions for the decay constants \(f_{\chi_{c1}}=(344\pm 27)\) MeV and \(f_{\chi_{c0}}=343\) MeV [27; 28].

Having used the results for the partial widths of the excited \(X_{4c}^{*}\) tetraquark's decay channels, we estimate its full width \[\Gamma=(144\pm 18)\ \mathrm{MeV}. \tag{40}\]

## III Hadronic Molecule \(\chi_{c1}\chi_{c1}\)

Here, we investigate the hadronic molecule \(\mathcal{M}=\chi_{c1}\chi_{c1}\) and calculate the mass and current coupling of this structure, which will be used to determine its kinematically allowed decay channels. Decays of the molecule \(\mathcal{M}\) and its full width are also studied in this section.

### Mass and current coupling

The sum rules for the mass \(\widetilde{m}\) and current coupling \(\widetilde{f}\) of the molecule \(\mathcal{M}\) can be extracted by exploring the correlation function \[\Pi(p)=i\int d^{4}xe^{ipx}\langle 0|\mathcal{T}\{\widetilde{J}(x)\widetilde{J}^{\dagger}(0)\}|0\rangle. \tag{41}\] Here, \(\widetilde{J}(x)\) is the interpolating current for \(\mathcal{M}\) \[\widetilde{J}(x)=\overline{c}_{a}(x)\gamma_{5}\gamma_{\mu}c_{a}(x)\overline{c}_{b}(x)\gamma_{5}\gamma^{\mu}c_{b}(x), \tag{42}\] with \(a\) and \(b\) being the color indices. We are going to calculate the spectroscopic parameters of the ground-level molecule \(\mathcal{M}\); therefore, the physical side of the SRs is given by only one term \[\Pi^{\rm Phys}(p)=\frac{\widetilde{f}^{2}\widetilde{m}^{2}}{\widetilde{m}^{2}-p^{2}}+\cdots. \tag{43}\]
It is calculated by taking into account the matrix element \[\langle 0|\widetilde{J}|{\cal M}\rangle=\widetilde{f}\widetilde{m}. \tag{44}\] The invariant amplitude that is required for the following analysis is \(\Pi^{\rm Phys}(p^{2})=\widetilde{f}^{2}\widetilde{m}^{2}/(\widetilde{m}^{2}-p^{2})\). The correlation function \(\Pi^{\rm OPE}(p)\) in terms of the \(c\)-quark propagators is determined by Eq. (45) \[\Pi^{\rm OPE}(p)=i\int d^{4}xe^{ipx}\left\{{\rm Tr}\left[\gamma_{5}\gamma_{\mu}S_{c}^{ba^{\prime}}(x)\gamma_{\nu}\gamma_{5}S_{c}^{a^{\prime}b}(-x)\right]\right.\] \[\times{\rm Tr}\left[\gamma_{5}\gamma^{\mu}S_{c}^{ab^{\prime}}(x)\gamma^{\nu}\gamma_{5}S_{c}^{b^{\prime}a}(-x)\right]-{\rm Tr}\left[\gamma_{5}\gamma_{\mu}S_{c}^{bb^{\prime}}(x)\gamma_{\nu}\right.\] \[\times\gamma_{5}S_{c}^{b^{\prime}a}(x)\gamma_{5}\gamma^{\mu}S_{c}^{aa^{\prime}}(x)\gamma^{\nu}\gamma_{5}S_{c}^{a^{\prime}b}(-x)\right]-{\rm Tr}\left[\gamma_{5}\gamma_{\mu}\right.\] \[\left.\times S_{c}^{ba^{\prime}}(x)\gamma_{\nu}\gamma_{5}S_{c}^{a^{\prime}a}(-x)\gamma_{5}\gamma^{\mu}S_{c}^{ab^{\prime}}(x)\gamma^{\nu}\gamma_{5}S_{c}^{b^{\prime}b}(-x)\right]\] \[+{\rm Tr}\left[\gamma_{5}\gamma_{\mu}S_{c}^{bb^{\prime}}(x)\gamma_{\nu}\gamma_{5}S_{c}^{b^{\prime}b}(-x)\right]{\rm Tr}\left[\gamma_{5}\gamma^{\mu}S_{c}^{aa^{\prime}}(x)\gamma^{\nu}\right.\] \[\left.\times\gamma_{5}S_{c}^{a^{\prime}a}(-x)\right]\right\}. \tag{45}\] It is convenient to denote the invariant amplitude of the QCD side by \(\Pi^{\rm OPE}(p^{2})\). Then, the sum rules for the mass and current coupling take the simple forms \[\widetilde{m}^{2}=\frac{\Pi^{\prime}(M^{2},s_{0})}{\Pi(M^{2},s_{0})} \tag{46}\] and \[\widetilde{f}^{2}=\frac{e^{\widetilde{m}^{2}/M^{2}}}{\widetilde{m}^{2}}\Pi(M^{2},s_{0}), \tag{47}\] where \(\Pi^{\prime}(M^{2},s_{0})=d\Pi(M^{2},s_{0})/d(-1/M^{2})\). Here, \(\Pi(M^{2},s_{0})\) is the amplitude \(\Pi^{\rm OPE}(p^{2})\) obtained after the Borel transformation and continuum subtraction operations. Computations lead to the following constraints on the parameters \(M^{2}\) and \(s_{0}\) \[M^{2}\in[6,8]\;{\rm GeV}^{2},\ s_{0}\in[63,65]\;{\rm GeV}^{2}. \tag{48}\] It is not difficult to check that PC meets the usual requirements of SR computations. In Fig. 3, we plot the dependence of the pole contribution on the Borel parameter. It is seen that, except for a small region, PC is larger than 0.5. On average in \(s_{0}\), the PC exceeds 0.5 for all values of \(M^{2}\). The mass and current coupling of the molecule \({\cal M}\) are \[\widetilde{m} = (7180\pm 120)\;{\rm MeV},\] \[\widetilde{f} = (1.06\pm 0.13)\times 10^{-1}\;{\rm GeV}^{4}, \tag{49}\] respectively. It is worth noting that \(\widetilde{m}\) and \(\widetilde{f}\) in Eq. (49) are mean values of the mass and current coupling averaged over the working regions (48). The mass \(\widetilde{m}\) overshoots the 7022 MeV mass of two \(\chi_{c1}\) mesons by 160 MeV, so the molecule is unstable against decays to these particles. In Fig. 4, we plot the mass \(\widetilde{m}\) as a function of \(M^{2}\) and \(s_{0}\), in which its residual dependence on these parameters is clear. It is also useful to estimate the gap between the ground-state \({\cal M}\) and the excited molecule \({\cal M}^{*}\). The mass \(\widetilde{m}^{*}\) of the state \({\cal M}^{*}\) should obey the constraint \(\widetilde{m}^{*}\geq\sqrt{s_{0}}\), i.e., \(\widetilde{m}^{*}\geq 8\;{\rm GeV}\), which implies an approximately 800 MeV mass splitting between these molecules.
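To make the extraction in Eqs. (46) and (47) concrete, the following sketch runs the Borel-ratio machinery on a hypothetical smooth spectral density (our toy choice; the actual \(\rho^{\rm OPE}(s)\), with its perturbative and gluon-condensate pieces, is given in Ref. [19]). The printed numbers therefore only illustrate how the method works and are not expected to reproduce the quoted \(\widetilde{m}\):

```python
import numpy as np
from scipy.integrate import quad

mc = 1.27                  # c-quark mass, GeV
s_th = 16 * mc**2          # four-charm threshold, ~25.8 GeV^2
s0 = 64.0                  # continuum threshold, middle of the window in Eq. (48)

def rho(s):                # hypothetical spectral density (illustration only)
    return (s - s_th) ** 3

def Pi(M2):                # Borel-transformed, subtracted amplitude, cf. Eq. (7)
    return quad(lambda s: rho(s) * np.exp(-s / M2), s_th, s0)[0]

def Pi_prime(M2):          # d Pi / d(-1/M^2) inserts a factor of s in the integrand
    return quad(lambda s: s * rho(s) * np.exp(-s / M2), s_th, s0)[0]

for M2 in (6.0, 7.0, 8.0):                     # Borel window of Eq. (48)
    m_tilde = np.sqrt(Pi_prime(M2) / Pi(M2))   # Eq. (46)
    print(f"M2 = {M2:.0f} GeV^2 -> m_tilde = {m_tilde:.2f} GeV")
```

The weak drift of the output across the Borel window mirrors the residual \(M^{2}\) dependence discussed above.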
### Width of \({\cal M}\)

Decay channels of the hadronic molecule \({\cal M}\) do not differ from those of the tetraquark \(X_{4c}^{*}\). A difference appears in the treatment of these processes. Indeed, the molecule \({\cal M}\) is the ground-state particle in its class; therefore, the physical side of the relevant sum rules in the \({\cal M}\) channel contains terms connected only with its decays. Because the resonances under investigation were detected in the di-\(J/\psi\) and \(J/\psi\psi^{\prime}\) mass distributions, we concentrate on the decays \({\cal M}\to J/\psi J/\psi\) and \({\cal M}\to J/\psi\psi^{\prime}\). The correlation function required for this analysis is given by the formula \[\widetilde{\Pi}_{\mu\nu}(p,p^{\prime}) = i^{2}\int d^{4}xd^{4}ye^{ip^{\prime}y}e^{-ipx}\langle 0|{\cal T}\{J_{\mu}^{\psi}(y) \tag{50}\] \[\times J_{\nu}^{\psi}(0)\widetilde{J}^{\dagger}(x)\}|0\rangle.\]

Figure 3: Dependence of PC on the Borel parameter \(M^{2}\). The horizontal line shows the border \({\rm PC}=0.5\). The red triangle fixes the position where the mass of the molecule \(\chi_{c1}\chi_{c1}\) has been evaluated.

As usual, we express \(\widetilde{\Pi}_{\mu\nu}(p,p^{\prime})\) in terms of the physical parameters of the particles involved in the decay process. To this end, we write it in the following form \[\widetilde{\Pi}^{\rm Phys}_{\mu\nu}(p,p^{\prime})=\frac{\langle 0|J^{\psi}_{\mu}|J/\psi(p^{\prime})\rangle}{{p^{\prime}}^{2}-m_{J}^{2}}\frac{\langle 0|J^{\psi}_{\nu}|J/\psi(q)\rangle}{q^{2}-m_{J}^{2}}\] \[\times\langle J/\psi(p^{\prime})J/\psi(q)|{\cal M}(p)\rangle\frac{\langle{\cal M}(p)|\widetilde{J}^{\dagger}|0\rangle}{p^{2}-\widetilde{m}^{2}}\] \[+\frac{\langle 0|J^{\psi}_{\mu}|\psi(p^{\prime})\rangle}{{p^{\prime}}^{2}-m_{\psi}^{2}}\frac{\langle 0|J^{\psi}_{\nu}|J/\psi(q)\rangle}{q^{2}-m_{J}^{2}}\] \[\times\langle\psi(p^{\prime})J/\psi(q)|{\cal M}(p)\rangle\frac{\langle{\cal M}(p)|\widetilde{J}^{\dagger}|0\rangle}{p^{2}-\widetilde{m}^{2}}+\cdots. \tag{51}\] We have already defined the matrix elements of the hadronic molecule \({\cal M}\) and of the mesons \(J/\psi\) and \(\psi^{\prime}\). The vertices \({\cal M}J/\psi J/\psi\) and \({\cal M}J/\psi\psi^{\prime}\) after some substitutions are given by Eq. (16). As in the previous section, we use the amplitude \(\widetilde{\Pi}^{\rm Phys}(p^{2},p^{\prime 2},q^{2})\) which in \(\widetilde{\Pi}^{\rm Phys}_{\mu\nu}(p,p^{\prime})\) corresponds to a term proportional to \(g_{\mu\nu}\). The double Borel transformation of the function \(\widetilde{\Pi}^{\rm Phys}(p^{2},p^{\prime 2},q^{2})\) over the variables \(-p^{2}\) and \(-p^{\prime 2}\) is equal to \[{\cal B}\widetilde{\Pi}^{\rm Phys}(p^{2},p^{\prime 2},q^{2})=G_{1}(q^{2})\widetilde{f}\widetilde{m}f_{J}^{2}m_{J}^{2}F(\widetilde{m},m_{J})\] \[+G_{1}^{*}(q^{2})\widetilde{f}\widetilde{m}f_{J}m_{J}f_{\psi}m_{\psi}F(\widetilde{m},m_{\psi})+\cdots. \tag{52}\] The correlation function \(\widetilde{\Pi}^{\rm OPE}_{\mu\nu}(p,p^{\prime})\) is given by the formula \[\widetilde{\Pi}^{\rm OPE}_{\mu\nu}(p,p^{\prime})=2i^{2}\int d^{4}xd^{4}ye^{-ipx}e^{ip^{\prime}y}\left\{{\rm Tr}\left[\gamma_{\nu}S^{ib}_{c}(-x)\right.\right.\] \[\left.\times\gamma^{\alpha}\gamma_{5}S^{bj}_{c}(x)\right]{\rm Tr}\left[\gamma_{\mu}S^{ia}_{c}(y-x)\gamma_{\alpha}\gamma_{5}S^{ai}_{c}(x-y)\right]\] \[-{\rm Tr}\left[\gamma_{\mu}S^{ia}_{c}(y-x)\gamma_{\alpha}\gamma_{5}S^{aj}_{c}(x)\gamma_{\nu}S^{ib}_{c}(-x)\gamma^{\alpha}\right.\] \[\left.\left.\times\gamma_{5}S^{bi}_{c}(x-y)\right]\right\}. \tag{53}\]
The QCD side of the sum rule and the amplitude \(\widetilde{\Pi}^{\rm OPE}(p^{2},p^{\prime 2},q^{2})\) are extracted from this expression. The strategy pursued in our study of these processes repeats the one used in Sec. II when considering the decays of the tetraquark \(X^{*}_{4c}\). We first determine the form factor \(G_{1}(q^{2})\) utilizing the "ground-state + continuum" scheme. The parameters \((M_{1}^{2},s_{0})\) are universal for all decays of \({\cal M}\) and are presented in Eq. (48). The second pair of parameters \((M_{2}^{2},s_{0}^{\prime})\), corresponding to the \(J/\psi J/\psi\) decay, can be found in Eq. (22). Having determined \(G_{1}(q^{2})\), at the second stage of computations we choose \((M_{2}^{2},s_{0}^{\prime\prime})\) from Eq. (23) and employ the information on \(G_{1}(q^{2})\) to find the form factor \(G_{1}^{*}(q^{2})\), responsible for the process \({\cal M}\to J/\psi\psi^{\prime}\). The functions \({\cal G}_{8}(Q^{2})\) and \({\cal G}_{8}^{*}(Q^{2})\) are formed by the parameters \[{\cal G}_{8}^{0} = 0.76\ {\rm GeV}^{-1},\ c_{8}^{1}=3.32,\ c_{8}^{2}=-4.19,\] \[{\cal G}_{8}^{0*} = 0.68\ {\rm GeV}^{-1},\ c_{8}^{1*}=3.20,\ c_{8}^{2*}=-4.11. \tag{54}\] The strong couplings \(G_{1}\) and \(G_{1}^{*}\) are extracted from these functions at the mass shell \(Q^{2}=-m_{J}^{2}\).

This approach is also valid for the channels \({\cal M}\to\eta_{c}\eta_{c}\) and \({\cal M}\to\eta_{c}\eta_{c}(2S)\). The correlation function required for these decays is written down below \[\Pi^{\rm OPE}(p,p^{\prime})=2\int d^{4}xd^{4}ye^{-ipx}e^{ip^{\prime}y}\left\{{\rm Tr}\left[\gamma_{5}S^{ia}_{c}(y-x)\right.\right.\] \[\left.\times\gamma_{\alpha}\gamma_{5}S^{ai}_{c}(x-y)\right]{\rm Tr}\left[\gamma_{5}S^{ib}_{c}(-x)\gamma_{\alpha}\gamma_{5}S^{bj}_{c}(x)\right]\] \[-{\rm Tr}\left[\gamma_{5}S^{ia}_{c}(y-x)\gamma_{\alpha}\gamma_{5}S^{aj}_{c}(x)\gamma_{5}S^{ib}_{c}(-x)\gamma^{\alpha}\right.\] \[\left.\gamma_{5}S^{bi}_{c}(x-y)\right]\right\}. \tag{55}\] The functions \({\cal G}_{9}(Q^{2})\) and \({\cal G}_{9}^{*}(Q^{2})\) needed to extrapolate the form factors \(G_{2}(q^{2})\) and \(G_{2}^{*}(q^{2})\) are determined by the parameters: \({\cal G}_{9}^{0}=0.46\ {\rm GeV}^{-1}\), \(c_{9}^{1}=3.93\), \(c_{9}^{2}=-4.83\) and \({\cal G}_{9}^{0*}=0.30\ {\rm GeV}^{-1}\), \(c_{9}^{1*}=3.90\), \(c_{9}^{2*}=-4.81\). These functions at the mass shell \(Q^{2}=-m_{\eta_{c}}^{2}\) fix the couplings \(G_{2}\) and \(G_{2}^{*}\), respectively.

The decays \({\cal M}\to\eta_{c}\chi_{c1}\), \(\chi_{c0}\chi_{c0}\), and \(\chi_{c1}\chi_{c1}\) are investigated directly in the context of the "ground-state + continuum" approach. The corresponding functions \(\Pi^{\rm OPE}_{\mu}(p,p^{\prime})\), \(\Pi^{\rm OPE}(p,p^{\prime})\) and \(\widehat{\Pi}^{\rm OPE}_{\mu\nu}(p,p^{\prime})\) can be found in the Appendix as Eqs. (A.3)-(A.5). Predictions obtained for the partial widths of the different modes of the hadronic molecule \({\cal M}\), the strong couplings, and related parameters are presented in Table 1. It should be noted that, to collect the results obtained in this work in the framework of a single Table, the couplings \(G_{1}^{*}\), \(G_{2}\) and \(G_{2}^{*}\) are placed there under the labels \(G_{2}^{*}\), \(G_{3}\) and \(G_{4}^{*}\), respectively. For the full width of the hadronic molecule, we get \[\widetilde{\Gamma}=(169\pm 21)\ {\rm MeV}, \tag{56}\] which characterizes it as a wide structure.

## IV Summing up

In the present work, we have explored the radially excited tetraquark \(X_{4c}^{*}\) and the hadronic molecule \({\cal M}=\chi_{c1}\chi_{c1}\).
We have computed their masses and full widths using the QCD sum rule method, intending to confront the obtained results with the available data of the ATLAS-CMS Collaborations for the heaviest resonance \(X(7300)\). The LHCb Collaboration fixed this state at 7.2 GeV, but did not provide other information. The CMS Collaboration measured the parameters of this resonance and found that \[m^{\rm CMS} = 7287^{+20}_{-18}\pm 5\ {\rm MeV},\] \[\Gamma^{\rm CMS} = 95^{+59}_{-40}\pm 19\ {\rm MeV}. \tag{57}\] The ATLAS Collaboration observed \(X(7300)\) in the \(J/\psi\psi^{\prime}\) mass distribution and also reported the mass and width of this state \[m^{\rm ATL} = 7220\pm 30^{+20}_{-30}\ {\rm MeV},\] \[\Gamma^{\rm ATL} = 100^{+130+60}_{-70-50}\ {\rm MeV}. \tag{58}\] As is seen, the experimental data suffer from large errors, which are relatively small in the case of Eq. (57). Comparing our findings \(m=(7235\pm 75)\ {\rm MeV}\) and \(\widetilde{m}=(7180\pm 120)\ {\rm MeV}\) with the corresponding experimental data, and taking into account the errors of calculations and measurements, we conclude that the masses of the excited tetraquark \(X_{4c}^{*}\) and of the hadronic molecule \({\cal M}\) are compatible with \(m^{\rm CMS}\) and \(m^{\rm ATL}\). Using only the central values of \(m\) and \(\widetilde{m}\), we can note the closeness of \(m\) and \(\widetilde{m}\) to the ATLAS datum. Therefore, at this phase of the analysis, it is difficult to make an assignment for the resonance \(X(7300)\).

The full widths of the structures \(X_{4c}^{*}\) and \({\cal M}\) provide very important information for this purpose. It is interesting that \(X(7300)\) is the narrowest fully charmed state detected by the ATLAS and CMS experiments, provided one ignores the errors of measurements. Among the four-quark structures explored in the present work, the tetraquark \(X_{4c}^{*}\) has a width close to the experimental data. Therefore, we consider it as a natural candidate for the observed state \(X(7300)\). The molecule \({\cal M}\), due to large theoretical and experimental uncertainties, may also be interpreted as the resonance \(X(7300)\), or as some part of this state in a tetraquark-molecule mixing model.

It is known that in the framework of the sum rule method physical observables can be evaluated only with some accuracy. At the same time, this method allows us to estimate the uncertainties of the relevant analysis. The ambiguities fixed for the masses and widths of the structures \(X_{4c}^{*}\) and \({\cal M}\) are typical for this kind of investigation, and can hardly be reduced. Therefore, for more credible conclusions about the nature of \(X(7300)\), one needs more precise experimental data. This is true not only for \(X(7300)\), but also for other fully charmed \(X\) resonances.

## Appendix A Different correlation functions

Here, we collect expressions of the correlation functions which are employed to calculate some of the strong couplings.
In the case of the decay \(X_{4c}^{*}\to\chi_{c0}\chi_{c0}\) the correlation function \(\Pi^{\rm OPE}(p,p^{\prime})\) is given by the formula \[\Pi^{\rm OPE}(p,p^{\prime})=2i^{2}\int d^{4}xd^{4}ye^{-ipx}e^{ip^{\prime}y}\left\{{\rm Tr}\left[S_{c}^{ia}(y-x)\gamma_{\mu}\widetilde{S}_{c}^{jb}(-x)\widetilde{S}_{c}^{bj}(x)\gamma^{\mu}S_{c}^{ai}(x-y)\right]\right.\] \[\left.-{\rm Tr}\left[S_{c}^{ia}(y-x)\gamma_{\mu}\widetilde{S}_{c}^{jb}(-x)\widetilde{S}_{c}^{aj}(x)\gamma^{\mu}S_{c}^{bi}(x-y)\right]\right\}.\] (A.1) The fit function \({\cal G}_{6}^{*}(Q^{2})\) used to find the strong coupling \(g_{6}^{*}\) is fixed by the parameters \({\cal G}_{6}^{0*}=0.51\ {\rm GeV}^{-1}\), \(c_{6}^{1*}=3.11\), and \(c_{6}^{2*}=-3.57\).

For the decay \(X_{4c}^{*}\to\chi_{c1}\chi_{c1}\) the function \(\Pi^{\rm OPE}_{\mu\nu}(p,p^{\prime})\) has the following form: \[\Pi^{\rm OPE}_{\mu\nu}(p,p^{\prime})=2i^{2}\int d^{4}xd^{4}ye^{-ipx}e^{ip^{\prime}y}\left\{{\rm Tr}\left[\gamma_{\mu}\gamma_{5}S_{c}^{ia}(y-x)\gamma_{\alpha}\widetilde{S}_{c}^{jb}(-x)\gamma_{5}\gamma_{\nu}\widetilde{S}_{c}^{aj}(x)\gamma^{\alpha}S_{c}^{bi}(x-y)\right]\right.\] \[\left.-{\rm Tr}\left[\gamma_{\mu}\gamma_{5}S_{c}^{ia}(y-x)\gamma_{\alpha}\widetilde{S}_{c}^{jb}(-x)\gamma_{5}\gamma_{\nu}\widetilde{S}_{c}^{bj}(x)\gamma^{\alpha}S_{c}^{ai}(x-y)\right]\right\}.\] (A.2) In this case, the function \({\cal G}_{7}^{*}(Q^{2})\) has the parameters: \({\cal G}_{7}^{0*}=0.74\) GeV\({}^{-1}\), \(c_{7}^{1*}=2.48\), and \(c_{7}^{2*}=-3.01\).

The correlation functions for the decays of the hadronic molecule \({\cal M}\), and the fit functions used to calculate the relevant strong couplings, are as follows.

Decay \({\cal M}\to\eta_{c}\chi_{c1}\): \[\Pi_{\mu}^{\rm OPE}(p,p^{\prime})=2i^{3}\int d^{4}xd^{4}ye^{-ipx}e^{ip^{\prime}y}\left\{{\rm Tr}\left[\gamma_{\mu}\gamma_{5}S_{c}^{ia}(y-x)\gamma_{\alpha}\gamma_{5}S_{c}^{ai}(x-y)\right]{\rm Tr}\left[\gamma_{5}S_{c}^{jb}(-x)\gamma^{\alpha}\gamma_{5}S_{c}^{bj}(x)\right]\right.\] \[\left.-{\rm Tr}\left[\gamma_{\mu}\gamma_{5}S_{c}^{ia}(y-x)\gamma_{\alpha}\gamma_{5}S_{c}^{aj}(x)\gamma_{5}S_{c}^{jb}(-x)\gamma^{\alpha}\gamma_{5}S_{c}^{bi}(x-y)\right]\right\},\] (A.3) and the fit function \({\cal G}_{10}(Q^{2})\) for \(G_{5}(Q^{2})\): \({\cal G}_{10}^{0}=3.85\), \(c_{10}^{1}=3.51\), and \(c_{10}^{2}=-4.56\).

Decay \({\cal M}\to\chi_{c0}\chi_{c0}\): \[\Pi^{\rm OPE}(p,p^{\prime})=-2i^{2}\int d^{4}xd^{4}ye^{-ipx}e^{ip^{\prime}y}{\rm Tr}\left[S_{c}^{ia}(y-x)\gamma_{\alpha}\gamma_{5}S_{c}^{aj}(x)S_{c}^{jb}(-x)\gamma^{\alpha}\gamma_{5}S_{c}^{bi}(x-y)\right],\] (A.4) and \(G_{6}(Q^{2})\): \({\cal G}_{11}^{0}=0.55\) GeV\({}^{-1}\), \(c_{11}^{1}=3.06\), and \(c_{11}^{2}=-3.46\).

Decay \({\cal M}\to\chi_{c1}\chi_{c1}\): \[\widehat{\Pi}_{\mu\nu}^{\rm OPE}(p,p^{\prime}) = 2i^{2}\int d^{4}xd^{4}ye^{-ipx}e^{ip^{\prime}y}\left\{{\rm Tr}\left[\gamma_{\mu}\gamma_{5}S_{c}^{ia}(y-x)\gamma_{\alpha}\gamma_{5}S_{c}^{ai}(x-y)\right]{\rm Tr}\left[\gamma_{\nu}\gamma_{5}S_{c}^{jb}(-x)\gamma^{\alpha}\gamma_{5}S_{c}^{bj}(x)\right]\right.\] \[\left.-{\rm Tr}\left[\gamma_{\mu}\gamma_{5}S_{c}^{ia}(y-x)\gamma_{\alpha}\gamma_{5}S_{c}^{aj}(x)\gamma_{\nu}\gamma_{5}S_{c}^{jb}(-x)\gamma^{\alpha}\gamma_{5}S_{c}^{bi}(x-y)\right]\right\},\] (A.5) and the parameters \({\cal G}_{12}^{0}=0.86\) GeV\({}^{-1}\), \(c_{12}^{1}=2.41\), and \(c_{12}^{2}=-2.89\) to compute \(G_{7}(Q^{2})\).
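As a closing numerical cross-check (our own arithmetic using central values quoted in the text, not a result of the paper), the two-body width formulas of Sec. II, Eqs. (26), (30) and (37), together with \(\lambda\) from Eq. (27), reproduce the reported partial widths of \(X_{\rm 4c}^{*}\) to within rounding of the couplings:

```python
import math

def lam(m1, m2, m3):       # Eq. (27), result in GeV
    return math.sqrt(m1**4 + m2**4 + m3**4
                     - 2*(m1**2*m2**2 + m1**2*m3**2 + m2**2*m3**2)) / (2*m1)

m, mJ, mpsi, meta = 7.235, 3.0969, 3.6861, 2.9839   # masses, GeV
g1, g2, g3 = 0.31, 0.25, 0.17                       # couplings, GeV^-1, Eqs. (25), (29), (36)

l1, l2, l3 = lam(m, mJ, mJ), lam(m, mpsi, mJ), lam(m, meta, meta)

w1 = g1**2 * l1/(8*math.pi) * (mJ**4/m**2 + 2*l1**2/3)              # Eq. (26)
w2 = g2**2 * l2/(8*math.pi) * (mpsi**2*mJ**2/m**2 + 2*l2**2/3)      # Eq. (30)
w3 = g3**2 * meta**2 * l3/(8*math.pi) * (1 + l3**2/meta**2)         # Eq. (37)

for name, w in [("J/psi J/psi", w1), ("J/psi psi'", w2), ("eta_c eta_c", w3)]:
    print(f"Gamma[X* -> {name}] ~ {w*1e3:.0f} MeV")
# prints ~29, ~11 and ~31 MeV, vs the quoted (30.1 +- 8.3), (11.5 +- 3.3)
# and (30.6 +- 10.5) MeV
```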
2303.14007
'Team-in-the-loop': Ostrom's IAD framework 'rules in use' to map and measure contextual impacts of AI
This article explores how the 'rules in use' from Ostrom's Institutional Analysis and Development Framework (IAD) can be developed as a context analysis approach for AI. AI risk assessment frameworks increasingly highlight the need to understand existing contexts. However, these approaches do not frequently connect with established institutional analysis scholarship. We outline a novel direction illustrated through a high-level example to understand how clinical oversight is potentially impacted by AI. Much current thinking regarding oversight for AI revolves around the idea of decision makers being in-the-loop and, thus, having capacity to intervene to prevent harm. However, our analysis finds that oversight is complex, frequently made by teams of professionals and relies upon explanation to elicit information. Professional bodies and liability also function as institutions of polycentric oversight. These are all impacted by the challenge of oversight of AI systems. The approach outlined has potential utility as a policy tool of context analysis aligned with the 'Govern and Map' functions of the National Institute of Standards and Technology (NIST) AI Risk Management Framework; however, further empirical research is needed. Our analysis illustrates the benefit of existing institutional analysis approaches in foregrounding team structures within oversight and, thus, in conceptions of 'human in the loop'.
Deborah Morgan, Youmna Hashem, John Francis, Saba Esnaashari, Vincent J. Straub, Jonathan Bright
2023-03-24T14:01:00Z
http://arxiv.org/abs/2303.14007v2
# 'Team-in-the-loop' - organisational oversight of high-stakes AI

###### Abstract

Oversight is rightly recognised as vital within high-stakes public sector AI applications, where decisions can have profound individual and collective impacts. Much current thinking regarding forms of oversight mechanisms for AI within the public sector revolves around the idea of human decision makers being 'in-the-loop' and thus being able to intervene to prevent errors and potential harm. However, in a number of high-stakes public sector contexts, operational oversight of decisions is made by expert teams rather than individuals. The ways in which deployed AI systems can be integrated into these existing operational team oversight processes have yet to attract much attention. We address this gap by exploring the impacts of AI upon pre-existing oversight of clinical decision-making through institutional analysis. We find that existing oversight is nested within professional training requirements and relies heavily upon explanation and questioning to elicit vital information. Professional bodies and liability mechanisms also act as additional levers of oversight. These dimensions of oversight are impacted, and potentially reconfigured, by AI systems. We therefore suggest a broader lens of 'team in the loop' to conceptualise the system-level analysis required for adoption of AI within high-stakes public sector deployment.

Keywords: Artificial Intelligence, machine learning, human-in-the-loop, oversight, healthcare, Institutional Analysis and Development Framework (IAD), decision-making

## 1 Introduction

Artificial Intelligence (AI) systems in public sector decision-making have immense potential to transform public services through enhanced predictive modelling, early detection, and the use of 'real-time transactions data' [68]. The distinct context of public sector deployment of AI necessitates reliable operational procedures, clear epistemic criteria, and alignment with normative expectations of society and the context of deployment [104]. The need for explainable, reliable and transparent systems is particularly vital within the context of 'high-stakes' public sector decisions such as medical diagnosis, service allocation or educational assessment and admissions [95, 16, 103], where mistakes can result in serious harm towards individuals [96]. The transparent use of AI systems in all contexts encompasses both process transparency, to justify the process underpinning the design and implementation, and outcome transparency, to clearly explain and justify the outcome [62]. High-stakes operational decisions made in the public sector are frequently overseen and reviewed by teams of domain experts, often drawing upon specific 'norms' of practice. Examples of such processes include the additional questions and review which an experienced medical consultant may make of fellow clinicians, or a social care team meeting to review cases before consequential recommendations are agreed and implemented. Such oversight processes represent a 'normative expectation' [104] developed to support safe deployment and to engender public trust. These oversight processes may be highly contextual and have developed gradually, potentially as responses to previous oversight failures, societal change, and scientific progress.
We broadly define human oversight by applying an adapted version of Dignum's definition of human control as 'the inclusion of a means to ensure shared awareness of the situation, such that the person making the decisions has sufficient information at the time she must intervene' [31]. Establishing the boundaries of such 'shared awareness' and defining 'sufficient information' or an 'appropriate level of confidence' are significant challenges within all forms of oversight and are often highly context specific. The need for oversight of operational decision-making is enhanced by the technological progress of AI and machine learning (ML) methods, which are increasingly emerging, or proposed, as an element within the decision-making process in many public sector contexts. Much of the current thinking around AI oversight thus far has, necessarily, been structured within broader discussions of the responsible development of AI systems and adequate mechanisms of institutional accountability and oversight (see planning for 'accountability by design' [62]). We are focused here upon oversight of AI systems within high-risk contexts which operate within pre-existing structures of human oversight. The developing concept of 'human in the loop' is defined as 'human judgement at the moment an algorithm renders a specific prediction or decision' [41]. This definition accords with that of 'human control' in that the ability and willingness of humans to exercise this 'human judgement' is aligned to the extent of shared awareness of a situation. The importance of time is also clear within these two definitions, as a human must be able to intervene at a specific crucial point or 'moment' within the decision-making process to constrain or prevent a specific action. If a system augments or supports decision-making, this point in time may relate to the retrospective ability to review or assess prior inputs, data, and the results of prior testing. Forms of 'human in the loop' oversight of automated systems have been explored within a variety of contexts. These include defining the boundaries of 'meaningful control' within defence [97], the scope and implementation of restrictions upon automated decision-making within Article 22 of the General Data Protection Regulation (GDPR) [8], and current debates regarding mechanisms of oversight of AI systems [41, 90]. While this work is undoubtedly useful, one gap thus far is the extent to which it engages with the idea of operational oversight as a team activity, rather than something that can be carried out by an individual. Existing oversight within high-stakes public sector environments is often nested within well-established structures of 'downstream' oversight of decisions by domain experts. The United States National Institute of Standards and Technology (NIST) highlighted the risks of simply relying upon such 'downstream professionals' to act as 'governors' of decisions made by automated systems [100]. In this paper, we seek to help fill this gap. Our research question is therefore: _how do AI systems impact team oversight of high-stakes decisions in the public sector?_ We focus on clinical settings, as this is the area of the public sector which has seen perhaps the most extensive development of AI systems to support decision-making. Our focus is upon how AI systems impact existing team oversight of high-stakes decisions made by 'downstream professionals'.
We conceptualise oversight as an 'interaction setting' [60] and use the Institutional Analysis and Development (IAD) Framework, developed by Elinor Ostrom [81], to understand and analyse the many dimensions of these interactions between clinical professionals. Our paper is structured in the following way. We begin with summaries of work examining the dimensions of oversight of AI within healthcare and theories of hierarchical decision-making and oversight. We then move on to undertake an institutional analysis of oversight within clinical decision-making through application of the 'rules in use' of the IAD framework. We problematise and explore the potential impacts of AI upon the rules identified. We conclude with a discussion of the implications of this work and suggest a broader lens of 'team in the loop' as an appropriate way to conceptualise this system-level analysis.

## 2 Related works: Delegated decision-making and AI oversight

High-stakes decisions made within the public sector are frequently made and overseen within an institutional team, which functions in accordance with both formal and informal rules of behaviour (e.g., customs, traditions, and shared moral beliefs). Here, we consider the effect of AI systems on both but focus primarily on the former, as the presence of formalised rules (i.e., contracts, legal codes of conduct, governance structures, etc.) characterises the unique nature and accountability of the public sector [54]. Perhaps the most dominant institution in this respect, and the one we focus on in the rest of this paper, is a system which relies upon a form of authority, that is, a 'hierarchical structure'. More specifically, this can be defined as involving the enforcement of rules, conformity, and attention to technical matters [30]. We focus upon decisions at the operational, or micro, level of public service delivery within the healthcare domain, and particularly on 'high-stakes' contexts. We follow Sambasivan et al. [96] in defining 'high-stakes' domains as those with safety impacts to living beings. Examples of such decisions made and overseen by delegated structures within healthcare include initial diagnosis and subsequent modifications, or suspensions, of treatment plans. The boundary is necessarily fuzzy, as potentially all aspects of patient treatment and welfare involving medical and administrative staff present potential safety impacts to patients. However, we focus upon decisions relating to diagnosis or treatment by teams of clinicians, as these more clearly encompass direct patient safety impacts. Within both commercial and public sector hierarchies, delegated decision-making is a well-established norm of practice. Delegation of decision-making provides benefits in utilisation of time, allowing senior managers or professionals to deliver a greater volume of outcomes [46], especially in dynamic environments [89]. High-stakes operational decisions frequently involve layers of delegation and oversight. As Simon explores, the reasons for existing hierarchical structures of delegation, or of 'vertical specialisation', include the need for coordination amongst team members, and the integration of professional expertise to permit 'the operative personnel to be held accountable for their decisions' [102]. This analysis examines hierarchical teams of clinical professionals operating in the same specialism.
Increasingly, medical decisions are discussed, and treatment agreed, within multidisciplinary teams (MDTs); however, the decision-making structures, efficacy, and levels of input from different clinical professionals within these teams can vary [34]. We also focus upon hierarchical teams because existing liability structures attribute liability to clinical position holders rather than to team decision-making bodies [94, 101]. We should highlight that our focus is upon hierarchical clinical oversight systems as representative of the dominant pre-existing team structure, rather than as an optimum oversight process or team dynamic. Diverse fields have explored the various dimensions of hierarchy-based processes and their potentially significant limitations in producing optimal societal [10, 73, 107] or business outcomes [21]. Within a healthcare context, specific concerns have been raised regarding the flow of information, for example the impact of 'authority gradient' upon junior doctors speaking out [27, 87]. Nevertheless, they remain a dominant paradigm, and understanding the impact of AI systems on this type of oversight structure is necessary for analysis of future professional practice within the public sector. Rapid increases in computing power, available data, and improvements in algorithmic approaches have advanced AI technologies from 'an academic endeavour to a daily fixture of our lives' [3, 13]. We adopt a definition of AI systems as 'narrow AI', which refers to systems that perform at levels equal to, or beyond, human performance, on predetermined tasks, and with abilities limited to such tasks [3]. Such systems may use a combination of machine learning (ML), data, and automation to undertake a variety of tasks. We acknowledge the increasing capabilities of generative models and their potential within healthcare to enhance human-AI collaboration and to handle multimodal data [117]. However, the use of such systems within healthcare may also present significant challenges, particularly within high-stakes domains, including the reliability of outputs, bias, robustness, and transparency [119, 45, 9]. Within the clinical context, advances have been predominantly within the field of machine learning, defined as systems which perform tasks through a 'learning process', relying upon often large volumes of historical data to make decisions and recommendations. Examples of such systems within decision-making include medical imaging classification within haematology to assign identities to cell types [92], retrieval of visually similar images for diagnosis support with new patients using deep learning and computer vision [18], and risk prediction models to identify high-risk patients [23, 38]. The _Software and AI as a Medical Device Change Programme - Roadmap_ published by the Medicines and Healthcare products Regulatory Agency [71] outlines UK workstreams exploring approaches to address the challenges of adaptivity and software change management. The specific project to consider AI adaptivity is referenced as 'Project Ship of Theseus' [71], recognising the potential of the technology to completely change all the component parts of the health system. Adaptive systems, which improve accuracy through incorporating learning from real-world use and data, challenge the 'traditional paradigm' of current medical device regulation, which may require premarket review for such learning-driven software changes [48].
Such 'traditional' paradigms have provided clinical professionals with mechanisms of reliability for clinical use, through regulatory approvals and clinical trial data. Processes and theories of human oversight, or of 'meaningful' control, in relation to automated processes originated within safety-critical systems approaches and analysis of Human Factors within accident management [17, 98]. More recent advances in digital technologies have necessarily generated literature examining the question of oversight, particularly within the context of accountability [63] and the use of 'reviewability' frameworks [24]. As explored, we define human in the loop as 'human judgement at the moment an algorithm renders a specific prediction or decision' [41]. The challenges of control systems dependent upon human intervention or oversight have been explored within autopilot and driver assistance control systems. In such contexts, the human provides a fail-safe mechanism in the event of a failure or malfunction incident [65]. However, as recent work by the Law Commission highlights, the challenges of intermittent and rarely used oversight, in the context of requirements for automated vehicles, may necessitate new forms of legal accountability and licensing structures [120]. Within healthcare, definitions of human in the loop and algorithmic audits have been explored and mapped in detail by Liu et al. [66]. They highlight the need for shared responsibility and assessment of deployed systems between developers, healthcare decision makers, and users. We introduce the term 'team in the loop' to reflect both the operational reality of much existing clinical oversight and the need for diverse 'teams' with specific expertise to conduct algorithmic audits and to oversee AI systems in use within high-stakes domains [55, 90, 66, 50]. Emerging guidance within clinical fields that are beginning to deploy algorithms operationally is developing and, as can be seen from the field of radiology, beginning to outline standards for integration into established clinical systems (see the Royal College of Radiologists guidance [121]). Reliance upon AI systems is highlighted within such guidance, in which it is noted that, 'Each time an AI package is adopted, it must be accompanied by a clinically specific document that describes what the specificity and sensitivity means in the context of the particular pathology'. The methodologies and forms of such documentation are not yet standardised, and recent qualitative survey work has found concerns amongst radiologists in Australia surrounding medico-legal, ethical, diversity and privacy issues, but no significant concerns surrounding AI efficiency and potential clinical use [29]. Such findings support the review undertaken by Smith [103], who found multiple wider clinical concerns regarding opacity, accountability and responsibility, and liability within the creation and use of AI within clinical decision-making. Oversight of automated decision-making systems within data protection law has also recognised the need for responsible review and assessment of decision outputs, as seen within Article 22 of the GDPR. This provision enshrines the right not to be subject to a decision solely based on automated processing. However, in practice such provisions may be limited in the scope and depth of the protection provided, for example the ambiguity within the definition regarding 'solely based on' [111].
As proposals for formal AI regulation have begun to be instantiated, the boundaries, form, and definitions of human in the loop oversight of AI systems are being developed and debated within emerging governance proposals (see Article 14 of the proposed European Union Artificial Intelligence Act [116]). As noted, we define human oversight by applying an adapted version of Dignum's definition of human control as 'the inclusion of a means to ensure shared awareness of the situation, such that the person making the decisions has sufficient information at the time she must intervene' [31]. The emphasis upon shared awareness is particularly useful in acknowledging that relevant, yet hidden, information from different stages of the ecosystem of AI development (e.g., the origin and extent of the dataset [86]) is of relevance in establishing the level of 'sufficiency' required to intervene. Questions of transparency and cross-disciplinary understanding may all, therefore, restrict the ability of a designated human within the loop of deployment to establish a level of sufficiency of information necessary to intervene or to raise questions. As noted within the NIST report, 'without significant procedural and cultural support optimistic expectations about how humans can serve in this administrative capacity are not borne out in practice' [100]. The extent of such procedural and cultural support within the clinical workstream is unclear, and we aim to understand existing systems and potential impacts through institutional analysis.

## 3 Methodology

Building on the gap in the literature identified above, the aim of this paper is to understand how AI systems impact team oversight of high-stakes decisions in the public sector. To address this question, an institutional analysis approach is applied using the Institutional Analysis and Development (IAD) framework developed by Ostrom [82] to examine clinical oversight. We make use of this approach to understand the structure of the 'human-interaction situation' [84] of hierarchical team oversight of high-stakes decisions, to consider how AI systems may impact these clinical processes. We apply one element of the IAD, the 'rules in use' [82], as an analytical tool to understand the structure of the interaction situation within which clinical teams operate, and within which decisions are overseen. We then apply the 'rules in use' to analysis of AI impacts upon these processes. The intention is to understand how the existing 'rules' of oversight, which have evolved over time and derive from a range of institutions, are potentially impacted. In this methodology section, we describe this approach in more detail and how we employ it in this article. We also comment on the scope of our analysis and the implications of focusing on a clinical context.

### Institutional Analysis and Development Framework and the 'rules in use'

Elinor Ostrom developed the IAD Framework to analyse and understand institutions, defined broadly as 'the prescriptions humans use to organise all forms of repetitive and structured interactions' [82]. We apply the IAD framework to the analysis of clinical oversight as representative of a structured 'interaction situation' [84] between professional team members.
The focal point of analysis of the IAD framework consists of an action arena, comprising actors within action situations affected by external variables [82, 22]; the action arena is therefore the core of the framework, where actors engage with one another, establishing patterns of interaction to create outcomes. The IAD framework is recognised as a leading framework within policy analysis [25] and has been adopted by a broad range of fields to evaluate the impacts of institutional arrangements and interactions in achieving outcomes. The framework has been adapted and developed to analyse complex social structures including healthcare provision [78], governance processes [15], and decision-making [5]. It provides a 'multi-level taxonomy of the underlying components of the [institutional] situations human actors face' [82]. Within the IAD, actors enter positions with various capacities, preferences, information, and strategies, all of which are to varying degrees shaped by existing contextual conditions, the attributes of the community in which they live, and the 'rules' [82]. The 'rules in use' of a situation condition the social interactions which occur within the action arena [26]. As Ostrom explained, 'rules-in-use are the set of rules to which participants make reference if asked to explain and justify their actions to fellow participants. They are the 'dos and don'ts' that one learns on the ground that may not exist in any written document' [83]. The rules of oversight are the focus of this analysis as they structure complex 'interaction situations' within which participants act to make and oversee clinical decisions. The IAD framework classifies rules into seven distinct categories [58, 80, 82]. Definitions of these rules used in the analysis below are adapted from Ostrom's work exploring them in detail, _Understanding Institutional Diversity_ [82]. We aim to understand and identify the existing rules of clinical oversight to consider how these may then be impacted by deployment of clinical AI systems. In developing the framework, Ostrom highlighted its applicability to the analysis of different settings, including team structures [79], and it has been subsequently applied to analysis of software developer teams [108] and digital workers [88]. Isolating each distinct 'rule' of oversight can foreground dimensions which may have causal properties; for example, particular norms may be localised to positions or be highly contextualised to specific actions.

### Clinical teams and oversight in the UK healthcare system

We situate this work primarily within England and Wales and refer to the institutions within and linked to healthcare provision throughout as the National Health Service (NHS). The NHS represents a large national public sector operational and policy environment for analysis. Policy-making bodies and adjacent regulators are also developing relevant guidance and workstreams to integrate AI systems within the NHS, which provide valuable contextual literature for the analysis. While we have chosen healthcare as our application area due to its advanced use of AI and burgeoning grey literature on its application, we believe our findings are of relevance to a variety of other types of public sector decision-making that may also often involve hierarchical teams taking high-stakes front-line decisions, such as defence, social work, and education.
Our focus is also upon medical doctors operating within a hospital or clinical environment within England and Wales as representative of professional clinical experts. We recognise the multiplicity of healthcare professionals involved within healthcare provision; however, doctors were selected as an object of study primarily due to existing liability attribution structures within healthcare, which allocate a duty of reasonable skilled care to clinicians [122]. Within the UK, medical doctors develop as medical students through degree-level training to the level of consultant, GP (general practitioner) or SAS (staff grade, associate specialist, and speciality) doctor [12]. All doctors in practice must follow the General Medical Council (GMC) 'good medical practice standards of professional behaviour', hold a registration with a licence to practise [37], and are subject to disciplinary assessment in the event of a complaint by the GMC and Medical Practitioners Tribunal Service (MPTS). The delegated team relationships that we explore are one element within complex systems and overlapping alternate team structures operating at multiple polycentric levels of decision-making [19]. Multidisciplinary teams (MDTs) are groups 'of health and care staff who are members of different organisations and professions (e.g., GPs, social workers, nurses), that work together to make decisions regarding the treatment of individual patients and service users' [118]. We have focused upon single-discipline teams of doctors primarily due to existing liability mechanisms. However, the influence of MDTs and the role of data professionals within these is explored within our analysis in Section 5, as such teams may be reconfigured by the expanded use of AI systems. To understand and explain the 'rules in use' of the interaction situation of clinical oversight and potential AI impacts, we draw upon existing scholarship and a range of sources, including grey literature exploring AI impacts and perceptions of healthcare workers. Recent work undertaken by Health Education England (HEE) describes the challenge for clinical users in using AI to make 'context-dependent value judgements and continuously ascertain the appropriate level of confidence in AI-derived information, balancing AI-derived information against conventional clinical information' [49]. As explored, ascertaining such 'appropriate levels of confidence' and the boundaries of 'sufficient information' for clinical oversight is the focus for our analysis. Rules are defined as 'generally agreed-upon and enforced prescriptions that require, forbid or permit specific actions for more than a single individual' [99]. Cole highlights the significance of formal legal rules in structuring a situation [26], and many of the illustrative examples we use in the institutional analysis would fall within this category; however, we also define 'rules in use' to incorporate standards, guidelines, and norms of professional practice which structure and impact delegated oversight processes. Examples of the type and origin of the rules we consider include legislation, medical case law, regulatory codes and guidance, medical curricula, professional standards, and tribunal decisions. The IAD rules may have utility as a potential future design mechanism to isolate and analyse impacts of the deployment of AI systems in high-stakes operational contexts. We hypothesise that they could support more granular discussion and assessment of impacts at the operational level.
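The paper treats the 'rules in use' purely as an analytical lens and does not propose any software. Still, as a purely illustrative sketch of how the two-stage mapping (rules first, AI impacts second) might be operationalised for such granular assessment, the following hypothetical Python structure encodes the seven rule categories; all class and field names are our own assumptions, not anything specified in the text.

```python
from dataclasses import dataclass, field

# The seven IAD rule categories applied in the analysis.
RULE_TYPES = ("position", "boundary", "choice", "aggregation",
              "information", "payoff", "scope")

@dataclass
class RuleInUse:
    rule_type: str        # one of RULE_TYPES
    description: str      # the prescription as observed 'on the ground'
    source: str           # e.g. legislation, GMC guidance, professional norm
    ai_impact: str = ""   # hypothesised impact of an AI system, if any

@dataclass
class ActionSituation:
    name: str
    rules: list = field(default_factory=list)

    def impacted_rules(self):
        """Return the rules for which an AI impact has been recorded."""
        return [r for r in self.rules if r.ai_impact]

# Example usage: record one information rule of clinical oversight.
oversight = ActionSituation("hierarchical clinical oversight")
oversight.rules.append(RuleInUse(
    rule_type="information",
    description="senior clinicians elicit reasoning by questioning juniors",
    source="professional norm",
    ai_impact="decision support must sustain dialogue-style explanation",
))
print([r.rule_type for r in oversight.impacted_rules()])
```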
## 4 The 'rules in use' of clinical oversight

As shown in the methods section, there are seven types of interaction rule identified within the IAD framework. As a first step, we review each of these rules and describe how they function within the scope of the clinical setting in England and Wales that we are studying. The full results of this process are presented in Appendix A.1. As can be seen below, we consider position, boundary, aggregation, payoff, and information rules to be the most relevant in structuring the interaction situation of clinical oversight.

### Position rules

The position rules detail the positions filled by participants and the specific action sets assigned to each [82]. The positions within the team context explored are nested within a hierarchy ranging from clinicians in training through to senior consultants, who may also hold internal and external clinical management or research positions. The assigned 'action sets' for each position are, initially, defined in relation to training requirements. This training must be balanced with public safety and can be a challenge for senior doctors to manage [77, 51]. Advancement may also be competitive, as participants directly or indirectly compete against others to advance into a more senior position. This may be through a competitive application process or through a limited number of senior positions available. Liability is attributed to a standard of 'skilled care' appropriate to a particular role. The test of clinical negligence is that of a reasonable body of opinion of other practitioners in that field [122]. Therefore, junior doctors could be held to the standard of a reasonable junior doctor, and the level of oversight they request may be a potential contributing factor within liability attribution in the event of an adverse outcome. The impact of title and rank is a significant variable which impacts how oversight is conducted and perceived. Seniority is an important signifier for all participants of necessary experience. Junior members may aspire to move their position to that of the more senior party in the team. The absence of previous adverse liability actions, or disciplinary notes, alongside a range of indicators defined by a particular institution or specialism, are additional requirements for maintenance of a position for all parties. The power of senior members of the team to impact this through assessment is a significant element of the impact of position rules within oversight. An example can be seen within the postgraduate foundation year of training: during this time, junior doctors are required to complete a series of recorded Supervised Learning Events (SLEs) [112] as a record of feedback from a supervisor of direct observation of procedure or clinical decisions.

### Boundary rules

Often called the 'entry and exit rules', boundary rules define the conditions for entry into a particular role, the requirements to maintain the position, and the mechanisms through which participants leave a position [82]. Boundary rules for entry into clinical teams of doctors vary across different disciplines, with training pathways and requirements impacted by bodies including the GMC and specific Royal College requirements for postgraduate progression (see example [109]). A degree, combined with an extensive and defined period of training, is a core entry requirement for doctors trained within the UK. Following qualification, doctors are licensed within an open register maintained by the GMC [37].
This complexity endows senior team members with significant power, not only in relation to oversight of decisions, but also due to the potential importance of such decisions to the boundary rules of junior participants. Calling upon a senior member to oversee a decision too frequently or too infrequently could significantly impact the ability of the junior team member to remain in that position (Lewis et al. [64] explore the risks of reluctance to obtain oversight within prescribing errors made by junior doctors). However, clinicians may also have multiple boundary rules within different action situations to manage, including continuing professional development, maintenance of training records, and completing a minimum number of hours of work or attending/providing training sessions to meet boundary conditions around professional knowledge (see the GMC 'Excellence by Design' [123]). These conditions are in turn defined and established by a range of bodies which clinicians may interact with in many ways. Therefore, a range of polycentric decision-making bodies, including professional bodies, medical trade unions, Royal Colleges, institutional, regional, and national training bodies, and tribunal disciplinary bodies, all potentially influence the entry and exit of clinical professionals.

### Aggregation rules

Aggregation rules relate to the number of participants involved and the manner in which a final decision is determined. The hierarchical situation is a 'nonsymmetric aggregation rule' [82] in that participants are treated differently according to seniority and position held. The accountability for the final decision may often lie with a senior party; however, as explored, the junior party may still hold liability as a member of a professional body with clear professional obligations. In addition, an error may affect promotion and future advancement for both parties. Within a healthcare context, each admitted patient has a named consultant, which potentially provides an example of aggregation of decisions contained within a single named senior individual responsible within the team [115]. However, best practice 'norms', good management practice, and boundary rules regarding ethical and legal actions all limit the range of actions and the way in which aggregation is conducted in practice. The increasing use of multi-disciplinary teams and the vital role of shared decision-making are all inputs into a final aggregated clinical decision [33, 67, 76]. The interaction of the rules is also illustrated here, as the senior doctor may have authority to decide an outcome in their domain, but another rule, such as the impact of choice rules relating to performance targets or changes to junior doctor training requirements, may diminish their authority and capacity.

### Information rules

A core element within oversight is the level of 'information available to participants about the overall structure of that situation' [82]. Information rules indicate how information flows from and between participants. Within hierarchical team structures, achieving optimal information flow can be a challenging area, as authority differentiation may impact the ability of junior team members to share opinions, and even facts, relevant for necessary oversight of delegated actions [11, 70, 36]. Understanding the norms of information flow in a particular context does present a significant training and adaptation challenge for junior team members (see [35, 93]).
Dialogue and verbal communication are also vital to elicit relevant information within a high-stakes environment, particularly within critical situations (see work exploring the 'medical handover' [6, 106]). More senior members will often need to question junior team members thoroughly to test the information and recommendations provided. The manner and style of questioning, such as 'interrogation' of more junior subordinates, can be critiqued, as it may contribute to low inclusivity and reluctance for junior team members to provide reciprocal oversight of senior decisions [70]. However, the ability to provide information and answer questions in verbal or written forms, outlining the variables and reasoning underpinning a decision, is a fundamental component of clinical oversight. Additional aspects of information rules also include the frequency and accuracy of communication - there may need to be regular 'check-ins' and limited time available in which to discuss a case with a more senior colleague. Decisions may also be inherited but then enacted by a junior doctor [14, 32]. Information rules are also particularly relevant as audit procedures and the need to log decision flows, including what 'evidence' was used at the time, comprise a key aspect of tracking oversight of decisions.

### Payoff rules

Payoff rules assign external rewards or sanctions to actions or variables. Failure to oversee a decision correctly may incur sanctions for a senior member of the team, both in relation to their work within a given setting and their institutional reputation, and, in a wider sense, may result in professional investigations regarding their conduct by professional bodies or external discipline/tribunal entities. The role of the media and public perception is of note here and may represent a highly contextual norm of high-stakes environments. However, an oversight mistake may also, depending on severity, result in investigations and negative publicity for the institution, which may name or impact the senior, and potentially junior, decision makers. Medical commitments from doctors to patients have a long history of high standards of care and 'oaths' forming a core element of clinical motivation and duty (for example, in 2017, 70% of medical schools used some form of acknowledgement of an oath [40]). The GMC's 'Good Medical Practice' commitments also make such duties explicit through statements including: 'Make the care of your patients your first concern', 'Contribute to and comply with systems to protect patients' and 'Work collaboratively with colleagues to maintain or improve patient care' [37]. At an individual level, the junior member of the team may also be directly affected by how they conduct oversight requests and actions, as their training and progression are partially dependent upon senior leaders' perceptions and assessments of them [14]. Again, learning the nature of oversight and assimilation within decision-making contextual 'norms' are important elements for a particular action and the career development of participants. The payoff rules of clinical oversight are therefore closely aligned to boundary rules relating to how participants enter and leave a position.

### Choice and Scope rules

Scope rules are those which affect a known outcome of an action situation and could include institutional or service-level performance requirements, which may impact practical clinical opportunities for junior team members and the resultant scope of oversight.
Choice rules outline what a participant can or cannot do in a specific decision situation and could impact oversight if a more senior party was not available to oversee decisions, or may be affected by the extent to which a junior colleague is 'trusted' by a senior party [56]. As described above, whilst the choice and scope rules contribute to shaping the boundaries and dimensions of an oversight interaction situation, they are potentially less determinative of the final decision outcome.

### Summary

Mapping and analysis of the various rules of the interaction situation of clinical oversight highlights how existing oversight is nested within professional training requirements and relies heavily upon explanation and questioning to elicit necessary information. Existing oversight structures rely significantly upon the ability to engage in questioning and confirmatory checks to reassure the senior party that a decision is or was made correctly. This questioning elicits weaknesses and provides multiple forms of data to generate 'sufficient information' for oversight. Oversight is also learnt through practical training and experience to develop necessary skills and understanding. Delegation of decisions and treatment to junior colleagues provides the necessary delegated practice to gain such experience and develop the profession more broadly within supervised boundaries. Professional bodies, norms and expectations of professional practice, and liability mechanisms also act as additional oversight mechanisms of high-risk decisions. All these core components of oversight are potentially impacted and reconfigured by AI systems.

## 5 Mapping and analysis of the impacts of AI systems on oversight in clinical teams

Within Section 4, we mapped the rules in use of hierarchical clinical oversight to understand oversight as an interaction situation. We now use the rules identified as significant to situate the potential impacts of AI systems.

### Position rules

AI systems are arguably a tool, but in other senses they bring and provide advanced expertise beyond that of existing medical products. An illustrative example is the rapid development of medical imaging systems which utilise ML across a variety of specialisms, from diabetic eye disease to diagnosis support within pathology [18, 44]. The delegation to, and development of, appropriate confidence in such systems requires additional expertise alongside assurance and governance mechanisms. However, it is not yet clear what position, or role, AI systems, or data scientists, should or could occupy within a team [50]. Given the highly structured and formal processes of entry routes into medical professions, this presents a challenge for resulting oversight of AI systems. New clinical positions, or workforce 'archetypes' [50], may provide such vital expertise to support oversight. Whilst clinicians clearly already work together to reach a decision, these existing teams are complementary teams of fellow clinical professionals operating within similar hierarchical teams and with shared norms of clinical oversight and review. The NHS AI Lab and HEE report highlighted that retention of digital professionals within the NHS was a challenge, in part due to the remuneration levels classifying their pay scale at 'administrative/clerical' staff level. They also noted that there are few individuals in senior positions, and these are not represented within high-level decision-making [50].
Determining the scope and integration of data scientists and informaticians into senior positions to develop, respond to, and support oversight of AI systems is a significant implementation challenge.

### Boundary rules

The rise of multi-disciplinary teams and a greater emphasis upon shared decision-making may all provide mechanisms to support change and AI implementation. As seen within the clinical context, medical liability is defined as 'skilled care' appropriate to a particular role [122]. Pharmaceuticals and medical devices are also regulated and externally verified and monitored by external agencies within the NHS. Various proposals exist and are in development for medical audits and potential clinical trials of AI systems [28]. These aim to contribute to engendering the 'appropriate confidence' in AI systems necessary to drive deployment. However, at the deployment level, significant questions remain regarding the boundaries and integration of 'hybrid' medical and data professionals who would provide such services, their professional responsibilities, liability, and the perception of their role by medical staff [66]. Progression through the stages of medical training provides opportunities to delegate and oversee decisions from junior staff. The risks of clinical deskilling are explored further within analysis of the payoff rules below. However, the removal of aspects of experience of assessment and decision-making through automation or augmentation may impact access to positions. It may also reduce the ability for 'downstream professionals' [100] to oversee a system in use if the ability to have previously undertaken that task is abstracted away, for example visual assessment or risk assessment opportunities without the support of systems. Automation or reduction in a particular role may not be significant for the performance of the decision itself, if clinical performance is verified, but could significantly impact operational oversight and the ability to detect outliers or model drift. If a task has not been conducted in training or in practice, then potentially the confidence of clinicians to challenge and explain, to patients and wider team members, how a decision was made and reviewed is impacted.

### Aggregation rules

The form in which a decision is aggregated can be highly context and domain specific. Each decision will combine varying levels of individual, team, and institutional elements. Different medical specialities will also interact with colleagues within multi-disciplinary teams and work alongside patients to reach shared decisions regarding treatment [76]. This complexity places significant responsibility upon the senior party to oversee a decision; in practice this may be the 'responsible' consultant under whom the patient was admitted [76]. Understanding who the responsible party is, and why a decision was made, is a critical requirement of oversight. The ability to check, review, and 'disaggregate' the processes underpinning a decision through information exchange with junior practitioners is integral. Whilst AI systems clearly do not prevent such processes, they may contribute to a premature closing of options through algorithmic deference or automation bias [2, 4, 105]. This artificial reduction of information exchange and discussion may then impact oversight, even if clinical processes are expedited and costs reduced.
Work exploring interaction with decision-recommendation systems has found some evidence of such algorithmic deference and of struggles to evaluate systems and risk assessments [41, 43, 100]. Individual and collective clinical involvement in the design and development of systems for operational use may support longer-term development and deployment of AI in healthcare. The archetypes, four specific 'roles' needed to implement and oversee AI systems at all levels of healthcare, described by HEE [50], are positive developments which aim to systematically consider how the NHS could change to incorporate AI systems. Whilst welcome, a sole focus upon individual roles may partially obscure the existing system-level 'shared awareness' and structures that oversight currently relies upon. However, such roles will necessarily play a significant part in generating 'appropriate confidence' and would need to be embedded into healthcare systems at a level commensurate with their potential responsibility and value. The short-term costs in time and of phased integration and co-development may be substantially less than the significant, longer-term clinical and economic costs of misaligned or poorly specified systems. The liability of multidisciplinary teams has been explored by Klemm and Lehman [59], who highlight that each practitioner potentially retains liability for their own disciplinary area within a collective decision. Disentangling liability for the decision to deploy AI clinical decision support tools is a challenging task with a wider team of potentially influential actors (see the 'forgettance stack' of actors within AI development, [74]). The level at which a clinical decision maker has appropriate confidence in a support tool may differ materially from that of the test usually assigned to a 'reasonable' holder of their position. Establishing this level in the absence of guiding precedent, and in relation to a potentially adaptive system, may raise questions in defining the boundaries of oversight required from professionals. Liability for oversight extends beyond legal sanctions and impacts the ability of a clinician to practise, and their reputation. The boundaries of 'appropriate confidence' may also necessarily vary considerably across specialisms and uses, for example a decision support imaging application within ophthalmology [1] when compared to a fully closed-loop insulin management system for the control of diabetes [114].

### Information rules

The information available to participants is closely aligned to the structure of oversight. The team hierarchy itself may impact the information flow and the form of information provided. As discussed, information is elicited through questioning and dialogue between professionals. Whilst this can be a challenging environment for junior colleagues, the ability to question a decision, or the decision pathway, is a key method of oversight and of demonstrating scrutiny of decisions. In recent empirical work exploring the form of explanation that clinical practitioners require from machine learning models, interactive natural language explanations were preferred, to treat a system as 'another colleague' [61]. As discussed in Section 4, this preference for explanations through dialogue underlines the importance of continuous questioning within healthcare oversight to understand why a decision has been made. Information flow is also frequently dynamic, situational, and adaptive between members of a clinical team.
A system integrated within decision-making should be explainable to users but, as Lakkaraju et al. [61] also highlight, responses should follow dialogue conventions to leverage the context of multi-stage discussions to provide richer responses. It may also be possible that conversational 'chatbot' agents could play a role as group facilitators to enhance and capture existing team decision-making. As Kim et al. [57] explored, such systems may support a wider diversity in opinion exchange. Oversight may be requested due to uncertainty or a novel clinical situation. As Paek and Horvitz note, 'While people manage these multiple uncertainties with almost effortless ease, automated dialog systems often break down on account of them' [85]. Whilst the hierarchical context may present challenges in demonstrating uncertainty to some extent, the ability to seek further advice from senior colleagues regarding a challenging situation is integral within medicine, and more broadly within professional contexts. Systems that defer to senior decision-makers and clearly represent uncertainty appear particularly necessary for the assessment by clinical professionals of appropriate confidence. Clinical data scientists within the operational team could significantly support explainability. Access to such data professionals is identified by HEE as a priority area of development [50]. However, the impacts of perceptions of the status of data professionals and of their domain expertise could potentially present a barrier to their integration into existing operational teams. Medical professionals hold appropriate confidence in a pharmaceutical or medical device because they have confidence in the existing clinical evidential norms and standards of assurance. A recent report exploring the factors influencing healthcare workers' confidence in AI technologies noted that 'clinician interviewees for this research perceived that AI technologies used in patient care should be held to the same evidence standards as other medical interventions, such as pharmaceuticals. This includes systematic reviews, randomised controlled trials (RCTs) and peer review research' [49]. Various methods of trials and testing have been proposed and are in development for AI systems (see recent work by Cruz Rivera et al. [28]). As these processes mature and themselves become standardised, this may support oversight in generating confidence amongst professionals in the information provided. The ability of medical professionals to balance clinical judgement against AI inputs is an open question. The risks of 'infobesity' [72], whereby a human's cognitive ability is overloaded, algorithmic deference, and the challenges of human interaction with risk assessments [42] require assessment to test such risks in simulated operational contexts. Such work may also support further analysis of the significant factors within algorithmic aversion for expert users [53], alongside the practical questions of upskilling time and economics. Aligned to this work is the need for contextual explanations of AI-assisted decisions. Leslie outlines six explanation types, which are rationale, responsibility, data, fairness, safety and performance, and impacts [63], which can provide the basis for understandable reasoning and oversight of system behaviour. Recording the evidence and reasoning contained within a decision is key to oversight and is a core element of available information.
The information collected in medical decision-making spans a variety of quantitative and qualitative forms, including biosignal data, verbal and written reports, alongside observational notes. Not all aspects of the information required and used for diagnosis can be processed or used by AI systems; however, some of this information may then be processed to levels beyond human capabilities. The way in which these varied forms of information are then balanced, aggregated, and overseen may present risks of overreliance upon AI or human factors.

### Payoff rules

An incentive for delegation within a hierarchical team is to manage the volume of work undertaken. For junior team members, this context, ideally, provides training, oversight, and support as they progress in the profession. The incentives to incorporate, 'train', and oversee novel AI systems within a team may not be as prevalent for individual clinicians when compared to the pre-existing institutional and professional incentives to train and support junior colleagues. Mistakes and errors may also be less tolerated and harder to understand, without additional support, from AI outputs than from oversight of a junior colleague. Questioning and shared context with junior colleagues may also provide easily understood clues regarding the reasons for mistakes. They may then be easier to overcome promptly than an error made through incorrect specification, flawed data, or model drift from AI systems. Oversight of junior team members is also linked closely with professional advancement. If systems are used to automate or change the extent of foundational-level tasks, this could risk deskilling and reduced access to training opportunities 'in situ'. This may in turn affect oversight and the opportunity to develop the resulting skills in balancing appropriate confidence. A reduction or change to these opportunities may also face challenges from professionals concerned with longer-term impacts upon their role and autonomy. Deskilling could potentially be mitigated with simulation and supervised 'virtual' practice to ensure clinical skills are not reduced (see a review of skills assessment in neurosurgery using virtual reality [20]). The use of medical auditors within routine clinical practice may also support the identification of individual and collective errors from an AI system. However, such processes are not currently routine, and collection of aggregate data for assessment may prove challenging. Recent algorithmic audit work highlighted the need for human-level performance data for comparison [66].

## 6 Conclusion

The two-stage process we have applied implements the 'rules in use' from the IAD framework to understand the core dimensions of hierarchical team clinical oversight and to evaluate potential AI impacts. Using the 'rules in use' of the IAD as a tool of institutional analysis is a promising area of development to understand the complexity of institutional systems of oversight. This approach has much to offer as a tool both of institutional analysis and, potentially, of developing policy for oversight of AI systems within public services. Such work is an increasing priority to investigate how 'human-AI teams should be configured to reduce likelihood of negative impacts/harms to individuals, groups, communities, and society' [75]. Institutional analysis provides a useful assessment tool of existing structures when considering the development of AI systems.
We have situated this analysis within a clinical case study example; however, elements of the analysis may highlight shared areas of challenge in balancing expert knowledge with AI inputs to provide oversight within professional teams. Examples may include education, defence, social care provision and public administration. These shared areas of challenge include the risks of deskilling, the primary role of dialogue and questioning, and questions of liability. All of these highlight potential operational challenges for public sector professionals in balancing team expertise with decision support or input from AI systems to develop and demonstrate 'appropriate confidence'. Existing clinical oversight systems rely upon the ability to engage in questioning and confirmatory checks to reassure the senior party that a decision is correct. AI systems widen the range of people involved in this team to include data professionals, who can provide advice, answer questions, and model inputs to clinical decisions when a system is used. As explored, this may present a challenge to existing professional structures and processes of information flow. Recent work [50] explores the potential for healthcare professionals to co-develop systems, and this approach is a promising area of development to embed contextual and domain expertise and norms within the development of AI systems. The extent of their subsequent involvement in deployment and the development of 'communities of practice' [91] to address and, vitally, to share known issues and concerns with wider systems would potentially support later oversight by distributed review and assessment. Research exploring contextual and expert factors within the field of algorithmic aversion is a promising further research direction to explore the relationship between oversight and aversion amongst expert users. Alongside this, the need for empirical data regarding integration and oversight of AI, and data scientists more broadly, within existing professional clinical teams is a necessary area of development. Our focus within this work has been upon hierarchical clinical teams within the same specialism; however, the impacts of AI upon oversight within multi-disciplinary teams and for shared decision-making with patients are future research directions to examine and test system integration. We consider that the IAD framework has much to offer in such complex system analysis. In this paper we found that within high-stakes public sector environments, oversight of clinical service delivery is necessarily a priority and is a collective endeavour within bounded hierarchical team structures. The collective team intelligence, processes, and layers of supervised delegation provide trusted mechanisms of service delivery and established processes of oversight. Our analysis highlights that understanding and mapping of these mechanisms, and the systems and context within which they operate, is an essential aspect of design and planning for AI systems within clinical decision-making. The IAD provides a potential tool to conduct such work both at an operational level and to highlight broader generic challenges for the oversight of AI systems within high-stakes contexts. A focus upon how 'appropriate confidence' is balanced and the level of 'sufficient information' required highlights the need to understand existing institutional structures of oversight.
As Binns highlights, 'these issues will have to be worked out as algorithmic systems are deployed in context; if individual justice is worth protecting, we cannot assume that it will be secured by simply putting a human in the algorithmic loop' [7]. Recognising and mapping existing structures of oversight within high-stakes contexts highlights the primacy of teams of professionals operating within existing rules that structure and bound clinical oversight. Oversight of AI systems within such processes is more accurately, and necessarily, a 'team in the loop' endeavour. Recognising and understanding such complexity is a vital step in preparing and suggesting future workstreams to develop the professionals, standards, systems, and training needed to develop such teams.

## Acknowledgments

This work was supported by Towards Turing 2.0 under the EPSRC Grant EP/W037211/1 & The Alan Turing Institute. DM is also a PhD researcher within the UKRI Centre for Accountable, Responsible and Transparent AI at the University of Bath supported by UKRI Grant EP/S023437/1.
2309.01337
Three-body strong decays of the $Y(4230)$ via the light-cone QCD sum rules
We tentatively assign the $Y(4230)$ as the vector tetraquark state with a relative P-wave between the scalar diquark pair, and explore the three-body strong decays $Y(4230) \to \bar{D}^{*-}D^{*0}\pi^+$, $\bar{D}^{*-}D^0\pi^+$, $J/\psi\pi^+\pi^- $ and $ J/\psi K^+K^-$ with the light-cone QCD sum rules by assuming contact four-meson coupling constants. The resulting partial decay widths are too small to account for the experimental data, and we expect those decays take place through an intermediate meson. We can search for the intermediate states and precisely measure the branching fractions to diagnose the nature of the $Y$ states.
Zhi-Gang Wang
2023-09-04T03:45:35Z
http://arxiv.org/abs/2309.01337v2
###### Abstract

We tentatively assign the \(Y(4230)\) as the vector tetraquark state with a relative P-wave between the scalar diquark pair, and explore the three-body strong decays \(Y(4230)\to\bar{D}^{*-}D^{*0}\pi^{+}\), \(\bar{D}^{*-}D^{0}\pi^{+}\), \(J/\psi\pi^{+}\pi^{-}\) and \(J/\psi K^{+}K^{-}\) with the light-cone QCD sum rules by assuming contact four-meson coupling constants. The resulting partial decay widths are too small to account for the experimental data, and we expect those decays take place through an intermediate meson. We can search for the intermediate states and precisely measure the branching fractions to diagnose the nature of the \(Y\) states.

Three-body strong decays of the \(Y(4230)\) via the light-cone QCD sum rules

Zhi-Gang Wang 1 Footnote 1: E-mail: [email protected]. Department of Physics, North China Electric Power University, Baoding 071003, P. R. China

PACS number: 12.39.Mk, 12.38.Lg Key words: Tetraquark state, QCD sum rules

## 1 Introduction

In the past years, several vector charmonium-like states have been observed that cannot be accommodated comfortably among the traditional charmonia. In 2005, the BaBar collaboration investigated the initial-state radiation (ISR) process \(e^{+}e^{-}\to\gamma_{ISR}\,\pi^{+}\pi^{-}J/\psi\) and observed the \(Y(4260)\) in the \(\pi^{+}\pi^{-}J/\psi\) mass spectrum [1]; subsequently, the \(Y(4260)\) was confirmed by the Belle and CLEO collaborations [2, 3]. In 2006, the BaBar collaboration observed a broad structure at \(4.32\,\)GeV in the \(\pi^{+}\pi^{-}\psi^{\prime}\) mass spectrum in the process \(e^{+}e^{-}\to\pi^{+}\pi^{-}\psi^{\prime}\)[4]. In 2007, the Belle collaboration studied the process \(e^{+}e^{-}\to\gamma_{ISR}\pi^{+}\pi^{-}\psi^{\prime}\), and observed two structures \(Y(4360)\) and \(Y(4660)\) in the \(\pi^{+}\pi^{-}\psi^{\prime}\) mass spectrum [5, 6]. In 2008, the Belle collaboration explored the process \(e^{+}e^{-}\to\gamma_{ISR}\Lambda_{c}^{+}\Lambda_{c}^{-}\) and observed the \(Y(4630)\) in the \(\Lambda_{c}^{+}\Lambda_{c}^{-}\) mass spectrum [7]. The \(Y(4360)\) and \(Y(4660/4630)\) were confirmed by the BaBar collaboration [8]. In 2014, the BESIII collaboration searched for the process \(e^{+}e^{-}\to\omega\chi_{c0/1/2}\), and observed a resonance \(Y(4220)\) in the \(\omega\chi_{c0}\) cross section; the measured mass and width are \(4230\pm 8\pm 6\,\)MeV and \(38\pm 12\pm 2\,\)MeV, respectively [9]. In 2016, the BESIII collaboration measured the cross sections of the process \(e^{+}e^{-}\to\pi^{+}\pi^{-}h_{c}\), and observed two resonances: the \(Y(4220)\) has a mass of \(4218.4^{+5.5}_{-4.5}\pm 0.9\,\)MeV and a width of \(66.0^{+12.3}_{-8.3}\pm 0.4\,\)MeV, and the \(Y(4390)\) has a mass of \(4391.6^{+6.3}_{-6.8}\pm 1.0\,\)MeV and a width of \(139.5^{+16.2}_{-20.6}\pm 0.6\,\)MeV [10]. Also in 2016, the BESIII collaboration precisely measured the cross section of the process \(e^{+}e^{-}\to\pi^{+}\pi^{-}J/\psi\) and observed two resonances, which are consistent with the \(Y(4230)\) and \(Y(4360)\), respectively [11]. In 2018, the BESIII collaboration measured the cross section of the process \(e^{+}e^{-}\to\pi^{+}D^{0}\bar{D}^{*-}\) and observed two enhancements around \(4.23\) and \(4.40\) GeV; the lower enhancement has a mass of \(4228.6\pm 4.1\pm 6.3\,\)MeV and a width of \(77.0\pm 6.8\pm 6.3\,\)MeV, and it is compatible with the \(Y(4230)\)[12].
In 2022, the BESIII collaboration explored the \(e^{+}e^{-}\to K^{+}K^{-}J/\psi\) cross sections and observed two resonant structures; one is consistent with the well-known \(Y(4230)\), while the other was observed for the first time and denoted as the \(Y(4500)\)[13]. Recently, the BESIII collaboration explored the Born cross sections of the process \(e^{+}e^{-}\to\bar{D}^{*-}D^{*0}\pi^{+}\) and observed three enhancements, whose masses are \(4209.6\pm 4.7\pm 5.9\,\)MeV, \(4469.1\pm 26.2\pm 3.6\,\)MeV and \(4675.3\pm 29.5\pm 3.5\,\)MeV, respectively, and whose widths are \(81.6\pm 17.8\pm 9.0\,\)MeV, \(246.3\pm 36.7\pm 9.4\,\)MeV and \(218.3\pm 72.9\pm 9.3\,\)MeV, respectively; they are consistent with the \(Y(4230)\), \(Y(4500)\) and \(Y(4660)\) states, respectively [14]. There have been several assignments for those \(Y\) states, such as the tetraquark states [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29], hybrid states [30, 31, 32, 33], hadro-charmonium states [34, 35], molecular states [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46], kinematical effects [47, 48, 49, 50], baryonium states [51], etc. The \(Y(4260)\), which is the milestone of the \(Y\) states, has been extensively studied. In the present work, we will focus on the scenario of tetraquark states. In Ref.[15], L. Maiani et al. assign the \(Y(4260)\) as the first orbital excitation of a (scalar)diquark-(scalar)antidiquark state \([cs][\bar{c}\bar{s}]\) based on the spin-spin and spin-orbit interactions. In Ref.[17], A. Ali et al. investigate the hidden-charm P-wave tetraquarks and the newly observed excited charmed \(\Omega_{c}\) states in the diquark model using the spin-spin, spin-orbit and tensor interactions, and observe that the preferred assignments of the ground state tetraquark states with \(L=1\) are the \(Y(4220)\), \(Y(4330)\), \(Y(4390)\), \(Y(4660)\) rather than the \(Y(4008)\), \(Y(4260)\), \(Y(4360)\), \(Y(4660)\). However, the observation of the process \(Y(4260)\to Z_{c}(3900)^{\pm}\pi^{\mp}\to J/\psi\pi^{+}\pi^{-}\) disfavors assigning the \(Y(4230)\) as a tetraquark state with the symbolic quark constituents \(cs\bar{c}\bar{s}\)[52, 53]. In Ref.[20], we introduce an explicit P-wave between the diquark and antidiquark to construct the four-quark currents, and study the vector tetraquark states with the QCD sum rules systematically, obtaining the lowest vector tetraquark masses so far. The predictions support assigning the \(Y(4220/4260)\), \(Y(4320/4360)\) and \(Y(4390)\) as the vector tetraquark states with a relative P-wave between the diquark (\(qc\)) and antidiquark (\(\bar{q}\bar{c}\)) pair. In Ref.[23], we take the scalar, pseudoscalar, axialvector, vector and tensor (anti)diquarks as the basic building blocks to construct vector and tensor four-quark currents without introducing explicit P-waves, as the P-waves are implied in the negative parity of the (anti)diquarks, and explore the mass spectrum of the vector hidden-charm tetraquark states via the QCD sum rules comprehensively, obtaining the lowest tetraquark mass of about \(4.35\,\)GeV, and revisit the assignments of the \(Y\) states.
At the energy of about \(4.5\,\)GeV, we obtain three hidden-charm tetraquark states with the \(J^{PC}=1^{--}\): the tetraquark states with the symbolic structures \([uc]_{\tilde{V}}[\overline{dc}]_{A}-[uc]_{A}[\overline{dc}]_{\tilde{V}}\), \([uc]_{\tilde{A}}[\overline{dc}]_{V}+[uc]_{V}[\overline{dc}]_{\tilde{A}}\) and \([uc]_{S}[\overline{dc}]_{\tilde{V}}-[uc]_{\tilde{V}}[\overline{dc}]_{S}\) have the masses \(4.53\pm 0.07\,\)GeV, \(4.48\pm 0.08\,\)GeV and \(4.50\pm 0.09\,\)GeV, respectively; thus we have three candidates for the newly observed \(Y(4500)\)[14]. At the energy of about \(4.7\,\)GeV, we obtain the mass \(4.69\pm 0.08\,\)GeV for the \([uc]_{A}[\overline{dc}]_{A}\) tetraquark state with the \(J^{PC}=1^{--}\), which is in very good agreement with the newly observed \(Y(4708)\)[54]. It is not necessary for the \(Y\) states below and above \(4.4\,\)GeV to have the same structures. We cannot assign a hadron unambiguously with the mass alone; we have to explore the decay widths to make a more robust assignment. In this work, we tentatively assign the \(Y(4230)\) as the tetraquark state \(|S_{qc},S_{\bar{q}\bar{c}};S,L;J\rangle=|0,0;0,1;1\rangle\) with the \(J^{PC}=1^{--}\) according to the calculations in Ref.[20], and extend our previous works to study the three-body strong decays \(Y(4230)\to J/\psi\pi^{+}\pi^{-}\), \(J/\psi K^{+}K^{-}\), \(\bar{D}^{*-}D^{0}\pi^{+}\) and \(\bar{D}^{*-}D^{*0}\pi^{+}\) with the light-cone QCD sum rules [55]. In Ref.[55], we tentatively assign the \(Y(4500)\) as the \([uc]_{\tilde{A}}[\overline{uc}]_{V}+[uc]_{V}[\overline{uc}]_{\tilde{A}}+[dc]_{\tilde{A}}[\overline{dc}]_{V}+[dc]_{V}[\overline{dc}]_{\tilde{A}}\) tetraquark state with the \(J^{PC}=1^{--}\), and suggest calculating the four-meson coupling constants via the light-cone QCD sum rules directly, based on rigorous quark-hadron duality, and then studying the three-body decay \(Y(4500)\to\bar{D}^{*-}D^{*0}\pi^{+}\). The article is arranged as follows: we obtain the light-cone QCD sum rules for the four-meson coupling constants in section 2; in section 3, we present the numerical results and discussions; section 4 is reserved for our conclusion.
## 2 Light-cone QCD sum rules for the four-meson coupling constants

Firstly, we write down the three-point correlation functions \(\Pi_{\mu\alpha\beta}(p,q)\), \(\Pi^{1}_{\mu\alpha}(p,q)\) and \(\Pi^{2}_{\mu\alpha}(p,q)\) in the light-cone QCD sum rules, \[\Pi_{\mu\alpha\beta}(p,q) = i^{2}\int d^{4}xd^{4}y\,e^{-ip\cdot x}e^{-iq\cdot y}\left\langle 0|T\left\{J^{Y}_{\mu}(0)J^{D^{*+}}_{\alpha}(x)J^{\widetilde{D}^{*0}}_{\beta}(y)\right\}|\pi(r)\right\rangle,\] \[\Pi^{1}_{\mu\alpha}(p,q) = i^{2}\int d^{4}xd^{4}y\,e^{-ip\cdot x}e^{-iq\cdot y}\left\langle 0|T\left\{J^{Y}_{\mu}(0)J^{D^{*+}}_{\alpha}(x)J^{\widetilde{D}^{0}}(y)\right\}|\pi(r)\right\rangle,\] \[\Pi^{2}_{\mu\alpha}(p,q) = i^{2}\int d^{4}xd^{4}y\,e^{-ip\cdot x}e^{-iq\cdot y}\left\langle 0|T\left\{J^{Y}_{\mu}(0)J^{J/\psi}_{\alpha}(x)J^{\pi^{+}}(y)\right\}|\pi(r)\right\rangle, \tag{1}\] where the currents \[J^{Y}_{\mu}(0) = \frac{\varepsilon^{ijk}\varepsilon^{imn}}{2}\Big{\{}u_{j}^{T}(0)C\gamma_{5}c_{k}(0)\stackrel{{\leftrightarrow}}{{\partial}}_{\mu}\bar{u}_{m}(0)\gamma_{5}C\bar{c}_{n}^{T}(0) \tag{2}\] \[+d_{j}^{T}(0)C\gamma_{5}c_{k}(0)\stackrel{{\leftrightarrow}}{{\partial}}_{\mu}\bar{d}_{m}(0)\gamma_{5}C\bar{c}_{n}^{T}(0)\Big{\}}\,,\] \[J^{D^{*+}}_{\alpha}(x) = \bar{d}(x)\gamma_{\alpha}c(x)\,,\] \[J^{\widetilde{D}^{*0}}_{\beta}(y) = \bar{c}(y)\gamma_{\beta}u(y)\,,\] \[J^{\widetilde{D}^{0}}(y) = \bar{c}(y)i\gamma_{5}u(y)\,,\] \[J^{J/\psi}_{\alpha}(x) = \bar{c}(x)\gamma_{\alpha}c(x)\,,\] \[J^{\pi^{+}}(y) = \bar{d}(y)i\gamma_{5}u(y)\,, \tag{3}\] interpolate the mesons \(Y(4230)\), \(\bar{D}^{*}\), \(D^{*}\), \(D\), \(J/\psi\) and \(\pi\), respectively [20]; the \(|\pi(r)\rangle\) is the external \(\pi\) state, and the derivative \(\stackrel{{\leftrightarrow}}{{\partial}}_{\mu}=\stackrel{{\rightarrow}}{{\partial}}_{\mu}-\stackrel{{\leftarrow}}{{\partial}}_{\mu}\) embodies the P-wave effects. We resort to the correlation functions \(\Pi_{\mu\alpha\beta}(p,q)\), \(\Pi^{1}_{\mu\alpha}(p,q)\) and \(\Pi^{2}_{\mu\alpha}(p,q)\) to explore the hadronic coupling constants in the three-body strong decays \(Y(4230)\to\bar{D}^{*}D^{*}\pi^{+}\), \(\bar{D}^{*}D\pi^{+}\) and \(J/\psi\pi^{-}\pi^{+}\), respectively.
At the hadron side, we insert a complete set of intermediate hadronic states having potential couplings with the interpolating currents into the three-point correlation functions \(\Pi_{\mu\alpha\beta}(p,q)\), \(\Pi^{1}_{\mu\alpha}(p,q)\) and \(\Pi^{2}_{\mu\alpha}(p,q)\), and isolate the ground state contributions explicitly, \[\Pi_{\mu\alpha\beta}(p,q) = \lambda_{Y}f_{D^{*}}^{2}m_{D^{*}}^{2}\frac{-iG_{A}r_{\tau}+iG_{B}p_{\tau}^{\prime}}{(m_{Y}^{2}-p^{\prime 2})(m_{D^{*}}^{2}-p^{2})(m_{D^{*}}^{2}-q^{2})}\varepsilon^{\rho\sigma\lambda\tau}\left(-g_{\mu\rho}+\frac{p_{\mu}^{\prime}p_{\rho}^{\prime}}{p^{\prime 2}}\right) \tag{4}\] \[\left(-g_{\alpha\sigma}+\frac{p_{\alpha}p_{\sigma}}{p^{2}}\right)\left(-g_{\lambda\beta}+\frac{q_{\lambda}q_{\beta}}{q^{2}}\right)+\cdots\,,\] \[\Pi^{1}_{\mu\alpha}(p,q) = \frac{\lambda_{Y}f_{D^{*}}m_{D^{*}}f_{D}m_{D}^{2}}{m_{c}}\frac{iG_{C}}{(m_{Y}^{2}-p^{\prime 2})(m_{D^{*}}^{2}-p^{2})(m_{D}^{2}-q^{2})}\left(-g_{\mu\rho}+\frac{p_{\mu}^{\prime}p_{\rho}^{\prime}}{p^{\prime 2}}\right)\] (5) \[\left(-g_{\alpha\rho}+\frac{p_{\alpha}p_{\rho}}{p^{2}}\right)+\cdots\,,\] \[= i\Pi_{C}(p^{\prime 2},p^{2},q^{2})\,g_{\mu\alpha}+\cdots\,,\] \[\Pi^{2}_{\mu\alpha}(p,q) = \frac{\lambda_{Y}f_{J/\psi}m_{J/\psi}f_{\pi}m_{\pi}^{2}}{m_{u}+m_{d}}\frac{iG_{D}\,q\cdot r}{(m_{Y}^{2}-p^{\prime 2})(m_{J/\psi}^{2}-p^{2})(m_{\pi}^{2}-q^{2})}\left(-g_{\mu\rho}+\frac{p_{\mu}^{\prime}p_{\rho}^{\prime}}{p^{\prime 2}}\right)\] (6) \[\left(-g_{\alpha\rho}+\frac{p_{\alpha}p_{\rho}}{p^{2}}\right)+\cdots\,,\] \[= i\Pi_{D}(p^{\prime 2},p^{2},q^{2})\,q\cdot r\,g_{\mu\alpha}+\cdots\,,\] where \[\Pi_{C}(p^{\prime 2},p^{2},q^{2}) = \frac{\lambda_{Y}f_{D^{*}}m_{D^{*}}f_{D}m_{D}^{2}}{m_{c}}\frac{G_{C}}{(m_{Y}^{2}-p^{\prime 2})(m_{D^{*}}^{2}-p^{2})(m_{D}^{2}-q^{2})}+\cdots\,,\] \[\Pi_{D}(p^{\prime 2},p^{2},q^{2}) = \frac{\lambda_{Y}f_{J/\psi}m_{J/\psi}f_{\pi}m_{\pi}^{2}}{m_{u}+m_{d}}\frac{G_{D}}{(m_{Y}^{2}-p^{\prime 2})(m_{J/\psi}^{2}-p^{2})(m_{\pi}^{2}-q^{2})}+\cdots\,, \tag{7}\] \(p^{\prime}=p+q+r\), the decay constants \(\lambda_{Y}\), \(f_{D^{*}}\), \(f_{\bar{D}^{*}}\), \(f_{D}\), \(f_{J/\psi}\), \(f_{\pi}\) and hadronic coupling constants \(G_{A}\), \(G_{B}\), \(G_{C}\), \(G_{D}\) are defined by, \[\langle 0|J_{\mu}^{Y}(0)|Y_{c}(p^{\prime})\rangle = \lambda_{Y}\varepsilon_{\mu}\,\] \[\langle 0|J_{\alpha}^{D^{*}\dagger}(0)|\bar{D}^{*}(p)\rangle = f_{\bar{D}^{*}}m_{\bar{D}^{*}}\xi_{\alpha}\,\] \[\langle 0|J_{\beta}^{\widetilde{D}^{*0}\dagger}(0)|D^{*}(q)\rangle = f_{D^{*}}m_{D^{*}}\zeta_{\beta}\,\] \[\langle 0|J_{\alpha}^{J/\psi}(0)|J/\psi(p)\rangle = f_{J/\psi}m_{J/\psi}\xi_{\alpha}\,\] \[\langle 0|J^{\widetilde{D}^{0}\dagger}(0)|D(q)\rangle = \frac{f_{D}m_{D}^{2}}{m_{c}}\,\] \[\langle 0|J^{\pi^{\dagger}}(0)|\pi(q)\rangle = \frac{f_{\pi}m_{\pi}^{2}}{m_{u}+m_{d}}\, \tag{8}\] \[\langle Y_{c}(p^{\prime})|\bar{D}^{*}(p)D^{*}(q)\pi(r)\rangle = G_{A}\varepsilon^{\rho\sigma\lambda\tau}\varepsilon_{\rho}^{*}\xi_{\sigma}\zeta_{\lambda}r_{\tau}-G_{B}\varepsilon^{\rho\sigma\lambda\tau}\varepsilon_{\rho}^{*}\xi_{\sigma}\zeta_{\lambda}p^{\prime}_{\tau}\,\] \[\langle Y_{c}(p^{\prime})|\bar{D}^{*}(p)D(q)\pi(r)\rangle = -G_{C}\,\varepsilon^{*}\cdot\xi\,\] \[\langle Y_{c}(p^{\prime})|J/\psi(p)\pi(q)\pi(r)\rangle = -G_{D}\,q\cdot r\,\varepsilon^{*}\cdot\xi\, \tag{9}\] the \(\varepsilon_{\mu}\), \(\xi_{\alpha}\) and \(\zeta_{\beta}\) are polarization vectors of the \(Y(4230)\), \(\bar{D}^{*}(J/\psi)\) and \(D^{*}\) mesons, respectively.
In the correlation functions \(\Pi_{\mu\alpha}^{1}(p,q)\) and \(\Pi_{\mu\alpha}^{2}(p,q)\), see Eqs.(5)-(6), there are other tensor structures, which lead to different QCD sum rules; those sum rules have shortcomings in one way or another, and we discard them. In the isospin limit, \(m_{u}=m_{d}\), \(f_{D^{*}}=f_{\bar{D}^{*}}\) and \(m_{D^{*}}=m_{\bar{D}^{*}}\). The vertex \(\langle Y_{c}(p^{\prime})|J/\psi(p)K(q)K(r)\rangle=-G_{E}\,q\cdot r\,\varepsilon^{*}\cdot\xi\); in the \(SU(3)\) limit, \(G_{E}=G_{D}\), i.e., we have a universal coupling constant. The tensor structures in the correlation function \(\Pi_{\mu\alpha\beta}(p,q)\), see Eq.(4), are complicated; we simplify them to facilitate the calculations at the QCD side. We multiply Eq.(4) with the tensor \(\varepsilon_{\theta\omega}{}^{\alpha\beta}\) and obtain \[\widetilde{\Pi}_{\mu\theta\omega}(p,q) = \varepsilon_{\theta\omega}{}^{\alpha\beta}\,\Pi_{\mu\alpha\beta}(p,q) \tag{10}\] \[= \lambda_{Y}f_{D^{*}}^{2}m_{D^{*}}^{2}\frac{iG_{A}\left(g_{\mu\omega}r_{\theta}-g_{\mu\theta}r_{\omega}\right)-iG_{B}\left(g_{\mu\omega}p^{\prime}_{\theta}-g_{\mu\theta}p^{\prime}_{\omega}\right)}{(m_{Y}^{2}-p^{\prime 2})(m_{D^{*}}^{2}-p^{2})(m_{D^{*}}^{2}-q^{2})}+\cdots\,,\] \[= \left\{i\Pi_{A}(p^{\prime 2},p^{2},q^{2})-i\Pi_{B}(p^{\prime 2},p^{2},q^{2})\right\}(g_{\mu\omega}r_{\theta}-g_{\mu\theta}r_{\omega})\] \[-i\Pi_{B}(p^{\prime 2},p^{2},q^{2})\left(g_{\mu\omega}q_{\theta}-g_{\mu\theta}q_{\omega}\right)+\cdots\,,\] where \[\Pi_{A}(p^{\prime 2},p^{2},q^{2}) = \lambda_{Y}f_{D^{*}}^{2}m_{D^{*}}^{2}\frac{G_{A}}{(m_{Y}^{2}-p^{\prime 2})(m_{D^{*}}^{2}-p^{2})(m_{D^{*}}^{2}-q^{2})}+\cdots\,,\] \[\Pi_{B}(p^{\prime 2},p^{2},q^{2}) = \lambda_{Y}f_{D^{*}}^{2}m_{D^{*}}^{2}\frac{G_{B}}{(m_{Y}^{2}-p^{\prime 2})(m_{D^{*}}^{2}-p^{2})(m_{D^{*}}^{2}-q^{2})}+\cdots\,, \tag{11}\] then we choose the tensor structures \(g_{\mu\omega}r_{\theta}-g_{\mu\theta}r_{\omega}\) and \(g_{\mu\omega}q_{\theta}-g_{\mu\theta}q_{\omega}\) to explore the \(G_{A}\) and \(G_{B}\), respectively. Here we also neglect the tensor structures that do not lead to good QCD sum rules. After choosing the best tensor structures, and therefore the best components of the correlation functions, we obtain the hadronic spectral densities \(\rho_{H}(s^{\prime},s,u)\) through the triple dispersion relation, \[\Pi_{H}(p^{\prime 2},p^{2},q^{2}) = \int_{\Delta_{s}^{\prime 2}}^{\infty}ds^{\prime}\int_{\Delta_{s}^{2}}^{\infty}ds\int_{\Delta_{u}^{2}}^{\infty}du\frac{\rho_{H}(s^{\prime},s,u)}{(s^{\prime}-{p^{\prime 2}})(s-p^{2})(u-q^{2})}\,, \tag{12}\] where the \(\Delta_{s}^{\prime 2}\), \(\Delta_{s}^{2}\) and \(\Delta_{u}^{2}\) are the thresholds; we add the subscript \(H\) to denote the hadron side, with \(H=A\), \(B\), \(C\) and \(D\).
Now we explore the QCD side with an eye on the relevant tensor structures. We carry out the operator product expansion up to the vacuum condensates of dimension 5 and neglect the tiny gluon condensate contributions [55, 56, 57], and choose the \(\pi\)-meson light-cone wave functions [58], which are defined by \[\langle 0|\bar{d}(0)\gamma_{\mu}\gamma_{5}u(x)|\pi(r)\rangle = if_{\pi}r_{\mu}\int_{0}^{1}due^{-iur\cdot x}\varphi_{\pi}(u)+\cdots\,,\] \[\langle 0|\bar{d}(0)\sigma_{\mu\nu}\gamma_{5}u(x)|\pi(r)\rangle = \frac{i}{6}\frac{f_{\pi}m_{\pi}^{2}}{m_{u}+m_{d}}\left(r_{\mu}x_{\nu}-r_{\nu}x_{\mu}\right)\int_{0}^{1}due^{-iur\cdot x}\varphi_{\sigma}(u)\,,\] \[\langle 0|\bar{d}(0)i\gamma_{5}u(x)|\pi(r)\rangle = \frac{f_{\pi}m_{\pi}^{2}}{m_{u}+m_{d}}\int_{0}^{1}due^{-iur\cdot x}\varphi_{5}(u)\,, \tag{13}\] and take the approximation, \[\langle 0|\bar{d}(x_{1})\sigma_{\mu\nu}\gamma_{5}g_{s}G_{\alpha\beta}(x_{2})u(x_{3})|\pi(r)\rangle = if_{3\pi}\left(r_{\mu}r_{\alpha}g_{\nu\beta}+r_{\nu}r_{\beta}g_{\mu\alpha}-r_{\nu}r_{\alpha}g_{\mu\beta}-r_{\mu}r_{\beta}g_{\nu\alpha}\right)\,, \tag{14}\] for the twist-3 quark-gluon light-cone wave functions. In calculations, we find that the terms proportional to \(m_{\pi}^{2}\) are greatly suppressed and neglect them safely, except for the case that the terms proportional to \(\frac{m_{\pi}^{2}}{m_{u}+m_{d}}\) are chirally enhanced due to the Gell-Mann-Oakes-Renner relation \(\frac{f_{\pi}m_{\pi}^{2}}{m_{u}+m_{d}}=-\frac{2\langle\bar{q}q\rangle}{f_{\pi}}\), and we take full account of their contributions. We now list the \(\pi\)-meson light-cone wave functions of twist-2 and twist-3 explicitly, \[\varphi_{\pi}(u) = 6u\bar{u}\left[1+A_{2}\frac{3}{2}\left(5t^{2}-1\right)+A_{4}\frac{15}{8}\left(21t^{4}-14t^{2}+1\right)\right]\,,\] \[\varphi_{5}(u) = 1+B_{2}\frac{1}{2}\left(3t^{2}-1\right)+B_{4}\frac{1}{8}\left(35t^{4}-30t^{2}+3\right)\,,\] \[\varphi_{\sigma}(u) = 6u\bar{u}\left[1+C_{2}\frac{3}{2}\left(5t^{2}-1\right)\right]\,, \tag{15}\] where \(t=2u-1\), the coefficients \(A_{2}=0.44\), \(A_{4}=0.25\), \(B_{2}=0.43\), \(B_{4}=0.10\), \(C_{2}=0.09\), and the decay constant \(f_{3\pi}=0.0035\,{\rm GeV}^{2}\) at the energy scale \(\mu=1\,{\rm GeV}\)[58, 59]. We neglect the twist-4 light-cone wave functions due to their small contributions considering the associated factor \(m_{\pi}^{2}\). In the soft limit \(r_{\mu}\to 0\), \(\tilde{q}^{2}=(q+r)^{2}=q^{2}\) and \(\tilde{p}^{2}=(p+r)^{2}=p^{2}\), we set \(\Pi_{A/B/C/D}(p^{2},\tilde{p}^{2},q^{2})\) or \(\Pi_{A/B/C/D}(p^{2},\tilde{q}^{2},q^{2})=\Pi_{A/B/C/D}(p^{2},q^{2})\) to simplify the analytical expressions, then we obtain the QCD spectral densities \(\rho_{QCD}(s,u)\) through the double dispersion relation, \[\Pi_{A/B/C/D}^{QCD}(p^{2},q^{2}) = \int_{\Delta_{s}^{2}}^{\infty}ds\int_{\Delta_{u}^{2}}^{\infty}du\frac{\rho_{QCD}(s,u)}{(s-p^{2})(u-q^{2})}\,, \tag{16}\] again the \(\Delta_{s}^{2}\) and \(\Delta_{u}^{2}\) are the thresholds, and we add the superscript or subscript \(QCD\) to denote the QCD side. We match the hadron side with the QCD side below the continuum thresholds \(s_{0}\) and \(u_{0}\) to obtain rigorous quark-hadron duality [56, 57], \[\int_{\Delta_{s}^{2}}^{s_{0}}ds\int_{\Delta_{u}^{2}}^{u_{0}}du\frac{\rho_{QCD}(s,u)}{(s-p^{2})(u-q^{2})} = \int_{\Delta_{s}^{2}}^{s_{0}}ds\int_{\Delta_{u}^{2}}^{u_{0}}du\left[\int_{\Delta_{s}^{\prime 2}}^{\infty}ds^{\prime}\frac{\rho_{H}(s^{\prime},s,u)}{(s^{\prime}-{p^{\prime}}^{2})(s-p^{2})(u-q^{2})}\right]\,.
\tag{17}\] To go beyond formal calculations, we first carry out the integral over \(ds^{\prime}\) and then write down the hadron representation explicitly, \[\Pi_{A/B}(p^{\prime 2},p^{2},q^{2}) = \frac{\lambda_{Y}f_{D^{*}}^{2}m_{D^{*}}^{2}G_{A/B}}{(m_{Y}^{2}-p^{\prime 2})(m_{D^{*}}^{2}-p^{2})(m_{D^{*}}^{2}-q^{2})}+\frac{C_{A/B}}{(m_{D^{*}}^{2}-p^{2})(m_{D^{*}}^{2}-q^{2})}+\cdots,\] \[\Pi_{C}(p^{\prime 2},p^{2},q^{2}) = \frac{\lambda_{Y}f_{D^{*}}m_{D^{*}}f_{D}m_{D}^{2}\,G_{C}}{m_{c}(m_{Y}^{2}-p^{\prime 2})(m_{D^{*}}^{2}-p^{2})(m_{D}^{2}-q^{2})}+\frac{C_{C}}{(m_{D^{*}}^{2}-p^{2})(m_{D}^{2}-q^{2})}+\cdots,\] \[\Pi_{D}(p^{\prime 2},p^{2},q^{2}) = \frac{\lambda_{Y}f_{J/\psi}m_{J/\psi}\mu_{\pi}\,G_{D}}{(m_{Y}^{2}-p^{\prime 2})(m_{J/\psi}^{2}-p^{2})(m_{\pi}^{2}-q^{2})}+\frac{C_{D}}{(m_{J/\psi}^{2}-p^{2})(m_{\pi}^{2}-q^{2})}+\cdots, \tag{18}\] where \(\mu_{\pi}=\frac{f_{\pi}m_{\pi}^{2}}{m_{u}+m_{d}}\), and we introduce the parameters \(C_{A/B/C/D}\) to parameterize the contributions involving the higher resonances and continuum states in the \(s^{\prime}\) channel, \[C_{A/B} = \int_{s^{\prime}_{0}}^{\infty}ds^{\prime}\frac{\tilde{\rho}_{A/B}(s^{\prime},m_{D^{*}}^{2},m_{D^{*}}^{2})}{s^{\prime}-p^{\prime 2}}\,,\] \[C_{C} = \int_{s^{\prime}_{0}}^{\infty}ds^{\prime}\frac{\tilde{\rho}_{C}(s^{\prime},m_{D^{*}}^{2},m_{D}^{2})}{s^{\prime}-p^{\prime 2}}\,,\] \[C_{D} = \int_{s^{\prime}_{0}}^{\infty}ds^{\prime}\frac{\tilde{\rho}_{D}(s^{\prime},m_{J/\psi}^{2},m_{\pi}^{2})}{s^{\prime}-p^{\prime 2}}\,, \tag{19}\] where the hadronic spectral densities \(\rho_{H}(s^{\prime},s,u)=\tilde{\rho}_{A/B}(s^{\prime},s,u)\delta(s-m_{D^{*}}^{2})\delta(u-m_{D^{*}}^{2})\), \(\tilde{\rho}_{C}(s^{\prime},s,u)\delta(s-m_{D^{*}}^{2})\delta(u-m_{D}^{2})\) and \(\tilde{\rho}_{D}(s^{\prime},s,u)\delta(s-m_{J/\psi}^{2})\delta(u-m_{\pi}^{2})\), respectively. We have no knowledge of the hadronic interactions that would allow analytical expressions for the spectral densities \(\tilde{\rho}_{A/B/C/D}(s^{\prime},s,u)\) in the region \(s^{\prime}>s^{\prime}_{0}\); nevertheless, we take account of their contributions. In previous works other than ours, such contributions were neglected without justification. In numerical calculations, we take the unknown functions \(C_{A/B/C/D}\) as free parameters and adjust their values to obtain flat platforms for the hadronic coupling constants \(G_{A/B/C/D}\) with respect to variations of the Borel parameters. Such a method works well in the case of three-hadron contact vertices [56, 57, 60, 61, 62, 63, 64], and four-hadron contact vertices \(Y(4500)\bar{D}^{*}D^{*}\pi\)[55].
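As an aside for readers who wish to reproduce the fitting strategy just described, the following minimal Python sketch (a toy illustration with placeholder numbers and a stand-in QCD side, not the actual sum rules of Eqs. (22)-(23)) shows how the unknown constant \(C\) can be tuned so that the extracted coupling \(G(T^{2})\) becomes flat across the Borel window:

```python
import numpy as np

# Toy illustration of the "flat platform" criterion: the constant C is
# chosen such that the extracted coupling G(T^2) varies least across the
# Borel window. All numbers and the QCD side are placeholders.
mY, mD = 4.24, 2.01   # GeV, example masses
lam = 1.0             # placeholder for lambda_Y f_{D*}^2 m_{D*}^2

def qcd_side(T2):
    # stand-in for the Borel-transformed QCD spectral integral
    return 0.5 * np.exp(-2 * mD**2 / T2) * (1.0 + 0.05 * T2)

def extract_G(T2, C):
    # invert the hadron side: kernel * G + C * exp(...) = QCD side
    kernel = lam * (np.exp(-mD**2 / T2) - np.exp(-mY**2 / T2)) \
             / (mY**2 - mD**2) * np.exp(-mD**2 / T2)
    return (qcd_side(T2) - C * np.exp(-2 * mD**2 / T2)) / kernel

T2 = np.linspace(6.7, 7.7, 11)   # example Borel window, GeV^2
best_C = min(np.linspace(0.0, 0.2, 201),
             key=lambda C: np.ptp(extract_G(T2, C)))  # flattest platform
print(best_C, extract_G(T2, best_C))
```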
We set \(p^{\prime 2}=p^{2}\) in the correlation functions \(\Pi_{H}(p^{\prime 2},p^{2},q^{2})\) for simplicity, and perform a double Borel transform with respect to \(P^{2}=-p^{2}\) and \(Q^{2}=-q^{2}\) respectively, then we set \(T_{1}^{2}=T_{2}^{2}=T^{2}\) to obtain four QCD sum rules, \[\frac{\lambda_{YD^{*}D^{*}}G_{A}}{m_{Y}^{2}-m_{D^{*}}^{2}}\left[\exp\left(-\frac{m_{D^{*}}^{2}}{T^{2}}\right)-\exp\left(-\frac{m_{Y}^{2}}{T^{2}}\right)\right]\exp\left(-\frac{m_{D^{*}}^{2}}{T^{2}}\right)+C_{A}\exp\left(-\frac{m_{D^{*}}^{2}+m_{D^{*}}^{2}}{T^{2}}\right)\] \[=\frac{\mu_{\pi}}{192\pi^{2}}\int_{m_{c}^{2}}^{s_{D^{*}}^{0}}ds\int_{0}^{1}du\,\varphi_{5}(u)\left(1-\frac{m_{c}^{2}}{s}\right)^{3}s(s+2m_{c}^{2})\exp\left(-\frac{s+m_{c}^{2}+u\bar{u}m_{\pi}^{2}}{T^{2}}\right)\] \[+\frac{\mu_{\pi}m_{c}}{192\pi^{2}}\int_{m_{c}^{2}}^{s_{D^{*}}^{0}}ds\int_{0}^{1}du\,\varphi_{5}(u)\exp\left(-\frac{2m_{c}^{2}+u\bar{u}m_{\pi}^{2}}{T^{2}}\right)\,, \tag{22}\] \[\frac{\lambda_{YJ/\psi\pi}G_{D}}{m_{Y}^{2}-m_{J/\psi}^{2}}\left[\exp\left(-\frac{m_{J/\psi}^{2}}{T^{2}}\right)-\exp\left(-\frac{m_{Y}^{2}}{T^{2}}\right)\right]\exp\left(-\frac{m_{\pi}^{2}}{T^{2}}\right)+C_{D}\exp\left(-\frac{m_{J/\psi}^{2}+m_{\pi}^{2}}{T^{2}}\right)\] \[=\frac{\mu_{\pi}}{72\pi^{2}}\int_{4m_{c}^{2}}^{s_{J/\psi}^{0}}ds\sqrt{1-\frac{4m_{c}^{2}}{s}}\int_{0}^{1}du\,\varphi_{\sigma}(u)(s+2m_{c}^{2})\exp\left(-\frac{s+u\bar{u}m_{\pi}^{2}}{T^{2}}\right)\] \[+\frac{f_{\pi}m_{c}}{24\pi^{2}}\int_{4m_{c}^{2}}^{s_{J/\psi}^{0}}ds\sqrt{1-\frac{4m_{c}^{2}}{s}}\int_{0}^{1}du\,\varphi_{\pi}(u)(s-4m_{c}^{2})\exp\left(-\frac{s+u\bar{u}m_{\pi}^{2}}{T^{2}}\right)\,, \tag{23}\] where \(\lambda_{YD^{*}D^{*}}=\lambda_{Y}f_{D^{*}}^{2}m_{D^{*}}^{2}\), \(\lambda_{YD^{*}D}=\frac{\lambda_{Y}f_{D^{*}}m_{D^{*}}f_{D}m_{D}^{2}}{m_{c}}\) and \(\lambda_{YJ/\psi\pi}=\frac{\lambda_{Y}f_{J/\psi}m_{J/\psi}f_{\pi}m_{\pi}^{2}}{m_{u}+m_{d}}\).

## 3 Numerical results and discussions

We take the standard values of the vacuum condensates, \(\langle\bar{q}q\rangle=-(0.24\pm 0.01\,\mathrm{GeV})^{3}\), \(\langle\bar{q}g_{s}\sigma Gq\rangle=m_{0}^{2}\langle\bar{q}q\rangle\), \(m_{0}^{2}=(0.8\pm 0.1)\,\mathrm{GeV}^{2}\) at the energy scale \(\mu=1\,\mathrm{GeV}\)[65, 66, 67], and take the \(\overline{MS}\) mass \(m_{c}(m_{c})=(1.275\pm 0.025)\,\mathrm{GeV}\) from the Particle Data Group [68]. We set \(m_{u}=m_{d}=0\) and take account of the energy-scale dependence of the input parameters, \[\langle\bar{q}q\rangle(\mu) = \langle\bar{q}q\rangle(1\mathrm{GeV})\left[\frac{\alpha_{s}(1\mathrm{GeV})}{\alpha_{s}(\mu)}\right]^{\frac{12}{33-2n_{f}}}\,,\] \[\langle\bar{q}g_{s}\sigma Gq\rangle(\mu) = \langle\bar{q}g_{s}\sigma Gq\rangle(1\mathrm{GeV})\left[\frac{\alpha_{s}(1\mathrm{GeV})}{\alpha_{s}(\mu)}\right]^{\frac{2}{33-2n_{f}}}\,,\] \[m_{c}(\mu) = m_{c}(m_{c})\left[\frac{\alpha_{s}(\mu)}{\alpha_{s}(m_{c})}\right]^{\frac{12}{33-2n_{f}}}\,,\] \[\alpha_{s}(\mu) = \frac{1}{b_{0}t}\left[1-\frac{b_{1}}{b_{0}^{2}}\frac{\log t}{t}+\frac{b_{1}^{2}(\log^{2}t-\log t-1)+b_{0}b_{2}}{b_{0}^{4}t^{2}}\right]\,, \tag{24}\] where \(t=\log\frac{\mu^{2}}{\Lambda_{QCD}^{2}}\), \(b_{0}=\frac{33-2n_{f}}{12\pi}\), \(b_{1}=\frac{153-19n_{f}}{24\pi^{2}}\), \(b_{2}=\frac{2857-\frac{5033}{9}n_{f}+\frac{325}{27}n_{f}^{2}}{128\pi^{3}}\), \(\Lambda_{QCD}=210\,\mathrm{MeV}\), \(292\,\mathrm{MeV}\) and \(332\,\mathrm{MeV}\) for the flavors \(n_{f}=5\), \(4\) and \(3\), respectively [68, 69]; we choose \(n_{f}=4\) and evolve all the input parameters to the typical energy scale \(\mu=1\,\mathrm{GeV}\).
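For concreteness, the running coupling of Eq. (24) is straightforward to evaluate numerically; the short Python sketch below (an illustration, not code from the paper) implements it with the quoted \(\Lambda_{QCD}\) values and also applies the condensate evolution factor of Eq. (24):

```python
import numpy as np

# Three-loop running coupling alpha_s(mu) of Eq. (24); Lambda_QCD in GeV
# for n_f = 5, 4, 3 as quoted in the text.
LAMBDA = {5: 0.210, 4: 0.292, 3: 0.332}

def alpha_s(mu, n_f=4):
    b0 = (33 - 2 * n_f) / (12 * np.pi)
    b1 = (153 - 19 * n_f) / (24 * np.pi**2)
    b2 = (2857 - 5033 / 9 * n_f + 325 / 27 * n_f**2) / (128 * np.pi**3)
    t = np.log(mu**2 / LAMBDA[n_f]**2)
    L = np.log(t)
    return (1 - b1 / b0**2 * L / t
            + (b1**2 * (L**2 - L - 1) + b0 * b2) / (b0**4 * t**2)) / (b0 * t)

# e.g. the evolution factor of the quark condensate from 1 GeV to mu
n_f, mu = 4, 1.275
ratio = (alpha_s(1.0, n_f) / alpha_s(mu, n_f)) ** (12 / (33 - 2 * n_f))
print(alpha_s(1.0, n_f), alpha_s(mu, n_f), ratio)
```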
At the hadron side, we take \(m_{\pi}=0.13957\,\mathrm{GeV}\), \(m_{J/\psi}=3.0969\,\mathrm{GeV}\), \(f_{\pi}=0.130\,\mathrm{GeV}\) from the Particle Data Group [68], \(m_{D^{*}}=2.01\,\mathrm{GeV}\), \(m_{D}=1.87\,\mathrm{GeV}\), \(f_{D^{*}}=263\,\mathrm{MeV}\), \(f_{D}=208\,\mathrm{MeV}\), \(s_{D^{*}}^{0}=6.4\,\mathrm{GeV}^{2}\), \(s_{D}^{0}=6.2\,\mathrm{GeV}^{2}\)[70], \(f_{J/\psi}=0.418\,\mathrm{GeV}\)[71], \(m_{Y}=4.24\,\mathrm{GeV}\), \(\lambda_{Y}=2.31\times 10^{-2}\,\mathrm{GeV}^{6}\)[20] from the QCD sum rules, and \(f_{\pi}m_{\pi}^{2}/(m_{u}+m_{d})=-2\langle\bar{q}q\rangle/f_{\pi}\) from the Gell-Mann-Oakes-Renner relation. In calculations, we fit the free parameters to be \(C_{A}=0.00011(T^{2}-4\,\mathrm{GeV}^{2})\,\mathrm{GeV}^{5}\) and \(C_{B}=-0.0000084(T^{2}-4\,\mathrm{GeV}^{2})\,\mathrm{GeV}^{5}\), \(C_{C}=0.000179(T^{2}-4\,\mathrm{GeV}^{2})\,\mathrm{GeV}^{6}\), and \(C_{D}=0.0015(T^{2}-4\,\mathrm{GeV}^{2})\,\mathrm{GeV}^{4}\) to obtain uniform flat Borel platforms \(T_{max}^{2}-T_{min}^{2}=1\,\mathrm{GeV}^{2}\) (just like in our previous works [55, 56, 57, 60, 61, 62, 63, 64]) via trial and error, where the max and min denote the maximum and minimum values, respectively. The Borel windows are \(T_{A}^{2}=(6.7-7.7)\,\mathrm{GeV}^{2}\), \(T_{B}^{2}=(6.1-7.1)\,\mathrm{GeV}^{2}\), \(T_{C}^{2}=(6.4-7.4)\,\mathrm{GeV}^{2}\) and \(T_{D}^{2}=(4.2-5.2)\,\mathrm{GeV}^{2}\), where the subscripts \(A\), \(B\), \(C\) and \(D\) denote the corresponding QCD sum rules, and the uncertainties \(\delta G_{A/B/C/D}\) coming from the Borel parameters \(T^{2}\) are less than \(0.01\,(\mathrm{GeV}^{-1}/\mathrm{GeV}/\mathrm{GeV}^{-2})\). In Fig.1, we plot the \(G_{A}\), \(G_{B}\), \(G_{C}\) and \(G_{D}\) with respect to variations of the Borel parameters. In the Borel windows, there appear very flat platforms in all channels indeed, so it is reasonable and reliable to extract the hadronic coupling constants. If we take the symbol \(\xi\) to stand for the input parameters at the QCD side (generally speaking, all the uncertainties originate from the QCD parameters), then the variations \(\bar{\xi}\to\bar{\xi}+\delta\xi\) result in the variations \(\bar{\lambda}_{Y}\bar{f}_{D^{*}}\bar{f}_{\bar{D}^{*}}\bar{G}_{A/B}\to\bar{\lambda}_{Y}\bar{f}_{D^{*}}\bar{f}_{\bar{D}^{*}}\bar{G}_{A/B}+\delta\,\bar{\lambda}_{Y}\bar{f}_{D^{*}}\bar{f}_{\bar{D}^{*}}\bar{G}_{A/B}\), \(\bar{C}_{A/B}\to\bar{C}_{A/B}+\delta C_{A/B}\), \[\delta\,\bar{\lambda}_{Y}\bar{f}_{D^{*}}\bar{f}_{\bar{D}^{*}}\bar{G}_{A/B} = \bar{\lambda}_{Y}\bar{f}_{D^{*}}\bar{f}_{\bar{D}^{*}}\bar{G}_{A/B}\left(\frac{\delta f_{D^{*}}}{\bar{f}_{D^{*}}}+\frac{\delta f_{\bar{D}^{*}}}{\bar{f}_{\bar{D}^{*}}}+\frac{\delta\lambda_{Y}}{\bar{\lambda}_{Y}}+\frac{\delta G_{A/B}}{\bar{G}_{A/B}}\right)\,, \tag{25}\] where the short overline \(\bar{\phantom{0}}\) on all the parameters denotes the central values. Direct calculations indicate that we can set \(\delta C_{A/B}=0\) and \(\frac{\delta f_{D^{*}}}{f_{D^{*}}}=\frac{\delta f_{\bar{D}^{*}}}{f_{\bar{D}^{*}}}=\frac{\delta\lambda_{Y}}{\lambda_{Y}}=\frac{\delta G_{A/B}}{G_{A/B}}\) approximately. The hadronic coupling constants \(G_{C}\) and \(G_{D}\) are treated in the same way. In fact, we cannot set \(\delta C=0\) approximately in all QCD sum rules; if such situations occur, we have to take account of the uncertainties \(\delta C\). Now we obtain the hadronic coupling constants routinely, \[G_{A} = 8.69\pm 0.27\,{\rm GeV}^{-1}\,,\] \[G_{B} = 0.51\pm 0.03\,{\rm GeV}^{-1}\,,\] \[G_{C} = 12.15\pm 0.45\,,\] \[G_{D} = 18.42\pm 0.82\,{\rm GeV}^{-2}\,.
\tag{26}\] Then we calculate the partial decay widths by taking the hadron masses \(m_{Y}=4.2225\,{\rm GeV}\), \(m_{D^{*-}}=2.01026\,{\rm GeV}\), \(m_{D^{*0}}=2.00685\,{\rm GeV}\), \(m_{D^{0}}=1.86484\,{\rm GeV}\), \(m_{J/\psi}=3.09690\,{\rm GeV}\), and \(m_{\pi}=0.13957\,{\rm GeV}\), \(m_{K}=0.493677\,{\rm GeV}\) from the Particle Data Group [68], \[\Gamma\left(Y(4230)\to\bar{D}^{*}D^{*}\pi^{+}\right) = 0.068^{+0.017}_{-0.015}\,{\rm keV}\,,\] \[\Gamma\left(Y(4230)\to\bar{D}^{*}D\pi^{+}\right) = 0.12^{+0.01}_{-0.01}\,{\rm MeV}\,,\] \[\Gamma\left(Y(4230)\to J/\psi\pi^{-}\pi^{+}\right) = 0.31^{+0.03}_{-0.03}\,{\rm MeV}\,,\] \[\Gamma\left(Y(4230)\to J/\psi K^{-}K^{+}\right) = 0.013^{+0.001}_{-0.001}\,{\rm MeV}\,, \tag{27}\] where we set \(G_{E}=G_{D}\) in the light flavor \(SU(3)\) limit. The partial decay widths are much smaller than the average total width \(\Gamma=48\pm 8\,{\rm MeV}\) from the Particle Data Group [68]; it is obvious that the contact four-meson coupling constants lead to too small partial decay widths and disfavor observations of the \(Y(4230)\) in the three-meson final states. However, the decays \(Y(4230)\to\pi^{+}\pi^{-}J/\psi\)[1], \(\pi^{+}\pi^{-}h_{c}\)[10], \(\pi^{+}D^{0}D^{*-}\)[12], \(K^{+}K^{-}J/\psi\)[13], \(D^{*-}D^{*0}\pi^{+}\)[14] have been observed experimentally.

Figure 1: The hadronic coupling constants with variations of the Borel parameters \(T^{2}\), where the \(A\), \(B\), \(C\) and \(D\) denote the \(G_{A}\), \(G_{B}\), \(G_{C}\) and \(G_{D}\), respectively.

We expect those decays take place through an intermediate meson, \[Y(4230) \rightarrow Z_{c}(3900/4020)^{\pm}\pi^{\mp}\to J/\psi\pi^{+}\pi^{-}\,,\,\pi^{+}\pi^{-}h_{c}\,,\,\pi^{\pm}(D\bar{D}^{*})^{\mp}\,,\,\pi^{\pm}(D^{*}\bar{D}^{*})^{\mp}\,,\] \[Y(4230) \rightarrow J/\psi f_{0}(500)\to J/\psi\pi^{+}\pi^{-}\,,\] \[Y(4230) \rightarrow \bar{D}^{*-}D_{0}^{*+}(2300)\rightarrow\bar{D}^{*-}D^{0}\pi^{+}\,,\] \[Y(4230) \rightarrow Z_{cs}(3985)^{\pm}K^{\mp}\to J/\psi K^{+}K^{-}\,, \tag{28}\] and we can search for the intermediate states and precisely measure the branching fractions, which may shed light on the nature of the \(Y\) states. In fact, the processes \(Y(4230)\to Z_{c}(3900)^{\pm}\pi^{\mp}\to J/\psi\pi^{+}\pi^{-}\) have been observed [52, 53]. We naively expect that the main decay channels of the vector tetraquark states are the two-body strong decays \(Y\to D\bar{D}\), \(D^{*}\bar{D}^{*}\), \(D\bar{D}^{*}\), \(D^{*}\bar{D}\), as they would take place through the Okubo-Zweig-Iizuka super-allowed fall-apart mechanism. If we assign the \(Y(4230)\) as a \(\chi_{c0}\rho^{0}\) molecule, it is easy to interpret why the decay \(Y(4230)\to\pi^{+}\pi^{-}J/\psi\) has a larger branching fraction than the decay \(Y(4230)\to D\bar{D}\), which has not been observed yet [46]. Furthermore, it is a direct consequence that the decay mode \(Y(4230)\to\pi^{+}\pi^{-}J/\psi\) is more favorable than the mode \(Y(4230)\to K\bar{K}J/\psi\). However, it is difficult (though not impossible) to interpret the observation of the \(Y(4230)\) in the \(\bar{D}^{*}D^{*}\pi^{+}\) or \(\bar{D}^{*}D\pi^{+}\) or \(Z_{c}^{\pm}(3900)\pi^{\mp}\) invariant mass spectrum.
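To make the phase-space content of Eq. (27) concrete, a schematic estimate can be obtained from the standard Dalitz-plot formula \(d\Gamma=\frac{1}{(2\pi)^{3}}\frac{1}{32M^{3}}\overline{|A|^{2}}\,dm_{12}^{2}\,dm_{23}^{2}\). The Python sketch below uses a constant amplitude \(|A|=G_{C}\) for \(Y(4230)\to\bar{D}^{*-}D^{0}\pi^{+}\) as a crude stand-in, ignoring the polarization and momentum structure of the actual vertex, so the number is indicative only and is not the calculation performed in the paper:

```python
import numpy as np

# Schematic three-body width via the Dalitz-plot formula
#   dGamma = |A|^2 / (32 (2*pi)^3 M^3) dm12^2 dm23^2
# with a CONSTANT |A| = G_C (an illustrative assumption).
M, m1, m2, m3 = 4.2225, 2.01026, 1.86484, 0.13957  # GeV: Y, D*-, D0, pi+
G = 12.15                                          # dimensionless G_C

def m23sq_limits(m12sq):
    # standard kinematic boundaries of the Dalitz plot
    m12 = np.sqrt(m12sq)
    E2 = (m12sq - m1**2 + m2**2) / (2 * m12)
    E3 = (M**2 - m12sq - m3**2) / (2 * m12)
    p2 = np.sqrt(max(E2**2 - m2**2, 0.0))
    p3 = np.sqrt(max(E3**2 - m3**2, 0.0))
    return (E2 + E3)**2 - (p2 + p3)**2, (E2 + E3)**2 - (p2 - p3)**2

m12sq = np.linspace((m1 + m2)**2, (M - m3)**2, 20000)
area = np.trapz([hi - lo for lo, hi in map(m23sq_limits, m12sq)], m12sq)
width = G**2 * area / (32 * (2 * np.pi)**3 * M**3)  # GeV
print(f"Gamma ~ {width * 1e3:.3f} MeV (schematic)")
```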
In Ref.[41], Chen et al. study the dipion invariant mass spectrum of the \(e^{+}e^{-}\to Y(4230)\to J/\psi\pi^{+}\pi^{-}\) process and the ratio of the cross sections \(\sigma(e^{+}e^{-}\to J/\psi K^{+}K^{-})/\sigma(e^{+}e^{-}\to J/\psi\pi^{+}\pi^{-})\), and observe that the \(SU(3)\) octet state plays a significant role in those transitions; the \(Y(4230)\) is neither a hybrid nor a conventional charmonium state, but has a sizeable \(\bar{D}D_{1}\) component, which, however, is not completely dominant. On the other hand, the calculations based on the QCD sum rules indicate that we cannot obtain a \(\bar{D}D_{1}\) molecular state having a mass as low as \(4.2\,\mathrm{GeV}\)[40]. The situation is very complex, as there may exist mixing effects [45]. Recently, the BESIII collaboration measured the Born cross sections of the process \(e^{+}e^{-}\to D_{s}^{*}\bar{D}_{s}^{*}\) at center-of-mass energies from threshold to \(4.95\,\mathrm{GeV}\) with high precision for the first time, and observed two resonance structures around \(4.2\) and \(4.4\) GeV, respectively [72]. The fitted Breit-Wigner masses are \(4186.5\pm 9.0\pm 3\,\mathrm{MeV}\) and \(4414.5\pm 3.2\pm 6.0\,\mathrm{MeV}\), and the widths are \(55\pm 17\pm 53\,\mathrm{MeV}\) and \(122.6\pm 7.0\pm 8.2\,\mathrm{MeV}\). If they are not the \(\psi(4160)\) and \(\psi(4415)\), there are some contributions from the \(Y(4230)\), which indicates that the \(Y(4230)\) couples more strongly to the \(D_{s}^{*}\bar{D}_{s}^{*}\) mode than to the modes with charmonium states, as the cross section of the process \(e^{+}e^{-}\to D_{s}^{*}\bar{D}_{s}^{*}\) at \(4.23\) GeV is roughly one order of magnitude higher than that of the process \(e^{+}e^{-}\to J/\psi\pi^{+}\pi^{-}\); then the \(Y(4230)\) should have some \(\bar{c}c\bar{s}s\) components at least. So exploring the processes \(Y(4230)\to D\bar{D}\), \(D^{*}\bar{D}^{*}\), \(D\bar{D}^{*}\), \(D^{*}\bar{D}\) is of great importance.

## 4 Conclusion

In our previous works, we have proven that a vector tetraquark configuration with a relative P-wave between the scalar diquark pair could reproduce the mass of the \(Y(4230)\), the lowest vector tetraquark mass obtained so far. In the present work, we extend our previous works to investigate the three-body strong decays \(Y(4230)\to\bar{D}^{*-}D^{*0}\pi^{+}\), \(\bar{D}^{*-}D^{0}\pi^{+}\), \(J/\psi\pi^{+}\pi^{-}\) and \(J/\psi K^{+}K^{-}\) with the light-cone QCD sum rules by assuming contact four-meson coupling constants. We introduce free parameters to parameterize the higher resonance contributions so as to acquire rigorous quark-hadron duality, and obtain four QCD sum rules for the hadronic coupling constants; then we vary the free parameters to obtain flat Borel platforms, thereby extracting the values of the four-meson hadronic coupling constants. Finally, we obtain the corresponding partial decay widths, which are too small to account for the experimental data. We expect that those decays take place through an intermediate meson to overcome this dilemma; we can search for the intermediate states and precisely measure the branching fractions, which may shed light on the nature of the \(Y\) states. Furthermore, we expect to search for the vector tetraquark states in the two-body strong decays \(Y\to D\bar{D}\), \(D^{*}\bar{D}^{*}\), \(D\bar{D}^{*}\), \(D^{*}\bar{D}\), as they would take place through the Okubo-Zweig-Iizuka super-allowed fall-apart mechanism.

## Acknowledgements

This work is supported by National Natural Science Foundation, Grant Number 12175068.
2308.08984
Hybrid Classical/Machine-Learning Force Fields for the Accurate Description of Molecular Condensed-Phase Systems
Electronic structure methods offer in principle accurate predictions of molecular properties, however, their applicability is limited by computational costs. Empirical methods are cheaper, but come with inherent approximations and are dependent on the quality and quantity of training data. The rise of machine learning (ML) force fields (FFs) exacerbates limitations related to training data even further, especially for condensed-phase systems for which the generation of large and high-quality training datasets is difficult. Here, we propose a hybrid ML/classical FF model that is parametrized exclusively on high-quality ab initio data of dimers and monomers in vacuum but is transferable to condensed-phase systems. The proposed hybrid model combines our previous ML-parametrized classical model with ML corrections for situations where classical approximations break down, thus combining the robustness and efficiency of classical FFs with the flexibility of ML. Extensive validation on benchmarking datasets and experimental condensed-phase data, including organic liquids and small-molecule crystal structures, showcases how the proposed approach may promote FF development and unlock the full potential of classical FFs.
Moritz Thürlemann, Sereina Riniker
2023-08-17T13:47:10Z
http://arxiv.org/abs/2308.08984v1
Hybrid Classical/Machine-Learning Force Fields for the Accurate Description of Molecular Condensed-Phase Systems

###### Abstract

Electronic structure methods offer in principle accurate predictions of molecular properties, however, their applicability is limited by computational costs. Empirical methods are cheaper, but come with inherent approximations and are dependent on the quality and quantity of training data. The rise of machine learning (ML) force fields (FFs) exacerbates limitations related to training data even further, especially for condensed-phase systems for which the generation of large and high-quality training datasets is difficult. Here, we propose a hybrid ML/classical FF model that is parametrized exclusively on high-quality _ab initio_ data of dimers and monomers in vacuum but is transferable to condensed-phase systems. The proposed hybrid model combines our previous ML-parametrized classical model with ML corrections for situations where classical approximations break down, thus combining the robustness and efficiency of classical FFs with the flexibility of ML. Extensive validation on benchmarking datasets and experimental condensed-phase data, including organic liquids and small-molecule crystal structures, showcases how the proposed approach may promote FF development and unlock the full potential of classical FFs.

## 1 Introduction

An accurate description of the physical interactions between atoms in condensed-phase molecular systems remains one of the biggest challenges in computational chemistry. Electronic structure methods are in principle able to describe properties of such systems reliably [1, 2]. However, access to long time-scales and large systems is severely limited by the associated computational cost. Due to the computational complexity of electronic structure methods, this issue is unlikely to be resolved solely by additional computational power in the near future [3]. As a solution, approximate methods, such as force fields (FFs) [4] or semi-empirical quantum chemistry methods, have been developed [5, 6]. Especially FFs enable routine access to large systems at microsecond time scales [7]. However, approximations inherent to FFs and semi-empirical methods limit their ability to describe certain interactions, for instance polarization [8]. With the development of machine learning (ML) potentials during the last decade (see e.g. Refs. [9, 10, 11, 12]), a new paradigm has emerged for the computational study of atomic systems. Thanks to the fast-paced development of underlying architectures, ML potentials now routinely achieve errors on training sets and validation sets comparable to the errors of the reference method itself [13, 14, 15, 16, 17, 18, 19]. However, existing models are still limited by their robustness for long prospective simulations, transferability, and computational cost [20, 21]. Especially the ability to transfer from small systems in vacuum, i.e., monomers and dimers, to diverse condensed-phase systems has, to our knowledge, not been demonstrated yet. In practice, extending the sampling of accurate electronic structure methods with ML could be one of the most interesting use cases for ML potentials [22, 23]. Transferability from the gas phase to the condensed phase is essential due to the computational cost associated with the generation of large training sets with highly accurate reference methods.
With increasingly accurate ML models, the quality of the reference method becomes decisive, as the model itself will no longer be the leading error source. As an exemplary use case, special attention is given to molecular crystals in this work. Crystal structure prediction (CSP), i.e., the prediction of the spatial arrangement of atoms in the crystalline phase given a chemical or structural formula, is a long-standing challenge in the physical sciences [24, 25, 26, 27]. As demonstrated in the sixth CSP blind test [28], successful prediction and ranking of crystal structures do not hinge only on the ability to accurately predict the lattice energy. Instead, the importance of entropic contributions, and possibly to a lesser degree nuclear quantum effects, has emerged [29, 30, 31, 32, 33]. Obtaining a good estimate of these contributions requires, however, extensive sampling. In this study, we build on the developments and results proposed in previous work and extend the formalism proposed in Ref. [34]. As the most important addition, we introduce an ML-parametrized two-body potential to improve the description of short-range interactions. This two-body potential incorporates directional information through the use of static multipoles and induced dipoles. Such generic ML n-body potentials could greatly facilitate the development of potentials for situations where classical approximations break down or in cases where the derivation of an analytic functional form is difficult. At the same time, interpretability is retained to a large degree. In this work, particular emphasis is put on the transferability from small and isolated systems to large systems in the condensed phase. We argue that this size-transferability provides not only a strong signal that the model predicts interactions in accordance with underlying physical laws, but also enables parametrization on high-quality data, which is typically only available for small systems. At present, size-transferability is possibly the most overlooked property for ML potentials, which are either only trained and applied to small systems where such effects are not apparent, or only trained and applied to condensed-phase systems, possibly obscuring this limitation. To achieve this goal, the proposed model relies on existing classical models, which describe the interactions between atoms where possible, such as classical dispersion models or multipole electrostatics. ML comes into play (i) to parametrize these classical models, and (ii) to replace and correct the classical description. The former takes advantage of the automatic-differentiation-based parametrization framework described in previous work [34]. Automatic differentiation has emerged as a powerful tool in computational science, permitting efficient gradient-based parametrization of physical models [35, 36, 37]. The latter is used to introduce a higher degree of flexibility, which is necessary for situations where classical approximations break down, for instance at short distances and large overlaps.

## 2 Theory

### Model Overview

We assume a classical description of atomic interactions. Within this formalism, molecules are described as graphs with nodes corresponding to atoms and edges to covalent bonds. This notion allows for the definition of learned atom types following the formalism that we proposed in our previous work on graph neural network (GNN) parametrized FFs [34].
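To illustrate this graph notion (an explanatory sketch, not the authors' code), the following Python snippet builds such a molecular graph with RDKit and forms a discrete analogue of an order-1 atom type by aggregating each atom's element with the elements of its bonded neighbours; in the actual model, these types are continuous embeddings learned by a message passing GNN:

```python
from rdkit import Chem

# Molecular graph: nodes are atoms, edges are covalent bonds. The tuple
# (element, sorted neighbour elements) is a crude, hand-made stand-in for
# a learned order-1 atom type h^1.
mol = Chem.MolFromSmiles("CC(=O)O")  # acetic acid as an example input

neighbours = {atom.GetIdx(): [] for atom in mol.GetAtoms()}
for bond in mol.GetBonds():
    i, j = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()
    neighbours[i].append(j)
    neighbours[j].append(i)

for atom in mol.GetAtoms():
    h0 = atom.GetSymbol()  # order-0 "type": the element itself
    env = sorted(mol.GetAtomWithIdx(j).GetSymbol()
                 for j in neighbours[atom.GetIdx()])
    print(atom.GetIdx(), (h0, tuple(env)))  # order-1 "type"
```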
At the same time, the classical description permits a separation into _intermolecular_ and _intramolecular_ interactions. Taking advantage of this separation, an intramolecular potential was parametrized on energies, gradients, and multipoles (MBIS [39]) of isolated molecules at the PBE0/def2-TZVP level of theory [40, 41, 42]. For the treatment of intermolecular interactions, an additional separation of long-range and short-range interactions is introduced. We assume that long-range interactions, including electrostatics, polarization, and dispersion, are accurately captured by classical models using atomic multipoles and polarizabilities [43, 44], and the D3 dispersion correction [45, 46]. As these descriptions break down at short distances, a number of classical models have been put forward in recent years [47, 48], which resolve this limitation, for instance through the use of charge-penetration models for the description of short-range electrostatics [49]. Here, a pairwise ML potential is adopted as an alternative. Within a classical formalism, potentials can be classified according to the information used as input features (Figure 1). We follow a classification based on two fundamental dimensions: the degree of directional information (angular momentum) and the number of particles (many-body order) involved in the interaction. Thanks to their flexibility, ML potentials can be parametrized in a systematic manner according to the proposed categorization. In this work, we limit ourselves to an anisotropic pairwise ML potential, which is applied to intermolecular atom pairs at short distances in addition to dispersion, electrostatic, and polarization interactions. We will refer to this model as ANA2B, i.e., an anisotropic, non-additive FF in combination with a two-body ML potential. As input features, pairwise distances, atom types, and the interaction coefficients of static and induced multipoles are used. A more detailed description of these features is given in Section 2.5.3. The pairwise intermolecular interaction is trained on neutral systems of the DES5M dataset [50], which includes intermolecular potentials of small molecule dimers obtained with spin-network-scaled MP2 (SNS-MP2) [51, 52, 53]. At present, DES5M is the largest dataset of high-quality intermolecular interactions. Since datasets of similar quality are not available for condensed-phase systems, we limit ourselves to DES5M for intermolecular interactions of dimers and PBE0/def2-TZVP for intramolecular interactions of monomers (see Section 2.5), with the aim of developing a model that can transfer from these small systems to the condensed phase.

### Molecular Graphs and Atom Types

The notion of atom types used as part of the proposed model relies on the formalism proposed in Ref. [34]. This formalism makes use of atom types, which are learned from molecular graphs, i.e., graphs that do not include information about the geometry of a molecule but only its covalent bonds. Graphs were constructed in the same manner as described in Ref. [34]. We will refer to these molecular graphs as \(\mathcal{G}_{\text{Mol}}\). Atom types extracted from these molecular graphs are used as input features for subsequent tasks.

Figure 1: Classification of classical intermolecular interactions. (Left): Classical fixed-charge FF with point charges and a Lennard-Jones potential. (Middle): Classical polarizable FF such as AMOEBA [38] with the additional inclusion of polarization (\(\infty\), 1) and atomic multipoles (2, (0, 1, 2)). (Right): Model proposed in this work, which includes a three-body dispersion term (3, 1) and a pairwise ML potential (2, (0, 1, 2)) compared to the polarizable FF. The pairwise ML potential can account for directional interactions in a systematic manner.
The atom type of atom \(i\) is defined as an \(n\)-dimensional feature vector \(h_{i}^{n}\in\mathbf{R^{n}}\), where the superscript \(n\) indicates the order, i.e., \(h^{0}\) corresponds to the element itself, \(h^{1}\) to an atom type that incorporates information about the immediate neighbours, and so on. Atom types are learned as part of the training process with a message passing GNN as proposed in Ref. [54].

### Geometric Graphs

The models for the prediction of atomic multipoles and the correction to the intramolecular potential \(V_{\Delta\text{ML}}\) use geometric information. These graphs were constructed by including an edge between all atoms that were \(<5\,\text{\AA}\) apart. Following the approach described by Gasteiger _et al_. [55], distances were encoded with \(20\) Bessel functions and enveloped with a cutoff function to ensure a smooth cutoff. Element types were encoded as one-hot vectors serving as initial node features.

### Message Passing Graph Neural Networks

Given a molecular or geometric graph \(\mathcal{G}=(V,E)\) with nodes \(V\) and edges \(E\) as described above, message passing can be defined as [56, 57], \[h_{i}^{l+1}=\phi_{h}(h_{i}^{l},\sum_{j\in N(i)}\phi_{e}(h_{i}^{l},h_{j}^{l},u_{ij})), \tag{1}\] where \(h_{i}^{l}\in\mathbf{R^{n}}\) describes the hidden-feature vector of node \(v_{i}\) after \(l\) iterations, \(u_{ij}\in\mathbf{R^{n}}\) the edge feature of edge \(e_{ij}\) between node \(i\) and \(j\), and \(N(i)\) the set of neighbours of \(v_{i}\). \(\phi_{e}\) and \(\phi_{h}\) refer to edge and node update functions. The superscript \(l\) denotes the current message passing iteration, with \(n\) being the total number of message passing layers. In this context, the geometric and molecular graphs used in this work differ by the definition of \(N(i)\) and the edge feature \(u_{ij}\).

### Energy Decomposition

Essential to the ANA2B model is a decomposition of interactions, which aims to follow a physically motivated classical description of interatomic interactions where possible. Remaining interactions are treated as corrections parametrized by ML models. The decomposition achieves two goals: First, the total potential energy is separated into manageable pieces. Second, the resulting interactions are interpretable. Here, a brief description of the involved interaction terms is given. Based on the classical description assumed in this work, interactions are separated into purely intermolecular and purely intramolecular contributions, as well as dispersion interactions (D3), \[V_{\text{Total}}=V_{\text{Intra}}+V_{\text{Inter}}+V_{\text{D3}}. \tag{2}\] Dispersion interactions \(V_{\text{D3}}\) are described with the D3 dispersion correction [45] with Becke-Johnson damping with parameters for PBE0 [46, 58], and are applied to both intramolecular and intermolecular interactions. The purely intramolecular term \(V_{\text{Intra}}\) is described in the ANA2B model by an ML potential, hereafter referred to as \(V_{\Delta\text{ML}}\). This ML potential was trained on energies and gradients of small molecules using PBE0/def2-TZVP as the reference method. The purely intermolecular term \(V_{\text{Inter}}\) consists of \[V_{\text{Inter}}=V_{\text{ES}}+V_{\text{Pol}}+V_{\Delta\text{SR}}.
\tag{3}\] where \(V_{\text{Pol}}\) refers to the polarization energy and \(V_{\Delta\text{SR}}\) to the short-range two-body ML correction. A detailed description of the intermolecular terms is given in the following paragraphs.

#### 2.5.1 Electrostatics

Long-range intermolecular electrostatic interactions are described with atomic multipoles. We made use of our previously introduced formalism for the prediction of atomic multipoles [59] based on MBIS atomic multipoles [39] up to the quadrupole and atomic volumes. Here, the model is re-trained and improved based on the model architecture described in Ref. [60]. Implementation of the electrostatic interaction and Ewald summation follows the formalism outlined in Refs. [61, 62, 63, 64]. The interaction of point multipoles at site \(i\) and site \(j\) is described as [61], \[V_{\text{ES}}=\sum_{l=0}^{4}B_{l}(r_{ij})G^{l}(\vec{r}_{ij}). \tag{4}\] Note that intramolecular electrostatic interactions are contained in \(V_{\Delta\text{ML}}\) (see above).

#### 2.5.2 Polarization

A description of polarization is introduced through the Applequist model [43] including Thole damping [44] as the energy resulting from placing the molecule in the electric field produced by the static multipoles, \[V_{\text{Pol}}=-\frac{1}{2}\mu_{\text{Ind}}E_{\text{Static}}, \tag{5}\] where \(\mu_{\text{Ind}}\) refers to the self-consistently converged induced dipoles, and \(E_{\text{Static}}\) to the electric field produced by the static multipoles. \(E_{\text{Static}}\) is not damped and includes only intermolecular contributions. Induced dipoles \(\mu\) are obtained as \[\mu_{\text{Ind}}=\textbf{B}^{-1}E_{\text{Static}} \tag{6}\] via inversion of the \(3N\times 3N\) polarizability matrix \(\textbf{B}\)[65], \[\textbf{B}=\begin{cases}\alpha_{ij}^{-1}&\text{for }i=j\\ -\textbf{T}_{ij}&\text{for }i\neq j\end{cases} \tag{7}\] with the atomic polarizability \(\alpha_{i}\) and the dipole-dipole interaction tensor \(\textbf{T}_{ij}\). The elements \(\textbf{T}_{ij}\) are damped with the damping proposed by Thole, \[f_{\text{Thole}}=1-\exp(-au_{ij}^{3}) \tag{8}\] using a damping factor \(a\) and the polarizability-normalized distance \[u_{ij}=\frac{r_{ij}}{(\alpha_{i}\alpha_{j})^{\frac{1}{6}}}. \tag{9}\] The damping factor \(a\) is set to \(0.39\) as in the AMOEBA FF [38]. For the first-order polarization model ANA2B\({}^{1}\), \(\mu_{\text{Ind}}\) was obtained as \[\mu_{i,\text{Ind}}=\alpha_{i}\cdot E_{i,\text{Static}}, \tag{10}\] i.e., taking only the direct polarization into account. Thole damping is not applied to the direct polarization term. Periodic boundary conditions are introduced through the Ewald summation formalism described in Ref. [66]. The reciprocal space contribution is neglected for the mutual polarization term. Static atomic dipole polarizabilities are obtained as \[\alpha_{i}=\alpha_{0}\cdot\langle r^{3}\rangle\cdot\phi_{\alpha}(h_{i}^{2}), \tag{11}\] where \(\alpha_{0}\) is the polarizability of the isolated element, and \(\langle r^{3}\rangle\) the ratio between the atomic volume of the atom in the molecule and that of the isolated atom, analogous to the Tkatchenko-Scheffler model [67]. Finally, an atom-type-derived scaling factor \(\phi_{\alpha}(h_{i}^{2})\) is introduced to calibrate the polarizabilities with respect to the dataset published in Ref. [68]. Atomic volumes \(\langle r^{3}\rangle\) are predicted by the same model that predicts the atomic multipoles, i.e., for the isolated molecule, using MBIS atomic volumes [39] as the reference.
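As a minimal illustration of Eqs. (5)-(9), the Python sketch below builds the \(3N\times 3N\) matrix **B** and solves for the induced dipoles. It uses placeholder values, and it applies the scalar Thole factor of Eq. (8) uniformly to the dipole field tensor, whereas production implementations damp the \(r^{-3}\) and \(r^{-5}\) terms separately; it is a sketch, not the authors' implementation:

```python
import numpy as np

def induced_dipoles(pos, alpha, E_static, a=0.39):
    """Induced dipoles via mu = B^-1 E (Eq. 6) and V_pol = -1/2 mu.E (Eq. 5)."""
    N = len(alpha)
    B = np.zeros((3 * N, 3 * N))
    for i in range(N):
        B[3*i:3*i+3, 3*i:3*i+3] = np.eye(3) / alpha[i]   # diagonal: alpha^-1
        for j in range(N):
            if i == j:
                continue
            r = pos[j] - pos[i]
            d = np.linalg.norm(r)
            u = d / (alpha[i] * alpha[j]) ** (1.0 / 6.0)  # Eq. (9)
            f = 1.0 - np.exp(-a * u**3)                   # Eq. (8)
            T = f * (3 * np.outer(r, r) / d**5 - np.eye(3) / d**3)
            B[3*i:3*i+3, 3*j:3*j+3] = -T                  # Eq. (7)
    mu = np.linalg.solve(B, E_static.ravel()).reshape(N, 3)
    V_pol = -0.5 * np.sum(mu * E_static)
    return mu, V_pol

# toy example: two polarizable sites in a uniform field (arbitrary units)
pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.0]])
alpha = np.array([1.1, 1.1])
E = np.tile([0.0, 0.0, 0.01], (2, 1))
print(induced_dipoles(pos, alpha, E))
```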
#### 2.5.3 Short-Range Correction (\(\Delta\)SR) Instead of developing corrections for short-range phenomena such as charge penetration, an NN-parametrized pairwise interaction is proposed. This short-range pairwise potential is composed of the following terms, \[V_{\Delta\text{SR}}=V_{\text{Ex, Static}}+V_{\text{Ex, Ind}}+V_{\text{Att}}, \tag{12}\] and is applied to all intermolecular atom pairs within a distance of \(6.5\,\text{\AA}\). The repulsive terms \(V_{\text{Ex, Static}}\) and \(V_{\text{Ex, Ind}}\) build on the orbital overlap model proposed by Salem [69] and extended by Murrell _et al._ [70], which describes the exchange energy as a function of the orbital overlap \(S^{2}\), \[V_{\text{Ex}}=\frac{K_{1}S^{2}}{r}+\frac{K_{2}S^{2}}{r^{2}}. \tag{13}\] Attractive contributions to the short-range interaction due to charge transfer and charge-penetration effects are introduced as \[V_{\text{Att}}=-KS^{2}. \tag{14}\] Parameters for these interaction terms (coupling parameters \(K\) and overlaps \(S^{2}\)) are parametrized by an ML model. The input features are described in the following. * _Pairwise Atom Types_: Features obtained from molecular graphs \(\mathcal{G}_{\text{Mol}}\) described in Section 2.2 are symmetrized as \(h^{1}_{ij}=\phi_{h}(h^{1}_{i},h^{1}_{j})+\phi_{h}(h^{1}_{j},h^{1}_{i})\). For the short-range correction, only first-order atom types are used, referred to as \(h^{1}_{ij}\); restricting the features to first order avoids overfitting, as the multipoles already include information about the environment. * _Distance Features_: Distances are encoded with five Gaussians \(\exp{(-\alpha r^{2}_{ij})}\) with logarithmically spaced \(\alpha\in[0.1,1]\,\text{\AA}^{-2}\). Preliminary work (data not shown) indicated that Bessel functions, which are frequently used to encode distances, induce oscillations in the pairwise potential. The Gaussians are centered at \(0\) to avoid this behaviour. These features will be referred to as \(d_{ij}\). * _Anisotropic Features_: Anisotropy is introduced based on the atomic multipoles \(M^{k}\) of order \(k\) as the symmetrized multipole-multipole interaction coefficients [61], \[\begin{split} g_{0}&=M^{0}_{i}\cdot M^{0}_{j}\\ g_{1}&=M^{0}_{j}\cdot(M^{1}_{i,\alpha}\vec{r}_{ij,\alpha})-M^{0}_{i}\cdot(M^{1}_{j,\alpha}\vec{r}_{ij,\alpha})\\ g_{2}&=(M^{1}_{i,\alpha}\vec{r}_{ij,\alpha})\cdot(M^{1}_{j,\alpha}\vec{r}_{ij,\alpha})\\ g_{3}&=M^{1}_{i,\alpha}M^{1}_{j,\alpha}\\ g_{4}&=(M^{2}_{i,\alpha\beta}\vec{r}_{ij,\alpha})_{\beta}M^{1}_{j,\beta}-(M^{2}_{j,\alpha\beta}\vec{r}_{ij,\alpha})_{\beta}M^{1}_{i,\beta}\\ g_{5}&=M^{0}_{j}\cdot(M^{2}_{i,\alpha\beta}\vec{r}_{ij,\alpha\beta})+M^{0}_{i}\cdot(M^{2}_{j,\alpha\beta}\vec{r}_{ij,\alpha\beta})\\ g_{6}&=M^{2}_{i,\alpha\beta}M^{2}_{j,\alpha\beta}\\ g_{7}&=(M^{2}_{i,\alpha\beta}\vec{r}_{ij,\alpha})_{\beta}(M^{2}_{j,\alpha\beta}\vec{r}_{ij,\alpha})_{\beta}\\ g_{8}&=(M^{2}_{j,\alpha\beta}\vec{r}_{ij,\alpha\beta})\cdot(M^{1}_{i,\alpha}\vec{r}_{ij,\alpha})-(M^{2}_{i,\alpha\beta}\vec{r}_{ij,\alpha\beta})\cdot(M^{1}_{j,\alpha}\vec{r}_{ij,\alpha})\\ g_{9}&=(M^{2}_{i,\alpha\beta}\vec{r}_{ij,\alpha\beta})\cdot(M^{2}_{j,\alpha\beta}\vec{r}_{ij,\alpha\beta}).\end{split} \tag{15}\] In this context, scalar multiplication is indicated by \(\cdot\) and contractions are performed over the Cartesian components indicated by the Greek indices. \(\vec{r}_{ij,\alpha\beta}\) refers to the tensor product of the Euclidean vector \(\vec{r}_{ij,\alpha}=\vec{r}_{j}-\vec{r}_{i}\) with itself.
Vectors \(\vec{r}_{ij,\alpha}\) are normalized. Two types of features are used. The first type is calculated without inclusion of the induced dipoles, i.e., \((M^{0},M^{1},M^{2})\), whereas the second includes the contribution of the induced dipoles, i.e., \((M^{0},M^{1}+\mu_{\text{Ind}},M^{2})\). These features will be referred to as \(g_{ij,\text{Static}}\) and \(g_{ij,\text{Ind}}\), respectively. Using the above features, the orbital overlaps \(S^{2}\) are parametrized by an ANN \(\phi_{S^{2}}\) as \[\begin{split} S^{2}_{\text{Att}}&=\phi_{S^{2},\text{Static}}(h^{1}_{ij},d_{ij},g_{ij,\text{Static}})\\ S^{2}_{\text{Ex, Ind}}&=\phi_{S^{2},\text{Ind}}(h^{1}_{ij},d_{ij},g_{ij,\text{Ind}})\\ S^{2}_{\text{Ex, Static}}&=\phi_{S^{2},\text{Static}}(h^{1}_{ij},d_{ij},g_{ij,\text{Static}})\end{split} \tag{16}\] Overlaps sharing the same input features, i.e., \(S^{2}_{\text{Att}}\) and \(S^{2}_{\text{Ex, Static}}\), are predicted by the same model. Coupling constants \(K\) are predicted as \[K=\phi_{K}(h^{1}_{ij},d_{ij}), \tag{17}\] that is, without including the anisotropic features. Independent coupling constants are predicted for each term using the same model. Overlaps and Gaussian distance features are multiplied with a switching function to guarantee a smooth cutoff [71], \[\begin{split} f_{\text{Switch}}(x)&=1-6x^{5}+15x^{4}-10x^{3}\\ x(r)&=\frac{(r-r_{\text{switch}})}{(r_{\text{cut}}-r_{\text{switch}})}\end{split} \tag{18}\] with distance \(r\), cutoff \(r_{\text{cut}}\), and switching distance \(r_{\text{switch}}\). The switching distance is set to \(r_{\text{cut}}-1\,\text{\AA}\).
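For a single atom pair, the pieces of Eqs. (12)-(14) and (18) combine as in the following NumPy sketch. This is illustrative only: the coupling constants and overlaps stand in for the outputs of the ML models of Eqs. (16)-(17), and the static and induced exchange channels are collapsed into one.

```python
import numpy as np

def f_switch(r, r_cut, r_switch):
    """Smooth cutoff of Eq. (18); returns 1 below r_switch and 0 at r_cut."""
    x = np.clip((r - r_switch) / (r_cut - r_switch), 0.0, 1.0)
    return 1.0 - 6.0 * x**5 + 15.0 * x**4 - 10.0 * x**3

def v_delta_sr_pair(r, K1, K2, K_att, S2_ex, S2_att, r_cut=6.5):
    """Short-range pair energy for one intermolecular atom pair.
    K1, K2, K_att and the overlaps S2_* stand in for ML-model outputs;
    only one exchange channel is shown for brevity."""
    s = f_switch(r, r_cut, r_cut - 1.0)                   # overlaps are switched off
    v_ex = K1 * s * S2_ex / r + K2 * s * S2_ex / r**2     # Eq. (13)
    v_att = -K_att * s * S2_att                           # Eq. (14)
    return v_ex + v_att                                   # Eq. (12), one channel
```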
## 3 Methods ### 3.1 Models and Training Procedure Several ML models were used in this work. An overview is given in Figure 2. If not mentioned otherwise, ANN parametrized functions \(\phi\) were constructed from two fully connected feed-forward layers of size \(128\) using the Swish activation function [72]. The GNNs used to extract features of molecular graphs \(\mathcal{G}_{\text{Mol}}\) used a node-embedding and an edge-embedding layer and message passing layers consisting of a single feed-forward layer of size \(64\). Each model was trained separately on its respective target. If not noted otherwise, models were optimized with Adam [73] using a learning rate decaying exponentially from \(5\cdot 10^{-4}\) to \(1\cdot 10^{-5}\). Figure 2: Overview of the ANA2B model. Dotted lines refer to features that depend on the geometry while bold lines to features based on molecular graphs. Blue components refer to intermolecular interactions, red to intramolecular interactions, and grey to shared interactions. #### 3.1.1 Multipoles and Atomic Volumes MBIS multipoles at the PBE0/def2-TZVP level of theory were predicted using our previous formalism for an equivariant multipole GNN [59]. In addition, the MBIS atomic volume ratio was included. The message passing formalism described in Ref. [59] was replaced with the AMP formalism described in Ref. [60]. For training, the dataset generated in Ref. [59] was used and extended with conformations sampled with molecular dynamics (MD) to improve coverage of off-equilibrium conformations. MD simulations were performed with xTB (version 6.4.1) [6] using the GFN-1 Hamiltonian [74]. A seed conformation for MD was generated with the ETKDG conformation generator [75] as implemented in the RDKit [76]. MD simulations were carried out in the NVT ensemble for \(n\cdot 100\,\mathrm{ps}\), with integration steps of \(0.5\,\mathrm{fs}\) at \(800\,\mathrm{K}\) without any constraints. If not stated otherwise, default settings (sccacc\(=2\), hmass\(=4\,\mathrm{a.u.}\)) were used. \(n\) was determined based on the number of heavy atoms in the molecule (\(<\)5: \(n=16\), \(<\)7: \(n=8\), \(<\)11: \(n=4\), \(>\)10: \(n=2\)). Snapshots were written out every \(100\,\mathrm{ps}\). The \(n+1\) conformations, including the xTB GFN-1 minimum structure, obtained in this manner served as input for the following single-point calculations. Single-point gradients were evaluated for each structure with PBE0/def2-TZVP [40, 41, 42] using PSI4 (version 1.4) [77, 78]. MBIS multipoles [39] and volumes were obtained with PSI4 [79]. If not stated otherwise, default PSI4 settings were used (energy and density convergence threshold \(10^{-8}\,\mathrm{a.u.}\)). Data for \(1^{\prime}514^{\prime}462\) conformations for a total of \(451^{\prime}973\) unique molecules were obtained in this way. #### 3.1.2 ML Correction The ML correction was used to describe intramolecular interactions except for the contribution of the D3 dispersion model. The \(\Delta\)ML potential is based on the AMP architecture proposed in Ref. [60]. However, instead of a single set of multipoles, a total of \(32\) independent sets of multipoles up to the quadrupole were expanded on each atom. Note that these multipoles serve only as a tool to introduce directional interactions, unlike the electrostatic multipoles used for \(V_{\mathrm{ES}}\). Three message passing steps were employed with a cutoff of \(5\,\mathrm{\AA}\). The model was trained on the difference between the reference PBE0 potential energy and gradient as well as the potential energy and gradient of the bond-stretching and damped electrostatic interactions. The model was trained on the same dataset used to train the multipole model. The model was trained over \(2^{\prime}048\) epochs. Gradient norms were clipped to norm \(1\). During each epoch, \(1^{\prime}028\) samples were presented. Each sample consisted of a batch of all conformations of five molecules. The model was trained on weighted relative energies and gradients, \[\mathcal{L}_{\Delta\text{ML}}=w_{i}\cdot(1-\beta)\cdot(\Delta V_{\text{Ref}}-\Delta V_{\text{ML}})^{2}+\frac{\beta}{3N}\sum_{a}^{N}\sum_{\alpha}^{3}\left(\frac{\partial V_{\text{Ref}}}{\partial x_{a,\alpha}}-\frac{\partial V_{\text{ML}}}{\partial x_{a,\alpha}}\right)^{2}. \tag{19}\] \(\Delta V_{\text{ML}}\) and \(\Delta V_{\text{Ref}}\) refer to the relative energies, i.e., the difference between the energy of a conformation \(i\) and a conformation \(j\) serving as a reference point, \(\Delta V=V_{i}-V_{j}\). \(\beta\) was set to \(0.9\). Weights \(w_{i}\) were defined as \[w_{i}=\exp\left(-\frac{V_{min}-V_{i}}{k_{B}TN}\right), \tag{20}\] where \(V_{min}\) is the energy of the conformation with the lowest energy of a given molecule and \(N\) is the number of atoms. \(T\) was set to \(2^{\prime}000\,\mathrm{K}\). Only molecules with more than one possible conformation were used, and only conformations with negative atomization energies and maximum gradient components \(\leq 2^{\prime}000\,\mathrm{kJ/(mol\,\AA)}\) were retained.
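A compact NumPy sketch of the loss of Eqs. (19)-(20) for a single conformation is shown below; the variable names are placeholders, and the weight is written exactly as printed in Eq. (20).

```python
import numpy as np

K_B = 8.314462618e-3  # gas constant [kJ/(mol K)]

def delta_ml_loss(dv_ref, dv_ml, grad_ref, grad_ml, v, v_min, n_atoms,
                  beta=0.9, temperature=2000.0):
    """Weighted energy/gradient loss of Eq. (19) for one conformation.

    dv_ref, dv_ml     : relative energies w.r.t. a reference conformation
    grad_ref, grad_ml : (N, 3) Cartesian gradients
    v, v_min          : conformer energy and lowest conformer energy
    """
    w = np.exp(-(v_min - v) / (K_B * temperature * n_atoms))   # Eq. (20)
    energy_term = w * (1.0 - beta) * (dv_ref - dv_ml) ** 2
    grad_term = beta / (3.0 * n_atoms) * np.sum((grad_ref - grad_ml) ** 2)
    return energy_term + grad_term
```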
#### 3.1.3 Short-Range Correction The short-range pairwise potential \(V_{\Delta\text{SR}}\) was trained on the intermolecular potentials of dimers in vacuum from the DES5M dataset [50]. A cutoff of \(6.5\,\mathrm{\AA}\) was used for this interaction. As an exception, a cutoff of \(5.5\,\mathrm{\AA}\) was found to be optimal for the ANA2B\({}^{0}\) model, i.e., the model without any polarization interactions. The model was trained over \(512\) epochs. Gradient norms were clipped to norm \(1\). During each epoch, \(2^{\prime}048\) samples were presented. Each sample consisted of all configurations of a given dimer. The mean squared error (MSE) between the predicted intermolecular potential and the reference (SNS-MP2) [51, 52] was optimized. Performance on the S7L [80, 81] and S66x8 [82] datasets was used as a signal for early stopping. The mean absolute error (MAE) on a set of structures from X23 and ICE13 (CYTSIN01, URACIL, UREAXX12, HXMTAM10, CYHEXO, SUACCB03, CYANAAM01, PYRZOL05, OXALAC04, ammonia, CO\({}_{2}\) and ice polymorphs Ih and II) was used to select the final models. #### 3.1.4 Polarizabilities The model used to predict polarizability scaling factors from molecular graphs was trained on a dataset of CCSD molecular polarizabilities reported in Ref. [68]. The model was trained over \(512\) epochs. During each epoch, \(512\) randomly drawn samples consisting of a single molecule were presented. The model was optimized with respect to the MSE between the predicted molecular polarizability and the CCSD molecular polarizability. ### General Implementation Details All ML models were implemented in TensorFlow (version 2.11.0) [83]. The atomic simulation environment (ASE, version 3.22.1) [84] was used as MD engine, for optimization, and for general analysis tasks including the calculation of harmonic free energies and thermodynamic integration. MDTraj (version 1.9.8) [85] was used for post-processing and analysis tasks. For long-range electrostatic interactions and polarization, a real-space cutoff of \(10\,\mathrm{\AA}\) was used. The screening parameter \(\alpha\) for Ewald summations was set to \(0.292\) and \(0.215\) for the evaluation of the electrostatic interaction and the mutual polarization, respectively. Crystal structures were minimized with fixed lattice parameters. For MD simulations involving liquids, cutoffs for the D3 model were set to \(10\,\mathrm{\AA}\), \(5\,\mathrm{\AA}\), and \(10\,\mathrm{\AA}\) for the two-body term, the three-body term, and the coordination number, respectively. For calculations and MD simulations involving crystals, cutoffs for the D3 model were set to \(15\,\mathrm{\AA}\), \(8\,\mathrm{\AA}\), and \(15\,\mathrm{\AA}\) for the two-body term, the three-body term, and the coordination number, respectively. ### Molecular Dynamics (MD) Set-up Simulations of the pure liquids in the GROMOS 2016H66 dataset were performed with ASE [84]. Cubic boxes with an edge length of \(22\,\mathrm{\AA}\) were generated with packmol [86], followed by a pre-equilibration over \(10^{\prime}000\) steps at \(300\,\mathrm{K}\) with OpenFF (version 2.0) using OpenMM (version 8.0) [87, 88]. Equilibration and production runs were performed with an Andersen thermostat [89] at the simulation temperature described in the GROMOS 2016H66 publication [90] (\(298.15\,\mathrm{K}\) if not noted otherwise) and a Monte-Carlo barostat [91] with a target pressure of \(1\,\mathrm{bar}\). The integration step was set to \(0.5\,\mathrm{fs}\).
The equilibration was performed over \(2^{\prime}000\) steps (\(1\,\mathrm{ps}\)) using the respective ANA2B model with the collision frequency set to \(0.1\) and the barostat frequency set to \(10\). For the production run over \(100^{\prime}000\) steps (\(50\,\mathrm{ps}\)), the collision frequency was set to \(0.01\) and the barostat was applied every \(25\)th step. These runs were repeated three times with different random number seeds for the generation of the initial velocities. Ensemble properties were averaged over the last \(25\,\mathrm{ps}\). For the prediction of the heat of vaporization, monomers were simulated in the gas phase. These simulations were equilibrated over \(2^{\prime}000\) steps (\(1\,\mathrm{ps}\)) using a Berendsen thermostat [92] (\(\tau=10\,\mathrm{fs}\)) followed by a \(100^{\prime}000\)-step (\(50\,\mathrm{ps}\)) production run using a Langevin thermostat with a friction of \(1\,\mathrm{a.u.}\) Starting conformations were generated with the ETKDG conformation generator [75] in the RDKit [76]. Again, averages were taken over four replicates with different initial velocities. ### Ranking of Crystal Structures - CSP Blind Tests 3 and 5 For the third CSP blind test [93], all structures submitted by van Eijck were used (entries VIII, X, XI) [94, 95, 96]. For the fifth blind test [97], submissions of Neumann and co-workers were considered (entries XVI, XVII, XVIII) [98, 99, 100]. These submissions were selected because they contain in all cases a candidate structure that was considered a match with the experimental structure. Candidate structures were relaxed under fixed lattices using the (L)BFGS optimizer with a tolerance of \(1\,\mathrm{kJ/(mol\,\AA)}\) [101, 102, 103, 104, 105]. Lattice parameters were not minimized. Structures that did not converge within \(250\) steps were excluded. ### Ranking of Crystal Structures - CSP Blind Test 6 #### 3.5.1 Relaxation of Crystal Structures Lattices were relaxed with an external pressure of \(1\,\mathrm{bar}\) using an anisotropic Monte Carlo barostat [91] at \(0\,\mathrm{K}\). Subsequently, structures were relaxed under fixed lattices using the (L)BFGS optimizer with a tolerance of \(1\,\mathrm{kJ/(mol\,\AA)}\) [101, 102, 103, 104, 105]. #### 3.5.2 MD Simulations The NPT ensemble was sampled using an Andersen thermostat [89] and an anisotropic Monte Carlo barostat [91] at \(1\,\mathrm{bar}\) and a temperature of \(150\,\mathrm{K}\) (XXII) and \(300\,\mathrm{K}\) (XXIII, XXVI). The collision frequency was set to \(0.1\) and the barostat frequency was set to \(10\). Structures were equilibrated for \(1\,\mathrm{ps}\) followed by \(5\,\mathrm{ps}\) production runs. These simulations were used to obtain thermally expanded cells and the mean potential energy. #### 3.5.3 Gibbs Term The difference between the Helmholtz free energy \(F\) and the Gibbs free energy \(G\) was obtained as [106], \[\Delta_{F\to G}=P\langle V\rangle+k_{B}T\log\rho(h|P,T), \tag{21}\] with \(\langle V\rangle\) referring to the mean volume during the simulation and \(P\) to the pressure. The density \(\rho(h|P,T)\) was obtained through a kernel density estimation using Gaussian kernels with a width of \(0.1\). The density was estimated for the cell parameters \(h\). #### 3.5.4 Helmholtz Free Energy The harmonic Helmholtz free energy \(F_{H}\) was calculated with the phonon module implemented in ASE using the minimized structures from step one.
The phonon density of states was sampled on a uniform \(k\)-point grid of size \((20,20,20)\) using \(2^{\prime}000\) sampling points. #### 3.5.5 Thermodynamic Integration The anharmonic correction to the harmonic Helmholtz free energy \(F_{A}\) was obtained with a thermodynamic integration from the harmonic potential \(V_{H}\) to the unconstrained potential \(V_{A}\), \[\Delta_{H\to A}=\int_{0}^{1}\langle V_{A}-V_{H}\rangle_{\lambda}d\lambda, \tag{22}\] following the description in Ref. [107]. The harmonic potential was obtained from the numerically calculated Hessian of the relaxed structure using the lattice parameters with the highest likelihood. The thermodynamic integration was performed over eleven uniformly spaced \(\lambda\)-points. Numerical integration was performed with the trapezoidal rule. An initial equilibration over \(500\,\mathrm{fs}\) was performed, followed by \(100\,\mathrm{fs}\) of equilibration and \(1\,\mathrm{ps}\) of sampling at each lambda point. The NVT ensemble was sampled with an Andersen thermostat [89] at \(150\,\mathrm{K}\) (XXII) and \(300\,\mathrm{K}\) (XXIII, XXVI). ## 4 Results and Discussion The proposed ANA2B model was applied to a range of existing benchmarks to establish its level of accuracy. The datasets are categorized by their use as training, validation, or test set, and include intermolecular potential energies of dimers and lattice energies of molecular crystals and water ice. Particular attention is given to the role of polarization because preliminary results (data not shown) highlighted its importance. We have thus studied three variations of the ANA2B model: The first variation does not include any polarization interaction at all and will be referred to as ANA2B\({}^{0}\). The second variation, labelled ANA2B\({}^{1}\), includes only the polarization stemming from the direct field, i.e., neglecting the mutual polarization. The third variation, labelled ANA2B\({}^{\infty}\), includes a full treatment of the direct and mutual polarization terms. At present, all models were only trained on and applied to neutral molecules consisting of the elements H, C, N, O, F, S, and Cl. ### Monomers in Vacuum #### 4.1.1 Performance on Training and Validation Sets A dataset of small molecules, covering potential energies, gradients, atomic multipoles, and atomic volume ratios at the PBE0/def2-TZVP level of theory, was used to train the intramolecular potential. The construction of this dataset is discussed in Section 3.1. Table 1 reports the errors for the gradients and relative energies for the training set and validation set. #### 4.1.2 Performance on Test Set The following section reports the performance of the intramolecular ML potential on several computational benchmark datasets of conformation energies. Overall, we find that our model performs comparably to the reference method (PBE0-D3BJ/def2-TZVP), with MAE values that are typically larger by a few tenths of a kJ/mol. These results justify, on the one hand, the decision to use the ML potential in place of the DFT calculation and suggest, on the other hand, that ML potentials might overall be able to substitute for DFT in many situations. At the same time, datasets such as PCONF clearly display how the ML potential 'inherits' the accuracy of the method used to generate the training set.
\begin{table} \begin{tabular}{l c c|c} \hline Name & N & Type & \(\Delta\)ML\({}_{\text{intra}}\) \\ \hline \(\Delta\)Energy & 1’398’301 & Train & 0.5 \\ Gradient & & Train & 0.8 \\ \(\Delta\)Energy & 79’369 & Validation & 0.6 \\ Gradient & & Validation & 0.8 \\ \hline \end{tabular} \end{table} Table 1: Mean absolute error (MAE) in [kJ/mol] for the training set and validation set of energies and gradients of small molecules in vacuum. Errors for relative energies, i.e., with respect to the energy of a reference conformation, are reported. Three outlier conformations were excluded due to highly deformed structures being present. More detailed error statistics are provided in Table S1 in the Supporting Information. \begin{table} \begin{tabular}{l c|c c c} \hline Name & \(N\) & Type & Intra ML & PBE0-D3 \\ \hline Glucose [108] & 205 & Test & 2.5 & 2.3 \\ Maltose [108] & 223 & Test & 2.7 & 1.9 \\ SCONF [109] & 17 & Test & 1.6 & 1.1 \\ PCONF [110] & 10 & Test & 6.7 & 6.2 \\ ACONF [111] & 15 & Test & 0.5 & 0.2 \\ CYCONF [112] & 15 & Test & 2.7 & 2.7 \\ \hline \end{tabular} \end{table} Table 2: Mean absolute error (MAE) in [kJ/mol] for the intramolecular ML potential used in this work for benchmarks of conformation energies (test sets). \(N\) is the number of data points per dataset. The method used to generate the training data (PBE0-D3) is shown as a comparison. More detailed error statistics are provided in Table S2 in the Supporting Information. ### Dimers in Vacuum #### 4.2.1 Performance on Training Set Table 3 displays MAEs for the full training set (DES5M). In all cases, the prediction error of around 2.0 kJ/mol is below the 'chemical accuracy' level of \(4.184\,\mathrm{kJ/mol}\). If only near-equilibrium structures (\(<\)10 kJ/mol) are considered, the MAE drops further to \(0.5\,\mathrm{kJ/mol}\). For a subset of 370'000 dimer configurations (DES370K), CBS-extrapolated CCSD(T) reference data exists, which was used to train the SNS-MP2 model [50] applied to the remaining DES5M dataset. Compared to SNS-MP2 itself (0.2 kJ/mol for DES370K and 0.1 kJ/mol for DES370K\(<\)10 kJ/mol [50]), the ANA2B\({}^{\infty}\) model introduces an additional error of 0.9 kJ/mol. On near-equilibrium structures (DES370K\(<\)10 kJ/mol), our model introduces only an additional \(0.4\,\mathrm{kJ/mol}\) error compared to the error between SNS-MP2 and CCSD(T)/CBS. #### 4.2.2 Performance on Validation Set The S66x8 [82] and S7L [80] datasets were used as an early-stopping signal during training of the ANA2B models. While only small differences are found for the small-molecule dimers in the S66x8 dataset, a considerably larger MAE is observed for the supramolecular systems in the S7L dataset. These results are consistent with the results observed for the molecular crystals shown below in Subsection 4.3. Very large molecules and/or molecular clusters might thus be an adequate and cost-efficient substitute to train and validate size-transferable ML potentials in the absence of condensed-phase data. For the S7L structures, the PNO coupled cluster calculations of Ref. [81] were used. #### 4.2.3 Performance on Test Set Table 5 lists MAE values for 14 computational benchmark datasets of dimer interaction potentials. In most cases, errors for the three ANA2B models are comparable.
However, for datasets that contain highly polarizable systems, e.g., nucleobases in JSCH and ACHC, or for datasets with hydrogen-bonded systems, i.e., HB375x10, HB300SPXx10, and HBC1, the two models which include a treatment of polarization (ANA2B\({}^{1}\) and ANA2B\({}^{\infty}\)) perform better. \begin{table} \begin{tabular}{l c c|c c c} \hline Name & N & Type & ANA2B\({}^{0}\) & ANA2B\({}^{1}\) & ANA2B\({}^{\infty}\) \\ \hline S66x8 [82] & 528 & Validation & 1.3 & 0.8 & 0.8 \\ S7L [80, 81] & 7 & Validation & 21.1 & 2.0 & 2.3 \\ \hline \end{tabular} \end{table} Table 4: Mean absolute error (MAE) in [kJ/mol] for the validation sets S66x8 [82] and S7L [80, 81]. \(N\) is the number of data points per dataset. More detailed error statistics are provided in Table S4 in the Supporting Information. \begin{table} \begin{tabular}{l c|c c c c} \hline Name & \(N\) & Type & ANA2B\({}^{0}\) & ANA2B\({}^{1}\) & ANA2B\({}^{\infty}\) \\ \hline DES5M [50] & 4’034’267 & Train & 1.9 & 2.0 & 2.0 \\ DES5M\(<\)10kJ/mol [50] & 3’255’535 & Train & 0.5 & 0.5 & 0.5 \\ DES370K [50] & 269’611 & Train\({}^{1}\) & 1.2 & 1.2 & 1.1 \\ DES370K\(<\)10kJ/mol [50] & 235’958 & Train\({}^{1}\) & 0.5 & 0.5 & 0.5 \\ \hline \end{tabular} \end{table} Table 3: Mean absolute errors (MAE) in [kJ/mol] for the training set of DES5M (SNS-MP2) and the subset DES370K (CCSD(T)/CBS). \({}^{1}\) For the DES370K subset, MAE values with respect to the CCSD(T)/CBS reference are reported. The models were trained on the full SNS-MP2 dataset (DES5M). \(N\) is the number of data points per dataset. More detailed error statistics are provided in Table S3 in the Supporting Information. ### Molecular Crystals To assess whether ANA2B can transfer from dimers and monomers in vacuum to condensed-phase systems, the model was applied to the prediction of lattice energies of molecular crystals. Table 6 and Figure 3 show the results for the corrected experimental lattice energies of the X23 dataset [123, 124, 125] and for diffusion Monte Carlo lattice energies of water ice polymorphs [126]. Note that a subset of structures from X23 and ICE13 was used as validation structures to select the final model (see Sec. 3.1.3). Overall, the observed MAE is comparable to the most accurate dispersion-corrected DFT calculations reported so far. For example, a recent study by Price _et al._ [127] reported an MAE of \(2.0\,\mathrm{kJ/mol}\) using B86bPBE-25 in combination with the XDM dispersion correction. The same study also reported an MAE of \(0.8\,\mathrm{kJ/mol}\) for the ICE13 dataset. Note that a direct comparison with B86bPBE-25 is somewhat complicated by the fact that the lattice energies were obtained for structures minimized with a different method (B86bPBE). Finally, the MAE for the X23 dataset with an existing multipole FF for molecular crystals, FIT [128], is reported as \(9.2\,\mathrm{kJ/mol}\) [129]. This direct comparison indicates that the hybrid approach proposed in this work may present a way to unlock the full potential of classical FFs. The overall good performance of ANA2B compared to hybrid DFT methods is particularly interesting considering that hybrid DFT calculations are currently probably the most accurate approach feasible for relatively large-scale studies of condensed-phase systems. Taking into account the error of the reference method itself and the error resulting from the ML model underscores the importance of developing ML models which are transferable and thus able to take advantage of the high-quality data available for small systems.
In the case of the ice polymorphs, the importance of a description of polarization becomes evident. While the expensive treatment of mutual polarization (ANA2B\({}^{\infty}\)) results only in a small improvement of the MAE compared to ANA2B\({}^{1}\), a clear difference is observed with regard to the ranking of the ice polymorphs (Table 6): For the ANA2B\({}^{\infty}\) model, good agreement with the DMC reference is found, with a Spearman correlation coefficient \(r_{\text{Spearman}}\) of \(0.77\). For ANA2B\({}^{1}\), the ranking is considerably worse, with a slightly negative coefficient \(r_{\text{Spearman}}=-0.04\) (ANA2B\({}^{0}\): \(r_{\text{Spearman}}=-0.74\)). While water presents a unique case, which might exaggerate the importance of polarization, these results still show a clear trend. Including some description of the non-additive nature of polarization might thus be the most important ingredient required to achieve transferability from vacuum to the condensed phase. \begin{table} \begin{tabular}{l c c|c c c} \hline Name & N & Type & ANA2B\({}^{0}\) & ANA2B\({}^{1}\) & ANA2B\({}^{\infty}\) \\ \hline SSI [113] & 2’596 & Test & 0.6 & 0.7 & 0.6 \\ BBI [113] & 100 & Test & 1.0 & 0.7 & 0.7 \\ UBQ [114] & 81 & Test & 1.0 & 1.2 & 1.0 \\ ACHC [8] & 54 & Test & 4.8 & 2.2 & 1.0 \\ JSCH [115] & 123 & Test & 4.7 & 2.9 & 2.8 \\ HSG [116] & 16 & Test & 0.8 & 0.7 & 0.8 \\ HBC1 [117] & 58 & Test & 8.5 & 3.3 & 2.0 \\ S22 [115] & 22 & Test & 3.3 & 1.7 & 1.6 \\ S22\(\times\)7 [118] & 154 & Test & 5.9 & 3.0 & 2.8 \\ D1200 [119] & 482 & Test & 1.7 & 1.2 & 1.2 \\ D442\(\times\)10 [119] & 1’570 & Test & 1.9 & 1.6 & 1.5 \\ R739\(\times\)5 [120] & 1’615 & Test & 2.5 & 2.3 & 2.3 \\ HB375x10 [121] & 3’750 & Test & 1.9 & 1.4 & 1.4 \\ HB300SPX\(\times\)10 [122] & 1’210 & Test & 4.1 & 3.1 & 3.5 \\ \hline \end{tabular} \end{table} Table 5: Mean absolute error (MAE) in [kJ/mol] for intermolecular potential benchmarks of dimers in vacuum (test sets). \(N\) is the number of data points per dataset. More detailed error statistics are provided in Tables S5-S7 in the Supporting Information. ### Condensed-Phase Properties of Pure Liquids The reproduction of experimental condensed-phase properties of molecular liquids has been a long-standing goal for the parametrization and testing of classical FFs. Particularly for ML-based FFs, these properties are an interesting test case, as they require sufficient sampling in both the gas phase and the condensed phase. Here, we rely on a dataset that was used to parametrize and validate the GROMOS 2016H66 FF [90]. This dataset consists of a diverse set of \(57\) small molecules and several properties, including the heat of vaporization, density (\(\rho\)), isothermal compressibility (\(\kappa\)), thermal expansion coefficient (\(\alpha\)), and dielectric permittivity (\(\epsilon\)). We limit the analysis to the heat of vaporization and the density in this study due to the slow convergence of the other properties. Results with ANA2B\({}^{1}\) are shown in Table 7.
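For the heats of vaporization in Table 7, the gas-phase and liquid-phase simulations of Section 3.3 can be combined with the standard ideal-gas estimator. The relation below is not spelled out in the text and is shown here only as a plausible sketch.

```python
R = 8.314462618e-3  # gas constant [kJ/(mol K)]

def heat_of_vaporization(u_gas, u_liq, n_molecules, temperature):
    """Ideal-gas estimate: dH_vap = <U_gas> - <U_liq>/N + R*T.

    u_gas : mean gas-phase potential energy of one molecule [kJ/mol]
    u_liq : mean potential energy of the liquid box of N molecules [kJ/mol]
    """
    return u_gas - u_liq / n_molecules + R * temperature
```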
For both properties, we observe RMSE values comparable to the fixed-charge FFs (IPA and GROMOS 2016H66) shown in Figure 4 and Table 7, confirming the observation made for the prediction of lattice energies, i.e., transferability to the condensed phase is possible for the ANA2B\({}^{1}\) model. These results are particularly noteworthy as GROMOS 2016H66 was parametrized on these two properties. The slightly smaller error of IPA for the density might stem from the fact that its parametrization included molecular crystals, indicating that the prediction of densities could be improved by incorporating condensed-phase structures during training. Finally, we note that as the only exception, two of three simulations of ethylenediamine in the liquid phase crashed after \(24.3\) and \(31.6\,\mathrm{ps}\), respectively, with the ANA2B\({}^{1}\) model. \begin{table} \begin{tabular}{l c c|c c c c c} \hline Name & N & Type & ANA2B\({}^{0}\) & ANA2B\({}^{1}\) & ANA2B\({}^{\infty}\) & B86bPBE & B86bPBE-25 \\ \hline X23 [123, 124, 125] & 23 & Test\({}^{1}\) & 4.6 & 3.2 & 2.9 & 3.0 & 2.0 \\ DMC-ICE13 [126] & 13 & Test\({}^{1}\) & 8.3 & 1.4 & 1.3 & 7.5 & 0.8 \\ \hline \end{tabular} \end{table} Table 6: Mean absolute error (MAE) in [kJ/mol] for experimental (X23) and computationally (ICE13) derived lattice energies of molecular crystals in the test set. \(N\) is the number of data points per dataset. Results for B86bPBE and B86bPBE-25 with XDM dispersion correction are taken from Ref. [127]. B86bPBE-25 values were calculated with geometries relaxed at the B86bPBE level (B86bPBE-25//B86bPBE). \({}^{1}\)Errors on a subset of X23 (cytosine, uracil, urea, hexamethylenetetramine, cyclohexane-1,4-dione, succinic acid, cyanamide, pyrazole, ammonia, and CO\({}_{2}\)) and ICE13 (ice Ih, ice II) were used to select the final model. Graphical results are shown in Figure 3. More detailed error statistics are provided in Tables S8-S10 in the Supporting Information. Figure 3: Results for the lattice energies of systems in the X23 (top panels) and ICE13 datasets (bottom panels) with the ANA2B models. The left column (ANA2B\({}^{0}\)) refers to the model without any treatment of polarization, the middle column (ANA2B\({}^{1}\)) shows results for the model that includes only the direct polarization term, and the right column (ANA2B\({}^{\infty}\)) displays results for the model which includes a full treatment of polarization. Parity \(\pm\) 4.184 kJ/mol is indicated by the black lines. ### Crystal Structure Prediction Having established a level of accuracy in the previous sections, this last section is concerned with the application of the ANA2B\({}^{\infty}\) model to the (retrospective) ranking of molecular crystals. As targets, we use the structures that were part of the CSP blind tests 3 [93], 5 [97], and 6 [28] organized by the Cambridge Crystallographic Data Centre in the past. These blind tests were chosen due to the availability of all submitted candidates, allowing for the least biased assessment of the ability to find the experimental crystal structure given a list of candidates. We limit ourselves to the pure and neutral targets restricted to H, C, N, O, F, S, Cl. Target IX of the third blind test was excluded due to convergence issues. For the third and fifth blind tests, a ranking based on lattice energies is used. For the sixth blind test, we furthermore explore how additional contributions, such as entropic terms, impact the ranking.
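To make the free-energy-based ranking concrete, a minimal sketch of how the terms of Sections 3.5.3-3.5.5 can be assembled is given below. The additive composition \(G_{\mathrm{A}}=F_{\mathrm{H}}+\Delta_{H\to A}+\Delta_{F\to G}\) is our reading of the labels used in this section, not an explicit formula from the text, and consistent units are the caller's responsibility.

```python
import numpy as np

def anharmonic_correction(mean_dv_per_lambda):
    """Trapezoidal quadrature of Eq. (22); input are the ensemble averages
    <V_A - V_H>_lambda from MD runs at uniformly spaced lambda points."""
    lam = np.linspace(0.0, 1.0, len(mean_dv_per_lambda))
    return np.trapz(mean_dv_per_lambda, lam)

def gibbs_free_energy(f_harmonic, mean_dv_per_lambda, pressure, mean_volume,
                      kT, log_rho):
    """Assemble G_A(T): F_A = F_H + Delta_{H->A} (Eq. 22), then
    G_A = F_A + P<V> + k_B*T*log(rho) (Eq. 21)."""
    f_anharmonic = f_harmonic + anharmonic_correction(mean_dv_per_lambda)
    return f_anharmonic + pressure * mean_volume + kT * log_rho
```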
\begin{table} \begin{tabular}{l|c c c} \hline Property & IPA [34] & GROMOS 2016H66 [90] & ANA2B\({}^{1}\) \\ \hline H\({}_{\text{vap}}\) [kJ/mol] & 4.5 & 3.5 & 2.8 \(\pm\) 0.9 \\ \(\rho\) [kg/m\({}^{3}\)] & 26.3 & 32.4 & 33.9 \(\pm\) 6.2 \\ \hline \end{tabular} \end{table} Table 7: Root-mean-square error (RMSE) for pure-liquid properties of 57 systems used in the calibration and validation of the GROMOS 2016H66 FF [90]. Values for GROMOS 2016H66 and IPA were taken from the referenced publications. The uncertainty is given as the mean standard deviation obtained over four replicas. More detailed error statistics are provided in Table S11 in the Supporting Information. The individual numerical values are given in Table S12. Figure 4: Results for condensed-phase properties of \(57\) molecules used in the parametrization and validation of GROMOS 2016H66: density (left) and heat of vaporization (right). Black lines indicate equality \(\pm\) 50 kg/m\({}^{3}\) and \(\pm\) 4.184 kJ/mol. #### 4.5.1 CSP Blind Tests 3 and 5 Rankings for targets stemming from the third and fifth blind test are shown in Figure 5. Candidates for blind test three submitted by van Eijck were generated using random search [95]. Candidates for the fifth blind test submitted by Neumann _et al._ were generated using Monte Carlo parallel tempering [99]. In all cases, a match with the experimental structure (red) would have been found as the most stable structure or within a window of \(<1.3\,\mathrm{kJ/mol}\). Overall, these results underscore the strength of the proposed ML-augmented FF, which yields rankings that are in most cases comparable to rankings based on much more expensive methods such as system-tailored FFs [100] or DFT [130]. #### 4.5.2 CSP Blind Test 6 In previous work, Hoja _et al._ [30] presented a workflow to rank crystal structures of the 6th CSP blind test [28] in a hierarchical manner. They generated candidate structures first using the tailor-made FF developed by Neumann and co-workers [99, 100] and subsequently ranked them with increasingly accurate methods, including vibrational contributions in the final ranking. Here, we base our study on the candidate structures made available as part of their work [30], which include all known experimental structures. The exhaustive computational study by Hoja _et al._ has provided insight into the different contributions stemming from DFT on different levels of theory and vibrational contributions, which we can use for a comparison with our ANA2B\({}^{\infty}\) model. Rankings for the three pure systems XXII, XXIII, and XXVI are shown in Figures 6-8, based on the lattice energy (ANA2B\({}^{\infty}\) E(0K)), the harmonic Helmholtz free energy (ANA2B\({}^{\infty}\) F\({}_{\mathrm{H}}\)(T)), the Helmholtz free energy including anharmonic corrections (ANA2B\({}^{\infty}\) F\({}_{\mathrm{A}}\)(T)), the Gibbs free energy (ANA2B\({}^{\infty}\) G\({}_{\mathrm{A}}\)(T)), and the mean potential energy during a molecular dynamics simulation (ANA2B\({}^{\infty}\) E(T)). Rankings for dispersion-corrected PBE and PBE0 are taken from Ref. [30]. Figure 5: Stability ranking for the crystal structures of the compounds of the CSP blind tests 3 (VIII, X, XI) [93] and 5 (XVI, XVII, XVIII) [97] using the lattice energy predicted with the ANA2B\({}^{\infty}\) model. Each horizontal bar represents the stability of a structure with respect to the most stable structure. Red bars indicate experimental structures. The candidate structures were taken from the corresponding publications [93, 97].
Figure 6: Stability ranking for the crystal structure of compound XXII. Each horizontal bar represents the stability of a structure with respect to the most stable structure. The stability is given in kJ/mol per molecule. Candidate structures and rankings for dispersion-corrected PBE and PBE0 are taken from Ref. [30]. Experimental structures are marked in red. Figure 7: Stability ranking for the crystal structure of compound XXIII. Each horizontal bar represents the stability of a structure with respect to the most stable structure. The stability is given in kJ/mol per molecule. Candidate structures and rankings for dispersion-corrected PBE and PBE0 are taken from Ref. [30]. Experimental structures are marked in color. For compounds XXII and XXVI, the ANA2B\({}^{\infty}\) lattice energy ranks the experimental polymorph as the most stable (XXII) and the fifth most stable (XXVI) structure within a window of 2 kJ/mol. Interestingly, we do not observe a distinct benefit from the inclusion of corrections to the lattice energy based on entropic contributions. While in some cases a destabilization of non-experimental structures is observed, no clear improvement of the actual ranking is found. This surprising finding suggests that improving the accuracy of the predicted energy might be the highest priority for future work. A fine-tuning on high-quality data of crystalline energies and/or gradients could be a possible solution. Such a fine-tuning might be particularly important for systems where a fine balance between intramolecular and intermolecular interactions exists, i.e., most flexible molecules. A second interesting observation concerns compound XXIII, where the ANA2B\({}^{\infty}\) model fails to rank the experimental structures near the most stable candidate. This failure is most evident for polymorph A, which is in all cases ranked as one of the least stable structures. As the only exception, polymorph B is found within a window of a bit more than 5 kJ/mol. Importantly, several structures, most notably polymorph D and N70, could not be converged during the optimization or resulted in unstable MD simulations. In previous work [30], N70 was ranked as the most stable polymorph with PBE0\(+\)MBD\(+\)F\({}_{\text{vib}}\). Figure 8: Stability ranking for the crystal structure of compound XXVI. Each horizontal bar represents the stability of a structure with respect to the most stable structure. The stability is given in kJ/mol per molecule. Candidate structures and rankings for dispersion-corrected PBE and PBE0 are taken from Ref. [30]. Experimental structures are marked in red. Note that PBE0\(+\)MBD is not available for all polymorphs of this structure. \begin{table} \begin{tabular}{l|c c c c c c c} \hline Systems & a & b & c & \(\alpha\) & \(\beta\) & \(\gamma\) & Volume \\ \hline XXII-N2 & -0.78 & -0.49 & 1.11 & - & 0.76 & - & -0.66 \\ XXIII-A & -1.85 & -2.45 & 0.37 & - & -1.48 & - & -3.69 \\ XXIII-B & 2.55 & -0.49 & -4.92 & 3.97 & 1.56 & -1.49 & -3.47 \\ XXIII-C & -2.70 & -0.81 & -0.92 & 2.03 & 2.06 & 0.23 & -3.96 \\ XXIII-D & -2.20 & 0.63 & 0.99 & - & 2.07 & - & -2.47 \\ XXVI-N1 & -1.72 & -1.26 & -2.64 & 3.30 & 0.73 & 0.86 & -4.41 \\ \hline MAPE & 1.97 & 1.02 & 1.83 & 3.10 & 1.44 & 0.86 & 3.11 \\ \hline \end{tabular} \end{table} Table 8: Relative deviations in percent, \(((\text{pred.}-\text{exp.})/\text{exp.})\cdot 100\,\%\), from the experimental lattice cell parameters and volumes for the polymorphs minimized with ANA2B\({}^{\infty}\), and mean absolute percentage errors (MAPE).
Relative errors in percent of the lattice cell parameters with respect to the experimental structures are given in Table 8. A consistent underestimation of cell parameters and volumes is found, consistent with the results obtained for the densities of liquids. However, unlike for liquids sampled at finite temperatures, the underestimation of cell volumes might be explained partially by the optimization of cell parameters at \(0\,\mathrm{K}\). ## 5 Conclusion In the present work, we have introduced a hybrid classical/ML potential for the simulation of molecular systems. Our work demonstrates that the combination of classical potentials with specific ML-based corrections can result in highly accurate, interpretable, and transferable potentials. The classical description of atomic interactions can thereby profit from augmentation with ML, while ML can profit from the constraints imposed by classical models, especially for long-range electrostatics. The proposed hybrid approach could thus fill the existing methodological gap with a method that can reach the accuracy of DFT at a computational cost between classical FFs and semi-empirical methods while simultaneously improving the applicability of ML potentials. In the present work, particular attention was given to the development of an ML-based approach which can be used for condensed-phase systems but does not require reference data of such systems. However, the results for the crystal structure prediction indicate that the inclusion of some high-quality reference data of condensed-phase systems might be needed to fine-tune the balance between intramolecular and intermolecular interactions. Besides improving the efficiency and reducing the computational cost, possible avenues for future investigations could include the explicit treatment of three-body interactions with an ML potential or higher-order polarization. However, both of these options would result in significant additional computational costs. An alternative route might be the application within a semi-empirical model instead of a classical FF. In principle, the proposed pairwise ML potential could be applied to semi-empirical methods. Assuming that semi-empirical methods are able to accurately describe long-range interactions, a short-range pairwise potential might be able to largely resolve the limitations of semi-empirical models. This application might be particularly interesting for systems for which the classical approximations assumed in this work are not valid. In a similar vein, the pairwise potential could also be used to improve the description of interactions between the QM and MM particles in QM/MM simulations, which typically still rely on classical Lennard-Jones potentials. Overall, we anticipate that the proposed methods will significantly facilitate the parametrization of highly accurate FFs. ## Software and Data Availability All datasets used in this work are publicly available (see the corresponding references in the text). The dataset used to train the intramolecular potential is published as part of this work and can be found in the ETH Research Collection: [https://www.research-collection.ethz.ch/handle/20.500.11850/626683](https://www.research-collection.ethz.ch/handle/20.500.11850/626683). Code and model weights necessary to reproduce the results in this work are made available on GitHub: [https://github.com/rinikerlab/ANA2B](https://github.com/rinikerlab/ANA2B). ## Acknowledgements The authors thank Felix Pultar for helpful discussions.
2307.11488
Bi-Equivariant Fibrations
The lifting problem for continuous bi-equivariant maps and bi-equivariant covering homotopies is considered, which leads to the notion of a bi-equivariant fibration. An intrinsic characteristic of a bi-equivariant Hurewicz fibration is obtained. Theorems concerning a relationship between bi-equivariant fibrations and fibrations generated by them are proved.
Pavel S. Gevorgyan
2023-07-21T10:46:40Z
http://arxiv.org/abs/2307.11488v1
# Bi-equivariant fibrations ###### Abstract. The lifting problem for continuous bi-equivariant maps and bi-equivariant covering homotopies is considered, which leads to the notion of a bi-equivariant fibration. An intrinsic characteristic of a bi-equivariant Hurewicz fibration is obtained. Theorems concerning a relationship between bi-equivariant fibrations and fibrations generated by them are proved. Key words and phrases:binary \(G\)-space, distributive action, bi-equivariant covering homotopy, bi-equivariant fibration, \(H\)-fixed point, orbit space 2020 Mathematics Subject Classification: 54H15, 55R91 ## 1. Introduction The foundations of bi-equivariant topology were laid in [4], where the notions of a binary \(G\)-space and of a bi-equivariant map were introduced. The space \(H_{2}(X)\) of all continuous invertible binary operations on a locally compact locally connected space \(X\) with left multiplication \(AB(x,x^{\prime})=A(x,B(x,x^{\prime}))\) for \(A,B\in H_{2}(X)\) and \(x,x^{\prime}\in X\) is a topological group which acts binarily on the space \(X\). If a topological group \(G\) acts binarily and effectively on \(X\), then \(G\) is a subgroup of \(H_{2}(X)\) [2]. All binary \(G\)-spaces and bi-equivariant maps form a category, which is a natural extension of the category of \(G\)-spaces and equivariant maps. Set-theoretic questions related to bi-equivariant topology were also considered in [2]-[7]. This paper is devoted to the important problems of lifting continuous bi-equivariant maps and bi-equivariant covering homotopies, which lead to bi-equivariant fibrations. To study bi-equivariant Hurewicz fibrations, we introduce the notion of a bi-equivariant covering function and prove that a map \(p\colon E\to B\) has the bi-equivariant covering homotopy property with respect to any binary \(G\)-space \(X\) if and only if \(p\) has a bi-equivariant covering function. This is an intrinsic characteristic of a bi-equivariant Hurewicz fibration, because the definition of a bi-equivariant covering function for a map \(p\colon E\to B\) is not related to a binary \(G\)-space \(X\). The bi-equivariant covering homotopy property is preserved under the passage to closed subgroups of \(G\); i.e., a bi-equivariant Hurewicz \(G\)-fibration \(p\colon E\to B\) is also a bi-equivariant \(H\)-fibration. In the case where \(E\) and \(B\) are distributive binary \(G\)-spaces, a bi-equivariant surjective map \(p\colon E\to B\) induces a bi-equivariant map \(p^{H}\colon E^{H}\to B^{H}\) of the spaces of \(H\)-fixed points, which is a bi-equivariant Hurewicz \(G\)-fibration whenever \(p\colon E\to B\) is. In the case where \(G\) is a compact Abelian Lie group and \(E\) and \(B\) are distributive binary \(G\)-spaces, a bi-equivariant Hurewicz fibration \(p\colon E\to B\) induces a Hurewicz fibration \(p^{*}\colon E^{*}\to B^{*}\) of the orbit spaces. ## 2. Auxiliary Definitions and Results ### Binary \(G\)-Spaces and bi-equivariant Maps Let \(G\) be any topological group, and let \(X\) be any topological space. A continuous map \(\alpha\colon G\times X^{2}\to X\) is called a _binary action_ of the topological group \(G\) on the space \(X\) if, for any \(g,h\in G\) and \(x,x^{\prime}\in X\), \[gh(x,x^{\prime})=g(x,h(x,x^{\prime})), \tag{1}\] \[e(x,x^{\prime})=x^{\prime}, \tag{2}\] where \(e\) is the identity element of \(G\) and \(g(x,x^{\prime})=\alpha(g,x,x^{\prime})\).
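For orientation, consider a toy example (our illustration, not taken from the paper): let the additive group \(G=(\mathbb{R},+)\) act on \(X=\mathbb{R}\) by \[g(x,x^{\prime})=x^{\prime}+g\,f(x),\] where \(f\colon\mathbb{R}\to\mathbb{R}\) is any fixed continuous function. Then \(e(x,x^{\prime})=x^{\prime}+0\cdot f(x)=x^{\prime}\), and \[(g+h)(x,x^{\prime})=x^{\prime}+(g+h)f(x)=g(x,x^{\prime}+hf(x))=g(x,h(x,x^{\prime})),\] so conditions (1) and (2) hold. For non-constant \(f\), the value \(g(x,x^{\prime})\) genuinely depends on the first argument \(x\), so this binary action is not generated by a unary action.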
A triple \((G,X,\alpha)\) consisting of a space \(X\), a group \(G\), and a fixed binary action \(\alpha\) of \(G\) on \(X\) is called a _binary \(G\)-space_. Let \((G,X,\alpha)\) and \((G,Y,\beta)\) be binary \(G\)-spaces. A continuous map \(f\colon X\to Y\) is said to be _bi-equivariant_, or \(G\)_-bi-equivariant_, if \[f(\alpha(g,x,x^{\prime}))=\beta(g,f(x),f(x^{\prime})),\] or \[f(g(x,x^{\prime}))=g(f(x),f(x^{\prime})),\] for all \(g\in G\) and \(x,x^{\prime}\in X\). All binary \(G\)-spaces and bi-equivariant maps form a category, which we denote by Bi-\(G\)-TOP. Note that, given any \(G\)-space \((G,X,\alpha)\), the unary action \(\alpha\) on \(X\) generates the binary action \(\overline{\alpha}\) defined by \[\overline{\alpha}(g,x,x^{\prime})=\alpha(g,x^{\prime})\quad\text{or}\quad g(x,x^{\prime})=gx^{\prime} \tag{3}\] for all \(g\in G\) and \(x,x^{\prime}\in X\), and any equivariant map \(f:(G,X,\alpha)\to(G,Y,\beta)\) between \(G\)-spaces \(X\) and \(Y\) is bi-equivariant with respect to the action (3). Indeed, \[f(\overline{\alpha}(g,x,x^{\prime}))=f(\alpha(g,x^{\prime}))=\beta(g,f(x^{\prime}))=\overline{\beta}(g,f(x),f(x^{\prime})).\] Thus, the category Bi-\(G\)-TOP can be considered as a natural extension of the category \(G\)-TOP of all \(G\)-spaces and equivariant maps. Let \(H\) be a subgroup of a group \(G\). Then any binary \(G\)-space is also a binary \(H\)-space, and any \(G\)-bi-equivariant map is an \(H\)-bi-equivariant map. Thus, there exists a natural covariant functor from the category Bi-\(G\)-TOP to the category Bi-\(H\)-TOP. Let \(X\) be a binary \(G\)-space, and let \(A\) be its subset. The set \(G(A,A)=\{g(a,a^{\prime});g\in G,a,a^{\prime}\in A\}\) is called the _binary saturation_ of \(A\). A subset \(A\) of a binary \(G\)-space \(X\) is said to be _bi-invariant_, or \(G\)_-bi-invariant_, if \(A\) coincides with its binary saturation: \(G(A,A)=A\). A bi-invariant subset \(A\subset X\) is itself a binary \(G\)-space; it is called a _binary \(G\)-subspace_ of the space \(X\). A binary \(G\)-space \(X\) is said to be _distributive_ if, for any \(x,x^{\prime},x^{\prime\prime}\in X\) and \(g,h\in G\), we have \[g(h(x,x^{\prime}),h(x,x^{\prime\prime}))=h(x,g(x^{\prime},x^{\prime\prime})). \tag{4}\] The class of distributive binary \(G\)-spaces plays an important role in the theory of binary \(G\)-spaces. This is explained, in particular, by the special role of distributive subgroups of the group \(H_{2}(X)\) of all continuous invertible binary operations on \(X\). For example, any topological group is a distributive subgroup of the group of invertible binary operations on some space [3]. This statement is a binary topological analogue of Cayley's classical theorem on the representation of any finite group by unary operations (permutations). ### The space of \(H\)-fixed points of a binary \(G\)-space Let \(X\) be a binary \(G\)-space, and let \(H\) be a subgroup of \(G\). The set \[X^{H}=\{x^{\prime}\in X;\quad h(x,x^{\prime})=x^{\prime}\text{ for all }h\in H,\ x\in X\}\] is called the _space of \(H\)-fixed points_ of the binary \(G\)-space \(X\). This set is not generally \(G\)-bi-invariant. However, the following assertion is valid. **Proposition 1**.: _If \(X\) is a distributive binary \(G\)-space, then \(X^{H}\) is a \(G\)-bi-invariant subset and, therefore, a binary \(G\)-subspace._ Proof.: Let \(x^{\prime},x^{\prime\prime}\in X^{H}\) be any \(H\)-fixed points.
Then, since the binary action on \(X\) is distributive, it follows that, for any \(g\in G\), \(h\in H\) and \(x\in X\), we have \[h(x,g(x^{\prime},x^{\prime\prime}))=g(h(x,x^{\prime}),h(x,x^{\prime\prime}))= g(x^{\prime},x^{\prime\prime}),\] i.e., \(g(x^{\prime},x^{\prime\prime})\in X^{H}\). Thus, \(G(X^{H},X^{H})=X^{H}\). **Proposition 2**.: _If \(X\) and \(Y\) are binary \(G\)-spaces and \(f\colon X\to Y\) is a surjective bi-equivariant map, then \(f(X^{H})\subset Y^{H}\)._ Proof.: Indeed, for any \(x^{\prime}\in X^{H}\), we have \[h(y,f(x^{\prime}))=h(f(x),f(x^{\prime}))=f(h(x,x^{\prime}))=f(x^{\prime}),\] where \(y\in Y\) is any element and \(x\in X\) is a preimage of \(y\). Therefore, \(f(x^{\prime})\in Y^{H}\). We denote the restriction of a surjective bi-equivariant map \(f\colon X\to Y\) to \(X^{H}\) by \(f^{H}\colon X^{H}\to Y^{H}\). Thus, if \(X\) is a distributive binary \(G\)-space, then \(f^{H}\colon X^{H}\to Y^{H}\) is a bi-equivariant map of binary \(G\)-spaces. ### The orbit space of a distributive binary \(G\)-space Let \(X\) be a binary \(G\)-space. The orbit of a point \(x\in X\) is the minimal bi-invariant subset \([x]\) of \(X\) containing the point \(x\). Obviously, \(x\in G(x,x)\subset[x]\) for all \(x\in X\). Therefore, if \(G(x,x)\) is a bi-invariant set, then \(G(x,x)=[x]\). As is known, the set \(G(x,x)\) is not generally bi-invariant [7, Example 2]. However, in distributive binary \(G\)-spaces the sets \(G(x,x)\) are bi-invariant, and hence \([x]=G(x,x)\)[2, Theorem 9]. Moreover, the orbits of a distributive binary \(G\)-space either are disjoint or coincide [7, Proposition 6]; therefore, the space \(X\) is partitioned into disjoint classes. We denote the corresponding quotient set by \(X|G\). Let \(\pi=\pi_{X}:X\to X|G\) be the natural quotient map which sends each point \(x\in X\) to its orbit \([x]\). The quotient topology on \(X|G\) is defined in the standard way. The space thus obtained is called the _orbit space_ of the distributive binary \(G\)-space \(X\). If \(X\) and \(Y\) are distributive binary \(G\)-spaces, then any bi-equivariant map \(f:X\to Y\) generates a map \(f^{*}:X|G\to Y|G\) of the orbit spaces, which is defined by \(f^{*}([x])=[f(x)]\). This map is well defined, because it does not depend on the choice of representatives of orbits: \(f^{*}([g(x,x)])=[f(g(x,x))]=[g(f(x),f(x))]=[f(x)]=f^{*}([x])\) for any \(g\in G\). All notions, definitions, and results used in the paper without references, as well as all those mentioned above, can be found in [2]-[7]. ### The bi-equivariant covering homotopy property and bi-equivariant Hurewicz fibrations Important problems of algebraic topology are the problem of _lifting_ continuous maps and the _covering homotopy problem_, which is dual to the problem of extending homotopies. The covering homotopy problem leads to the notion of a fibration. We study these problems in the category Bi-\(G\)-TOP of binary \(G\)-spaces and bi-equivariant maps. Let \(G\) be a topological group. Suppose given binary \(G\)-spaces \(E\), \(B\) and \(X\) and bi-equivariant maps \(p\colon E\to B\) and \(f\colon X\to B\). 
The _problem of lifting_ the bi-equivariant map \(f\) to \(E\) consists in determining whether there exists a continuous bi-equivariant map \(\tilde{f}:X\to E\) such that \(f=p\circ\tilde{f}\); this map is denoted by the dashed arrow in the corresponding lifting diagram. To turn this problem into a well-posed problem of the binary \(G\)-homotopy category, we need the counterpart of the _homotopy extension property_, which is called the _bi-equivariant covering homotopy property_. We say that a bi-equivariant map \(p:E\to B\) has the _bi-equivariant covering homotopy property_ with respect to a binary \(G\)-space \(X\) if, for any bi-equivariant maps \(\tilde{f}:X\to E\) and \(F:X\times I\to B\) such that \(F\circ i_{0}=p\circ\tilde{f}\), where \(i_{0}:X\to X\times I\) is the embedding defined by \(i_{0}(x)=(x,0)\), \(x\in X\), there exists a bi-equivariant map \(\tilde{F}:X\times I\to E\) such that \(p\circ\tilde{F}=F\) and \(\tilde{F}\circ i_{0}=\tilde{f}\), i.e., making the corresponding diagram commutative. Obviously, if \(p:E\to B\) has the bi-equivariant covering homotopy property with respect to a binary \(G\)-space \(X\) and \(f,g:X\to B\) are \(G\)-homotopic bi-equivariant maps, then \(f\) lifts to \(E\) if and only if so does \(g\). Thus, the existence of a lifting of a bi-equivariant map \(f:X\to B\) is a property of a binary \(G\)-homotopy class of this map. The bi-equivariant covering homotopy property leads to the notion of a bi-equivariant fibration. A bi-equivariant map \(p:E\to B\) is called a _bi-equivariant Hurewicz fibration_, or a _bi-equivariant Hurewicz \(G\)-fibration_, if \(p\) has the bi-equivariant covering homotopy property with respect to any binary \(G\)-space \(X\). In this case, the binary \(G\)-space \(E\) is called the _space of a bi-equivariant fibration_, and \(B\) is the _base_ of this fibration. ## 3. Covering Binary \(G\)-Functions and an Intrinsic Characteristic of bi-equivariant Fibrations Let \(G\) be a compact topological group. Suppose given binary \(G\)-spaces \(E\) and \(B\) and a bi-equivariant map \(p:E\to B\). For the path space \(B^{I}\), we define a map \(G\times B^{I}\times B^{I}\to B^{I}\) by \[g(\alpha,\alpha^{\prime})(t)=g(\alpha(t),\alpha^{\prime}(t)), \tag{5}\] where \(g\in G\), \(\alpha,\alpha^{\prime}\in B^{I}\), and \(t\in I\). **Proposition 3**.: _The map (5) defines a continuous binary action of the group \(G\) on the path space \(B^{I}\)._ Proof.: The continuity of the map (5) follows from that of \(\alpha\) and \(\alpha^{\prime}\) and of the binary action of \(G\) on \(B\). Let us check that the action (5) is binary. For any \(g,h\in G\), \(\alpha,\alpha^{\prime}\in B^{I}\), and \(t\in I\), we have (1) \(e(\alpha,\alpha^{\prime})(t)=e(\alpha(t),\alpha^{\prime}(t))=\alpha^{\prime}(t)\), i.e., \(e(\alpha,\alpha^{\prime})=\alpha^{\prime}\); (2) \(gh(\alpha,\alpha^{\prime})(t)=gh(\alpha(t),\alpha^{\prime}(t))=g(\alpha(t),h(\alpha(t),\alpha^{\prime}(t)))=\\ =g(\alpha(t),h(\alpha,\alpha^{\prime})(t))=g(\alpha,h(\alpha,\alpha^{\prime}))(t)\), i.e., \(gh(\alpha,\alpha^{\prime})=g(\alpha,h(\alpha,\alpha^{\prime}))\). Since \(E\) and \(B^{I}\) are binary \(G\)-spaces, it follows that the product \(E\times B^{I}\) is a binary \(G\)-space with coordinatewise binary action.
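Explicitly, the coordinatewise action on \(E\times B^{I}\) reads \[g((e,\alpha),(e^{\prime},\alpha^{\prime}))=(g(e,e^{\prime}),g(\alpha,\alpha^{\prime})),\] which is the form used in the proofs below.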
Consider the subspace \(\Delta\subset E\times B^{I}\) defined by \[\Delta=\{(e,\alpha)\in E\times B^{I};\ \alpha(0)=p(e)\}.\] **Proposition 4**.: _The set \(\Delta\) is a bi-invariant subspace of the binary \(G\)-space \(E\times B^{I}\)._ Proof.: Let \((e,\alpha),(e^{\prime},\alpha^{\prime})\in\Delta\) be any elements, i.e., \(\alpha(0)=p(e)\) and \(\alpha^{\prime}(0)=p(e^{\prime})\). Then \(g((e,\alpha),(e^{\prime},\alpha^{\prime}))=(g(e,e^{\prime}),g(\alpha,\alpha^{ \prime}))\) and \[g(\alpha,\alpha^{\prime})(0)=g(\alpha(0),\alpha^{\prime}(0))=g(p(e),p(e^{ \prime}))=p(g(e,e^{\prime})).\] Therefore, \(g((e,\alpha),(e^{\prime},\alpha^{\prime}))\in\Delta\) for any \(g\in G\). **Definition 1**.: A bi-equivariant map \(\lambda:\Delta\to E^{I}\) satisfying the conditions \[\lambda(e,\alpha)(0)=e\quad\text{and}\quad[p\circ\lambda(e,\alpha)](t)=\alpha(t) \tag{6}\] is called a _bi-equivariant covering function_, or a _covering bi-equivariant \(G\)-function_, for \(p\). The following theorem describes a relationship between bi-equivariant Hurewicz fibrations and bi-equivariant covering functions. **Theorem 1**.: _A bi-equivariant map \(p:E\to B\) is a bi-equivariant Hurewicz fibration if and only if \(p\) has a bi-equivariant covering function._ Proof.: Let \(\lambda:\Delta\to E^{I}\) be a bi-equivariant covering function for \(p\). Consider any binary \(G\)-space \(X\), bi-equivariant map \(\tilde{f}:X\times 0\to E\), and bi-equivariant homotopy \(F:X\times I\to B\) for which \(F(x,0)=p(\tilde{f}(x,0))\). Note that, for any \(x\in X\), the formula \(F_{x}(t)=F(x,t)\) defines a path \(F_{x}\in B^{I}\). It follows from the bi-equivariance of \(F\) that \[F_{g(x,x^{\prime})}=g(F_{x},F_{x^{\prime}}). \tag{7}\] Indeed, \[F_{g(x,x^{\prime})}(t)=F(g(x,x^{\prime}),t)=g(F(x,t),F(x^{\prime},t))=g(F_{x} (t),F_{x^{\prime}}(t))=g(F_{x},F_{x^{\prime}})(t).\] Now consider the homotopy \(\tilde{F}:X\times I\to E\) defined by \[\tilde{F}(x,t)=\lambda(\tilde{f}(x,0),F_{x})(t). \tag{8}\] Let us prove that \(\tilde{F}\) is the required bi-equivariant covering homotopy. The bi-equivariance of the homotopy \(\tilde{F}\) follows from that of the maps \(\lambda\), \(\tilde{f}\), and \(F\) and relations (5), (7), and (8): \[\tilde{F}(g(x,x^{\prime}),t)=\lambda(\tilde{f}(g(x,x^{\prime}),0),F _{g(x,x^{\prime})})(t)=\lambda(g(\tilde{f}(x,0),\tilde{f}(x^{\prime},0)),g(F_{ x},F_{x^{\prime}}))(t)=\\ =\lambda(g((\tilde{f}(x,0),F_{x}),(\tilde{f}(x^{\prime},0),F_{x^{\prime}})))(t)=g(\lambda(\tilde{f}(x,0),F_{x}),\lambda(\tilde{f}(x^{\prime},0), F_{x^{\prime}}))(t)=\\ =g(\lambda(\tilde{f}(x,0),F_{x})(t),\lambda(\tilde{f}(x^{\prime},0),F_{x^{\prime}})(t))=g(\tilde{F}(x,t),\tilde{F}(x^{\prime},t)).\] According to conditions (6), we have \[\tilde{F}(x,0)=\lambda(\tilde{f}(x,0),F_{x})(0)=\tilde{f}(x,0),\] \[(p\circ\tilde{F})(x,t)=p(\tilde{F}(x,t))=p(\lambda(\tilde{f}(x,0),F_{x})(t))=[ p\circ\lambda(\tilde{f}(x,0),F_{x})](t)=F_{x}(t)=F(x,t).\] Therefore, \(\tilde{F}\) is a covering homotopy. Now suppose that \(p:E\to B\) is a bi-equivariant Hurewicz fibration. Consider the binary \(G\)-space \(X=\Delta\) and the maps \[\tilde{f}:\Delta\times 0\to E\quad\text{and}\quad F:\Delta\times I\to B\] defined by \[\tilde{f}[(e,\alpha),0]=e\quad\text{and}\quad F[(e,\alpha),t]=\alpha(t). \tag{9}\] The maps \(\tilde{f}\) and \(F\) are bi-equivariant.
Indeed, for any \(g\in G\), \((e,\alpha),(e^{\prime},\alpha^{\prime})\in\Delta\subset E\times B^{I}\), and \(t\in I\), we have \[\tilde{f}[g((e,\alpha),(e^{\prime},\alpha^{\prime})),0]=\tilde{f}[(g(e,e^{ \prime}),g(\alpha,\alpha^{\prime})),0]=g(e,e^{\prime})=g(\tilde{f}[(e,\alpha), 0],\tilde{f}[(e^{\prime},\alpha^{\prime}),0]),\] \[F[g((e,\alpha),(e^{\prime},\alpha^{\prime})),t]=F[(g(e,e^{\prime }),g(\alpha,\alpha^{\prime})),t]=g(\alpha,\alpha^{\prime})(t)=\\ =g(\alpha(t),\alpha^{\prime}(t))=g(F[(e,\alpha),t],F[(e^{\prime},\alpha^{\prime}),t]).\] Note that \(F:\Delta\times I\to B\) is a bi-equivariant homotopy of the map \(p\circ\tilde{f}:\Delta\times 0\to B\). Indeed, \[F[(e,\alpha),0]=\alpha(0)=p(e)=p(\tilde{f}[(e,\alpha),0])=(p\circ\tilde{f})[(e,\alpha),0].\] Hence there exists a bi-equivariant covering homotopy \(\tilde{F}:\Delta\times I\to E\) for \(F\), i.e., \[\tilde{F}[(e,\alpha),0]=\tilde{f}[(e,\alpha),0]\quad\text{and}\quad p\circ \tilde{F}=F. \tag{10}\] Now consider the map \(\lambda:\Delta\to E^{I}\) defined by \[\lambda(e,\alpha)(t)=\tilde{F}[(e,\alpha),t]. \tag{11}\] Let us prove that \(\lambda\) is a covering bi-equivariant \(G\)-function for \(p\). The bi-equivariance of \(\lambda\) follows from that of the covering homotopy \(\tilde{F}\): \[\lambda(g((e,\alpha),(e^{\prime},\alpha^{\prime})))(t)=\tilde{F}[ g((e,\alpha),(e^{\prime},\alpha^{\prime})),t]=\\ =g(\tilde{F}[(e,\alpha),t],\tilde{F}[(e^{\prime},\alpha^{\prime }),t])=g(\lambda(e,\alpha)(t),\lambda(e^{\prime},\alpha^{\prime})(t)).\] Conditions (6) in Definition 1 also hold. By virtue of (9), (10), and (11), we have \[\lambda(e,\alpha)(0)=\tilde{F}[(e,\alpha),0]=\tilde{f}[(e,\alpha),0]=e,\] \[[p\circ\lambda(e,\alpha)](t)=p(\lambda(e,\alpha)(t))=p(\tilde{F}[(e,\alpha),t ])=F[(e,\alpha),t]=\alpha(t).\] This completes the proof of the theorem. Note that the last theorem gives an intrinsic characteristic of a bi-equivariant Hurewicz fibration, because Definition 1 of a bi-equivariant covering function does not involve any outer space \(X\). Applying Theorem 1 to the trivial binary action, we obtain the following result. **Corollary 1**.: _A continuous map \(p:E\to B\) is a Hurewicz fibration if and only if \(p\) has a covering function._ ## 4. Fibrations Generated by bi-equivariant \(G\)-Fibrations Let \(H\) be a closed subgroup of a compact group \(G\). Since all binary \(G\)-spaces are also binary \(H\)-spaces, and all \(G\)-bi-equivariant maps are \(H\)-bi-equivariant maps, there arises the natural question of whether the property of being a bi-equivariant Hurewicz fibration is preserved under the passage to a closed subgroup \(H\). **Theorem 2**.: _Let \(p:E\to B\) be a bi-equivariant Hurewicz \(G\)-fibration. Then, for any closed subgroup \(H\) of the compact group \(G\), the map \(p:E\to B\) is also a bi-equivariant Hurewicz \(H\)-fibration._ Proof.: By virtue of Theorem 1, there exists a covering bi-equivariant \(G\)-function \(\lambda:\Delta\to E^{I}\) for \(p\). Since \(\lambda:\Delta\to E^{I}\) is also a bi-equivariant \(H\)-map, it follows that \(\lambda\) is a covering bi-equivariant \(H\)-function for the bi-equivariant \(H\)-map \(p\). Therefore, \(p:E\to B\) is a bi-equivariant Hurewicz \(H\)-fibration by virtue of the same Theorem 1. Under certain constraints, the property of being a bi-equivariant Hurewicz \(G\)-fibration is preserved under the passage to binary \(G\)-subspaces of \(H\)-fixed points.
**Theorem 3**.: _Let \(E\) and \(B\) be distributive binary \(G\)-spaces, and let \(p:E\to B\) be a surjective bi-equivariant Hurewicz \(G\)-fibration. Then, for any closed subgroup \(H\) of the compact group \(G\), the induced bi-equivariant \(G\)-map \(p^{H}:E^{H}\to B^{H}\) between the spaces of \(H\)-fixed points is a bi-equivariant Hurewicz \(G\)-fibration._ Proof.: The bi-invariance of the set of \(H\)-fixed points of a distributive binary \(G\)-space and the preservation of \(H\)-fixed points under a surjective bi-equivariant map were proved in Propositions 1 and 2. Let \(\lambda:\Delta\to E^{I}\) be a covering bi-equivariant \(G\)-function for \(p\), whose existence follows from Theorem 1. Consider the set \[\Delta^{H}=\{(\dot{e},\dot{\alpha})\in E^{H}\times(B^{H})^{I};\ \dot{\alpha}(0)=p^{H}( \dot{e})\}.\] Let us prove the existence of a covering bi-equivariant \(G\)-function \(\lambda^{H}:\Delta^{H}\to(E^{H})^{I}\) for the bi-equivariant map \(p^{H}:E^{H}\to B^{H}\). Since \(\Delta^{H}\subset\Delta\), we can set \(\lambda^{H}=\lambda|\Delta^{H}\). It is easy to see that \(\lambda^{H}\) takes the set \(\Delta^{H}\) to \((E^{H})^{I}\) and that \(\lambda^{H}:\Delta^{H}\to(E^{H})^{I}\) is a covering bi-equivariant \(G\)-function for the map \(p^{H}:E^{H}\to B^{H}\). Therefore, by virtue of Theorem 1, \(p^{H}:E^{H}\to B^{H}\) is a bi-equivariant Hurewicz \(G\)-fibration. In the case of distributive binary \(G\)-spaces, the property of being a binary Hurewicz \(G\)-fibration is also preserved under the passage to orbit spaces. **Theorem 4**.: _Let \(G\) be a compact Abelian Lie group, and let \(E\) and \(B\) be distributive binary \(G\)-spaces. If \(p:E\to B\) is a bi-equivariant Hurewicz \(G\)-fibration, then the induced map \(p^{*}:E|G\to B|G\) of orbit spaces is a Hurewicz fibration._ Proof.: Let \(p:E\to B\) be a bi-equivariant Hurewicz \(G\)-fibration. By Theorem 1 there exists a covering bi-equivariant \(G\)-function \(\lambda:\Delta\to E^{I}\) for \(p\). Consider the set \[\Delta_{G}=\{(e^{*},\alpha^{*})\in E|G\times(B|G)^{I};\ \alpha^{*}(0)=p^{*}(e^{*})\}.\] Let us prove that there exists a covering function \(\lambda^{*}:\Delta_{G}\to(E|G)^{I}\) for the map \(p^{*}:E|G\to B|G\). Since \(B\) is a distributive binary \(G\)-space, it follows that its orbits have the form \(G(x,x)\) for any \(x\in B\). Note that \(B|G\) can also be treated as the orbit space of the \(G\)-space \(B\) with the action \(gx=g(x,x)\), \(g\in G\), \(x\in B\). This formula indeed defines an action of the group \(G\) on \(B\), because the group \(G\) is commutative and the binary action of the group \(G\) on \(B\) is distributive. Note that any path \(\alpha^{*}:I\to B|G\) can be lifted to \(B\) ([1, Theorem 6.2]), i.e., there exists a path \(\alpha:I\to B\) such that \(\pi_{B}\circ\alpha=\alpha^{*}\), where \(\pi_{B}:B\to B|G\) is the orbit projection. Now we define the required map \(\lambda^{*}:\Delta_{G}\to(E|G)^{I}\) by \[\lambda^{*}(e^{*},\alpha^{*})(t)=(\lambda(e,\alpha)(t))^{*}.\] The map \(\lambda^{*}\) is well defined, and it is a covering function for \(p^{*}:E|G\to B|G\). Therefore, by Corollary 1, \(p^{*}:E|G\to B|G\) is a Hurewicz fibration.
2310.03443
The North System for Formosa Speech Recognition Challenge 2023
This report provides a concise overview of the proposed North system, which aims to achieve automatic word/syllable recognition for Taiwanese Hakka (Sixian). The report outlines three key components of the system: the acquisition, composition, and utilization of the training data; the architecture of the model; and the hardware specifications and operational statistics. The demonstration of the system has been made public at https://asrvm.iis.sinica.edu.tw/hakka_sixian.
Li-Wei Chen, Kai-Chen Cheng, Hung-Shin Lee
2023-10-05T10:29:18Z
http://arxiv.org/abs/2310.03443v2
# The North System for Formosa Speech Recognition Challenge 2023 ###### Abstract This report provides a concise overview of the proposed North system, which aims to achieve automatic word/syllable recognition for Taiwanese Hakka (Sixian). The report outlines three key components of the system: the acquisition, composition, and utilization of the training data; the architecture of the model; and the hardware specifications and operational statistics. The demonstration of the system has been made public1. Footnote 1: [https://shorturl.at/mzGL7](https://shorturl.at/mzGL7) _Keywords:_ Hakka, Sixian, speech recognition ## 1 Introduction This document furnishes a succinct yet comprehensive overview of the proposed North System, a technologically advanced mechanism designed with the primary objective of achieving automatic recognition of words and syllables specific to the Taiwanese Hakka language, with a particular focus on the Sixian dialect. The report meticulously delineates three pivotal components integral to the effective functionality and operation of the system, as enumerated below: 1. Acquisition, Composition, and Utilization of Training Data: * Acquisition: The systematic collection and sourcing of relevant linguistic data pertinent to the Taiwanese Hakka language, ensuring a robust and representative dataset. * Composition: The strategic assembly and organization of the acquired data, ensuring it is structured in a manner conducive to effective machine learning. * Utilization: The application of the composed data in training the system, ensuring it accurately and efficiently recognizes and processes linguistic elements of the Sixian dialect. 2. Architectural Framework of the Model: * A detailed exposition of the structural and operational framework of the model, elucidating the technological and algorithmic methodologies employed to facilitate accurate linguistic recognition and processing. * An exploration of the model's design principles, underlying algorithms, and computational processes that enable it to effectively learn, recognize, and interpret the linguistic nuances of the Taiwanese Hakka language. 3. Hardware Specifications and Operational Metrics: * Hardware Specifications: A thorough breakdown of the technological infrastructure supporting the system, detailing the hardware components and specifications that underpin its operation. * Operational Metrics: An analytical overview of the system's performance metrics, providing insights into its operational efficiency, accuracy, and reliability in real-world applications. ## 2 Data Table 1 meticulously presents a detailed inventory of our principal corpus sources, accompanied by their respective statistical details, providing an in-depth insight into the voluminous data utilized in our research and development endeavors. In addition to the primary corpus, our research team has assiduously gathered an extensive collection of Hakka (Sixian dialect)-related speech data, exceeding 100 hours, from a variety of online platforms. This includes, but is not limited to: * YouTube: A prominent video-sharing platform where a myriad of Hakka (Sixian) linguistic content, ranging from casual conversations to formal discourses, has been extracted. * Podcasts: Audio programs and series that provide a rich source of conversational and narrative Hakka (Sixian) speech data. * Additional Online Platforms: Various other digital platforms that host a wealth of linguistic content pertinent to the Hakka (Sixian) dialect. 
Moreover, we have amassed a substantial volume of Hakka text data, specifically curated for language modeling purposes, from a plethora of websites dedicated to the promotion, preservation, and dissemination of the Hakka language and its diverse dialects. Our methodologies for obtaining speech data are not solely confined to direct data gathering but also partially derive inspiration from the scholarly paper presented at O-COCOSDA 2020 by Dr. Hung-Shin Lee (Chen et al., 2020). This paper provides valuable insights and methodologies that have been judiciously considered and adapted to enhance our data acquisition strategies. It is imperative to underscore that our research and development team consciously opted to abstain from employing speech synthesis as a means for generating training speech data. This decision is firmly rooted in our belief that, while speech synthesis may offer a modicum of utility in certain contexts, its efficacy is notably constrained in the realm of speech recognition due to the intrinsic emphasis on accommodating and understanding variability in speaker characteristics and environmental acoustics. \begin{table} \begin{tabular}{l c c c} \hline \hline **Source** & **Hours** & **\# Utt.** & **SPU** \\ \hline Official Dataset (train) & 59.43 & 20,591 & 10.39 \\ Official Dataset (pilot-test) & 10.01 & 3,595 & 10.02 \\ Hakka Dictionary & 5.84 & 15,250 & 1.38 \\ HAC & 11.26 & 4,216 & 9.61 \\ \hline Total & 86.54 & 43,652 & 7.14 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of sources of Hakka (Sixian) speech. The speech channels used in Official Dataset (train) and Official Dataset (pilot-test) are lavalier microphones and Zoom recordings, respectively. Hakka Dictionary comes from the Dictionary of Frequently-Used Taiwan Hakka ([https://hakkadict.moe.edu.tw](https://hakkadict.moe.edu.tw)). HAC, provided by the Hakka Affairs Council ([https://corpus.hakka.gov.tw](https://corpus.hakka.gov.tw)), is further cleaned by our technology. SPU denotes average seconds per utterance. ## 3 Model Structure The training of the acoustic model necessitates the concatenation of two distinct types of speech features: the 40-dimensional Mel Frequency Cepstral Coefficients (MFCCs) and the 1024-dimensional Self-Supervised Learning (SSL) embeddings. The SSL model, which was previously trained on a comprehensive dataset of Chinese linguistic data utilizing the HuBERT-large architecture, is utilized to procure the SSL embeddings. The model employed is fundamentally based on two primary architectural structures: firstly, the Chain-based Discriminative Autoencoder (DcAE) (Lee et al., 2022), and secondly, the Multistream Convolutional Neural Network (CNN) (Han et al., 2021). For an in-depth elaboration and comprehensive understanding of these structures, readers are encouraged to consult the referenced scholarly papers or the Appendix at the end of the paper. The integration of both aforementioned structures is employed in a joint training methodology, wherein the latter structure serves as the foundational bedrock upon which the former is developed and refined. The overarching objective permeating the entirety of the model is to minimize losses associated with Automatic Speech Recognition (ASR), particularly those pertaining to lattice-free Maximum Mutual Information (MMI), as well as errors inherent in feature reconstruction and restoration. Through a meticulous analysis of speech and noise factors, and the incorporation of multi-resolution information, the model significantly enhances its performance and robustness in various linguistic contexts. The rescoring mechanism, which is designed to operate on the word lattice generated by a four-gram language model, employs a Recurrent Neural Network Language Model (RNN-LM) to enhance the accuracy and reliability of linguistic predictions and outputs. It is pivotal to note that, consequent to the insufficiency of Graphical Processing Unit (GPU) resources, our team made a conscious decision to abstain from proceeding with the optimization and fine-tuning of model parameters and hyperparameters. This includes, but is not limited to, the number of model layers, training epochs, batch size, and learning rate, among other crucial variables that significantly impact the model's learning and predictive capabilities.
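To make the feature pipeline concrete, the following minimal Python sketch concatenates 40-dimensional MFCCs with 1024-dimensional HuBERT embeddings as described above; the librosa settings, the HuggingFace checkpoint name, and the simple frame-rate alignment are illustrative assumptions rather than the exact configuration of the North system.

```python
# Sketch: fuse 40-dim MFCCs with 1024-dim SSL (HuBERT) embeddings.
# The checkpoint name and the 2x upsampling that aligns the ~20 ms HuBERT
# stride to the 10 ms MFCC hop are assumptions, not the system's exact setup.
import numpy as np
import librosa
import torch
from transformers import HubertModel

def fused_features(wav_path, model):
    y, sr = librosa.load(wav_path, sr=16000)
    # 40-dimensional MFCCs with a 25 ms window and 10 ms hop -> (T_mfcc, 40)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40,
                                n_fft=400, hop_length=160).T
    with torch.no_grad():
        hidden = model(torch.from_numpy(y).float().unsqueeze(0)).last_hidden_state
    ssl = hidden.squeeze(0).numpy()                     # (T_ssl, 1024)
    ssl = np.repeat(ssl, 2, axis=0)                     # ~20 ms -> ~10 ms frames
    n = min(len(mfcc), len(ssl))
    return np.concatenate([mfcc[:n], ssl[:n]], axis=1)  # (n, 1064)

# Assumed public checkpoint for a Chinese-pretrained HuBERT-large:
# model = HubertModel.from_pretrained("TencentGameMate/chinese-hubert-large")
```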
## 4 Hardware In the execution of our experiment, we employed a total of four NVIDIA RTX A5000 graphics cards, which are renowned for their robust computational capabilities and adeptness in handling graphically intensive tasks. It is imperative to underscore that this allocation of graphical processing units (GPUs) was the sole resource utilized to facilitate the extensive computational demands inherent in the training of the neural model. The entirety of the training duration for a single neural model was approximately 82 hours, a substantial temporal investment that underscores the computational demands associated with the development and refinement of sophisticated neural networks. This extensive training period was undertaken to ensure the model was afforded ample opportunity to learn, adapt, and refine its predictive capabilities, thereby enhancing its overall performance and reliability in practical applications. ## 5 Final Results The conclusive results, which encapsulate the outcomes and findings derived from our experimental processes, are presented in Table 2. Our North system was the double champion of the Formosa Speech Recognition Challenge 2023 (General Group) in the two tracks of Hakka Character and Hakka Pinyin. It is imperative to underscore that one of the predominant weaknesses that permeated our model during the training phase is the conspicuous absence of spontaneous speech. This deficiency in the training data potentially impacts the model's capacity to accurately and reliably recognize and process unscripted, natural linguistic patterns and variations, thereby presenting an area warranting further investigation and enhancement in future research endeavors. We also compared our system with the Hakka ASR system developed by ASUS2, which is built on OpenAI's Whisper. Table 3 shows that our system is far superior to that of ASUS. It is worth noting that ASUS's system cannot handle sentences that are too long, meaning that 384 utterances could not be processed and were not included in the error rate calculation. Footnote 2: [https://www.asuscloud.com/event-hakka](https://www.asuscloud.com/event-hakka) \begin{table} \begin{tabular}{c c c c c} \hline \hline **Track** & **Read** & **Spont.** & **Average** & **Rank** \\ \hline Hakka Character & 4.27 & 19.14 & 17.15 & 1 \\ Hakka Pinyin & 7.33 & 18.90 & 17.42 & 1 \\ \hline \hline \end{tabular} \end{table} Table 2: Final results of North ASRs with respect to two tracks, Hakka Character and Hakka Pinyin, in terms of character and syllable error rate (%), respectively. The proportion of reading and spontaneous speech in the total evaluation data is approximately 13% and 87%, respectively. The total number of utterances for evaluation is 5,913. \begin{table} \begin{tabular}{c c c c} \hline \hline **Track** & **ASUS** & **North** & **Rel. Improve. (\%)** \\ \hline Hakka Character & 28.87 & **18.17** & 37.06 \\ Hakka Pinyin & 42.43 & **19.65** & 53.69 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of North ASRs and ASUS Hakka ASRs with respect to two tracks, Hakka Character and Hakka Pinyin, in terms of character and syllable error rate (%), respectively. Because the ASUS Hakka ASR cannot deal with long utterances, only 5,529 utterances are evaluated. ## Acknowledgments Regarding the collection and understanding of the Hakka language corpus, we would like to express our gratitude to several individuals for their professional assistance. ## Appendix ### Multistream CNN (Han et al., 2021)
The Multistream Convolutional Neural Network (CNN) is a neural network architecture designed to fortify acoustic modeling within the realm of speech recognition tasks. The architectural framework processes input speech utilizing diverse temporal resolutions, achieved by applying varying dilation rates to convolutional neural networks across multiple streams, thereby attaining robustness in acoustic modeling. The dilation rates are judiciously selected from multiples of a sub-sampling rate, specifically, three frames. Each stream systematically stacks Time Delay Neural Network-F (TDNN-F) layers, a variant of 1D CNN, and the output embedding vectors derived from the streams are concatenated and subsequently projected to the terminal layer. The efficacy of the Multistream CNN architecture is validated through demonstrable and consistent enhancements against Kaldi's optimal TDNN-F model, observed across a myriad of data sets. The Multistream CNN facilitates a 12% (relative) improvement in the Word Error Rate (WER) of the test-other set within the LibriSpeech corpus. Furthermore, on custom data derived from ASAPP's production Automatic Speech Recognition (ASR) system for a contact center, it records a relative WER enhancement of 11% for customer channel audio, thereby substantiating its robustness to data in uncontrolled environments. In terms of the real-time factor, the Multistream CNN surpasses the baseline TDNN-F by 15%, thereby also indicating its practical applicability within production systems. When amalgamated with self-attentive Simple Recurrent Unit (SRU) Language Model (LM) rescoring, the Multistream CNN significantly contributes to ASAPP achieving an optimal WER of 1.75% on the test-clean set in LibriSpeech.
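As a rough illustration of the multistream idea, the PyTorch sketch below builds parallel dilated 1D-convolution stacks (a simplified stand-in for TDNN-F layers) with dilation rates that are multiples of three frames, concatenates the stream embeddings, and projects them to the output layer; the layer counts, widths, and target dimension are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class MultistreamCNN(nn.Module):
    """Sketch of multistream acoustic modeling: one dilated-conv stack per
    stream (a simplified stand-in for TDNN-F), with stream outputs
    concatenated and projected to frame-level targets."""
    def __init__(self, in_dim=1064, hidden=512, n_layers=5,
                 dilations=(3, 6, 9), n_targets=6000):
        super().__init__()
        self.streams = nn.ModuleList()
        for d in dilations:  # dilation rates: multiples of 3 frames
            layers, dim = [], in_dim
            for _ in range(n_layers):
                layers += [nn.Conv1d(dim, hidden, kernel_size=3,
                                     dilation=d, padding=d),
                           nn.BatchNorm1d(hidden), nn.ReLU()]
                dim = hidden
            self.streams.append(nn.Sequential(*layers))
        self.proj = nn.Conv1d(hidden * len(dilations), n_targets, 1)

    def forward(self, x):          # x: (batch, in_dim, frames)
        z = torch.cat([s(x) for s in self.streams], dim=1)
        return self.proj(z)        # frame-level scores, e.g., for LF-MMI

feats = torch.randn(2, 1064, 300)             # toy batch
print(MultistreamCNN()(feats).shape)          # torch.Size([2, 6000, 300])
```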
#### Chain-based Discriminative Autoencoder (Lee et al., 2022) In preceding research endeavors, the authors introduced a model known as the Discriminative Autoencoder (DcAE), specifically tailored for applications within the domain of speech recognition. The DcAE amalgamates two distinct training schemes into a singular, cohesive model. Initially, as the DcAE is designed with the objective of learning encoder-decoder mappings, it endeavors to minimize the squared error between the reconstructed speech and the original input speech. Subsequently, within the code layer, frame-based phonetic embeddings are procured by minimizing the categorical cross-entropy between the ground-truth labels and the predicted triphone-state scores. The development of DcAE is grounded in the Kaldi toolkit, wherein various Time Delay Neural Network (TDNN) models are treated as encoders. In the Chain-based DcAE, the authors further introduce three novel iterations of the DcAE. Firstly, a new objective function is employed, which takes into consideration both the categorical cross-entropy and the mutual information between ground-truth and predicted triphone-state sequences, resulting in the formulation of a chain-based DcAE (c-DcAE). To facilitate its application to robust speech recognition, they further extend c-DcAE to incorporate hierarchical and parallel structures, culminating in the development of hc-DcAE and pc-DcAE, respectively. Within these two models, both the error between the reconstructed noisy speech and the input noisy speech, as well as the error between the enhanced speech and the reference clean speech, are integrated into the objective function. Experimental results, derived from the Wall Street Journal (WSJ) and Aurora-4 corpora, substantiate that the DcAE models exhibit superior performance when juxtaposed with baseline systems, thereby affirming their efficacy in speech recognition tasks.
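The joint objective of the DcAE can be summarized by the following hedged sketch, which combines the reconstruction squared error with the frame-level cross-entropy at the code layer; the module interfaces and loss weights are illustrative assumptions, and the chain-based variants would add the lattice-free MMI and enhancement-error terms described above.

```python
import torch
import torch.nn.functional as F

def dcae_loss(encoder, decoder, classifier, feats, state_labels,
              recon_weight=1.0, ce_weight=1.0):
    """Joint DcAE objective (sketch): encoder-decoder reconstruction MSE
    plus frame-level cross-entropy on triphone-state posteriors computed
    from the code layer. feats: (B, T, D); state_labels: (B, T) long.
    The interfaces and weights here are assumptions for illustration."""
    code = encoder(feats)                      # (B, T, H) code layer
    recon = decoder(code)                      # (B, T, D)
    loss_recon = F.mse_loss(recon, feats)      # squared reconstruction error
    logits = classifier(code)                  # (B, T, n_states)
    loss_ce = F.cross_entropy(logits.flatten(0, 1), state_labels.flatten())
    return recon_weight * loss_recon + ce_weight * loss_ce
```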
2303.13768
Symmetry manipulation of nonlinear optical effect for metallic TMDC
Nonlinear optical (NLO) effect plays a crucial role to engineer optical angular frequency and symmetry of electronic system. Metallic transition-metal dichalcogenide (TMDC) is one of two-dimensional (2D) materials, which has no inversion symmetry for odd-number-layer. In particular, odd-number-layered NbSe$_2$ has spin splitting owing to Ising-type spin-orbit coupling. In this paper, we numerically calculate the NLO charge and spin conductivities of NbSe$_2$ based on an effective tight-binding model for several different optical effects, i.e., symmetry manipulation by bi-circular light (BCL) and bulk photovoltaic effect (shift and injection current). Under irradiation of BCL which can control the symmetry of electronic system, the current can be generated even in even-number-layered NbSe$_2$. Also, we find that shift current can be generated for odd-number-layered NbSe$_2$, which is robust against electronic scattering, i.e., topological current. The direction of generated shift current can be switched by altering polarization of light. Our results will serve to design opt-spintronics devices based on 2D materials to manipulate the charge and spin current and their directions by controlling the polarization of incident light which recasts the symmetry of electronic system.
Ren Habara, Katsunori Wakabayashi
2023-03-24T02:49:05Z
http://arxiv.org/abs/2303.13768v1
# Symmetry manipulation of nonlinear optical effect for metallic TMDC ###### Abstract Nonlinear optical (NLO) effect plays a crucial role to engineer optical angular frequency and symmetry of electronic system. Metallic transition-metal dichalcogenide (TMDC) is one of two-dimensional (2D) materials, which has no inversion symmetry for odd-number-layer. In particular, odd-number-layered NbSe\({}_{2}\) has spin splitting owing to Ising-type spin-orbit coupling. In this paper, we numerically calculate the NLO charge and spin conductivities of NbSe\({}_{2}\) based on an effective tight-binding model for several different optical effects, i.e., symmetry manipulation by bi-circular light (BCL) and bulk photovoltaic effect (shift and injection current). Under irradiation of BCL which can control the symmetry of electronic system, the current can be generated even in even-number-layered NbSe\({}_{2}\). Also, we find that shift current can be generated for odd-number-layered NbSe\({}_{2}\), which is robust against electronic scattering, i.e., topological current. The direction of generated shift current can be switched by altering polarization of light. Our results will serve to design opt-spintronics devices based on 2D materials to manipulate the charge and spin current and their directions by controlling the polarization of incident light which recasts the symmetry of electronic system. ## I Introduction The interaction of strong coherent light with matter has advanced the field of nonlinear optics, in which a charge polarization is induced nonlinearly. The emergence of the second-order nonlinear optical (NLO) effect has been extensively studied in condensed matter physics and materials science [1; 2; 3; 4]. It provides the foundation of laser frequency conversion [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21] and direct current (DC) photocurrent generation [22; 23; 24; 25; 26; 27; 28; 29; 30; 31]. In the NLO process, the crystal symmetry is significant. If a crystal is centrosymmetric, no second-order charge polarization occurs. In a noncentrosymmetric crystal, however, a nonlinear charge polarization is induced, which can generate DC photocurrent without p-n junctions [22; 23; 24; 25; 26; 27; 28; 29; 30; 31] and second-harmonic generation (SHG) [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. Transition-metal dichalcogenide (TMDC), with the chemical formula MX\({}_{2}\) (M = Mo, W, Nb, Ta; X = S, Se), is a family of layered materials that can be easily exfoliated into monolayers due to the weak van der Waals forces between layers [32; 33; 34; 35; 36]. Thus, TMDC forms a new class of atomically thin two-dimensional (2D) electronic systems. For the NLO response, the advantage of atomically thin 2D materials such as graphene and TMDC is that the phase-matching conditions between the incident light and the light-induced electric polarization wave can be ignored, because their thickness is far smaller than the wavelength of the incident light [5; 37; 38]. Thus, in few-layered TMDC the NLO effect mainly depends on the crystal symmetry. The generation mechanisms for the NLO current are dominated by the parity symmetry \(P\)[28]. When \(P\) is conserved, the NLO charge and spin currents are absent. On the other hand, when \(P\) is broken, the bulk photovoltaic effect is induced in the system. In general, the bulk photovoltaic effect has two contributions: (i) shift current and (ii) injection current [39; 40; 41; 31; 4]. (i) Shift current is a photoinduced spontaneous DC, which corresponds to the shift of electrons in real space during the optical excitation of electrons.
(ii) Injection current is a net current arising from the different velocities of electrons and holes owing to the population imbalance between them induced by photo-excitation. In a noncentrosymmetric system, linearly polarized (LP) light irradiation solely induces the charge shift current, whereas incident circularly polarized (CP) light solely induces the charge injection current [28]. Furthermore, bi-circular light (BCL) is a useful method to optically engineer the symmetry of an electronic system [42; 43; 44; 45; 46; 47]. BCL is the superposition of left-handed CP (LCP) light with the angular frequency \(n_{1}\omega\) and right-handed CP (RCP) light with \(n_{2}\omega\) (\(n_{1}\neq n_{2}\)), which traces out the trajectory of a rose curve. It is defined as \[A_{\text{BCL}}(t)=A_{L}e^{in_{1}\omega t}+A_{R}e^{in_{2}\omega t-i\theta}+ \text{c.c.}, \tag{1}\] where \(A_{L(R)}\) is the amplitude of the LCP (RCP) light and \(\theta\) is the phase difference between LCP and RCP light. The application of BCL can artificially control the symmetry of the electronic system and induce charge polarizations along the directions of the leaves of the BCL. In this paper, we theoretically consider the NLO effects in metallic TMDCs. TMDCs such as NbSe\({}_{2}\), NbS\({}_{2}\), TaSe\({}_{2}\) and TaS\({}_{2}\) are metallic at room temperature, and successively show the charge density wave (CDW) phase [48; 49] and the superconducting phase at low temperature [50; 51; 52; 53; 54; 55; 56]. In these materials, the AB-stacking structure is the most stable in nature and has different crystal symmetries for even and odd numbers of layers. Even-number-layered TMDCs have the space group D\({}_{3d}\), which respects inversion and out-of-plane mirror symmetries. However, odd-number-layered TMDCs have the space group D\({}_{3h}\), which breaks inversion symmetry. Furthermore, owing to the broken inversion symmetry and the strong atomic spin-orbit coupling (SOC) field of heavy atoms such as Nb and Ta, the system possesses Ising-type SOC [57; 58; 59; 60; 52; 53; 54; 55; 56], i.e., an effective Zeeman field that locks electron spins to out-of-plane directions depending on the in-plane momentum. In NbSe\({}_{2}\), the SOC causes a large spin splitting in the energy band structures of odd-number-layered systems (about 157 meV at the K point), which leads to unconventional topological spin properties. Indeed, we have shown that monolayer NbSe\({}_{2}\) can generate the spin Hall current under visible light irradiation owing to its finite topological spin Berry curvature [61]. We have also shown that the second-order NLO charge and spin Hall currents of the SHG process can be selectively generated in few-layered NbSe\({}_{2}\) according to the crystal symmetry and the polarization of the incident light [62]. Here, we extend our theoretical analysis to the NLO charge and spin conductivities under BCL irradiation and the DC photocurrent (shift and injection current) in few-layered NbSe\({}_{2}\). We employ an effective tight-binding model (TBM) including the electron hopping among the \(d_{z^{2}}\), \(d_{xy}\) and \(d_{x^{2}-y^{2}}\) orbitals of the Nb atom and the Ising-type SOC in order to describe the electronic structures of NbSe\({}_{2}\) around the Fermi level. In general, the second-order NLO current is generated in odd-number-layered NbSe\({}_{2}\), but absent in even-number-layered NbSe\({}_{2}\). However, since BCL can manipulate the symmetry of the electronic states, the NLO current can be generated even in even-number-layered NbSe\({}_{2}\).
Furthermore, we find that the charge and spin shift currents can be generated in odd-number-layered NbSe\({}_{2}\) and are robust against electronic scattering, i.e., they are topological currents. In addition, the direction of the generated shift current can be switched if LP light is altered to CP light. Our results will serve to design opt-spintronics devices on the basis of 2D materials to manipulate the charge and spin currents and their directions by controlling the polarization of the incident light, which recasts the crystal symmetry. This paper is organized as follows. In Sec. II we discuss the crystal symmetry and the electronic structures of few-layered NbSe\({}_{2}\) based on the effective TBM. In Sec. III we briefly introduce the formula to calculate the second-order NLO conductivities and discuss their relation to the crystal symmetry. In Sec. IV we show that incident BCL can control the symmetry of the electronic system, which induces the NLO current along the directions of the leaves of the BCL. In Sec. V we show that the shift current has an NLO selection rule depending on the crystal symmetry and the polarization of the incident light. It is also shown that the charge and spin shift currents can be switched if the polarization is altered. Sec. VI provides the summary of our results. In the Appendix, we give the symmetry analysis of the NLO conductivity, the NLO current induced by multi-leaf BCL, and contour plots of the integrands of the NLO conductivities. In the Supplementary Material, we show the imaginary parts of the NLO conductivities for BCL, the rotational-angle dependences of the NLO conductivities on BCL, the derivation of the shift- and injection-current conductivities, and the NLO conductivity for MoS\({}_{2}\) as a reference of a TMDC semiconductor [63]. ## II Model We will show that the second-order NLO charge and spin conductivities strongly depend on the crystal symmetry and the polarization of light in metallic TMDC. In this paper, we employ NbSe\({}_{2}\) as an example of metallic TMDCs. Since NbSe\({}_{2}\) can be easily exfoliated because of the weak van der Waals forces between layers, few-layered NbSe\({}_{2}\), such as the monolayer, AB-stacked bilayer, and ABA-stacked trilayer, can be obtained. In particular, we focus on two cases, odd-number-layered (e.g., monolayer and ABA-stacked trilayer) and even-number-layered (e.g., AB-stacked bilayer) NbSe\({}_{2}\). Figure 1 (a) shows the side view of few-layered NbSe\({}_{2}\). The stacking structure is constructed by inserting a B monolayer (a 180\({}^{\circ}\) rotation of the A monolayer) between A monolayers, which is the most energetically stable stacking sequence in NbSe\({}_{2}\). Each layer (monolayer) has the out-of-plane mirror symmetry \(M_{z}\) with respect to the plane of Nb atoms. Figures 1 (b) and (c) show the top views of monolayer and AB-stacked bilayer (ABA-stacked trilayer) NbSe\({}_{2}\), respectively. Odd-number-layered NbSe\({}_{2}\) has the space group D\({}_{3h}\), which has the mirror symmetry \(M_{x}M_{z}\) and no inversion symmetry. On the other hand, even-number-layered NbSe\({}_{2}\) has the space group D\({}_{3d}\), which respects the two mirror symmetries \(M_{x}M_{z}\) and \(M_{y}M_{z}\) and the inversion symmetry \(P\). Figure 1 (d) shows the first Brillouin zone (BZ) for few-layered NbSe\({}_{2}\). For NbSe\({}_{2}\), we employ a multi-orbital TBM which includes the \(d_{z^{2}}\), \(d_{xy}\) and \(d_{x^{2}-y^{2}}\) orbitals of the Nb atom and can well describe the electronic states of NbSe\({}_{2}\) around the Fermi energy (\(E_{F}=0\) eV) [52; 61; 62; 65].
The eigenvalue equation for the effective TBM is \(\hat{H}(\mathbf{k})|u_{n\mathbf{k}}\rangle=E_{n\mathbf{k}}|u_{n\mathbf{k}}\rangle\), where \(n\) is the band index with spin states, expressed as \(n=1,2,\cdots,6N\) (\(N\) is the number of layers), \(\mathbf{k}=(k_{x},k_{y})\) is the wave-number vector for a 2D material, and \(E_{n\mathbf{k}}\) is the eigenvalue for the \(n\)-th band. The eigenvector for the \(n\)-th band is defined as \(|u_{n\mathbf{k}}\rangle=(c_{n\mathbf{k},d_{z^{2}},\uparrow},c_{n\mathbf{k},d_{xy},\uparrow},c_{n\mathbf{k},d_{x^{2}-y^{2}},\uparrow},c_{n\mathbf{k},d_{z^{2}},\downarrow},c_{n\mathbf{k},d_{xy},\downarrow},c_{n\mathbf{k},d_{x^{2}-y^{2}},\downarrow})^{T}\), where \((\cdots)^{T}\) indicates the transpose of the vector and \(c_{n\mathbf{k}\tau s}\) means the amplitude at atomic orbital \(\tau\) with spin \(s\) for the \(n\)-th energy band at \(\mathbf{k}\). The Hamiltonian of monolayer NbSe\({}_{2}\) with Ising-type SOC is \[\hat{H}_{\text{mono}}(\mathbf{k})=\hat{\sigma}_{0}\otimes\hat{H}_{\text{TNN}}(\bm {k})+\hat{\sigma}_{z}\otimes\frac{1}{2}\lambda_{\text{SOC}}\hat{L}_{z} \tag{2}\] with \[\hat{H}_{\text{TNN}}(\mathbf{k})=\begin{pmatrix}V_{0}&V_{1}&V_{2}\\ V_{1}^{*}&V_{11}&V_{12}\\ V_{2}^{*}&V_{12}^{*}&V_{22}\end{pmatrix},\;\hat{L}_{z}=\begin{pmatrix}0&0&0\\ 0&0&-2i\\ 0&2i&0\end{pmatrix}, \tag{3}\] where \(\hat{\sigma}_{0}\) and \(\hat{\sigma}_{z}\) are Pauli matrices and \(\lambda_{\text{SOC}}\) is the Ising-type SOC parameter (\(\lambda_{\text{SOC}}=0.0784\) eV for monolayer NbSe\({}_{2}\)). \(\hat{H}_{\text{TNN}}(\mathbf{k})\) includes the electron hoppings only among the three \(d\)-orbitals of Nb atoms, \(d_{z^{2}}\), \(d_{xy}\) and \(d_{x^{2}-y^{2}}\), which are taken into account up to third-nearest-neighbor sites. Similarly, using the Hamiltonian of monolayer NbSe\({}_{2}\), \(\hat{H}_{\text{mono}}(\mathbf{k})\), the Hamiltonians of bilayer and trilayer NbSe\({}_{2}\) can be written as \[\hat{H}_{\text{bi}}(\mathbf{k})=\begin{pmatrix}\hat{H}_{\text{mono}}(-\mathbf{k})& \hat{H}_{\text{int}}(\mathbf{k})\\ \hat{H}_{\text{int}}^{\dagger}(\mathbf{k})&\hat{H}_{\text{mono}}(\mathbf{k})\end{pmatrix} \tag{4}\] and \[\hat{H}_{\text{tri}}(\mathbf{k})=\begin{pmatrix}\hat{H}_{\text{mono}}(\mathbf{k})& \hat{H}_{\text{int}}(\mathbf{k})&0\\ \hat{H}_{\text{int}}^{\dagger}(\mathbf{k})&\hat{H}_{\text{mono}}(-\mathbf{k})&\hat{H}_{ \text{int}}(\mathbf{k})\\ 0&\hat{H}_{\text{int}}^{\dagger}(\mathbf{k})&\hat{H}_{\text{mono}}(\mathbf{k})\end{pmatrix}, \tag{5}\] respectively [54]. Here, the interlayer coupling Hamiltonian \(\hat{H}_{\text{int}}(\mathbf{k})\) between monolayers is \[\hat{H}_{\text{int}}(\mathbf{k})=\begin{pmatrix}T_{01}&0&0\\ 0&T_{02}&0\\ 0&0&T_{02}\end{pmatrix}\,. \tag{6}\] The details of the matrix elements \(V_{0}\), \(V_{1}\), \(V_{2}\), \(V_{11}\), \(V_{12}\), \(V_{22}\), \(T_{01}\) and \(T_{02}\) can be found in Refs. [61; 62]. Figures 1 (e), (f) and (g) show the energy band structures and density of states (DOS) of monolayer, bilayer and trilayer NbSe\({}_{2}\), respectively. Here, red, blue and green lines indicate up-spin, down-spin and spin-degenerate states, respectively. The Fermi energy \(E_{F}\) is set to 0 eV. Also, the system has a large energy band gap between the partially filled valence bands and the empty conduction bands (about 1.5 eV).
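As an illustration of how Eqs. (2)-(6) are assembled numerically, the following Python sketch builds the monolayer and bilayer Hamiltonians from a user-supplied \(\hat{H}_{\text{TNN}}(\mathbf{k})\); the hopping matrix and interlayer blocks are left as placeholder callables, since their explicit matrix elements are given in Refs. [61; 62], and the spin-diagonal treatment of the interlayer part is an assumption of this sketch.

```python
import numpy as np

sigma0 = np.eye(2)
sigmaz = np.diag([1.0, -1.0])
Lz = np.array([[0, 0, 0], [0, 0, -2j], [0, 2j, 0]])
lam_soc = 0.0784  # eV, Ising-type SOC for monolayer NbSe2

def h_mono(k, h_tnn):
    """Eq. (2): H_mono(k) = sigma0 (x) H_TNN(k) + sigmaz (x) (lam/2) Lz.
    h_tnn(k) must return the 3x3 third-nearest-neighbor hopping matrix
    (elements V0, V1, ... of Eq. (3); see Refs. [61; 62])."""
    return np.kron(sigma0, h_tnn(k)) + np.kron(sigmaz, 0.5 * lam_soc * Lz)

def h_bi(k, h_tnn, h_int):
    """Eq. (4): AB-stacked bilayer; h_int(k) returns the 3x3 diagonal
    interlayer block diag(T01, T02, T02) of Eq. (6). Treating it as
    spin-diagonal (kron with sigma0) is an assumption of this sketch."""
    hi = np.kron(sigma0, h_int(k))
    return np.block([[h_mono(-k, h_tnn), hi],
                     [hi.conj().T, h_mono(k, h_tnn)]])

# Band structures then follow from np.linalg.eigh(h_mono(k, h_tnn))
# evaluated on a k-grid along the high-symmetry path of Fig. 1 (d).
```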
Because of the broken inversion symmetry in odd-number-layered NbSe\({}_{2}\), spin-split bands can be seen, which show opposite spin states at the valence band edges at the K and K\({}^{\prime}\) points in the energy band structure, i.e., time-reversal symmetry is preserved. On the other hand, since even-number-layered NbSe\({}_{2}\) has crystal inversion symmetry, spin-degenerate states appear in the energy band structure. In even-number-layered NbSe\({}_{2}\), the interlayer interaction has a larger effect on the valence band at the \(\Gamma\) point than at the K and K\({}^{\prime}\) points, which leads to the large splitting of the valence band at the \(\Gamma\) point. In trilayer NbSe\({}_{2}\), the energy band consists of spin-split and spin-degenerate states. It should be noted that the energy band structure of trilayer NbSe\({}_{2}\) can be understood as a superposition of those of monolayer and bilayer NbSe\({}_{2}\). Figures 1 (h), (i) and (j) show the spin-dependent Fermi surfaces of monolayer, bilayer and trilayer NbSe\({}_{2}\), respectively. In odd-number-layered NbSe\({}_{2}\), the Fermi surface has up- and down-spin hole pockets centered at the \(\Gamma\), K and K\({}^{\prime}\) points. However, in even-number-layered NbSe\({}_{2}\), no such spin splitting occurs owing to the inversion symmetry. Thus, we show that the spin dependence of the Fermi surfaces behaves differently for even- and odd-number-layered NbSe\({}_{2}\). Figure 1: Crystal structures of few-layered NbSe\({}_{2}\) with Nb (black) and Se (yellow) atoms. (a) Side view of the crystal structure of monolayer (A), bilayer (AB) and trilayer (ABA) NbSe\({}_{2}\). Monolayer NbSe\({}_{2}\) has the mirror symmetry \(M_{z}\) in the perpendicular direction with respect to the plane. Top views of the crystal structures of (b) monolayer and (c) bilayer and trilayer NbSe\({}_{2}\), respectively. Odd-number-layered NbSe\({}_{2}\) has the mirror symmetry \(M_{x}M_{z}\) and no inversion symmetry, but even-number-layered NbSe\({}_{2}\) has inversion symmetry because of the mirror symmetries \(M_{x}M_{z}\) and \(M_{y}M_{z}\). (d) First BZ of NbSe\({}_{2}\). Energy band structures and DOS of (e) monolayer, (f) bilayer and (g) trilayer NbSe\({}_{2}\) with the SOC parameter \(\lambda_{\text{SOC}}=0.0784\) eV, respectively. The Fermi level is set to zero. Fermi surfaces of (h) monolayer, (i) bilayer and (j) trilayer NbSe\({}_{2}\), respectively. Red, blue and green lines indicate up-spin, down-spin and spin-degenerate states, respectively. ## III Second-order NLO charge and spin conductivities Under irradiation of light with strong amplitude, the current density \(J_{i}\) can be expanded as \(J_{i}=J_{i}^{(1)}+J_{i}^{(2)}+J_{i}^{(3)}+\cdots\). Here, \(J_{i}^{(1)}\) is the linear optical current, and \(J_{i}^{(2)}\) and \(J_{i}^{(3)}\) are the second- and third-order NLO currents. In the second-order NLO effect, the generated current density \(J_{i}^{(2)}\) can be given as \[J_{i}^{(2)}=\sum_{jk}\sum_{\omega_{1}\omega_{2}}\sigma_{ijk}^{(2)}( \omega_{1}+\omega_{2};\omega_{1},\omega_{2})E_{j}(\omega_{1})E_{k}(\omega_{2}), \tag{7}\] where \(E_{j}(\omega_{1})\) and \(E_{k}(\omega_{2})\) are the electric fields of the incident light [3]. Note that each of \((i,j,k)\) denotes the \(x\)- or \(y\)-direction. In particular, \(i\) denotes the generation direction of the NLO current, while \(j\) and \(k\) denote the polarizations of \(E_{j}(\omega_{1})\) and \(E_{k}(\omega_{2})\).
\(\sigma_{ijk}^{(2)}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})\) is the second-order NLO conductivity. In general, the second-order NLO conductivity is given by the (second-order) Kubo formula [62; 63; 64; 65; 66; 67; 68; 69; 70; 4]: \[\sigma_{ijk}^{(2)}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})\equiv-\frac{\hbar^{2}e^{2}}{2S}\sum_{\mathbf{k}}(\Omega_{ijk}^{(2)}(\omega_{1},\omega_{2},\mathbf{k})+\Omega_{ikj}^{(2)}(\omega_{2},\omega_{1},\mathbf{k})) \tag{8}\] with \[\begin{split}\Omega_{ijk}^{(2)}(\omega_{1},\omega_{2},\mathbf{k})=&\sum_{nml}\frac{j_{mn}^{i}}{E_{ml}E_{ln}(E_{mn}-\hbar(\omega_{1}+\omega_{2})-i\eta)}\\ &\times\Bigg{[}\frac{v_{nl}^{j}v_{ln}^{k}f_{ml}}{E_{ml}-\hbar\omega_{2}-i\eta}-\frac{v_{nl}^{k}v_{lm}^{j}f_{ln}}{E_{ln}-\hbar\omega_{2}-i\eta}\Bigg{]}\\ =&\sum_{nml}\frac{f_{lm}v_{ln}^{k}}{E_{ml}(E_{ml}-\hbar\omega_{2}-i\eta)}\\ &\times\Bigg{[}\frac{j_{mn}^{i}v_{nl}^{j}}{E_{nl}(E_{mn}-\hbar(\omega_{1}+\omega_{2})-i\eta)}-\frac{v_{mn}^{j}j_{nl}^{i}}{E_{mn}(E_{nl}-\hbar(\omega_{1}+\omega_{2})-i\eta)}\Bigg{]},\end{split} \tag{9}\] which describes the contribution of the interband optical transitions [71]. \(\Omega_{ijk}^{(2)}(\omega_{1},\omega_{2},\mathbf{k})\) is the integrand of \(\sigma_{ijk}^{(2)}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})\) before the intrinsic permutation symmetry, \(\Omega_{ijk}^{(2)}(\omega_{1},\omega_{2},\mathbf{k})\leftrightarrow\Omega_{ikj}^{(2)}(\omega_{2},\omega_{1},\mathbf{k})\), is imposed in Eq. (8). Note that \((n,m,l)\) are the band indices including the spin degree of freedom. \(v_{nl}^{j}=\langle u_{n\mathbf{k}}|\hat{\mathbf{v}}\cdot\mathbf{e}_{j}|u_{l\mathbf{k}}\rangle\), where \(\hat{\mathbf{v}}\) is the group velocity operator, written as \(\hat{\mathbf{v}}=(\hat{v}_{x},\hat{v}_{y})=\frac{1}{\hbar}(\frac{\partial\hat{H}}{\partial k_{x}},\frac{\partial\hat{H}}{\partial k_{y}})\) for 2D materials. \(\mathbf{e}_{j}\) (\(\mathbf{e}_{k}\)) is the Jones vector. For example, \(\mathbf{e}_{x}=(1,0)^{T}\) for \(x\)-polarized light, \(\mathbf{e}_{y}=(0,1)^{T}\) for \(y\)-polarized light, \(\mathbf{e}_{L}=\frac{1}{\sqrt{2}}(1,i)^{T}\) for LCP light and \(\mathbf{e}_{R}=\frac{1}{\sqrt{2}}(1,-i)^{T}\) for RCP light. \(|u_{n\mathbf{k}}\rangle\) is the eigenfunction with the eigenenergy \(E_{n\mathbf{k}}\), and \(f(E_{n\mathbf{k}})\) is the Fermi-Dirac distribution function. \(E_{ml}\) is the difference between the energy levels \(E_{m\mathbf{k}}\) and \(E_{l\mathbf{k}}\), i.e., \(E_{ml}\equiv E_{m\mathbf{k}}-E_{l\mathbf{k}}\), and \(f_{ml}\) is defined as \(f_{ml}\equiv f(E_{m\mathbf{k}})-f(E_{l\mathbf{k}})\). \(\omega_{1}\) and \(\omega_{2}\) are the optical angular frequencies for the absorption process. \(\eta\) is an infinitesimally small real number and \(S\) is the area of the 2D system. Throughout this paper, \(\eta=0.001\) eV is set for the calculation of the conductivity.
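Numerically, Eqs. (8) and (9) amount to diagonalizing \(\hat{H}(\mathbf{k})\) on a \(\mathbf{k}\)-grid, forming velocity matrix elements from \(\partial\hat{H}/\partial\mathbf{k}\), and accumulating the triple band sum; the following Python sketch (with \(\hbar=1\) units, a zero-temperature occupation, finite-difference velocities, and a naive triple loop) is an illustrative outline, not an optimized implementation.

```python
import numpy as np

ETA = 0.001  # eV, broadening used throughout the paper

def velocities(hk, k, d=1e-6):
    """Velocity matrices from dH/dk by central differences (hbar = 1 units)."""
    k = np.asarray(k, dtype=float)
    vx = (hk(k + [d, 0.0]) - hk(k - [d, 0.0])) / (2 * d)
    vy = (hk(k + [0.0, d]) - hk(k - [0.0, d])) / (2 * d)
    return vx, vy

def omega2(hk, k, w1, w2, j_op, j=0, kk=0, ef=0.0):
    """Naive evaluation of Eq. (9) at a single k point; j_op is the current
    density operator (e.g., -e*v_i for the charge current); j/kk pick x or y."""
    e, u = np.linalg.eigh(hk(np.asarray(k, dtype=float)))
    v = [u.conj().T @ vm @ u for vm in velocities(hk, k)]
    jmat = u.conj().T @ j_op @ u           # current operator in the band basis
    f = (e < ef).astype(float)             # zero-temperature occupations
    E = e[:, None] - e[None, :]            # E[m, l] = E_m - E_l
    tot = 0.0 + 0.0j
    nb = len(e)
    for n in range(nb):
        for m in range(nb):
            for l in range(nb):
                if m == l or l == n:       # skip vanishing E_ml, E_ln factors
                    continue
                pre = jmat[m, n] / (E[m, l] * E[l, n]
                                    * (E[m, n] - (w1 + w2) - 1j * ETA))
                tot += pre * (v[j][n, l] * v[kk][l, n] * (f[m] - f[l])
                              / (E[m, l] - w2 - 1j * ETA)
                              - v[kk][n, l] * v[j][l, m] * (f[l] - f[n])
                              / (E[l, n] - w2 - 1j * ETA))
    return tot

# sigma^(2) of Eq. (8) then follows by summing omega2(..., w1, w2) plus its
# (j <-> k, w1 <-> w2) partner over the k-grid, with the -hbar^2 e^2 / 2S prefactor.
```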
\(j_{mn}^{i}=\langle u_{m\mathbf{k}}|\hat{J}_{i}|u_{n\mathbf{k}}\rangle\), where \(\hat{J}_{i}\) is the current density operator, represented as \(\hat{J}_{i}^{\text{charge}}=\frac{1}{2}\{-e\hat{\sigma}_{0}\otimes\hat{I}_{N},\hat{v}_{i}\}\) for the charge current and \(\hat{J}_{i}^{\text{spin}}=\frac{1}{2}\{\frac{\hbar}{2}\hat{\sigma}_{z}\otimes\hat{I}_{N},\hat{v}_{i}\}\) for the spin current. Here, \(\hat{I}_{N}\) is the \(N\times N\) identity matrix, and in particular \(N=3,6,9\) is used for monolayer, bilayer and trilayer NbSe\({}_{2}\), respectively. In Eqs. (8) and (9), when \(\omega_{1}+\omega_{2}=\omega_{3}\) and \(\omega_{1}\neq\omega_{2}\neq\omega_{3}\neq 0\), the process is called sum-frequency generation (SFG) [16; 17; 18]. In particular, if \(\omega_{1}=\omega_{2}\equiv\omega\neq 0\) and \(\omega_{3}=2\omega\), the process becomes SHG. In addition, when \(\omega_{1}-\omega_{2}=\omega_{3}\) and \(\omega_{1}\neq\omega_{2}\neq\omega_{3}\neq 0\), the process is called difference-frequency generation (DFG) [72; 73], which can express the bulk photovoltaic effect if \(\omega_{1}=-\omega_{2}\equiv\omega\neq 0\). The NLO conductivities are strongly influenced by the crystal symmetries [3]. For the SHG process in odd-number-layered NbSe\({}_{2}\), where the inversion symmetry is broken, the NLO charge and spin conductivities of the SHG process, \(\sigma_{ijk}^{\text{charge}}(2\omega;\omega,\omega)\) and \(\sigma_{ijk}^{\text{spin}}(2\omega;\omega,\omega)\), obey the following relations: \[\sigma_{yyy}^{\text{charge}}(2\omega;\omega,\omega)=-\sigma_{yxx}^{\text{charge}}(2\omega;\omega,\omega)=-\sigma_{xxy}^{\text{charge}}(2\omega;\omega,\omega)=-\sigma_{xyx}^{\text{charge}}(2\omega;\omega,\omega) \tag{10}\] and \[\sigma_{xxx}^{\text{spin}}(2\omega;\omega,\omega)=-\sigma_{xyy}^{\text{spin}}(2\omega;\omega,\omega)=-\sigma_{yxy}^{\text{spin}}(2\omega;\omega,\omega)=-\sigma_{yyx}^{\text{spin}}(2\omega;\omega,\omega). \tag{11}\] The other elements are absent. On the other hand, in even-number-layered NbSe\({}_{2}\), the NLO current is always absent owing to the inversion symmetry. These results are summarized in Table 1. In addition, another derivation of these relations using crystal symmetry operations is presented in Appendix A. ## IV NLO conductivity under irradiation of BCL BCL can artificially control the symmetry of the electronic system [74; 75]. As defined in Eq. (1), the BCL field is \[A_{\text{BCL}}(t)=A_{L}e^{in_{1}\omega t}+A_{R}e^{in_{2}\omega t-i\theta}+\text{c.c.}, \tag{12}\] where \(A_{L(R)}\) is the amplitude of the LCP (RCP) light and \(\theta\) is the phase difference between LCP and RCP light. In general, the trajectory of BCL is a rose curve with \((n_{1}+n_{2})/\mathrm{gcd}(n_{1},n_{2})\)-fold rotation symmetry. Here, \(\mathrm{gcd}(n_{1},n_{2})\) means the greatest common divisor of the two integers \(n_{1},n_{2}\). For example, there are 3-leaf BCL with 3-fold rotation symmetry \(C_{3}\), 4-leaf BCL with 4-fold rotation symmetry \(C_{4}\) and 5-leaf BCL with 5-fold rotation symmetry \(C_{5}\).
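The rose-curve trajectory of Eq. (12) and its leaf counting can be visualized directly; in the short sketch below the two circular components are represented as counter-rotating complex phasors, and the unit amplitudes and \(\omega=1\) are illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt
from math import gcd

def bcl_trajectory(t, n1, n2, theta=0.0, AL=1.0, AR=1.0):
    """Eq. (12) with w = 1: the LCP/RCP components are represented as
    counter-rotating phasors in the complex polarization plane."""
    z = AL * np.exp(1j * n1 * t) + AR * np.exp(-1j * (n2 * t - theta))
    return z.real, z.imag

t = np.linspace(0.0, 2.0 * np.pi, 2000)
for n1, n2 in [(1, 2), (1, 3), (2, 3)]:
    x, y = bcl_trajectory(t, n1, n2)
    leaves = (n1 + n2) // gcd(n1, n2)  # rotation symmetry of the rose curve
    plt.plot(x, y, label=f"n1:n2 = {n1}:{n2} ({leaves}-leaf)")
plt.legend()
plt.axis("equal")
plt.show()
```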
The induced current density \(J_{i}^{\mathrm{BCL}}\) under the irradiation of BCL, i.e., \(E_{\mathrm{BCL}}(t)=E_{L}e^{in_{1}\omega t}+E_{R}e^{in_{2}\omega t}+\mathrm{c.c.}=\sum_{a=L,R}E_{a}e^{in_{a}\omega t}+\mathrm{c.c.}\) with \((n_{L},n_{R})\equiv(n_{1},n_{2})\), can be written as \[J_{i}^{\mathrm{BCL}}=\sum_{a,b=L,R}\sigma_{iab}^{(2)}((n_{a}+n_{b})\omega;n_{a}\omega,n_{b}\omega)E_{a}E_{b}.\] Furthermore, the NLO conductivity for BCL irradiation becomes \[\begin{split}&\sigma_{i-\mathrm{BCL}-\mathrm{BCL}}^{(2)}(n_{1}\omega,n_{2}\omega)\\ &=\sum_{a,b=L,R}\sigma_{iab}^{(2)}((n_{a}+n_{b})\omega;n_{a}\omega,n_{b}\omega)\\ &=\sigma_{iLL}^{(2)}(2n_{1}\omega;n_{1}\omega,n_{1}\omega)+\sigma_{iLR}^{(2)}((n_{1}+n_{2})\omega;n_{1}\omega,n_{2}\omega)\\ &\quad+\sigma_{iRL}^{(2)}((n_{2}+n_{1})\omega;n_{2}\omega,n_{1}\omega)+\sigma_{iRR}^{(2)}(2n_{2}\omega;n_{2}\omega,n_{2}\omega),\end{split} \tag{13}\] where the subscripts BCL and L (R) denote the injection of BCL and LCP (RCP) light, respectively. The NLO conductivities are numerically calculated by using Eqs. (8) and (9). Figure 2 (a) shows the trajectory of an incident 3-leaf BCL (\(n_{1}:n_{2}=1:2\), \(\theta=0\)). The 3-leaf BCL can break the mirror symmetry \(M_{x}\) and then induce three polarizations \(P_{\mathrm{BCL}}^{1}\), \(P_{\mathrm{BCL}}^{2}\) and \(P_{\mathrm{BCL}}^{3}\) along the directions of the three leaves of the 3-leaf BCL (see Fig. 2 (b)). Figure 2 (c) shows the \(\omega\)-dependence of the real parts of the second-order NLO conductivities \(\sigma_{i-\mathrm{BCL}-\mathrm{BCL}}^{(2)}(n_{1}\omega,n_{2}\omega)\) for monolayer NbSe\({}_{2}\) under irradiation of 3-leaf BCL. In previous work, we have shown that under LP light irradiation, the charge and spin Hall currents can be generated only either in the \(y\)- or \(x\)-direction for monolayer NbSe\({}_{2}\)[62]. However, since the 3-leaf BCL breaks the \(M_{x}\) symmetry, the current can be generated in both the \(x\)- and \(y\)-directions. Figure 3 (a) shows the trajectory of 3-leaf BCL with a rotational angle \(\alpha\) (\(n_{1}:n_{2}=1:2\), \(\alpha\neq 0\)). Here, \(\alpha\) is related to \(\theta\) by \[\alpha=-\frac{n_{1}}{n_{1}+n_{2}}\theta, \tag{14}\] and is defined as the rotation angle from the \(x\)-axis. Figure 3 (b) shows the \(\alpha\)-dependence of the real parts of the second-order NLO conductivities \(\sigma_{i-\mathrm{BCL-BCL}}^{(2)}(n_{1}\omega,n_{2}\omega)\) for monolayer NbSe\({}_{2}\) under irradiation of 3-leaf BCL of \(\hbar\omega=2.5\) eV. The charge current is generated along the three different axes of the polarizations induced by the incident 3-leaf BCL, resulting in the appearance of six equivalent peaks for \(\alpha\) (\(0\leq\alpha<2\pi\)). On the other hand, the magnitude of the spin current has only three equivalent peaks. As shown in the Supplementary Material, the NLO conductivities for the up- and down-spin states each have three equivalent peaks, but point in opposite directions to each other. In general, the numbers of generation directions of the charge and spin currents differ by a factor of two. In Appendix A, we derive the selection rules of the second-order NLO conductivities of NbSe\({}_{2}\) under irradiation of generic BCL. In Appendix B, we provide the second-order NLO conductivities of monolayer NbSe\({}_{2}\) for the irradiation of BCL with 4, 5, 6 and 7 leaves. As shown in Fig. 8 of Appendix B, it is found that the peaks of the NLO conductivities in the range of \(0<\hbar\omega<1\) eV shift to lower energies as the number of leaves of the BCL increases.
These shifts originate from the factor of \((n_{1}+n_{2})\) in the denominator of Eq. (9). If \(n_{1}:n_{2}=1:1\) with \(\theta=0\), the trajectory of the incident light is identical to that of \(x\)-polarized light, i.e., \[A_{\mathrm{BCL}}(t)=A_{L}e^{i\omega t}+A_{R}e^{i\omega t}+\mathrm{c.c.}=\sqrt{2}A_{x}e^{i\omega t}+\mathrm{c.c.}. \tag{15}\] Therefore, this case reproduces the results of the SHG process [62]. The details are shown in the Supplementary Material [63]. In even-number-layered NbSe\({}_{2}\), the second-order NLO current is always absent because of the inversion symmetry [62]. However, the irradiation of BCL with an odd number of leaves, or with an even number of leaves and \(\alpha\neq 0\), can break the inversion symmetry, resulting in the generation of charge and spin currents. As shown in Appendix C, for irradiation of RCP and LCP light with different optical angular frequencies, the NLO current is generated, i.e., there is no cancellation of the current even in the presence of spatial inversion symmetry. Figure 4 (a) shows the real parts of \(\sigma_{i-\mathrm{BCL-BCL}}^{(2)}(n_{1}\omega,n_{2}\omega)\) for bilayer NbSe\({}_{2}\) under irradiation of 3-leaf BCL. Owing to the broken mirror symmetry \(M_{x}\), the NLO charge and spin currents can be generated only either in the \(x\)- or \(y\)-direction for bilayer NbSe\({}_{2}\). Thus, the irradiation of BCL can generate the current even in even-number-layered NbSe\({}_{2}\). Furthermore, we should notice that in the energy range of \(0<\hbar\omega<0.1\) eV, \(\mathrm{Re}[\sigma_{x-\mathrm{BCL-BCL}}^{\mathrm{charge}}]\) and \(\mathrm{Re}[\sigma_{y-\mathrm{BCL-BCL}}^{\mathrm{spin}}]\) have peaks for bilayer NbSe\({}_{2}\). These peaks are attributed to the interband optical transition between the highest valence and lowest conduction bands shown in Fig. 4 (b), which causes infrared light absorption. On the other hand, the infrared light absorption is absent in semiconducting TMDCs such as MoS\({}_{2}\) because of the large band gap between the valence and conduction bands (\(\Delta E\approx 2.0\) eV). Figure 4 (c) schematically summarizes the induced charge and spin current directions in bilayer NbSe\({}_{2}\) under irradiation of 3-leaf BCL. Since BCL breaks the inversion symmetry of the electronic system, the NLO current is generated even in bilayer NbSe\({}_{2}\). Note that the charge current is induced along \(P_{\mathrm{BCL}}^{3}\) in Fig. 2 (b), but the spin current is induced perpendicular to the charge current. In Appendix D, we provide the second-order NLO conductivities for trilayer NbSe\({}_{2}\) under BCL irradiation. The second-order NLO conductivities of trilayer NbSe\({}_{2}\) consist of three optical transition processes: (i) the intralayer optical transition of monolayer NbSe\({}_{2}\), (ii) the intra- and interlayer optical transitions of bilayer NbSe\({}_{2}\), and (iii) the interlayer optical transitions between monolayer and bilayer NbSe\({}_{2}\). Under irradiation of LP light, the process (ii) is absent. However, 3-leaf BCL makes all three processes finite. ## V Shift- and injection-current conductivities DFG with \(\omega_{1}=-\omega_{2}\equiv\omega\neq 0\) induces the DC photocurrent, i.e., the bulk photovoltaic effect. In Eq. (7),
In Eq. (7), the DC photocurrent \(J_{i}^{(2)}\) can be obtained as \[\begin{split} J_{i}^{(2)}=&\sum_{jk}\{\sigma_{ijk}^{(2)}(0;\omega,-\omega)E_{j}(\omega)E_{k}(-\omega)\\ &+\sigma_{ijk}^{(2)}(0;-\omega,\omega)E_{j}(-\omega)E_{k}(\omega)\}\\ =&\sum_{jk}\{\sigma_{ijk}^{(2)}(0;\omega,-\omega)E_{j}(\omega)E_{k}(-\omega)+\text{c.c.}\}\\ =&\sum_{jk}2\sigma_{ijk}^{(2)}(0;\omega,-\omega)E_{j}(\omega)E_{k}(-\omega),\end{split} \tag{16}\] where the DC photoconductivity based on Eqs. (8) and (9) follows the identities \(\sigma_{ijk}^{(2)}(0;-\omega,\omega)=-\sigma_{ijk}^{(2)}(0;\omega,-\omega)=[\sigma_{ijk}^{(2)}(0;\omega,-\omega)]^{*}\). The numerical results, shown in the Supplementary material [63], depend strongly on the crystal symmetry: a current is generated for odd-number-layered NbSe\({}_{2}\) but is absent for even-number-layered NbSe\({}_{2}\). The bulk photovoltaic effect includes the generation of shift and injection currents. In this section, we derive the shift- and injection-current conductivities, and find that these conductivities depend on the crystal symmetry and the polarization of the incident light for monolayer NbSe\({}_{2}\).

### Shift and injection current under LP light irradiation

The DC photocurrent can be separated into two different optically induced DC currents: (i) the shift current and (ii) the injection current. (i) The shift current is an induced polarization current owing to the shift of the electron position under light irradiation [28]. The induced polarization of electrons \(P_{i}\) is written as \[P_{i}=e\int_{\text{BZ}}\frac{d\mathbf{k}}{(2\pi)^{3}}\sum_{m}f_{m}\xi_{mm}^{i}, \tag{17}\] where \(\xi_{mm}^{i}\) is the Berry connection of the \(m\)-th band, \(\xi_{mm}^{i}=i\langle u_{m\mathbf{k}}|\frac{\partial}{\partial k_{i}}|u_{m\mathbf{k}}\rangle\) [4]. The Berry connection is also interpreted as the Wannier center of the wave function [77; 78; 79]. Thus, the origin of the shift current is the change of the polarization \(P_{i}\), i.e., the Berry-connection difference between the valence and conduction bands \(\xi^{i}_{mm}-\xi^{i}_{ll}\) upon photoexcitation. Note that \(l\) (\(m\)) is the unoccupied (occupied) band index. (ii) The injection current is a net current induced by the asymmetric distribution of electron and hole velocities \(\Delta^{i}_{lm}=v^{i}_{ll}-v^{i}_{mm}\), owing to the population imbalance created by photoexcitation in momentum space [28]. Using a sum rule for the generalized derivative \(r^{j}_{ml;i}\) of the position \(r^{j}_{ml}=\langle u_{m\mathbf{k}}|\mathbf{r}\cdot\mathbf{e}_{j}|u_{l\mathbf{k}}\rangle\) in the DC photoconductivity \(\sigma^{(2)}_{ijk}(0;\omega,-\omega)\) [4; 28; 39; 40; 41], the DC photoconductivity can be separated into shift- and injection-current conductivities. Here, \(r^{j}_{ml;i}\) is given as \[\begin{split} r^{j}_{ml;i}&=\frac{\partial r^{j}_{ml}}{\partial k_{i}}-i[\xi^{i}_{mm}-\xi^{i}_{ll}]r^{j}_{ml}\\ &=-iR^{i}_{ml}(\mathbf{k})r^{j}_{ml},\end{split} \tag{18}\] which depends on the difference of the Berry connections, i.e., the contribution to the shift current. Note that \(R^{i}_{ml}(\mathbf{k})\) is the shift vector, given as \[R^{i}_{ml}(\mathbf{k})=\frac{\partial\phi^{j}_{ml}}{\partial k_{i}}+(\xi^{i}_{mm}-\xi^{i}_{ll}), \tag{19}\] where \(\phi^{j}_{ml}\) is the phase of the group velocity \(v^{j}_{ml}=\langle u_{m\mathbf{k}}|\mathbf{v}\cdot\mathbf{e}_{j}|u_{l\mathbf{k}}\rangle=|v^{j}_{ml}|e^{-i\phi^{j}_{ml}}\).
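The identification of the Berry connection with a Wannier center can be checked numerically on any lattice model. Below is a minimal Python sketch (our own illustration, using a hypothetical SSH-type two-band chain rather than the NbSe\({}_{2}\) tight-binding model of this work): a discretized Wilson loop yields the Berry phase of a band, and the Wannier center is that phase divided by \(2\pi\) in units of the lattice constant.

```python
import numpy as np

def berry_phase_1d(h_of_k, band=0, nk=400):
    """Berry phase of one band of a periodic 1D Bloch Hamiltonian
    h_of_k(k) -> (N, N) Hermitian matrix, via a discretized Wilson loop;
    gauge (phase) choices of the eigenvectors cancel in the loop product."""
    ks = np.linspace(0.0, 2.0*np.pi, nk, endpoint=False)
    states = [np.linalg.eigh(h_of_k(k))[1][:, band] for k in ks]
    loop = 1.0 + 0.0j
    for i in range(nk):
        loop *= np.vdot(states[i], states[(i + 1) % nk])
    return -np.angle(loop)

# Hypothetical SSH-type chain: intracell hopping v, intercell hopping w.
def ssh(k, v=0.5, w=1.0):
    off = v + w*np.exp(-1j*k)
    return np.array([[0.0, off], [np.conj(off), 0.0]])

phase = berry_phase_1d(ssh)   # ~pi (Wannier center at 1/2) for w > v
```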
For the numerical calculation, it is better to rewrite the generalized derivative \(r^{j}_{ml;i}\) as follows: \[r^{j}_{ml;i}=\frac{r^{i}_{ml}\Delta^{j}_{lm}+r^{j}_{ml}\Delta^{i}_{lm}}{\omega_{ml}}-i\hbar^{2}\sum_{n\neq m,l}\left(\frac{v^{i}_{mn}v^{j}_{nl}}{E_{ml}E_{mn}}-\frac{v^{j}_{mn}v^{i}_{nl}}{E_{ml}E_{nl}}\right). \tag{20}\] Using the relation between \(r^{i}_{ml}\) and \(v^{i}_{ml}\) for interband optical transitions, i.e., \[r^{i}_{ml}=\begin{cases}\frac{v^{i}_{ml}}{i\omega_{ml}}&(m\neq l)\\ 0&(m=l)\end{cases}, \tag{21}\] Eq. (20) is rewritten as an expression in terms of velocities. Thus, \(r^{j}_{ml;i}\) becomes \[\begin{split} r^{j}_{ml;i}&=-\hbar R^{i}_{ml}(\mathbf{k})\frac{v^{j}_{ml}}{E_{ml}}\\ &=-\frac{i\hbar^{2}}{E_{ml}}\left[\frac{v^{i}_{ml}(v^{j}_{ll}-v^{j}_{mm})+v^{j}_{ml}(v^{i}_{ll}-v^{i}_{mm})}{E_{ml}}\right.\\ &\qquad\left.+\sum_{n\neq m,l}\left(\frac{v^{i}_{mn}v^{j}_{nl}}{E_{mn}}-\frac{v^{j}_{mn}v^{i}_{nl}}{E_{nl}}\right)\right],\end{split} \tag{22}\] where for the charge current \(v^{i}_{ml}\) is \(\langle u_{m\mathbf{k}}|j^{\text{charge}}_{i}|u_{l\mathbf{k}}\rangle=\frac{1}{2}\langle u_{m\mathbf{k}}|\{-e\hat{\sigma}_{0}\otimes\hat{I}_{N},\hat{v}_{i}\}|u_{l\mathbf{k}}\rangle\) and for the spin current \(\langle u_{m\mathbf{k}}|j^{\text{spin}}_{i}|u_{l\mathbf{k}}\rangle=\frac{1}{2}\langle u_{m\mathbf{k}}|\{\frac{\hbar}{2}\hat{\sigma}_{z}\otimes\hat{I}_{N},\hat{v}_{i}\}|u_{l\mathbf{k}}\rangle\). Thus, by substituting the second term of Eq. (22) into the summation part of Eq. (9) with \(\omega_{1}=-\omega_{2}\equiv\omega\neq 0\), the DC photoconductivity becomes \(\sigma^{(2)}_{ijk}(0;\omega,-\omega)=\sigma^{\rm shift}_{ijk}(0;\omega,-\omega)+\sigma^{\rm injection}_{ijk}(0;\omega,-\omega)\). Here, \(\sigma^{\rm shift}_{ijk}(0;\omega,-\omega)\) is the shift-current and \(\sigma^{\rm injection}_{ijk}(0;\omega,-\omega)\) the injection-current conductivity.

Figure 5: Real parts of charge and spin (a, d) shift- and (b, e) injection-current conductivities for monolayer NbSe\({}_{2}\) under LP and CP light irradiation, respectively. (c, f) \(\tau\)-dependences of shift (red) and injection (blue) current conductivities for linearly \(x\)-polarized and RCP light of \(\hbar\omega=2.5\) eV. Here, plots of the NLO conductivities are shown on a logarithmic scale. The units of NLO charge and spin conductivities are \(e^{3}/\hbar\) and \(e^{2}\), respectively.

For LP light irradiation, \(\sigma^{\rm shift}_{ijk}(0;\omega,-\omega)\) and \(\sigma^{\rm injection}_{ijk}(0;\omega,-\omega)\) are derived as [39] \[\sigma^{\rm shift}_{ijk}(0;\omega,-\omega)=-\frac{i\hbar e^{3}}{S}\sum_{\mathbf{k}}\sum_{mn}f_{nm}\frac{\alpha^{jk}_{mn}(\mathbf{k})R^{i}_{nm}(\mathbf{k})}{E^{2}_{mn}(E_{mn}-\hbar\omega-i\eta)} \tag{23}\] with the transition intensity \[\alpha^{jk}_{mn}(\mathbf{k})=\frac{1}{2}(v^{j}_{mn}(\mathbf{k})v^{k}_{nm}(\mathbf{k})+v^{k}_{mn}(\mathbf{k})v^{j}_{nm}(\mathbf{k})), \tag{24}\] and \[\sigma^{\rm injection}_{ijk}(0;\omega,-\omega)=\tau\frac{i\hbar e^{3}}{S}\sum_{\mathbf{k}}\sum_{mn}f_{nm}\frac{\alpha^{jk}_{mn}(\mathbf{k})\Delta^{i}_{mn}(\mathbf{k})}{E^{2}_{mn}(E_{mn}-\hbar\omega-i\eta)}, \tag{25}\] where the dummy variables (\(m\to n\), \(l\to m\)) have been interchanged and \(\tau=\hbar/\eta\). For the spin current, the superscripts of these conductivities are rewritten as "spin-shift" and "spin-injection". It should be noted that the spin-dependent position derivative and velocity difference for the direction of the generated current are \(r^{j}_{ml;i({\rm spin})}\) and \(\Delta^{i-{\rm spin}}_{lm}\). According to Ref.
[28], since the length-gauge [4] and velocity-gauge [39] approaches are equivalent, we use the velocity-gauge expression for our calculations. Since the shift current strongly depends on the crystal symmetry, the charge and spin shift currents can be generated in odd-number-layered NbSe\({}_{2}\) but are absent in even-number-layered NbSe\({}_{2}\). The finite charge and spin shift-current conductivities can be obtained as \[\sigma^{\rm shift}_{yyy}(0;\omega,-\omega)=-\sigma^{\rm shift}_{yxx}(0;\omega,-\omega)=-\sigma^{\rm shift}_{xxy}(0;\omega,-\omega)=-\sigma^{\rm shift}_{xyx}(0;\omega,-\omega) \tag{26}\] and \[\sigma^{\rm spin-shift}_{xxx}(0;\omega,-\omega)=-\sigma^{\rm spin-shift}_{xyy}(0;\omega,-\omega)=-\sigma^{\rm spin-shift}_{yxy}(0;\omega,-\omega)=-\sigma^{\rm spin-shift}_{yyx}(0;\omega,-\omega), \tag{27}\] respectively. The other elements are absent for monolayer NbSe\({}_{2}\). These relations are equivalent to the results of the SHG process [62]. Figure 5 (a) shows the charge and spin shift-current conductivities for monolayer NbSe\({}_{2}\) under irradiation of LP light. Because of the broken inversion symmetry, the charge and spin shift currents can be generated in the \(y\)- and \(x\)-directions, respectively, for linearly \(x\)-polarized light. Likewise, for linearly \(y\)-polarized light, the charge and spin shift currents are generated in the \(y\)- and \(x\)-directions, respectively. We should note that the generated spin shift current is perpendicular to the charge current. The appearance of the charge (spin) shift current along the \(y\) (\(x\))-direction can also be understood by inspecting \(\alpha^{jk}_{mn}(\mathbf{k})\) with \(jk=xx,yy\) and \(R^{i}_{nm}(\mathbf{k})\), which are contained in the integrand of Eq. (23). When the product \(\alpha^{jk}_{mn}(\mathbf{k})R^{i}_{nm}(\mathbf{k})\) with \(jk=xx,yy\) is even with respect to \(\pi\)-rotation, we obtain a finite charge (spin) shift current along the \(y\) (\(x\))-direction. The contour plots of \(\alpha^{jk}_{mn}(\mathbf{k})\) and \(R^{i}_{nm}(\mathbf{k})\) can be found in Fig. 11 in Appendix E. Since \(\alpha^{jk}_{mn}(\mathbf{k})\) with \(jk=xx,yy\) is even under \(\pi\)-rotation, the shift current is absent for \(R^{i}_{nm}(\mathbf{k})\) with odd parity. On the other hand, the finite elements of the injection-current conductivities obey the relation \[\sigma^{\rm spin-injection}_{xxx}(0;\omega,-\omega)=-\sigma^{\rm spin-injection}_{xyy}(0;\omega,-\omega)=-\sigma^{\rm spin-injection}_{yxy}(0;\omega,-\omega)=-\sigma^{\rm spin-injection}_{yyx}(0;\omega,-\omega), \tag{28}\] while the other elements are absent for monolayer NbSe\({}_{2}\). Figure 5 (b) shows the spin injection-current conductivities for monolayer NbSe\({}_{2}\) under LP light irradiation. Owing to the broken inversion symmetry, the spin injection current can be generated in the \(x\)-direction for \(x\)- and \(y\)-polarized light. As shown in Fig. 5 (c), the magnitude of the generated spin injection current is larger than that of the spin shift current owing to the \(\tau\)-dependence of the injection current (for \(\eta=0.001\) eV, \(\tau=\hbar/\eta\approx 6.6\times 10^{-13}\) s). Here, the conductivities are plotted on a logarithmic scale. Thus, if \(\tau\) increases, the magnitude of the injection current becomes larger, whereas that of the shift current is robust, i.e., topologically protected. We also find that the parity of \(\alpha^{jk}_{mn}(\mathbf{k})\Delta^{i}_{mn}(\mathbf{k})\) in the integrand of Eq. (25) governs the generation of the injection current. As shown in Fig.
11 in Appendix E, \(\alpha^{jk}_{mn}(\mathbf{k})\) with \(jk=xx,yy\) and \(\Delta^{x-{\rm spin}}_{mn}(\mathbf{k})\) are even under \(\pi\)-rotation. Thus, the spin injection current is generated in the \(x\)-direction, because \(\alpha^{jk}_{mn}(\mathbf{k})\Delta^{x-{\rm spin}}_{mn}(\mathbf{k})\) with \(jk=xx,yy\) is even with respect to \(\pi\)-rotation.

### Shift and injection current under CP light irradiation

Similarly, for irradiation of CP light, the shift- and injection-current conductivities are given as \[\sigma^{\rm shift}_{iRL}(0;\omega,-\omega)=\frac{e^{3}}{\hbar S}\sum_{\mathbf{k}}\sum_{mn}f_{nm}\frac{\Omega^{xy}_{mn}(\mathbf{k})-\Omega^{yx}_{mn}(\mathbf{k})}{E_{mn}-\hbar\omega-i\eta}R^{i}_{nm}(\mathbf{k}) \tag{29}\] and \[\sigma^{\rm injection}_{iRL}(0;\omega,-\omega)=\frac{\tau e^{3}}{\hbar S}\sum_{\mathbf{k}}\sum_{mn}f_{nm}\frac{\Omega^{xy}_{mn}(\mathbf{k})-\Omega^{yx}_{mn}(\mathbf{k})}{E_{mn}-\hbar\omega-i\eta}\Delta^{i}_{mn}(\mathbf{k}) \tag{30}\] with the Berry curvature \[\Omega^{xy}_{mn}(\mathbf{k})=-\Omega^{yx}_{mn}(\mathbf{k})=-\hbar^{2}\frac{{\rm Im}(v^{x}_{mn}(\mathbf{k})v^{y}_{nm}(\mathbf{k}))}{E^{2}_{mn}}. \tag{31}\] Note that the subscript \(RL\) denotes irradiation of RCP light. If these conductivities include the spin current operator, Eqs. (29) and (30) are rewritten as the spin-dependent shift- and injection-current conductivities. Here, \(R^{i}_{nm}(\mathbf{k})\) and \(\Delta^{i}_{nm}(\mathbf{k})\) include the charge and spin operators. For RCP light, the finite elements of the charge and spin shift-current conductivities are \(\sigma^{\rm shift}_{xRL}(0;\omega,-\omega)\) and \(\sigma^{\rm spin-shift}_{yRL}(0;\omega,-\omega)\). For LCP light, the charge and spin shift-current conductivities have the opposite sign to those of RCP light, i.e., \[\begin{split}\sigma^{\rm shift}_{xLR}(0;\omega,-\omega)&=-\sigma^{\rm shift}_{xRL}(0;\omega,-\omega),\\ \sigma^{\rm spin-shift}_{yLR}(0;\omega,-\omega)&=-\sigma^{\rm spin-shift}_{yRL}(0;\omega,-\omega).\end{split} \tag{32}\] The other elements are absent for monolayer NbSe\({}_{2}\). Figure 5 (d) shows the charge and spin shift-current conductivities for monolayer NbSe\({}_{2}\) under irradiation of CP light. Because of the broken inversion symmetry, the charge shift current can be generated in the \(x\)-direction for CP light, while the spin shift current can be generated in the \(y\)-direction. Comparing the results for CP light with those for LP light, the directions of the generated charge and spin shift currents are switched for monolayer NbSe\({}_{2}\): the charge shift current is generated in the \(y\)-direction for LP light but in the \(x\)-direction for CP light. For the spin shift current, irradiation with LP and CP light causes generation in the \(x\)- and \(y\)-directions, respectively. Thus, the direction of the generated shift current can be switched if LP light is altered to CP light. For CP light, the charge injection-current conductivities \(\sigma^{\rm injection}_{xRL}(0;\omega,-\omega)\) and \(\sigma^{\rm injection}_{xLR}(0;\omega,-\omega)\) are finite for monolayer NbSe\({}_{2}\), and these finite conductivities satisfy \[\sigma^{\rm injection}_{xRL}(0;\omega,-\omega)=-\sigma^{\rm injection}_{xLR}(0;\omega,-\omega). \tag{33}\] The other elements are absent for monolayer NbSe\({}_{2}\). Figure 5 (e) shows the charge injection-current conductivities for monolayer NbSe\({}_{2}\) under irradiation of CP light.
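The interband Berry curvature of Eq. (31) can be evaluated for any Bloch Hamiltonian from the velocity matrix elements \(v^{a}_{mn}=\langle u_{m\mathbf{k}}|\partial H/\partial k_{a}|u_{n\mathbf{k}}\rangle\). A minimal Python sketch for a two-band massive Dirac model (a generic toy stand-in, not the NbSe\({}_{2}\) model of this work; \(\hbar=1\)):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

def h(kx, ky, vF=1.0, delta=0.2):
    """Toy massive Dirac Hamiltonian H = vF*(kx*sx + ky*sy) + delta*sz."""
    return vF*(kx*sx + ky*sy) + delta*sz

def omega_xy(kx, ky, dk=1e-5):
    """Omega^{xy}_{mn}(k) of Eq. (31) for the band pair (m, n) = (lower, upper):
    -Im(v^x_{mn} v^y_{nm})/E_{mn}^2, with the velocities dH/dk rotated into
    the eigenbasis (finite differences are exact here since H is linear in k)."""
    e, U = np.linalg.eigh(h(kx, ky))
    dHx = (h(kx + dk, ky) - h(kx - dk, ky))/(2.0*dk)
    dHy = (h(kx, ky + dk) - h(kx, ky - dk))/(2.0*dk)
    vx = U.conj().T @ dHx @ U
    vy = U.conj().T @ dHy @ U
    Emn = e[0] - e[1]
    return -np.imag(vx[0, 1]*vy[1, 0])/Emn**2

print(omega_xy(0.0, 0.0))  # vF^2/(4*delta^2) = 6.25 for the defaults
```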
As shown in Fig. 5 (e), owing to the broken inversion symmetry, the charge injection current can be generated in the \(x\)-direction for CP light. As shown in Fig. 5 (f), the magnitude of the injection-current conductivity becomes larger with increasing \(\tau\), whereas that of the shift-current conductivity is robust owing to its topological properties. It is found that the injection current generated under CP light irradiation has a stronger \(\tau\)-dependence than that under LP light. \(\Omega^{xy}_{nm}(\mathbf{k})\), \(R^{i}_{nm}(\mathbf{k})\) and \(\Delta^{i}_{nm}(\mathbf{k})\), the factors appearing in the integrands of Eqs. (29) and (30), are shown in Appendix E. Since \(\Omega^{xy}_{nm}(\mathbf{k})\), \(R^{i}_{nm}(\mathbf{k})\) and \(\Delta^{i}_{nm}(\mathbf{k})\) are each odd under \(\pi\)-rotation, the relevant products are even, and the shift and injection currents can be generated for monolayer NbSe\({}_{2}\); when the products are odd under \(\pi\)-rotation, the current is absent.

### Selection rule by polarization

The selection rules of the shift- and injection-current conductivities with respect to the light polarization are summarized in Table 2. Figures 6 (a) and (b) show the schematics of the generated shift current in monolayer NbSe\({}_{2}\) under irradiation of LP light. For \(y\)-polarized light, the charge and spin shift currents are generated in the \(y\)- and \(x\)-directions, respectively, and the same holds for \(x\)-polarized light. For CP light irradiation, the charge and spin shift currents are generated in the \(x\)- and \(y\)-directions, as shown in Figs. 6 (c) and (d). Thus, the directions of the generated charge and spin shift currents are switched when the polarization of the incident light is altered from LP to CP.

## VI Summary

In summary, we have theoretically studied the second-order NLO charge and spin currents in metallic TMDCs. As an example of metallic TMDCs, we have employed few-layered NbSe\({}_{2}\), which possesses the Ising-type SOC. For odd-number-layered NbSe\({}_{2}\) the inversion symmetry is broken, whereas it is preserved for even-number-layered NbSe\({}_{2}\). The second-order NLO charge and spin currents in metallic TMDCs strongly depend on the crystal symmetry of the system and the polarization of the incident light. The NLO current is finite for odd-number-layered NbSe\({}_{2}\) but absent for even-number-layered NbSe\({}_{2}\). Since BCL can control the symmetry of the electronic system, NLO charge and spin currents can be induced along the directions of the leaves of the BCL. Thus, under irradiation of BCL, the NLO current can be generated not only in odd-number-layered but also in even-number-layered NbSe\({}_{2}\). For even-number-layered NbSe\({}_{2}\), peaks appear in the low-energy range, causing infrared light absorption.
Such infrared light absorption is absent in semiconducting TMDCs owing to the energy band gap around the Fermi level.

\begin{table} \begin{tabular}{c c c c c} & \(\sigma^{\rm shift}_{ijk}(0;\omega,-\omega)\) & \(\sigma^{\rm spin-shift}_{ijk}(0;\omega,-\omega)\) & \(\sigma^{\rm injection}_{ijk}(0;\omega,-\omega)\) & \(\sigma^{\rm spin-injection}_{ijk}(0;\omega,-\omega)\) \\ \hline \(ijk=xxx\) & 0 & \(\approx 10^{0}\) & 0 & \(\approx 10^{2}\) (\(\eta=0.001\) eV) \\ \(ijk=xyy,yxy,yyx\) & 0 & \(\approx-10^{0}\) & 0 & \(\approx-10^{2}\) (\(\eta=0.001\) eV) \\ \(ijk=yyy\) & \(\approx 10^{1}\) & 0 & 0 & 0 \\ \(ijk=yxx,xyx,xxy\) & \(\approx-10^{1}\) & 0 & 0 & 0 \\ \(ijk=xRL\) & \(\approx 10^{1}\) & 0 & \(\approx 10^{1}\) (\(\eta=0.001\) eV) & 0 \\ \(ijk=yRL\) & 0 & \(\approx 10^{1}\) & 0 & 0 \\ \(ijk=xLR\) & \(\approx-10^{1}\) & 0 & \(\approx-10^{1}\) (\(\eta=0.001\) eV) & 0 \\ \(ijk=yLR\) & 0 & \(\approx-10^{1}\) & 0 & 0 \\ \end{tabular} \end{table} Table 2: Shift- and injection-current conductivities for monolayer NbSe\({}_{2}\) under irradiation of LP and CP light.

Furthermore, we have shown that shift and injection currents can be generated in odd-number-layered NbSe\({}_{2}\) under irradiation of LP and CP light. The topological shift current is robust against electron scattering in the system, whereas the injection current depends strongly on the scattering. The generated shift current can be switched if LP light is altered to CP light. As shown in the Supplementary Material, the second-order NLO current can also be generated in MoS\({}_{2}\), a representative semiconducting TMDC [63]. Thus, TMDCs such as NbSe\({}_{2}\) and MoS\({}_{2}\) can be used as sources of second-order NLO charge and spin currents. Our results can serve to design spin-current harvesting and opto-spintronics devices on the basis of 2D materials.

###### Acknowledgements.
This work was supported by JSPS KAKENHI (Nos. 22H05473, JP21H01019, JP18H01154) and JST CREST (No. JPMJCR19T1).

## Appendix A Crystal symmetry and the second-order NLO conductivity

The second-order NLO conductivity strongly depends on the crystal symmetry [28; 62]. Table 1 shows that the NLO charge and spin conductivities of the SHG process are finite for odd-number-layered NbSe\({}_{2}\) but absent for even-number-layered NbSe\({}_{2}\). The finite NLO conductivities are attributed to the broken inversion symmetry. In this section, we discuss the general NLO conductivities by considering the crystal symmetry of the system; the results are consistent with the numerical calculations. Even-number-layered NbSe\({}_{2}\) has the inversion-symmetry operator \(\hat{P}=\hat{M}_{x}\hat{M}_{y}\hat{M}_{z}\). Here, the \(3\times 3\) matrices of \(\hat{M}_{x}\), \(\hat{M}_{y}\) and \(\hat{M}_{z}\) are expressed as \[\hat{M}_{x}=\begin{pmatrix}-1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix},\,\hat{M}_{y}=\begin{pmatrix}1&0&0\\ 0&-1&0\\ 0&0&1\end{pmatrix},\,\hat{M}_{z}=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&-1\end{pmatrix}.\] Because of the inversion symmetry \(P\), the position operator \(\hat{\mathbf{r}}\), momentum operator \(\hat{\mathbf{p}}\) and spin operator \(\hat{\mathbf{s}}=\frac{\hbar}{2}(\hat{\mathbf{\sigma}}_{x},\hat{\mathbf{\sigma}}_{y},\hat{\mathbf{\sigma}}_{z})\) have the following properties: \[\hat{P}\hat{\mathbf{r}}=-\hat{\mathbf{r}},\,\,\hat{P}\hat{\mathbf{p}}=-\hat{\mathbf{p}},\,\,\hat{P}\hat{\mathbf{s}}=\hat{\mathbf{s}}. \tag{A1}\]
The charge current density \(J_{i}^{\text{charge}}\) and the \(j\)- and \(k\)-components of the incident electric fields \(E_{j}(\omega_{1})\) and \(E_{k}(\omega_{2})\) transform under the operation of \(\hat{P}\) as \[J_{i}^{\rm charge}\xrightarrow{\hat{P}}\tilde{J}_{i}^{\rm charge}=-J_{i}^{\rm charge},\] \[E_{j}(\omega_{1})E_{k}(\omega_{2})\xrightarrow{\hat{P}}\tilde{E}_{j}(\omega_{1})\tilde{E}_{k}(\omega_{2})=(-1)^{2}E_{j}(\omega_{1})E_{k}(\omega_{2}),\] where \(i\) denotes the propagation direction of the charge current (\(i=x,y,z\)) and \((j,k)\) are the polarizations of the incident light. Note that \(\tilde{A}\) denotes an arbitrary physical quantity \(A\) after the symmetry operation.

Figure 6: Schematics of the generated charge and spin shift current for (a) \(y\)-polarized light and (b) \(x\)-polarized light in monolayer NbSe\({}_{2}\). For (c) RCP and (d) LCP light, the generation of the charge and spin shift current in monolayer NbSe\({}_{2}\) is shown.

Figure 7: Symmetry operations for an axial vector. Schematics of the mirror and inversion symmetry operations \(\hat{M}_{x}\), \(\hat{M}_{y}\), \(\hat{M}_{z}\) and \(\hat{P}\) for (a) spin-up (red) and spin-down (blue) states \(s_{z}=\pm\frac{\hbar}{2}\) and (b) electric fields of RCP (magenta) and LCP (cyan) light.

For the second-order NLO response, \(J_{i}^{\rm charge}\) is proportional to \(E_{j}(\omega_{1})E_{k}(\omega_{2})\), which is given as \[J_{i}^{\rm charge}=\sigma_{ijk}^{\rm charge}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})E_{j}(\omega_{1})E_{k}(\omega_{2}). \tag{A2}\] Comparing the current and the electric fields before and after the \(\hat{P}\) operation, the NLO conductivity must satisfy \[\sigma_{ijk}^{\rm charge}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})=-\sigma_{ijk}^{\rm charge}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2}).\] Thus, in even-number-layered NbSe\({}_{2}\), the NLO charge conductivity \(\sigma_{ijk}^{\rm charge}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})\) is absent for all combinations of \(ijk\). The spin current density \(J_{i}^{\rm spin}\) is given as \[J_{i}^{\rm spin}=\sigma_{ijk}^{\rm spin}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})E_{j}(\omega_{1})E_{k}(\omega_{2}). \tag{A3}\] As seen in Fig. 7 (a), in even-number-layered NbSe\({}_{2}\) the spin state is invariant under \(\hat{P}\). Thus, \(J_{i}^{\rm spin}\) transforms as \[J_{i}^{\rm spin}\xrightarrow{\hat{P}}\tilde{J}_{i}^{\rm spin}=-J_{i}^{\rm spin},\] which causes the absence of the NLO spin conductivity \(\sigma_{ijk}^{\rm spin}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})\) for even-number-layered NbSe\({}_{2}\). These results are consistent with the numerical calculations shown in Table 1. In odd-number-layered NbSe\({}_{2}\), where \(P\) is broken, the system has the mirror symmetry \(M_{x}M_{z}\). \(J_{i}^{\rm charge}\) and \(E_{j}(\omega_{1})\), \(E_{k}(\omega_{2})\) transform under the mirror symmetry operation \(\hat{M}_{x}\hat{M}_{z}\) as \[J_{i}^{\rm charge}\xrightarrow{\hat{M}_{x}\hat{M}_{z}}\tilde{J}_{i}^{\rm charge}=(-1)^{\delta_{ix}+\delta_{iz}}J_{i}^{\rm charge},\] \[E_{j}(\omega_{1})\xrightarrow{\hat{M}_{x}\hat{M}_{z}}\tilde{E}_{j}(\omega_{1})=(-1)^{\delta_{jx}+\delta_{jz}}E_{j}(\omega_{1}),\] \[E_{k}(\omega_{2})\xrightarrow{\hat{M}_{x}\hat{M}_{z}}\tilde{E}_{k}(\omega_{2})=(-1)^{\delta_{kx}+\delta_{kz}}E_{k}(\omega_{2}),\] where \(\delta_{ix}\) is the Kronecker delta: \[\delta_{ix}=\begin{cases}1&(i=x)\\ 0&(i\neq x)\end{cases}. \tag{A4}\]
Thus, \(\hat{M}_{x}\hat{M}_{z}\) allows the following \(ijk\)-elements of \(\sigma_{ijk}^{\rm charge}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})\) in Eq. (A2) to be finite: \(ijk=xxy\), \(xyx\), \(xyz\), \(xzy\), \(yxx\), \(yyy\), \(yzz\), \(yxz\), \(yzx\), \(zxy\), \(zyx\), \(zyz\), \(zzy\). The other elements are absent for odd-number-layered NbSe\({}_{2}\). In addition, the system has the 3-fold rotation symmetry operation \(\hat{C}_{3}\), which is given as \[\hat{C}_{3}=\begin{pmatrix}-\frac{1}{2}&\frac{\sqrt{3}}{2}&0\\ -\frac{\sqrt{3}}{2}&-\frac{1}{2}&0\\ 0&0&1\end{pmatrix}.\] Owing to \(\hat{C}_{3}\), the finite NLO charge conductivities obey the relation for odd-number-layered NbSe\({}_{2}\), i.e., \[\sigma_{yyy}^{\rm charge}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})=-\sigma_{yxx}^{\rm charge}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})=-\sigma_{xxy}^{\rm charge}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})=-\sigma_{xyx}^{\rm charge}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2}).\] In Fig. 7 (a), the spin state becomes opposite after \(\hat{M}_{x}\hat{M}_{z}\), which causes \[J_{i}^{\rm spin}\xrightarrow{\hat{M}_{x}\hat{M}_{z}}\tilde{J}_{i}^{\rm spin}=-(-1)^{\delta_{ix}+\delta_{iz}}J_{i}^{\rm spin}.\] Thus, the following \(ijk\)-elements of \(\sigma_{ijk}^{\rm spin}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})\) in Eq. (A3) are finite for odd-number-layered NbSe\({}_{2}\): \(ijk=xxx\), \(xxz\), \(xzx\), \(xyy\), \(xzz\), \(yxy\), \(yyx\), \(yyz\), \(yzy\), \(zxx\), \(zyy\), \(zzz\), \(zxz\), \(zzx\). The other elements are absent. Also, for odd-number-layered NbSe\({}_{2}\), the following relation of \(\sigma_{ijk}^{\rm spin}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})\) is obtained after \(\hat{C}_{3}\): \[\sigma_{xxx}^{\rm spin}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})=-\sigma_{xyy}^{\rm spin}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})=-\sigma_{yxy}^{\rm spin}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})=-\sigma_{yyx}^{\rm spin}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2}).\] These results indicate that a spin current can be generated in odd-number-layered NbSe\({}_{2}\) even when the charge current is absent. When \(\omega_{1}=\omega_{2}\equiv\omega\neq 0\), the analytic results are consistent with the numerical results of the SHG process shown in Table 1. Next, by considering the crystal symmetry, we discuss the generation of the NLO charge and spin currents in NbSe\({}_{2}\) under CP light irradiation. For CP light, \(J_{i}^{\rm charge}\) is given as \[J_{i}^{\rm charge}=\sigma_{iRL}^{\rm charge}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})E_{R}(\omega_{1})E_{L}(\omega_{2}). \tag{A5}\] For even-number-layered NbSe\({}_{2}\), the product of the electric fields of CP light \(E_{R}(\omega_{1})E_{L}(\omega_{2})\) after \(\hat{P}\) becomes \[E_{R}(\omega_{1})E_{L}(\omega_{2})\xrightarrow{\hat{P}}\tilde{E}_{R}(\omega_{1})\tilde{E}_{L}(\omega_{2})=E_{R}(\omega_{1})E_{L}(\omega_{2}),\] where, as shown in Fig. 7 (b), \(E_{R}(\omega_{1})\) and \(E_{L}(\omega_{2})\) are invariant under \(\hat{P}\). Comparing the current and the electric fields of the incident light before and after \(\hat{P}\), \(\sigma_{iRL}^{\rm charge}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})\) must vanish. Since \(J_{i}^{\rm spin}\) changes sign under \(\hat{P}\) while \(E_{R}E_{L}\) is invariant, \(\sigma_{iRL}^{\rm spin}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})\) is likewise absent.
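The element counting used throughout this appendix can be automated: under a mirror or inversion operation, each Cartesian index contributes a sign, and an element \(\sigma_{ijk}\) survives only if the total sign is \(+1\), with an extra \(-1\) for the spin current when the operation reverses the spin state. A minimal Python sketch of this bookkeeping (our own check, reproducing the element lists quoted above):

```python
from itertools import product

def finite_elements(signs, spin_flip=False):
    """ijk elements of sigma^(2) allowed by an operation that multiplies the
    Cartesian component a of polar vectors (current, field) by signs[a];
    spin_flip adds the extra -1 that the spin current acquires when the
    operation reverses s_z.  An element survives iff the total sign is +1."""
    keep = []
    for i, j, k in product("xyz", repeat=3):
        sign = signs[i]*signs[j]*signs[k]*(-1 if spin_flip else 1)
        if sign == 1:
            keep.append(i + j + k)
    return keep

MxMz = {"x": -1, "y": 1, "z": -1}             # mirror operation of odd layers
print(finite_elements(MxMz))                  # the 13 charge elements above
print(finite_elements(MxMz, spin_flip=True))  # the 14 spin elements above

P = {"x": -1, "y": -1, "z": -1}               # inversion of even layers
assert finite_elements(P) == []               # charge current absent
```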
In odd-number-layered NbSe\({}_{2}\), \(E_{R}(\omega_{1})E_{L}(\omega_{2})\) transforms under \(\hat{M}_{x}\hat{M}_{z}\) as \[E_{R}(\omega_{1})E_{L}(\omega_{2})\xrightarrow{\hat{M}_{x}\hat{M}_{z}}\tilde{E}_{R}(\omega_{1})\tilde{E}_{L}(\omega_{2})=E_{L}(\omega_{1})E_{R}(\omega_{2}).\] Thus, \(\hat{M}_{x}\hat{M}_{z}\) makes \(\sigma^{\rm charge}_{xRL}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})\) and \(\sigma^{\rm charge}_{zRL}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})\) finite, whereas \(\sigma^{\rm charge}_{yRL}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})\) is absent. Since the spin state becomes opposite under \(\hat{M}_{x}\hat{M}_{z}\), the NLO spin conductivity \(\sigma^{\rm spin}_{yRL}(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})\) is finite, while the other elements are absent. These results are consistent with the numerical calculations summarized in Table 2. Furthermore, under irradiation of BCL (\(A_{\rm BCL}(t)=A_{L}e^{in_{1}\omega t}+A_{R}e^{in_{2}\omega t-i\theta}+\text{c.c.}\)), the mirror symmetry \(M_{x}\) is broken, so that \(\hat{M}_{y}\hat{M}_{z}\) remains in the electronic system. \(J^{\rm charge}_{i}\) and \(E_{\rm BCL}(t)\) transform under \(\hat{M}_{y}\hat{M}_{z}\) as \[J^{\rm charge}_{i}\xrightarrow{\hat{M}_{y}\hat{M}_{z}}\tilde{J}^{\rm charge}_{i}=(-1)^{\delta_{iy}+\delta_{iz}}J^{\rm charge}_{i},\] \[E_{\rm BCL}(t)\xrightarrow{\hat{M}_{y}\hat{M}_{z}}\tilde{E}_{\rm BCL}(t)=E_{\rm BCL}(t).\] Comparing the current and the electric field of the incident light before and after \(\hat{M}_{y}\hat{M}_{z}\), the NLO charge conductivity \(\sigma^{\rm charge}_{x-{\rm BCL}-{\rm BCL}}(n_{1}\omega,n_{2}\omega)\) becomes finite, whereas the other elements are absent for even-number-layered NbSe\({}_{2}\). As seen in Fig. 7 (a), the spin state becomes opposite under \(\hat{M}_{y}\hat{M}_{z}\). Thus, \(\sigma^{\rm spin}_{y-{\rm BCL}-{\rm BCL}}(n_{1}\omega,n_{2}\omega)\) and \(\sigma^{\rm spin}_{z-{\rm BCL}-{\rm BCL}}(n_{1}\omega,n_{2}\omega)\) are finite, but \(\sigma^{\rm spin}_{x-{\rm BCL}-{\rm BCL}}(n_{1}\omega,n_{2}\omega)\) is absent. These analytic results show that, owing to the incident BCL, the NLO charge and spin currents can be generated even in even-number-layered NbSe\({}_{2}\). Odd-number-layered NbSe\({}_{2}\) has \(M_{x}M_{z}\). Since \(M_{x}\) is broken under BCL irradiation, the electronic system retains \(\hat{M}_{z}\). Under \(\hat{M}_{z}\), \(J^{\rm charge}_{i}\) and \(E_{\rm BCL}(t)\) become \[J^{\rm charge}_{i}\xrightarrow{\hat{M}_{z}}\tilde{J}^{\rm charge}_{i}=(-1)^{\delta_{iz}}J^{\rm charge}_{i},\] \[E_{\rm BCL}(t)\xrightarrow{\hat{M}_{z}}\tilde{E}_{\rm BCL}(t)=E_{\rm BCL}(t).\] Comparing the generated current and the incident light before and after \(\hat{M}_{z}\), \(\sigma^{\rm charge}_{x-{\rm BCL}-{\rm BCL}}(n_{1}\omega,n_{2}\omega)\) and \(\sigma^{\rm charge}_{y-{\rm BCL}-{\rm BCL}}(n_{1}\omega,n_{2}\omega)\) become finite for odd-number-layered NbSe\({}_{2}\), but \(\sigma^{\rm charge}_{z-{\rm BCL}-{\rm BCL}}(n_{1}\omega,n_{2}\omega)\) is absent. Since the spin state is invariant under \(\hat{M}_{z}\), \(\sigma^{\rm spin}_{x-{\rm BCL}-{\rm BCL}}(n_{1}\omega,n_{2}\omega)\) and \(\sigma^{\rm spin}_{y-{\rm BCL}-{\rm BCL}}(n_{1}\omega,n_{2}\omega)\) are finite, but \(\sigma^{\rm spin}_{z-{\rm BCL}-{\rm BCL}}(n_{1}\omega,n_{2}\omega)\) is absent.

## Appendix B NLO charge and spin conductivities for irradiation of BCL with multiple leaves

In this section, we consider BCL with 4, 5, 6 and 7 leaves and \(\theta=0\) in Eq. (12). Figure 8 (a) shows the trajectories of these incident BCL.
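These multi-leaf trajectories follow from the same parametrization sketched after Eq. (14): with \(n_{1}=1\), a frequency ratio \(1:n_{2}\) yields a \((1+n_{2})\)-leaf rose. Assuming the hypothetical `bcl_trajectory` helper from that earlier sketch:

```python
# 4-, 5-, 6- and 7-leaf BCL with theta = 0, as in Fig. 8 (a)
leaves = {1 + n2: bcl_trajectory(1, n2) for n2 in (3, 4, 5, 6)}
```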
Figures 8 (b), (c), (d) and (e) show the NLO charge and spin conductivities for monolayer NbSe\({}_{2}\) with SOC under irradiation of these BCL. Since the BCL can control the symmetry of the electronic system, these NLO charge and spin conductivities are finite for monolayer NbSe\({}_{2}\), similarly to the results for 3-leaf BCL (\(n_{1}:n_{2}=1:2\)). Thus, unlike the case of LP light irradiation, the NLO charge and spin currents are generated in both the \(x\)- and \(y\)-directions, along the polarizations induced by the BCL. We should note that the peaks of the NLO charge and spin conductivities shift to lower energies as the number of leaves of the BCL increases. The shift originates from the denominator \(E_{mn}-(n_{1}+n_{2})\hbar\omega-i\eta\) in Eq. (9): as \((n_{1}+n_{2})\) increases, the resonance condition \(\hbar\omega=E_{mn}/(n_{1}+n_{2})\) is satisfied at lower photon energies, and thus the peaks shift.

## Appendix C Finite conductivity for CP light with different optical angular frequencies

The shift current under irradiation of CP light contains the product of the Berry curvature \(\Omega^{xy}_{mn}(\mathbf{k})\) and the shift vector \(R^{i}_{nm}(\mathbf{k})\), as shown in Eq. (29). The Berry curvatures for RCP and LCP light are antisymmetric; thus, the shift currents generated under RCP and LCP irradiation flow in opposite directions. When the optical angular frequencies of the RCP and LCP light are the same, as shown in Fig. 9 (a), the generated shift currents cancel, i.e., the current is absent [80; 42; 81]. Figure 9 (b) shows the schematics of the shift current under irradiation of LCP light with \(2n_{1}\omega\) and RCP light with \(2n_{2}\omega\). If the LCP and RCP light have different optical angular frequencies, i.e., \(n_{1}\neq n_{2}\), the amounts of the shifts of the photoexcited electrons under LCP and RCP irradiation are not the same, i.e., the shift current does not cancel. In Eq. (13), the NLO conductivity for BCL irradiation is rewritten as the sum of the conductivities for the four different combinations of CP light, i.e., 'LL', 'LR', 'RL' and 'RR'. Owing to the irradiation of CP light with different optical angular frequencies, the first and fourth terms are finite, while the other terms cancel. Thus, for irradiation of BCL, the second-order NLO current is generated even in even-number-layered NbSe\({}_{2}\).

## Appendix D NLO charge and spin current in trilayer NbSe\({}_{2}\) under irradiation of BCL

As shown in Fig. 10 (a), we consider \(n_{1}:n_{2}=1:2\) and \(\theta=0\) for 3-leaf BCL. Figure 10 (b) shows the NLO charge and spin conductivities for trilayer NbSe\({}_{2}\) under irradiation of this BCL. Since the BCL breaks the mirror symmetry with respect to the \(y\)-\(z\) plane, these NLO charge and spin conductivities are finite for trilayer NbSe\({}_{2}\). Here, it should be noted that peaks of the NLO conductivities appear around \(\hbar\omega=0.01\) eV, as indicated by the yellow squares. These peaks arise from photoexcitation between the nearest valence and conduction bands of trilayer NbSe\({}_{2}\), as shown in Fig. 10 (c), which causes infrared light absorption. Furthermore, for trilayer NbSe\({}_{2}\), the NLO conductivities under irradiation of 3-leaf BCL consist of three optical transition processes: (i) intralayer optical transitions of monolayer NbSe\({}_{2}\), (ii) intra- and interlayer optical transitions of bilayer NbSe\({}_{2}\) and (iii) interlayer optical transitions between monolayer and bilayer NbSe\({}_{2}\).
For irradiation of LP light, process (ii) is absent [62]; however, 3-leaf BCL makes all three processes finite.

## Appendix E Contour plots of \(\alpha_{mn}^{jk}(\mathbf{k})\), \(R_{nm}^{i}(\mathbf{k})\), \(\Delta_{mn}^{i}(\mathbf{k})\), \(\Omega_{mn}^{jk}(\mathbf{k})\)

We discuss the origin of the finite shift and injection currents by considering the contour plots of the transition intensity \(\alpha_{mn}^{jk}(\mathbf{k})\), the shift vector \(R_{nm}^{i}(\mathbf{k})\), the velocity difference between electron and hole \(\Delta_{mn}^{i}(\mathbf{k})\), and the Berry curvature \(\Omega_{mn}^{jk}(\mathbf{k})\). All of these factors appear in the integrands of the conductivities. For simplicity, we focus on the results for monolayer NbSe\({}_{2}\), because the other cases, such as bi- and trilayer NbSe\({}_{2}\), are similar. In Eq. (23), the charge (spin) shift-current conductivity for LP light contains the product \(\alpha_{mn}^{jk}(\mathbf{k})R_{nm}^{i}(\mathbf{k})\) (\(\alpha_{mn}^{jk}(\mathbf{k})R_{nm}^{i-\text{spin}}(\mathbf{k})\)). If the product \(\alpha_{mn}^{jk}(\mathbf{k})R_{nm}^{i}(\mathbf{k})\) is even with respect to \(\pi\)-rotation, the shift-current conductivities become finite. According to the contour plots of the transition intensity shown in Fig. 11 (a), \(\alpha_{mn}^{jk}(\mathbf{k})\) with \(jk=xx,yy\) is even under \(\pi\)-rotation. Similarly, as shown in Fig. 11 (b), the shift vectors \(R_{nm}^{y}(\mathbf{k})\) and \(R_{nm}^{x-\text{spin}}(\mathbf{k})\) are also even under \(\pi\)-rotation. Therefore, the charge and spin shift currents are generated in the \(y\)- and \(x\)-directions, respectively. The remaining elements \(R_{nm}^{x}(\mathbf{k})\) and \(R_{nm}^{y-\text{spin}}(\mathbf{k})\) are odd under \(\pi\)-rotation, i.e., the corresponding shift current is absent.

Figure 9: Schematics of photoexcitation for CP light irradiation and the corresponding shift vectors. (a) For irradiation of RCP (red) and LCP (blue) light with the same optical angular frequency \(\omega\), RCP and LCP produce the same amount of shift in opposite directions, i.e., the current is absent. (b) For irradiation of RCP and LCP light with different optical angular frequencies \(n_{1}\omega\neq n_{2}\omega\), the amounts of the shifts produced by RCP and LCP differ and do not cancel, i.e., a current is generated.

In Eq. (25), the charge (spin) injection-current conductivity for LP light contains \(\alpha^{jk}_{mn}(\mathbf{k})\Delta^{i}_{mn}(\mathbf{k})\) (\(\alpha^{jk}_{mn}(\mathbf{k})\Delta^{i-\mathrm{spin}}_{mn}(\mathbf{k})\)). Since \(\alpha^{jk}_{mn}(\mathbf{k})\) with \(jk=xx,yy\) is even with respect to \(\pi\)-rotation, the velocity difference \(\Delta^{i}_{mn}(\mathbf{k})\) must also be even in order to generate the injection current. Indeed, as shown in Fig. 11 (c), \(\Delta^{x-\mathrm{spin}}_{mn}(\mathbf{k})\) has even parity. Thus, the spin injection current is generated in the \(x\)-direction. The remaining elements of \(\Delta^{i}_{mn}(\mathbf{k})\) are odd, i.e., the corresponding injection current is absent. Similarly, in Eqs. (29) and (30), the charge (spin) shift- and injection-current conductivities for RCP light contain the products \(\Omega^{xy}_{mn}(\mathbf{k})R^{i}_{nm}(\mathbf{k})\) (\(\Omega^{xy}_{mn}(\mathbf{k})R^{i-\mathrm{spin}}_{nm}(\mathbf{k})\)) and \(\Omega^{xy}_{mn}(\mathbf{k})\Delta^{i}_{mn}(\mathbf{k})\) (\(\Omega^{xy}_{mn}(\mathbf{k})\Delta^{i-\mathrm{spin}}_{mn}(\mathbf{k})\)), respectively. When these products are even with respect to \(\pi\)-rotation, the shift and injection currents are generated for RCP light.
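The parity argument used here is simple to verify numerically: splitting any integrand \(f(\mathbf{k})\) into its even and odd parts under \(\pi\)-rotation (\(\mathbf{k}\to-\mathbf{k}\)) and integrating over a symmetric patch, only the even part survives. A minimal Python sketch (our own illustration, on a square patch rather than the hexagonal BZ):

```python
import numpy as np

def pi_rotation_split(f, kmax=1.0, n=201):
    """Integrates the even and odd parts of f(kx, ky) under pi-rotation
    (kx, ky) -> (-kx, -ky) over a symmetric square patch; the odd part
    integrates to zero, mirroring the selection argument above."""
    ks = np.linspace(-kmax, kmax, n)
    KX, KY = np.meshgrid(ks, ks)
    F, Frot = f(KX, KY), f(-KX, -KY)
    dA = (ks[1] - ks[0])**2
    even = 0.5*(F + Frot).sum()*dA
    odd = 0.5*(F - Frot).sum()*dA
    return even, odd

# kx is odd, kx*ky and kx**2 are even under pi-rotation:
f = lambda kx, ky: (kx + kx*ky + kx**2)*np.exp(-(kx**2 + ky**2))
even, odd = pi_rotation_split(f)
assert abs(odd) < 1e-10   # only the pi-rotation-even part contributes
```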
Figures 11 (d), (e) and (f) show the contour plots of the Berry curvature, shift vector and velocity difference for monolayer NbSe\({}_{2}\) under RCP light irradiation. From Fig. 11 (d), \(\Omega^{xy}_{mn}(\mathbf{k})\) is odd under \(\pi\)-rotation. Since \(R^{x}_{nm}(\mathbf{k})\) and \(R^{y-\mathrm{spin}}_{nm}(\mathbf{k})\) also have odd parity, the charge and spin shift currents are generated in the \(x\)- and \(y\)-directions, respectively. Similarly, since \(\Delta^{x}_{mn}(\mathbf{k})\) has odd parity, the charge injection current is generated in the \(x\)-direction.
2305.06955
How chiral forces shape neutron-rich Ne and Mg nuclei
We compute the structure of the exotic even nuclei $^{20-34}$Ne and $^{34-40}$Mg using interactions from chiral effective field theory (EFT). Our results for the ground-state rotational bands in $^{20-32}$Ne and $^{36-40}$Mg agree with data. We predict a well-deformed $^{34}$Ne and find that $^{40}$Mg exhibits an oblate deformed band close to the prolate ground-state, indicating the emergence of shape co-existence at the neutron dripline. A global sensitivity analysis shows that the subleading singlet $S$-wave contact and a pion-nucleon coupling strongly impact deformation in chiral EFT.
Andreas Ekström, Christian Forssén, G. Hagen, G. R. Jansen, T. Papenbrock, Z. H. Sun
2023-05-11T16:30:49Z
http://arxiv.org/abs/2305.06955v1
# How chiral forces shape neutron-rich Ne and Mg nuclei

###### Abstract

We compute the structure of the exotic even nuclei \({}^{20-34}\)Ne and \({}^{34-40}\)Mg using interactions from chiral effective field theory (EFT). Our results for the ground-state rotational bands in \({}^{20-32}\)Ne and \({}^{36-40}\)Mg agree with data. We predict a well-deformed \({}^{34}\)Ne and find that \({}^{40}\)Mg exhibits an oblate deformed band close to the prolate ground-state, indicating the emergence of shape co-existence at the neutron dripline. A global sensitivity analysis shows that the subleading singlet \(S\)-wave contact and a pion-nucleon coupling strongly impact deformation in chiral EFT.

_Introduction.--_ Neutron-rich nuclei beyond the magic neutron number \(N=20\) are interesting because of the breakdown of this shell closure and the interplay between nuclear deformation and weak binding in the so-called island of inversion [1; 2; 3; 4; 5; 6; 7; 8; 9]. In neon (proton number \(Z=10\)) the dripline nucleus is \({}^{34}\)Ne [10; 11], and signatures of rigid rotation (with relatively low-lying \(J^{\pi}=2^{+}\) states whose energies decrease with increasing \(N\)) are found for isotopes with \(N=20,22\) [12; 13; 14; 15]. In magnesium (\(Z=12\)) shape co-existence [16; 17; 18] has been observed in \({}^{32}\)Mg [19], the dripline is thought to be beyond \(N=28\), and nuclei are deformed for \(N\geq 20\) [20; 21; 22; 23; 24; 25]. The structure of the weakly bound dripline nucleus \({}^{40}\)Mg is puzzling and intriguing: The lowest \(0^{+}\) and \(2^{+}\) states are known, and it is not clear how to place a state associated with a second observed \(\gamma\)-ray transition into the spectrum [26]. Recent calculations suggest that the low-lying spectrum signals shape co-existence, and that the coupling to continuum degrees of freedom impacts its structure [27; 28]. In this Letter we revisit even neon and magnesium nuclei from an _ab initio_ perspective [29], building on the most recent works [30; 31; 32], and aiming at two goals: First, we use the chiral interaction 1.8/2.0 (EM) of Ref. [33] to more accurately predict the structure of the dripline nuclei \({}^{34}\)Ne and \({}^{40}\)Mg; we also employ an ensemble of chiral potentials to arrive at quantified uncertainties. Second, we employ emulators and a global sensitivity analysis to investigate how chiral interactions impact deformation in this region of the nuclear chart. The question "What drives nuclear deformation?" has captivated generations of nuclear physicists. We briefly summarize relevant milestones: Bohr [34], Bohr and Mottelson [35], and Nilsson [36] explained deformations as the surface vibrations of a liquid drop and the motion of independent nucleons confined inside [37]. In the 1960s, Baranger and Kumar [38] performed Hartree-Fock Bogoliubov calculations in two active shells and showed that the competition of pairing and quadrupole interactions determines nuclear deformation [39]. A decade later Federman and Pittel [40] demonstrated that deformation in the shell model sets in when isoscalar neutron-proton interactions dominate over isovector pairing interactions. Dufour and Zuker [41] revisited deformation in the nuclear shell model and found it useful to decompose the Hamiltonian into monopole and multipole parts [42; 43]. Here, the monopole is essentially the one-body normal-ordered term of the shell-model interaction, while the multipole terms are two-body operators; they contain the residual pairing and quadrupole interactions.
These results have been succinctly summarized by Zuker's "Multipole proposes, monopole disposes" [44], i.e., the competition between pairing and quadrupole interactions might suggest deformation while the monopole--the effective spherical mean field--acts as a referee. We clearly have a good phenomenological understanding of nuclear deformation but lack a more microscopic understanding of which parts of the nucleon-nucleon interaction impact deformation. While the pairing interaction is readily identified with the nucleon-nucleon interaction in the \({}^{1}S_{0}\) partial wave, the origin of the quadrupole interaction is opaque. In view of Ref. [40], one might be tempted to identify the quadrupole interaction with the isoscalar \({}^{3}D_{2}\) partial wave (which is attractive). However, the quadrupole interaction is long range--in contrast to the short-range nucleon-nucleon interaction--and it is applicable only in model spaces consisting of one or two shells [38]. Thus, our understanding of nuclear deformation is still limited to a low-resolution picture. The _ab initio_ computations [30; 31; 32; 45; 46; 47] reproduced deformed nuclei but did not investigate how they are shaped by the underlying forces. In this work, we seek to understand what impacts deformation at the highest resolution scale possible today, i.e., based on chiral effective field theory (EFT) [48; 49; 50]. This is currently as close as we can get in tying low-energy nuclear structure to quantum chromodynamics (QCD) without actually solving QCD. _Hamiltonian, methods, and model space.--_ We use the intrinsic Hamiltonian \[H=T-T_{\rm CoM}+V_{NN}+V_{NNN}. \tag{1}\] Here \(V_{NN}\) is the nucleon-nucleon (\(NN\)) potential, \(V_{NNN}\) the three-nucleon (\(NNN\)) potential, \(T\) the total kinetic energy, and \(T_{\rm CoM}\) the kinetic energy of the center of mass. We employ the chiral \(NN\) and \(NNN\) interaction 1.8/2.0 (EM) [33], which yields accurate binding energies and spectra of light-, medium-, and heavy-mass nuclei [51; 52; 53; 54; 55; 56]. We also used an ensemble of about 100 chiral next-to-next-to-leading-order (NNLO) \(NN\) and \(NNN\) interactions with explicit Delta degrees of freedom (with non-local regulators and a momentum cutoff of 394 MeV/\(c\) [57; 58]). This ensemble consists of non-implausible interactions identified by history matching [59; 60; 61] to scattering phase shifts, deuteron properties, the \({}^{3}\)H, \({}^{4}\)He, \({}^{16}\)O binding energies and charge radii, and ground- and excited states in \({}^{22,24,25}\)O [62]. We assigned posterior weights conditional on the \(J^{\pi}=2^{+}\) and \(4^{+}\) rotational states in \({}^{24}\)Ne and then used importance resampling [63; 64] to make posterior predictive distributions for rotational states in other nuclei. Hu _et al._ [61] used a similar approach to predict the neutron-skin thickness of \({}^{208}\)Pb. Our coupled-cluster computations [65; 66; 67; 68] start from an axially symmetric Hartree-Fock reference state with prolate deformation [69; 32]. The inclusion of full \(NNN\) forces increases the computational cost significantly, and we would therefore like to work with the normal-ordered two-body approximation [70; 71; 72]. However, the normal-ordered two-body Hamiltonian based on a deformed reference state breaks rotational symmetry. To avoid this problem we follow Frosini _et al._ [73] and first perform a spherical Hartree-Fock computation based on a uniform occupation of the partially filled shells.
The resulting density matrix is then used to make the normal-ordered two-body approximation, and the Hamiltonian is finally transformed back to the harmonic oscillator basis. This spherical two-body Hamiltonian is the starting point for our axially symmetric Hartree-Fock computation which yields the reference state \(|\Phi_{0}\rangle\). Our Hartree-Fock computations use a spherical harmonic oscillator basis of up to thirteen major shells while the \(NNN\) interaction is further restricted by an energy cut \(E_{\rm 3max}=16\hbar\Omega\). To gauge the convergence of our results we varied the harmonic oscillator frequency (\(\hbar\omega\)) from 10 to 16 MeV. Due to computational cost our angular-momentum projected coupled-cluster calculations are restricted to 8-9 major shells, which are sufficient to converge quadrupole deformed states in neon isotopes [32]. We employ the coupled-cluster singles-and-doubles approximation [67]. For a more accurate angular-momentum projection than in Ref. [32] we use the bi-variational coupled-cluster energy functional [74; 75; 67] \[E^{(J)}=\frac{\langle\widetilde{\Psi}|P_{J}H|\Psi\rangle}{\langle\widetilde{\Psi}|P_{J}|\Psi\rangle}. \tag{2}\] Here \(P_{J}\) is the angular-momentum projection operator (and contains a rotation operator \(R\)), \(|\Psi\rangle\equiv e^{T}|\Phi_{0}\rangle\) is the right coupled-cluster state and \(\langle\widetilde{\Psi}|\equiv\langle\Phi_{0}|(1+\Lambda)e^{-T}\) is the corresponding left ground-state. In the singles-and-doubles approximation of coupled cluster, the excitation operator \(T\) and the de-excitation operator \(\Lambda\) are truncated at the two-particle-two-hole (\(2p\)-\(2h\)) level. We evaluate Eq. (2) via the disentangled approach by Qiu _et al._ [76]. This approach applies the Thouless theorem [77] to act with the rotation operator \(R\) on the symmetry-broken reference state, i.e., \(\langle\Phi_{0}|R=\langle\Phi_{0}|R|\Phi_{0}\rangle\langle\Phi_{0}|e^{V}\), with \(V\) being a 1\(p\)-\(1h\) de-excitation operator. Next, one expands \(e^{V}e^{T}=W_{0}+W_{1}+W_{2}+\cdots\). The series of \(np\)-\(nh\) excitation operators \(W_{n}\) does not truncate and includes up to \(Ap\)-\(Ah\) excitations for a nucleus with mass number \(A\). We only keep the disentangled amplitudes \(W_{0}\), \(W_{1}\), and \(W_{2}\) and compute them as the solution of a set of ordinary differential equations [76]. The truncation at \(W_{2}\) implies that the projection operator \(P_{J}\) is not treated exactly, and angular momentum is only approximately a good quantum number. The Supplemental Material [78] might be useful to experts. _Results for neon and magnesium isotopes.--_ Figures 1 and 2 show the computed energies \(E(2^{+})\) and \(E(4^{+})\) of the lowest \(2^{+}\) and \(4^{+}\) states in the even nuclei \({}^{20-32}\)Ne and \({}^{34-40}\)Mg, respectively, and compare them with available data. For neon isotopes, the angular-momentum projected coupled-cluster results based on the 1.8/2.0 (EM) interaction include estimates of uncertainties coming from truncated model-spaces by taking the spread of the results obtained for \(N_{\rm max}=6-8\) and \(\hbar\omega=10-16\) MeV. Using the Delta-full NNLO interaction ensemble we aim to quantify all relevant uncertainties. We employ a fixed model-space of \(N_{\rm max}=7\) and \(\hbar\omega=14\) MeV and assign normally distributed method errors with relative (one sigma) errors of 10% (5%) for \(2^{+}\) (\(4^{+}\)) excitation energies (corresponding to about \(100-150\) keV).
Similarly, we assign a 10% relative EFT truncation error for all excitation energies. Overall, theory agrees with data, though the uncertainties are substantial. For nuclei outside of the island of inversion the ensemble of NNLO Delta-full interactions gives somewhat too compressed spectra when comparing centroids to data and to the 1.8/2.0 (EM) interaction (which includes \(NN\) forces at N3LO). Our most precise results are obtained for \({}^{32,34}\)Ne, and the centroids for the ensemble of Delta-full interactions agree with the results for the 1.8/2.0 (EM) interaction. For \({}^{34}\)Ne no data is available, and we predict that the \(2^{+}\) and \(4^{+}\) states are similar to the corresponding excited states in \({}^{32}\)Ne. The computed \(R_{42}\equiv E(4^{+})/E(2^{+})\) values [79] of \({}^{34}\)Ne are \(3.37\pm 0.13\) for the ensemble (68% confidence interval) and 3.38 for 1.8/2.0 (EM); both are close to the value 10/3 of a rigid rotor (for a rigid rotor \(E(J)\propto J(J+1)\), so \(R_{42}=(4\cdot 5)/(2\cdot 3)=10/3\approx 3.33\)). Thus, we expect \({}^{34}\)Ne to be as rotational as \({}^{32}\)Ne. Overall, _ab initio_ theory agrees with data, with the exception of \(E(2^{+})\) in \({}^{30}\)Ne, which is significantly higher than experiment and therefore indicates an artificially large \(N=20\) shell gap. The persistence of the \(N=20\) shell gap in this region of the nuclear chart was also seen in the projected generator-coordinate-method computations of \({}^{30}\)Ne [31] and coupled-cluster computations of charge radii of neutron-rich neon and magnesium isotopes [69]. These findings hint at deficiencies of the employed interactions in correctly describing the boundaries of the island of inversion. One could speculate that our computed \(0^{+}\) ground-state of \({}^{30}\)Ne corresponds to the observed spherical shape co-existent \(0^{+}\) state in \({}^{32}\)Mg at 1 MeV of excitation energy [19]. For \({}^{34-40}\)Mg we employ the 1.8/2.0 (EM) interaction and use angular-momentum projected Hartree-Fock in larger model-spaces. This simplification is justified based on Ref. [32] and the comparison of the rotational bands obtained from Hartree-Fock and coupled-cluster theory in the neon nuclei. For the dripline nucleus \({}^{40}\)Mg we also include coupling to the particle continuum by using a Woods-Saxon basis consisting of bound and scattering states for the neutron \(p_{3/2}\) partial wave, following Ref. [80]. The results are close to data where those are available, see Fig. 2. One expects an inversion of the \(p_{3/2}\) and \(f_{7/2}\) single-particle orbitals close to the magnesium dripline. This is supported by the observation that \({}^{37}\)Mg is a deformed \(p\)-wave halo nucleus [81] and by mean-field computations accounting for deformation and continuum coupling [82; 83]. Indeed, our calculations for \({}^{38,40}\)Mg show an inversion of the \(\Omega^{\pi}=7/2^{-}\) and \(\Omega^{\pi}=1/2^{-}\) single-particle orbitals (where \(\Omega\) denotes the single-particle angular-momentum component along the axial symmetry axis). We find that \({}^{34-40}\)Mg are all prolate in their ground-state, with rotational bands that are close to data (see Fig. 2). Interestingly, for \({}^{40}\)Mg we also find an oblate Hartree-Fock state that is close in energy to the prolate ground state. Performing coupled-cluster calculations for these two references we find that the oblate band head is about 3 MeV above the prolate ground-state, indicating an onset of shape co-existence and a possible interpretation of the third observed excited state [26].
This picture is also consistent with the Monte-Carlo shell-model computations of Tsunoda _et al._ [27]. Figure 2 shows both the prolate and oblate \(2^{+}\) and \(4^{+}\) states, and we observe that the rotational structure of these two bands is very similar and close to that of a rigid rotor.

Figure 1: Energies of lowest \(2^{+}\) and \(4^{+}\) states in the even nuclei \({}^{20-34}\)Ne, computed using angular-momentum projected coupled-cluster with the interaction 1.8/2.0 (EM) [33] (red diamonds with uncertainties from finite model-spaces), and posterior predictive distributions from importance resampling using the ensemble of Delta-full (\(\Delta\)) interactions including sampling of method and model errors (68% and 90% credible intervals as a thick and thin vertical bar, respectively, and the median marked as a white circle), compared to data (black horizontal bars).

Figure 2: Energies of lowest \(2^{+}\) and \(4^{+}\) states in even nuclei \({}^{34-40}\)Mg, as a function of the oscillator frequency and for various model spaces, computed using projected Hartree-Fock with the chiral interaction 1.8/2.0 (EM) [33] and compared to data (dashed horizontal lines). For \({}^{40}\)Mg we show both the prolate and oblate rotational bands; the latter band head is about 3 MeV above the prolate ground state.

_Global sensitivity analysis of deformation.--_ We want to illuminate how the individual terms of the chiral interaction at NNLO impact deformation in the island-of-inversion nuclei \({}^{32,34}\)Ne and \({}^{34}\)Mg, and compare this to the deformed and stable nucleus \({}^{20}\)Ne. To that purpose we perform a variance-based global sensitivity analysis [84; 85] of the ratio \(R_{42}\) in \({}^{20,32}\)Ne and \({}^{34}\)Mg. We partition the total variance of \(R_{42}\) into variances conditional on each of the low-energy constants in chiral EFT. The dimensionless ratio of a conditional variance and the total variance in \(R_{42}\) is called the _main effect_, and a greater value indicates a greater sensitivity of \(R_{42}\) to the corresponding low-energy constant. We consider all 17 low-energy constants of the Delta-full chiral EFT interaction model in the sensitivity analysis: the leading-order \(S\)-wave contacts \(\tilde{C}_{{}^{3}S_{1}}\) and \(\tilde{C}_{{}^{1}S_{0}}^{(\tau)}\) with \(\tau=nn,np,pp\) denoting the isospin projections \(1,0,-1\), the subleading contacts \(C_{{}^{1}S_{0}}\), \(C_{{}^{3}S_{1}}\), \(C_{{}^{3}P_{0}}\), \(C_{{}^{1}P_{1}}\), \(C_{{}^{3}P_{1}}\), and \(C_{{}^{3}P_{2}}\) (acting in a partial wave as indicated by the subscript), and \(C_{E_{1}}\) acting in the off-diagonal triplet \(S\)-\(D\) channel. In addition there are four subleading pion-nucleon couplings \(c_{1,2,3,4}\), as well as the \(c_{D}\) and \(c_{E}\) couplings governing the strengths of the short-range three-nucleon potential. The variance integrals underlying the sensitivity analysis are evaluated on a hypercubic domain centered on the \(\Delta\)NNLO\({}_{\rm G}\)(394) parameterization [58]. Drawing on recent Bayesian analyses [86; 87] we use \(\pm 0.05\) GeV\({}^{-1}\) as the relevant range for each of the pion-nucleon couplings \(c_{i}\) and \(\pm 0.05\times 10^{2}\) GeV\({}^{-2}\) for the subleading constants \(C_{i}\). The leading-order contact couplings \(\tilde{C}_{i}\) are somewhat small (in units of \(10^{4}\) GeV\({}^{-4}\)), in accordance with naturalness expectations, and their intervals are limited to \(\pm 0.005\times 10^{4}\) GeV\({}^{-4}\).
We use Monte Carlo integration to evaluate the variance integrals; this requires \(18\cdot 2^{16}\approx 10^{6}\) samples to keep the sampling uncertainty small. Our results are robust when re-scaling all side-lengths of the hypercube by factors of 1/2 and 2. Larger domains result in noticeable higher-order sensitivities which we did not analyze further. It is sufficiently accurate to solve for the excited-state energies \(E(2^{+})\) and \(E(4^{+})\) in \({}^{20,32}\)Ne and \({}^{34}\)Mg using projection-after-variation Hartree-Fock [31; 32]. However, the Monte Carlo sampling in a global sensitivity analysis requires prohibitively many projected Hartree-Fock computations. Thus, we develop emulators, i.e., computationally efficient and accurate models that mimic the full _ab initio_ calculation [85; 88], using eigenvector continuation [89] of angular-momentum projected Hartree-Fock. All emulators were trained following a strategy similar to Ref. [85], i.e., using \(N_{\rm train}=68\) exact Hartree-Fock states each for the \(0^{+}\), \(2^{+}\), \(4^{+}\) states and training values for the low-energy constants drawn according to a space-filling Latin hypercube design within 20-30% variation of their \(\Delta\)NNLO\({}_{\rm G}\)(394) values. A comparison with 400 exact Hartree-Fock calculations indicates at most 1% discrepancies (on the \(1\sigma\)-level) for the \(R_{42}\) emulators and even better for the emulation of excitation energies. The Supplemental Material [78] might be useful to experts.
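For concreteness, the variance decomposition just described can be sketched with a standard Saltelli-type Monte Carlo estimator for first-order Sobol indices (main effects). The sketch below is our own illustration of the technique on a toy function, not the paper's actual sampling scheme or emulator:

```python
import numpy as np

def main_effects(f, lo, hi, n=2**14, seed=0):
    """First-order Sobol indices S_i = V_i/V on the hypercube [lo, hi]^d,
    using the estimator V_i ~ mean(f(B)*(f(A_B^i) - f(A))) with two
    independent sample matrices A, B; costs n*(d + 2) model evaluations."""
    rng = np.random.default_rng(seed)
    d = len(lo)
    A = lo + (hi - lo)*rng.random((n, d))
    B = lo + (hi - lo)*rng.random((n, d))
    fA, fB = f(A), f(B)
    V = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # resample only parameter i
        S[i] = np.mean(fB*(f(ABi) - fA))/V
    return S

# Toy check: f = x0 + 2*x1 on [0, 1]^2 has main effects S = (1/5, 4/5).
f = lambda X: X[:, 0] + 2.0*X[:, 1]
print(main_effects(f, np.zeros(2), np.ones(2)))
```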
Figure 3: Main effects of the \(R_{42}\) deformation measure in \({}^{20,32}\)Ne and \({}^{34}\)Mg as obtained in a global sensitivity analysis of \(10^{6}\) emulations of projected Hartree-Fock computations using Delta-full chiral EFT at NNLO. Left panel: main effects of the low-energy constants. Right three panels: histograms of \(R_{42}\) for \({}^{20}\)Ne (top), \({}^{32}\)Ne (middle), and \({}^{34}\)Mg (bottom).

We note that the domain of low-energy constants used in the global sensitivity analysis is smaller than the non-implausible volume spanned by the interaction ensemble from history matching.

_Conclusions.--_ We reported on _ab initio_ computations of neutron-rich neon and magnesium nuclei. For neon, an ensemble of chiral interactions and the 1.8/2.0 (EM) interaction yield accurate results (except for \(N=20\)), and we predict that \({}^{34}\)Ne is well deformed. For \({}^{34-38}\)Mg our calculations with the 1.8/2.0 (EM) interaction are close to data, and we predict a prolate ground-state rotational band and an excited oblate band in \({}^{40}\)Mg. A global sensitivity analysis and a study of correlations reveal that a few low-energy constants strongly impact deformation. This is the first step in understanding nuclear deformation at a high resolution scale.

This work was supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Award Nos. DE-FG02-96ER40963 and DE-SC0018223, by SciDAC-5 (NUCLEI collaboration), by the Quantum Science Center, a National Quantum Information Science Research Center of the U.S. Department of Energy, by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 758027), and by the Swedish Research Council (Grants No. 2017-04234, No. 2020-05127 and No. 2021-04507). Computer time was provided by the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) programme. This research used resources of the Oak Ridge Leadership Computing Facility located at Oak Ridge National Laboratory, which is supported by the Office of Science of the Department of Energy under contract No. DE-AC05-00OR22725, and resources provided by the Swedish National Infrastructure for Computing (SNIC) at Chalmers Centre for Computational Science and Engineering (C3SE) and the National Supercomputer Centre (NSC), partially funded by the Swedish Research Council through grant agreement no. 2018-05973.
2301.13111
A First Rigorous Attempt to Explain Charge Transport in a Protein-Ligand complex
Recent experimental evidence shows that when a protein (or peptide) binds to its ligand pair, the protein effectively "switches on" enabling long-range charge transport within the protein. Astonishingly, the protein-ligand complex exhibits conductances in the order of nanosiemens over distances of many nanometers and macroscopic Ohm's law emerges. Here, we investigate this emergent phenomenon via the framework of many-body (fermionic) quantum statistical principles. We propose a simple model which gives rise to an Ohm's law with vanishing quantum effects in its thermodynamic limit (with respect to length scales). Specifically, we consider protein-ligand complexes as a two-band 1D lattice Hamiltonian system in which charge carriers (electrons or holes) are assumed to be quasi-free. We investigate theoretically and numerically the behavior of the microscopic current densities with respect to varying voltage, temperature and length within reasonable physiological parameter ranges. We compute the current observable at each site of the protein-ligand lattice and demonstrate how the local microscopic charge transport behavior generates the macroscopic current. The overall framework contributes to the search for unifying principles for long-range charge transport and associated emergent laws in protein complexes, which is crucial for bioelectronics.
Roisin Dempsey Braddell, Jone Uria-Albizuri, Jean-Bernard Bru, Serafim Rodrigues
2023-01-05T13:08:49Z
http://arxiv.org/abs/2301.13111v1
A First Rigorous Attempt to Explain Charge Transport in a Protein-Ligand complex

## 1 Abstract

Recent experimental evidence shows that when a protein (or peptide) binds to its ligand pair, the protein effectively "switches on" enabling long-range charge transport within the protein. Astonishingly, the protein-ligand complex exhibits conductances in the order of nanosiemens over distances of many nanometers and macroscopic Ohm's law emerges. Here, we investigate this emergent phenomenon via the framework of many-body (fermionic) quantum statistical principles. We propose a simple model which gives rise to an Ohm's law with vanishing quantum effects in its thermodynamic limit (with respect to length scales). Specifically, we consider protein-ligand complexes as a two-band 1D lattice Hamiltonian system in which charge carriers (electrons or holes) are assumed to be quasi-free. We investigate theoretically and numerically the behavior of the microscopic current densities with respect to varying voltage, temperature and length within reasonable physiological parameter ranges. We compute the current observable at each site of the protein-ligand lattice and demonstrate how the local microscopic charge transport behavior generates the macroscopic current. The overall framework contributes to the search for unifying principles for long-range charge transport and associated emergent laws in protein complexes, which is crucial for bioelectronics.

_Keywords_: charge transport, protein-ligand conductivity, many-body quantum statistical principles, emergence of Ohm's law

## 2 Introduction

Charge transport at atomic scales constitutes one of the most fundamental processes for sustaining life on earth. It mediates the exchange of energy and matter between a living system and its environment. Within a living system, proteins mediate charge transport, and there are about 42 million proteins in a single cell. These form a complex network of circuits capable of a plethora of functions: precise catalytic actions, highly specific substrate recognition, analyte binding, directional electron tunneling and energy conversion, to name but a few. This has fuelled an ambitious quest in both academia and industry to achieve a theoretical understanding and technological control of the flow of charges, with the ultimate aim of developing advanced technologies. Indeed, in recent times we have witnessed the development of radical electronic chips and photonic integrated circuits where charge transport makes possible information exchange, information processing and data storage [12, 20]. However, current technologies are approaching their physical limits of miniaturization. This, in turn, imposes limits on computational speed, energy efficiency and storage because of the emergence of noise and quantum effects. Polymers and bio-polymers (i.e. proteins and peptides) have emerged as potential candidates for nano-scale bio-electronics, which could circumvent the shortcomings of previous technologies and would additionally enable bio-compatibility (i.e. nano circuits that interface with cells or entire organs to monitor or treat disease). Indeed, the discovery of electrically conductive polymers, which led to the award of the Nobel Prize in Chemistry in the year 2000, has revolutionized several technologies: corrosion inhibitors, compact capacitors, antistatic coatings and smart windows that control the amount of light that flows through [8, 19, 11, 21, 28].
In parallel, research on the conductivity of peptides and proteins is radically transforming our understanding of bio-polymer charge transport and is inspiring novel bio-electronics (e.g. molecular switches, bio-rectifiers and bio-transistors [29, 9]). Moreover, these new findings are revealing that protein function is probably not just associated with protein shape and conformations, but also with conductivity. Indeed, a series of astonishing recent experiments have found that when a protein (e.g. streptavidin, which in its natural state lacks known electrochemical activity) binds to its ligand pair (e.g. streptavidin-biotin complex), then the protein effectively "switches on", enabling charge transport within the protein. Surprisingly, the protein-ligand complex exhibits conductances in the order of nanosiemens over distances of many nanometers, orders of magnitude more than could be accounted for by electron tunneling. Additionally, it displays a linear voltage-current relationship (Ohm's law) superimposed with telegraph noise [32]. These findings warrant a theoretical underpinning, since the current models in the literature for protein conductivity are unable to explain this phenomenon. Indeed, over the years, several theoretical and computational models have been developed to explain biological charge transfer (exchange of electrons occurring between an ionically conductive electrolyte and a protein in contact with the electrolyte) and charge transport (electrons flowing through a protein in the absence of electrolyte or electrolyte participation) [5]. The majority of these models are underpinned by the Marcus Theory of Electron Transfer [22, 23] and extensions of it, known as the Marcus-Hush theory [13]. This unifying theory expresses how electrons transition from an electron donor (D) to an acceptor (A) or sequences of D-A, mediated by oxidation-reduction (redox) centers (e.g. metal atoms, cofactors, amino acids/aromatic side chains, electrodes, etc) or electron sinks/sources (e.g. electrolytes). The primary goal of models built upon the Marcus-Hush theories has been to explain short-range electron transitions either via quantum tunneling or hopping. Examples of such models are superexchange (SE), flickering resonance (FR), thermally-activated hopping (TH), and diffusion-assisted hopping (DH) [14]. Other computational methods include density functional theory calculations [31] or detailed molecular dynamics simulations [26, 27]. In contrast, presently there is no unifying theory for long-range biological charge transport akin to the Marcus-Hush theory of electron transfer. Present approaches use sequential SE, FR, TH, DH steps, if, for each step, the distance between redox centers is sufficiently short for effective tunneling (\(\leq 20\) Å), the energy levels of the redox centers are similar and there is strong coupling between electronic states [4]. Long-range charge transport is hypothesized to occur through the formation of bands (e.g. mediated via formation of new bonds or crystalline structures), thus giving rise to electronic states that are delocalized within the entire peptide. In this case, it is conceivable to model free (conducting) electrons (or holes) as classical particles moving in continuous bands of delocalized electronic states, possibly leading to Ohm-like characteristics or nonlinear current-voltage behavior as in semi-conductors.
Thus, it is fundamental to develop unifying theories from first principles of quantum mechanics which, under appropriate thermodynamical limits, give rise to Ohm-like laws and, as a by-product, provide insights about possible mechanisms for long-range biological charge transport. Fortunately, in a parallel literature, a community of mathematical physicists (including one of the authors) has been developing novel, mathematically rigorous charge transport theories based on many-body (fermionic) quantum statistical principles [6, 7, 15, 16, 17, 25]. These developments aimed to explain experiments showing that quantum effects vanish rapidly and macroscopic laws for charge transport emerge at length scales larger than a few nanometers. Specifically, Ohm's law survives for nanowires (silicon, Si, doped with phosphorus, P) at 20 nm and even at low temperatures (4.2 K) [1, 10]. The present manuscript attempts to exploit these novel theoretical frameworks, now focused on explaining long-range charge transport in protein-ligand complexes that exhibit Ohmic characteristics superimposed with telegraph noise [32]. As a first attempt, we provide the simplest possible model of a finite-length protein-ligand complex, where we treat the protein-ligand complex as a single molecule and do not consider telegraph noise. Since we build upon these novel theoretical developments, we will treat the underlying many-body system within the algebraic formulation for lattice fermion systems. Such systems can easily become computationally expensive or completely intractable; thus, to make the mathematical calculations amenable, we assume that the model is _quasi-free_, meaning the inter-particle interaction is sufficiently weak. Then, by invoking the rigorous mathematical results on our proposed lattice model, we determine from the expectation values of microscopic current densities the classical Ohmic currents as well as possible semiconducting behaviors. This provides a preliminary insight into potential quantum effects. Moreover, by looking at the occupation numbers of the lattice of the "protein-ligand" we observe directly the mechanism of charge transport.

## 3 Description of the Many-Fermion Model

The aim of the present manuscript is to provide a first modelling attempt to explain protein-ligand charge-transport that exhibits conductances in the order of nanosiemens over distances of many nanometers [32]. Therein, protein-ligand complexes are placed within their natural aqueous environment and conductances are measured with a Pd substrate (i.e. a working electrode) in tandem with a scanning tunneling microscope (STM). Surprisingly, the protein-ligand complexes within their natural electrolyte environment do not exhibit charge transfer (between the protein-ligand and electrolyte) but rather display long-range charge-transport with Ohmic characteristics superimposed with telegraph noise. Effectively, the protein seems to "switch on" when coming in contact with binding agents (i.e. ligands) that appear to inject charge carriers into their interiors. Inspired by these experimental observations, we propose a simplified model where we consider the protein-ligand as a single one-dimensional lattice where fermions (electrons) may hop from a conducting band 1 to a second non-conducting band 0. The non-conducting band attempts to model the possibility that fermions may become trapped (via a hopping term, similar to a classic semi-conductor).
The measuring apparatus, that is, the Pd substrate and the STM electrode tip, are modelled as two standard conductors via two half-lines of arbitrarily large length, respectively on the right and left sides of the lattice (i.e. the protein-ligand complex), serving only as large fermionic reservoirs. The model does not consider the aqueous environment since the experimental measurements are undertaken in an electrical potential region that does not induce ion current flows [32]. Although charge carriers like electrons (or holes) have spin 1/2, we do not consider spin effects and thus the magnetic properties of the system are not studied. This leads us to a two-band lattice model described via the second quantization formalism as discussed in subsequent sections.

### 3.1 The Model at Equilibrium

We first consider the Hamiltonian at equilibrium associated with our proposed two-band 1D lattice model. Omitting the spin of charge carriers, we thus use the one-particle Hilbert space defined by:

\[\mathfrak{h}:=\ell^{2}(\mathbb{Z}\times\{0,1\}):=\left\{(\psi(x))_{x\in\mathbb{Z}\times\{0,1\}}\subseteq\mathbb{C}:\sum_{x\in\mathbb{Z}\times\{0,1\}}\left|\psi\left(x\right)\right|^{2}<\infty\right\} \tag{1}\]

while the fermionic Fock space is denoted by

\[\mathcal{F}:=\bigoplus_{n=0}^{\infty}P_{n}\mathfrak{h}^{\otimes n} \tag{2}\]

with \(P_{n}\) being the projection onto the antisymmetric subspace of \(\mathfrak{h}^{\otimes n}\). Denote by \(a_{x,\mathrm{b}}^{*}\) (resp. \(a_{x,\mathrm{b}}\)) the creation (resp. annihilation) operator of a spinless fermion at \(x\in\mathbb{Z}\) in the band \(\mathrm{b}\in\{0,1\}\) acting on the fermionic Fock space. We will now describe a general Hamiltonian for this particular system with parameters \(\alpha_{x}\), \(x\in\{\mathrm{p},\mathrm{r}\}\), which give the strength of the relevant terms for fermions in the protein (\(\mathrm{p}\)) and reservoirs (\(\mathrm{r}\)), respectively. How the strength of these terms is chosen is discussed in Section 3.3.

#### 3.1.1 Conducting Band

Let \(l\in\mathbb{N}\), \(\epsilon_{\mathrm{p}}\geq 0\) and \(\mu_{\mathrm{p,1}}\in\mathbb{R}\). We define the conducting band of a 1D quantum system of length \(2l\times\mathbf{a}\), \(2l+1\) being the number of lattice sites and \(\mathbf{a}\) (in nm) being the lattice spacing, by the following Hamiltonian:

\[H_{\mathrm{p,1}}:=\epsilon_{\mathrm{p}}\left(2N_{\mathrm{p,1}}-\sum_{y,x\in\mathbb{Z}\cap[-l,l]:|x-y|=1}a_{y,1}^{*}a_{x,1}\right)-\mu_{\mathrm{p,1}}N_{\mathrm{p,1}}, \tag{3}\]

where

\[N_{\mathrm{p,1}}:=\sum_{x=-l}^{l}a_{x,1}^{*}a_{x,1} \tag{4}\]

is the so-called particle number operator in band 1. The above Hamiltonian has two well-identified parts: The first term in the RHS of (3) gives the kinetic energy of fermions, this being the (second quantization of the) usual discrete Laplacian with hopping strength \(\epsilon_{\mathrm{p}}\geq 0\) (in eV). The remaining part in the RHS of (3) gives the basic energy level of the band 1 inside the 1D quantum system, \(\mu_{\mathrm{p,1}}\in\mathbb{R}\) (in eV) being the so-called chemical potential associated with the conducting band.

#### 3.1.2 Insulating Band and Band Hopping

Given \(l\in\mathbb{N}\) and a chemical potential \(\mu_{\mathrm{p,0}}\in\mathbb{R}\) (or Fermi energy), the Hamiltonian in the insulating band 0 of the quantum system (of length \(2l\times\mathbf{a}\)) is given as a basic energy level \(-\mu_{\mathrm{p,0}}N_{\mathrm{p,0}}\), where

\[N_{\mathrm{p,0}}:=\sum_{x=-l}^{l}a_{x,0}^{*}a_{x,0} \tag{5}\]

is the particle number operator in the band 0.
In particular, no hopping between lattice sites of the band 0 is allowed. However, we add a band-hopping term, allowing electrons to hop from one band to another via the hopping strength \(\gamma\geq 0\) (in eV). Thus, we have the following Hamiltonian:

\[H_{\mathrm{p,0}}:=-\mu_{\mathrm{p,0}}N_{\mathrm{p,0}}-\gamma\sum_{x=-l}^{l}\left(a_{x,0}^{*}a_{x,1}+a_{x,1}^{*}a_{x,0}\right). \tag{6}\]

#### 3.1.3 The Fermion Reservoirs

Given \(l,L\in\mathbb{N}\) with \(L\geq l\), \(\epsilon_{\mathrm{r}}\geq 0\) and \(\mu_{\mathrm{r}}\in\mathbb{R}\), the system is assumed to be between two fermion reservoirs, the Hamiltonian of which is given by

\[H_{\mathrm{r}}:=\epsilon_{\mathrm{r}}\left(2N_{\mathrm{r}}-\sum_{x,y\in\mathbb{Z}\cap([-L,-l-1]\cup[l+1,L]):|x-y|=1}a_{y,1}^{*}a_{x,1}\right)-\mu_{\mathrm{r}}N_{\mathrm{r}} \tag{7}\]

where

\[N_{\mathrm{r}}:=\sum_{x=-L}^{-l-1}a_{x,1}^{*}a_{x,1}+\sum_{x=l+1}^{L}a_{x,1}^{*}a_{x,1} \tag{8}\]

is the particle number operator in the reservoirs. We assume the reservoirs have a single band. Similar to the 1D quantum system, \(\epsilon_{\mathrm{r}}\) and \(\mu_{\mathrm{r}}\) (in eV) give the hopping strength and chemical potential inside the reservoirs, respectively. Note that the chemical potentials in the left and right reservoirs (resp. in \(\{-L,\ldots,-l-1\}\) and \(\{l+1,\ldots,L\}\)) are the same in this case. Different chemical potentials for each reservoir are possible, leading to similar, albeit slightly more complex, behaviors. Thus, we refrain from considering this case in our model to keep scientific discussions as simple as possible.

#### 3.1.4 The Full Hamiltonian

Keeping in mind the electric connection between the 1D quantum system, the substrate and the tip in [32], the full Hamiltonian is equal to

\[H_{L}:=H_{\mathrm{p},1}+H_{\mathrm{p},0}+H_{\mathrm{r}}+H_{\mathrm{r-p}} \tag{9}\]

where the Hamiltonian

\[H_{\mathrm{r-p}}:=-\vartheta\left(a_{-l-1,1}^{*}a_{-l,1}+a_{-l,1}^{*}a_{-l-1,1}+a_{l+1,1}^{*}a_{l,1}+a_{l,1}^{*}a_{l+1,1}\right)\]

allows fermions to hop between the reservoirs and the band 1 of the 1D quantum system via the hopping strength \(\vartheta\geq 0\) (in eV). For any \(L,l\in\mathbb{N}\) with \(L\geq l\), all Hamiltonians can be seen as linear operators acting only on the restricted (fermionic) Fock space

\[\mathcal{F}_{L}:=\bigoplus_{n=0}^{\infty}P_{n}\mathfrak{h}_{L}^{\otimes n}\equiv\mathbb{C}^{2^{2(2L+1)}}\subseteq\mathcal{F} \tag{10}\]

constructed from the one-particle Hilbert space

\[\mathfrak{h}_{L}\doteq\{\psi\in\mathfrak{h}:\forall x\in(\mathbb{Z}\backslash[-L,L])\times\{0,1\},\ \psi(x)=0\}\equiv\ell^{2}((\mathbb{Z}\cap[-L,L])\times\{0,1\}). \tag{11}\]

The dimension of the Fock space \(\mathcal{F}_{L}\) grows exponentially with respect to \(L\in\mathbb{N}\), rapidly making a priori numerical computations expensive for \(L\gg 1\). However, a key assumption of our proposed quantum model is its quasi-free nature. This means that the many-fermion system can be entirely described within the one-particle Hilbert space, which is in this case equal to \(\mathfrak{h}_{L}\). In particular, the numerical computations are therefore done on a space of dimension \(\mathrm{dim}\,\mathfrak{h}_{L}=2\left(2L+1\right)\), instead of \(\mathrm{dim}\,\mathcal{F}_{L}=2^{2\left(2L+1\right)}\) as for general (possibly interacting) fermion systems. For more details, see Supplementary Material, section 8.
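To make the one-particle reduction concrete, the following NumPy sketch assembles \(h_{L}=h_{\mathrm{p}}+h_{\mathrm{r}}+h_{\mathrm{r-p}}\) as a \(2(2L+1)\times 2(2L+1)\) matrix (cf. the appendix). This is a minimal illustration rather than the authors' released code: the basis ordering, function names, and default values (the latter taken from Section 3.3 below) are our own choices. The second helper implements the optional Anderson-type on-site disorder term suggested in the Conclusion.

```python
import numpy as np

def one_particle_hamiltonian(L, l, eps_p=0.65, eps_r=0.85, theta=0.75,
                             gamma=0.125, mu_p1=0.0, mu_p0=0.0, mu_r=0.0):
    """Assemble h_L = h_p + h_r + h_{r-p} on l^2({-L..L} x {0,1}).

    Basis ordering (a convention of this sketch): index = b*(2L+1) + (x+L),
    with band 0 insulating and band 1 conducting.  Band-0 sites outside the
    protein [-l, l] are left inert, mirroring the restriction in the text."""
    n = 2 * L + 1
    h = np.zeros((2 * n, 2 * n))
    idx = lambda x, b: b * n + (x + L)
    for x in range(-L, L + 1):
        if -l <= x <= l:                       # protein sites
            h[idx(x, 1), idx(x, 1)] = 2 * eps_p - mu_p1
            h[idx(x, 0), idx(x, 0)] = -mu_p0
            h[idx(x, 0), idx(x, 1)] = h[idx(x, 1), idx(x, 0)] = -gamma
        else:                                  # reservoir sites (band 1 only)
            h[idx(x, 1), idx(x, 1)] = 2 * eps_r - mu_r
    for x in range(-L, L):                     # nearest-neighbour bonds, band 1
        y = x + 1
        if -l <= x and y <= l:
            t = eps_p                          # protein-protein bond
        elif y == -l or x == l:
            t = theta                          # contact bonds (-l-1,-l), (l,l+1)
        else:
            t = eps_r                          # reservoir-reservoir bond
        h[idx(x, 1), idx(y, 1)] = h[idx(y, 1), idx(x, 1)] = -t
    return h

def add_onsite_disorder(h, L, l, strength=0.0, rng=None):
    """Optional random external potential sum_x v(x) a*_{x,1} a_{x,1} with
    i.i.d. v(x) in [-1, 1] (cf. the Anderson model), as suggested in the
    Conclusion; it preserves the quasi-free structure."""
    rng = rng or np.random.default_rng()
    n = 2 * L + 1
    h = h.copy()
    for x in range(-l, l + 1):
        h[n + (x + L), n + (x + L)] += strength * rng.uniform(-1.0, 1.0)
    return h
```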
#### 3.1.5 The Gibbs State

In the algebraic formulation of quantum mechanics, a state \(\rho\) is a continuous linear functional on \(\mathcal{B}\left(\mathcal{F}_{L}\right)\), the Banach space of bounded linear operators on \(\mathcal{F}_{L}\), that is positive (\(\rho\left(A\right)\geq 0\) if \(A\geq 0\)) and normalized (\(\rho\left(\mathbf{1}\right)=1\)). If the system is at equilibrium at initial time \(t=0\), the equilibrium state is given by the Gibbs state associated with the full Hamiltonian \(H_{L}\), defined at temperature \(\mathrm{T}>0\) (in K) and sufficiently large \(L\in\mathbb{N}\) by

\[\rho\left(A\right):=\frac{\mathrm{Trace}_{\mathcal{F}_{L}}\left(A\mathrm{e}^{-\left(\mathrm{k}_{B}\mathrm{T}\right)^{-1}H_{L}}\right)}{\mathrm{Trace}_{\mathcal{F}_{L}}\left(\mathrm{e}^{-\left(\mathrm{k}_{B}\mathrm{T}\right)^{-1}H_{L}}\right)},\qquad A\in\mathcal{B}\left(\mathcal{F}_{L}\right), \tag{12}\]

where \(\mathrm{k}_{B}\) is the Boltzmann constant (in \(\mathrm{eV.K^{-1}}\)). As usual, it is convenient to use the parameter \(\beta:=\left(\mathrm{k}_{B}\mathrm{T}\right)^{-1}>0\), which is interpreted as the inverse temperature of the system (in \(\mathrm{eV^{-1}}\)). This state can be studied in the one-particle Hilbert space \(\mathfrak{h}_{L}\) (11) because it is a (gauge-invariant) quasi-free state, allowing us to greatly simplify the complexity of the equations. See Section 8.3 for more details.

### 3.2 The model driven by an electric potential

#### 3.2.1 Dynamics induced by electric potentials

Application of a voltage across the system results in a perturbed Hamiltonian

\[H_{L}\left(\eta\right):=H_{L}+\eta E \tag{13}\]

with

\[E:=-\sum_{x=-L}^{-l-1}a_{x,1}^{*}a_{x,1}+\sum_{x=-l}^{l}\frac{x}{l}\left(a_{x,1}^{*}a_{x,1}+a_{x,0}^{*}a_{x,0}\right)+\sum_{x=l+1}^{L}a_{x,1}^{*}a_{x,1},\]

\(\eta\in\mathbb{R}\) (in \(\mathrm{V}\)) being a parameter controlling the size of the voltage. Note that the electric potential increases linearly across the protein. Note also that we take a symmetric potential, meaning that the left reservoir (in \(\{-L,\ldots,-l-1\}\)) has its chemical potential (or Fermi energy) shifted by \(-\eta\), while the chemical potential of the right reservoir (in \(\{l+1,\ldots,L\}\)) is shifted by \(+\eta\). The applied voltage on the quantum system is therefore \(2\eta\). A non-symmetric choice is of course possible, leading to similar, albeit slightly more complex, dynamical behaviors. The symmetric choice is taken for the sake of simplicity. In the algebraic formulation of quantum mechanics (cf. the Heisenberg picture), the dynamics of the system is a continuous group \((\tau_{t}^{(\eta)})_{t\in\mathbb{R}}\) defined on the finite-dimensional algebra \(\mathcal{B}(\mathcal{F}_{L})\) by

\[\tau_{t}^{(\eta)}\left(A\right):=\mathrm{e}^{it\hbar^{-1}H_{L}(\eta)}A\mathrm{e}^{-it\hbar^{-1}H_{L}(\eta)},\qquad A\in\mathcal{B}(\mathcal{F}_{L}), \tag{14}\]

at any time \(t\in\mathbb{R}\) (in \(\mathrm{s}\)). In particular, a physical quantity represented by an observable \(A\) becomes time-dependent. The expectation value of this physical quantity is given by the real number \(\rho(\tau_{t}^{(\eta)}(A))\), its variance by \(\rho(\tau_{t}^{(\eta)}(A)^{2})-\rho(\tau_{t}^{(\eta)}(A))^{2}\), and so on, since the initial state of the system (\(t=0\)) is given by the Gibbs state \(\rho\) defined by (12). Like the Gibbs state, the resulting dynamics are quasi-free and can thus be studied in the one-particle Hilbert space \(\mathfrak{h}_{L}\) (11).
This feature allows us to greatly simplify the complexity of the equations. See Section 8.4 for more details.

#### 3.2.2 Current Observables

The definition of current observables is model dependent in general. To compute them, it suffices to consider the discrete continuity equation (in terms of observables) for the fermion-density observable

\[n_{x,b}(t)\doteq\tau_{t}^{(\eta)}(a_{x,b}^{*}a_{x,b}) \tag{15}\]

at lattice site \(x\in\left\{-L,\ldots,L\right\}\) and time \(t\in\mathbb{R}\) in the band \(b\in\left\{0,1\right\}\):

\[\partial_{t}n_{x,b}(t)=\tau_{t}^{(\eta)}\left(i\hbar^{-1}\left[H_{L}\left(\eta\right),a_{x,b}^{*}a_{x,b}\right]\right) \tag{16}\]

where \(\left[A,B\right]\doteq AB-BA\) is the usual commutator. For any fixed \(x\in\left\{-L,\ldots,L\right\}\) and \(b=1\), one straightforwardly computes from the Canonical Anticommutation Relations (24) that

\[\begin{split}\partial_{t}n_{x,1}(t)=&\ \gamma\tau_{t}^{(\eta)}(I_{(x,x)}^{(0)})+\epsilon_{\mathrm{p}}\sum_{y\in\mathbb{Z}\cap\left[-l,l\right]:\left|x-y\right|=1}\tau_{t}^{(\eta)}(I_{(x,y)}^{(1)})\\ &+\vartheta\tau_{t}^{(\eta)}\left(\delta_{x,-l}I_{(-l-1,x)}^{(1)}+\delta_{x,l}I_{(l+1,x)}^{(1)}+\delta_{x,-l-1}I_{(x,-l)}^{(1)}+\delta_{x,l+1}I_{(x,l)}^{(1)}\right)\\ &+\epsilon_{\mathrm{r}}\sum_{y\in\mathbb{Z}\cap\left(\left[-L,-l-1\right]\cup\left[l+1,L\right]\right):\left|x-y\right|=1}\tau_{t}^{(\eta)}(I_{(x,y)}^{(1)})\end{split} \tag{17}\]

(\(L>l>1\)) where, for any \(x,y\in\left\{-L,\ldots,L\right\}\) and \(b\in\left\{0,1\right\}\),

\[I_{(x,y)}^{(b)}:=i\hbar^{-1}\left(a_{x,1}^{*}a_{y,b}-a_{y,b}^{*}a_{x,1}\right). \tag{18}\]

Observe that the positive signs in the right-hand side of (17) come from the fact that the particles are positively charged, \(I_{(x,y)}^{(b)}\) being the observable related to the flow of positively charged, zero-spin particles from the lattice site \(x\) to the lattice site \(y\). Negatively charged particles can of course be treated in the same way. In fact, there is no experimental data on the sign of the charge carriers in [32], even if it is believed that the proximity of bands associated with oxidizable amino acid residues to the metal Fermi energy suggests that hole transport is more likely. We calculate the current inside the system, that is, we calculate the second term in the RHS of equation (17). Therefore, the current density observable in \(\left\{-l,\ldots,l\right\}\) produced by the electric potential difference \(2\eta\geq 0\) at any time \(t\in\mathbb{R}\) is thus equal to

\[\mathbb{J}\left(t,\eta\right)\doteq\frac{\epsilon_{\mathrm{p}}q_{e}}{2l}\sum_{x=-l}^{l-1}\left\{\tau_{t}^{(\eta)}(I_{(x+1,x)}^{(1)})-\tau_{t}^{(0)}(I_{(x+1,x)}^{(1)})\right\} \tag{19}\]

with \(q_{e}\) being the charge (in C) of the fermionic charge carrier (electron or hole). Note that we remove the possible free current on the system when there is no electric potential, even if one can verify in this specific case that it is always zero at equilibrium. Observe also that we do not consider (i) currents in the left and right fermion reservoirs, (ii) contact currents from the reservoirs to the 1D quantum system (in \(\left\{-l,\ldots,l\right\}\)) and (iii) currents from the conducting band 1 to the trapping band 0. These currents can easily be deduced from Equation (17): for (i)-(iii), see the terms in (17) with \(\epsilon_{\mathrm{r}}\), \(\vartheta\) and \(\gamma\), respectively.

#### 3.2.3 Current Expectations and Fluctuations

Expectation values of all currents are obtained by applying the state of the system.
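In the quasi-free setting, all such expectations reduce to linear algebra on \(\mathfrak{h}_{L}\): the Gibbs state is encoded by the matrix \(\mathrm{D}=(1+\mathrm{e}^{\beta h_{L}})^{-1}\) (see (30) in the appendix), the dynamics act by unitary conjugation, and (19) becomes a sum of imaginary parts of matrix elements. The sketch below, which reuses the basis ordering of the Hamiltonian builder above, illustrates this pipeline; the sign convention for the bond current and all helper names are our own assumptions, not the authors' released code.

```python
import numpy as np

HBAR = 6.582119569e-16   # hbar in eV*s
Q_E = 1.602176634e-19    # elementary charge in C

def idx(x, b, L):
    """Basis index on l^2({-L,...,L} x {0,1}), same ordering as above."""
    return b * (2 * L + 1) + (x + L)

def ramp_potential(L, l):
    """One-particle version of the potential E in Eq. (13): -1 on the left
    reservoir (band 1), x/l across the protein (both bands), +1 on the right."""
    e = np.zeros((2 * (2 * L + 1), 2 * (2 * L + 1)))
    for x in range(-L, L + 1):
        if x < -l:
            e[idx(x, 1, L), idx(x, 1, L)] = -1.0
        elif x > l:
            e[idx(x, 1, L), idx(x, 1, L)] = 1.0
        else:
            for b in (0, 1):
                e[idx(x, b, L), idx(x, b, L)] = x / l
    return e

def gibbs_correlation_matrix(h, beta):
    """D = (1 + exp(beta*h))^{-1}, cf. Eq. (30), via the eigendecomposition."""
    w, V = np.linalg.eigh(h)
    return (V * (1.0 / (1.0 + np.exp(beta * w)))) @ V.conj().T

def evolve(D0, h_t, t):
    """Correlation matrix at time t (in s): D(t) = U^* D0 U, U = exp(i t h/hbar)."""
    w, V = np.linalg.eigh(h_t)
    U = (V * np.exp(1j * t * w / HBAR)) @ V.conj().T
    return U.conj().T @ D0 @ U

def current_density(Dt_eta, Dt_0, L, l, eps_p=0.65):
    """Expectation of Eq. (19), in amperes.  In this convention the bond
    current is E[I^{(1)}_{(x+1,x)}] = (2/hbar) Im D(t)[(x+1,1), (x,1)]."""
    J = 0.0
    for x in range(-l, l):
        i_next, i_here = idx(x + 1, 1, L), idx(x, 1, L)
        J += (2.0 / HBAR) * (Dt_eta[i_next, i_here].imag
                             - Dt_0[i_next, i_here].imag)
    return eps_p * Q_E * J / (2 * l)

# Typical usage (parameters as in Section 3.3; t in seconds):
# h = one_particle_hamiltonian(L=40, l=5); D = gibbs_correlation_matrix(h, 38.7)
# J = current_density(evolve(D, h + 0.1 * ramp_potential(40, 5), 1e-13),
#                     evolve(D, h, 1e-13), L=40, l=5)
```

The occupation numbers (22) discussed in the Results are, in this sketch, the real diagonal entries of the evolved correlation matrix.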
For instance, if the initial state of the system is the Gibbs state \(\rho\) defined by (12), the expectation of the current density (in A) produced by the electric potential difference \(2\eta\geq 0\) (in V) in \(\left\{-l,\ldots,l\right\}\) at any time \(t\in\mathbb{R}\) (in s) and temperature \(\mathrm{T}>0\) (in K) equals

\[\mathbb{E}\left(\mathbb{J}\left(t,\eta\right)\right):=\rho\left(\mathbb{J}\left(t,\eta\right)\right). \tag{20}\]

Quantum fluctuations of an observable \(A\) are naturally given by the variance

\[\mathrm{Var}(A)=\mathbb{E}[A^{2}]-\mathbb{E}[A]^{2}. \tag{21}\]

Applied to the current observables, this leads to the current variance. More generally, all moments associated with current observables can be defined in a similar way, according to usual probability theory. We refrain from considering moments higher than the variance in the sequel, in order to keep things as simple as possible. This is however an important question to be studied in future research works in the context of the telegraph noise. Indeed, as shown in [32, Fig. S7], the telegraph noise is characterized by a bimodal distribution of currents, which could be investigated by some statistical methods involving 3rd and 4th order moments (such as the kurtosis and skewness). Note finally that the current density is an average of currents over the line \(\left\{-l,\ldots,l\right\}\) and it should converge rapidly to a deterministic value, as \(l\rightarrow\infty\). Similar to the central limit theorem in standard probability theory, one could study the rescaled variance

\[2l\,\mathrm{Var}\left(\mathbb{J}\left(t,\eta\right)\right)=\frac{1}{2l}\left(\mathbb{E}\left(\left(2l\,\mathbb{J}\left(t,\eta\right)\right)^{2}\right)-\mathbb{E}\left(2l\,\mathbb{J}\left(t,\eta\right)\right)^{2}\right),\]

which should converge to a fixed value. As shown in [17] for a one-band model, this quantity should be directly related to the rate of convergence of the current density, as \(l\rightarrow\infty\). This is however not studied in detail here.

### 3.3 Choosing the Parameters of the System

We subsequently fine-tune the proposed model by appropriately choosing a physiologically suitable parameter set. However, because of the novelty of the protein-ligand complexes experiment outlined in [32], the associated microscopic parameters and data are still scant. Nevertheless, our aim is to provide a general theoretical framework, where particular model instances can first lead to qualitative agreement, and future improvements to a quantitative and complete explanation of this phenomenon. Consequently, we choose parameters that are physically plausible and of the correct order of magnitude. In general, we will assume "room temperature", i.e., \(\mathrm{T}\approx 300\ \mathrm{K}\), and this determines the value of \(\beta:=\left(\mathrm{k}_{B}\mathrm{T}\right)^{-1}\approx 38.7\ \mathrm{eV}^{-1}\). The lattice length, \(l\), will be set in accordance with the peptides/proteins considered in [32]. However, we first need to fix the lattice spacing \(\mathbf{a}\) of our model. In crystals, the lattice spacing is of the order of a few angstroms, and it is natural to assume roughly the same order of magnitude inside molecules. In our prototypical example, we set \(\mathbf{a}\simeq 0.3\ \mathrm{nm}\). Following [32], we consider a 1D quantum system of several nanometers, more precisely of a size between \(2\ \mathrm{nm}\) and \(10\ \mathrm{nm}\) or even a little more.
This corresponds to a 1D quantum system of length \(2l\) between about \(6.66\) and \(33.33\), i.e., \(l\in[3,17]\). For typical experiments we choose \(l=5\) (i.e., \(3\ \mathrm{nm}\)) and explore the effect of varying the length of the 1D quantum system. Concerning the length of the reservoirs, they have to be as large as possible in the sense that \(L\gg l\), and our choices are constrained by computational tractability. Moreover, we choose \(L\gg l\) in such a way as to eliminate artificial "finite size" effects, as discussed in Section 5.1. Appropriate ranges of \(\epsilon_{\mathrm{p}}\) and \(\epsilon_{\mathrm{r}}\), that is, the hopping strength of a fermion in the 1D quantum system (in \(\{-l,\ldots,l\}\), band 1) and in the fermion reservoirs, respectively, can be derived from the lattice spacing \(\mathbf{a}\) and the effective mass of charge carriers \(m^{*}\simeq Cm_{e}\), where \(m_{e}\) is the electron mass. If \(C=1\), then a general hopping strength equals

\[\epsilon:=\frac{\hbar^{2}}{m^{*}\mathbf{a}^{2}}=\frac{\hbar^{2}}{Cm_{e}\mathbf{a}^{2}}\simeq 0.85\ \mathrm{eV}.\]

We set \(\epsilon_{\mathrm{r}}=0.85\ \mathrm{eV}\) inside the reservoirs and take the hopping strength slightly smaller inside the protein, \(\epsilon_{\mathrm{p}}=0.65\ \mathrm{eV}\) (\(C\simeq 1.3\)). In real systems, the effective mass of an electron is usually larger than the electron mass, but since we have no concrete information at our disposal we only make the reservoirs more conducting than the conducting band of the 1D quantum system. We note that changes to \(\epsilon\) of the magnitude of \(0.1\)-\(0.2\ \mathrm{eV}\) do not have a significant effect on the behavior of the system, unlike changes to the \(\gamma\) parameter. Similarly, the strength \(\vartheta\) controlling the hopping between the 1D quantum system and the fermionic reservoirs should be of the same order. For simplicity, we choose \(\vartheta=(\epsilon_{\mathrm{p}}+\epsilon_{\mathrm{r}})/2\), meaning a very good hopping contact between the 1D quantum system and the two reservoirs. In real systems, this variable could be modified to encode bad connections between the reservoirs and the 1D quantum system. The choice of \(\gamma\) fundamentally alters the nature of the model, with the 1D quantum system acting as a conductor or exhibiting semi-conductor like behaviors depending on the choice of \(\gamma\). We explore both situations. Finally, note that the voltage applied in the experiments described in [32] is of the order of one tenth of a volt. For instance, for 0.1 V applied on the 1D quantum system, we are still in the linear response regime without any telegraph noise, see [32, Fig. 1]. Naively, this looks coherent with the energy scale used here, which is of the order of tenths of electronvolts.

Figure 1: Comparison of current evolution and absolute difference of the current between timesteps of 0.01 ps for the conductor type model (\(\gamma=0.01\)) and semi-conductor type model (\(\gamma=0.125\)) for \(\eta=0.1\). For \(\gamma=0.01\) the time taken to reach the stationary regime is 0.1 ps; for \(\gamma=0.125\) the time taken is approximately 0.25 ps.

## 4 Methods

Using the quasi-free property of the model, one can calculate the evolution of the expectation value of the current density. For quasi-free equilibrium states and dynamics, the phase space can be reduced from the Fock space, which is of dimension \(2^{2(2L+1)}\), to a one-particle phase space of dimension \(2(2L+1)\).
This is achieved by replacing the Hamiltonian with an equivalent one-particle Hamiltonian which defines dynamics on the one-particle Hilbert space \(\mathfrak{h}_{L}\) (11). This allows for an efficient computation of the quantities (20) and (21). See Supplementary Material, section 8, for more details. The model was written in Python and is largely built with the NumPy package. The code is well-suited for calculating higher-order powers of observables (in this case, the current observable), which are needed to investigate the statistical properties of these observables. The limitation here is increasing computational complexity (see for instance Supplementary Material, section 8.6), which could rapidly become a problem with the extension of this method to higher dimensions. However, this approach is well-adapted to nanometric quantum systems as found, for example, in biological systems. The calculations of charge transport were performed on a MacBook Pro with a 2.4 GHz quad-core Intel Core i5 processor.

## 5 Results

The nature of the model is altered by the hopping terms in (6) between both bands, the strength of which is the parameter \(\gamma\). For \(\gamma=0\), or \(\gamma\) close to \(0\), the system behaves as a 1D conductor. The expectation value (20) of the current is directly proportional to the applied voltage until it saturates by reaching a maximum value. As \(\gamma\) increases, the model starts to exhibit semiconductor like behavior, with low current until a threshold value of voltage at which the conducting band begins to exhibit partially filled charge-carrier states. After the threshold voltage, the current (20) increases in an approximately linear manner until it saturates.

Figure 2: Evolution of the current for different lengths of the charge-carrier reservoirs for \(\eta\in\{0.1,0.2\}\) and a variety of different reservoir lengths, showing the finite duration of the stationary regime and subsequent current collapse.

### 5.1 Reservoirs and Finite Size Effects

Numerical simulations were performed for numerous different parameters. For a given set of parameters, we calculate the mean current by disregarding transients and specifically averaging the current over the course of \(0.1\) ps in the _steady regime_, i.e., in the time interval in which the current change is negligible. For practical purposes, we say the current change is negligible when it deviates by less than \(10^{-3}\ \mu\mathrm{A}\) over \(0.1\) ps. See Fig. 1(b). In practice, the steady regime exists for a finite interval of times depending on the sizes of the reservoirs, which are directly tuned by the parameter \(L\in\mathbb{N}\). This can be seen in Fig. 2, where the length of the steady regime always increases with respect to \(L\) and would be stationary for all sufficiently large times in the limit \(L\to\infty\). We do not provide here a mathematical proof of this conjecture, but this feature was present in all our numerical computations. In fact, our numerical computations inevitably induce finite size effects. This can be well understood via the depletion of charge carriers in the finite reservoirs, which yields a current collapse, as seen in Fig. 2 for finite \(L\) and sufficiently large times. This depletion is explicitly demonstrated in Fig. 3, which gives the fermionic occupation number

\[\mathbb{E}\left(\tau_{t}^{(\eta)}\left(a_{x,1}^{*}a_{x,1}\right)\right):=\rho\left(\tau_{t}^{(\eta)}\left(a_{x,1}^{*}a_{x,1}\right)\right) \tag{22}\]

at time \(t\geq 0\) in each lattice site \(x\in\{-L,\ldots,L\}\) in the band \(1\).
One can observe that the depletion of the occupation number is directly correlated to the collapsing of the current, as expected with a naive classical viewpoint.

Figure 3: Occupation number evolution for the conductor case (\(\gamma=0.01\)) during the stationary regime (a) and after current collapse (b). Larger times are encoded via bolder blue/orange lines. The dashed lines represent the values for the limit state, i.e., the Gibbs state when the full system is in presence of an electric potential \(\eta=0.1\).

Figure 4: Ohm's law for the 1D conductor: As \(\gamma\) is increased, the range of values of the voltage for which the induced current varies linearly with respect to voltage decreases. As \(\gamma\) increases further, the 1D quantum system begins to exhibit semi-conductor like behavior, with high resistivity at lower voltages and a transition to conductor-like behavior at a threshold voltage.

### 5.2 Emergence of Semi-Conductors

Although the experiments performed in [32] show that a set of protein-ligand complexes display Ohm-like conductivity (i.e. possibly via the formation of new molecular bonds that give rise to delocalized electronic states akin to conductors), we also predict the formation of semi-conductor type characteristics. Although this has not been experimentally observed so far, our model predicts the existence of protein-ligand complexes with such a behavior, as anticipated for this kind of model. This is achieved by first noticing that the two bands in the model are linked with each other via the terms

\[-\gamma\left(a_{x,0}^{*}a_{x,1}+a_{x,1}^{*}a_{x,0}\right),\qquad\gamma\geq 0,\ x\in\left\{-l,\ldots,l\right\},\]

for all lattice sites inside the system of lattice number \(2l\) (corresponding to a 1D quantum system of length \(2l\times\mathbf{a}\), \(\mathbf{a}\) being the lattice spacing in nm), see (6). The parameter \(\gamma\) allows us to control the nature of the conductor. For \(\gamma=0\), the two bands are disconnected and the system behaves as a regular conductor. In particular, the current increases linearly with voltage for sufficiently small electric potentials. This behavior persists for small \(\gamma\neq 0\) (\(\gamma\ll 1\)), see, e.g., Fig. 4(a). All these phenomena are of course expected and can in fact be mathematically shown in great generality, see, e.g., [7, 6], even in the limit of infinitely large \(L\gg l\gg 1\). For \(\gamma\neq 0\), the linear relationship between current and voltage progressively disappears as the average resistivity increases at lower voltages. The origin of this behavior can be studied by looking at the occupation numbers of the different lattice sites for the limit Gibbs state at different values of \(\eta\), see Figure 5. For non-zero \(\gamma\), as \(\eta\) increases, the occupation numbers for the limit Gibbs state in the non-conducting band also increase. As this insulating band becomes saturated at the \(x=-l,l\) lattice sites, the 1D quantum system once again starts to act as a conductor. For sufficiently strong \(\gamma\), this phenomenon starts to be visible in terms of the charge transport within the (conducting) band 1: In Figure 5(b), we anticipate a change of current response at a saturation voltage of approximately 0.15 V, which is precisely the case, as shown in Fig. 4. In fact, at \(\gamma\approx 0.1\) the system starts to exhibit this semi-conductor-like behavior, see Figures 4(c) and 4(e). This behavior is concomitant with the current fluctuations given in Figs.
4(b)-4(f): The high electric resistance for small voltages is associated with an increase of the current fluctuations until some saturation regime (corresponding to the voltage threshold), from which the fluctuations decrease again, as in the conductor case. Note that the current fluctuations once again increase in any case for sufficiently large voltage beyond the linear (resp. piecewise linear) regime in the conductor (resp. semi-conductor) case. To complement this discussion, Fig. 6 shows the difference between the semiconductor (\(\gamma=0.125\)) and conducting (\(\gamma=0.01\)) cases in terms of time-dependent occupation numbers, at a fixed \(\eta=0.1\). In this last case, the second band inside the 1D quantum system is slightly time dependent, in contrast with the trivial case \(\gamma=0\). But the effects of the band 0 on currents are negligible, while they significantly alter the transport properties at \(\gamma=0.125\), as explained above. The saturation phenomenon leading to the semi-conductor like behavior can also be seen by comparing Figs. 6(a), 6(c), 6(e) for \(\gamma=0.125\) and \(\eta=0.1\) with Fig. 7(c) for the same \(\gamma\) but at \(\eta=0.2\), keeping in mind Fig. 4(e).

Figure 5: Plots of the fermionic occupation numbers for the semiconductor (\(\gamma=0.125\)). At lower voltages the second band has an effect on the current. As the voltage increases, the fermionic density in the insulating band starts to saturate at \(\eta\approx 0.15\), at which point the conducting behavior dominates.

Figure 6: Fermionic occupation number at \(\eta=0.1\) as a function of the lattice sites for several times. Larger times are encoded via bolder blue lines. The dashed lines represent the values for the expected limit Gibbs state, when the full system is in presence of an electric potential. The 1D quantum system lies between the sites \(-5\) and \(5\).

### 5.3 Dependence on the Length of the System

In 2012, the accuracy of the macroscopic laws of charge transport was shown for nanowires (of a few nanometers scale) of Si doped with phosphorus atoms, even at very low temperature (4.2 K) [1]. This implies that quantum effects can vanish extremely rapidly with respect to growing space scales. From a mathematical perspective, the expectation of microscopic current densities with respect to growing space scales converges, as proven in [16, 6]. Furthermore, the quantum uncertainty of microscopic electric current densities around their (classical) macroscopic value decays exponentially fast with respect to the volume of the region of the lattice where an external electric field is applied. This was specifically shown for non-interacting lattice fermions with disorder (one-band, any dimension) [25, 17]. Our model does not have any random external potential, but two bands inside the system of length \(2l\), which corresponds in our units to

\[2l\times\mathbf{a}=2l\times 3\ \mathring{\mathrm{A}},\]

see Section 3.3. Therefore, we analyze how fast the current density converges to a fixed value. In Fig. 8, we confirm the exponential convergence of the current density to a fixed value, as conjectured from [25, 17]. In particular, in these numerical experiments, \(2\) nm is already a large quantum object in the sense that the limit point is already reached. Note, finally, that the semiconductor and conductor cases have the same behavior with respect to such convergences.

Figure 8: Exponential fits for the conducting (a) and semi-conducting (b) 1D quantum systems.
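The exponential fit in Fig. 8 can be reproduced along the following lines. This is a sketch under the assumption that the limiting current is well approximated by the value at the largest computed length; the model form \(J(l)\approx J_{\infty}+A\,\mathrm{e}^{-l/\xi}\) and the helper name are our own.

```python
import numpy as np

def exponential_convergence(ls, currents):
    """Fit J(l) ~ J_inf + A*exp(-l/xi) to current densities at half-lengths ls.
    J_inf is approximated by the current at the largest computed length
    (an assumption); xi and A come from a log-linear fit of the residuals."""
    ls, currents = np.asarray(ls, float), np.asarray(currents, float)
    J_inf = currents[-1]
    resid = np.abs(currents[:-1] - J_inf)
    slope, intercept = np.polyfit(ls[:-1], np.log(resid), 1)
    return {"J_inf": J_inf, "xi": -1.0 / slope, "A": np.exp(intercept)}
```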
Figure 7: Fermionic occupation number at \(\eta=0.2\) as a function of the lattice sites for several times. Larger times are encoded via bolder blue lines. The dashed lines represent the values for the expected limit Gibbs state, when the full system is in presence of an electric potential. The 1D quantum system lies between the sites \(-5\) and \(5\).

### 5.4 Dependence on Temperature

The energy scales used here are of the order of tenths of electronvolts. This is coherent with the voltage applied in the experiments described in [32]. On this energy scale, room temperature, i.e., \(\mathrm{T}\approx 300\ \mathrm{K}\), corresponding to an inverse temperature \(\beta:=\left(\mathrm{k}_{B}\mathrm{T}\right)^{-1}\approx 38.7\ \mathrm{eV}^{-1}\), already refers to a very low temperature regime. Consequently, the transport properties of the system are basically independent of temperature below room temperature. On the one hand, this is coherent with observations done in [18] on charge transport in some 1D quantum system. On the other hand, the energy scale used here may be inaccurate in many situations. We therefore explain the behavior of the model in the high temperature regime, which basically refers to \(\beta\leq 1\), even if it corresponds to nonphysical temperatures (\(\mathrm{T}\geq 11604\ \mathrm{K}\)) on our energy scale. This is performed in Fig. 9, showing an expected decrease of currents (equivalently, an increase of the electric resistance) as the temperature increases. This is true for all regimes (conductor and semi-conductor). In the same way, the variance increases with the temperature, since thermal fluctuations are of course added to the purely quantum ones. See Fig. 9. In contrast with the case \(\gamma=0\) with no insulating band \(0\), the two-band system has very interesting behavior for small temperatures: the current fluctuations are basically increasing, as naively anticipated, except in the semi-conductor regime for small temperatures. See Fig. 10. Remarkably, the current reaches a maximum value for some non-zero temperature before decreasing to zero in the limit \(\beta\to 0\). See again Figs. 9 and 10. It means in this case that the first excited state of the model interestingly favors the existence of currents, as compared with the ground states. It seems that this occurs for all \(\gamma\neq 0\), and this effect increases with \(\gamma\) to the point that even the current variance starts to become non-monotone. This is non-trivial information on the spectral properties of the model.

## 6 Conclusion

A first modelling attempt, based on many-body (fermionic) quantum statistical principles, is given to explain how Ohm's law emerges in long-range charge transport within protein-ligand complexes, the conductances of which have been measured in aqueous solutions via an STM and a Pd substrate [32]. To this end, we build upon the mathematically rigorous framework (explored by one of the authors and collaborators) which provides proof for the convergence of the expectations of microscopic current densities and also crucially explains how the microscopic (quantum) effects on charge transport vanish with respect to growing space scales [6, 16]. This is coherent with [17, 25], showing the exponentially fast (with respect to growing scales) disappearance of quantum effects on electric currents for non-interacting lattice fermions with disorder (one-band, any dimension).
This leads us to propose for the protein-ligand complexes a simple quasi-free two-band lattice model described via the second quantization formalism, where the non-conducting band attempts to capture the possibility that fermions may become trapped (via a hopping term, similar to a classic semi-conductor). Notably, the model does not consider the aqueous environment (natural to proteins), since the experiments we seek to explain were undertaken in an electrical potential region where ion current flows are not expected (i.e. no charge transfer is anticipated) [32].

Figure 9: Temperature dependence for the 1D quantum system in the conducting regime. The dashed line is the asymptotic for infinite temperature, corresponding to computations at \(\beta=0\).

Figure 10: Temperature dependence for the properties of the 1D quantum system in the conducting and semiconducting regimes.

By fixing physiological parameter sets (although not all microscopic parameters are available due to the novelty of these experiments), we then study the charge transport of our _nanometric_ quantum protein-ligand system in a mathematically rigorous way, without approximations (up to numerical precision) or a priori assumptions. The behavior of charge transport given here is therefore perfectly reliable, without any possible discussion up to the original definition of the model. Ultimately, we show how charge transport in a protein-ligand complex (modelled as a two-band lattice quantum system) leads to the emergence of an Ohm-like law, therefore providing a tentative explanation for the experiments in [32]. We also show that there is exponentially fast convergence of the expectations of microscopic current densities. This should be related to the existence of non-vanishing current variance of linear response currents, consistent with previous studies under natural conditions on the (random) disorder [17, 25]. We expect that this is also true for several-band models. Admittedly, however, the model's voltage-current curve exhibits a discrepancy of several orders of magnitude when compared to that of the experiment. Specifically, the voltage applied in the experiments described in [32] is of the order of a tenth of a volt and the system remains in the linear response regime. If this is correct, then the energy scales of the model should be of the same order. This yields a quantitative discrepancy, since the rigorously computed currents are of the order of \(\mu\)A (see Figs. 4(a)-4(f)), whereas the measured currents in [32] are of the order of \(10^{-1}\) nA, \(10^{4}\) times smaller. This issue cannot a priori be solved by reducing the hopping strength \(\vartheta\geq 0\) between the reservoirs and the band 1 of the 1D quantum system. Preliminary numerical computations show that the value of \(\vartheta\) has to be extremely small to reduce the current by 4 orders of magnitude, at the cost of almost isolating the 1D quantum system from the reservoirs and destroying the main transport properties described here. Since there are no approximations or a priori assumptions on the model, there is probably an issue related to all the energy scales, meaning that the voltage seen by the protein/peptide is far less than a tenth of a volt. On the other hand, lower energy scales change the meaning of the low temperature regime, which may be in contradiction with previous results [18]. See discussions in Section 5.4. Several reasons could be behind this discrepancy; to list a few:
1. This may indicate that there is an underlying biophysical phenomenon that the present model does not capture, which we will investigate in future works;
2. This could be related to the aqueous environment surrounding the peptide/protein-ligand complexes enabling charge transfer, although the original experiments claim otherwise [32];
3. While we did investigate a range of parameter regions, there is still the possibility that some parameters could be far off.

Nevertheless, our framework provides a novel approach to rigorously study charge transport (under quantum and thermal fluctuations) and to understand the breakdown of the classical (macroscopic) conductivity theory at microscopic scales. The theory can be applied to both biological transport and other physical systems, and in this sense it is a unifying approach. In particular, the theory allows us to explain the emergence of electronic states that are delocalized within a protein/peptide or physical systems. Hence we provide predictions for both conducting and semiconducting like properties, although semi-conductor type characteristics have not yet been observed, since only six protein-ligand complexes have been tested so far [32]. Nevertheless, these predictions could be tested in future experiments. The basic nature of the model allows it to be readily adjusted to explore a variety of different phenomena. For instance, an external potential such as

\[\sum_{x=-l}^{l}v\left(x\right)a_{x,1}^{\ast}a_{x,1},\]

with \(v\left(x\right)\in\left[-1,1\right]\) being some (i.i.d.) random variables (cf. the Anderson model), could have been considered without breaking the quasi-free property of the model. This term can be used to verify and study the linear dependence of the resistivity with respect to the length \(l\) of the 1D quantum system (another version of Ohm's law). Note indeed that this property of the resistivity does not hold true for our two-band model (like for the one-band one, i.e., for \(\gamma=0\)), since the current does not seem to vanish in the limit of infinite \(l\) (Fig. 8), while it is expected [25, 17] to be equal to the macroscopic one at relatively small lengths \(l\). In the immediate future we will investigate the discrepancy of the model (as explained above) and we will study the emergence of the telegraph noise as observed in the protein-ligand complex experiments (see [32, Fig. S7]). Presently, as far as we know, there is no theory based on first principles of quantum mechanics to explain telegraph noise. Current theories are mainly phenomenological, modelled via a (Markovian continuous-time) stochastic process that jumps discontinuously between two distinct values, as is shown for currents in [32, Fig. S7]. The telegraph noise is in particular characterised by a bimodal distribution of currents. Since this can be characterised via some statistical methods involving higher moments (like the kurtosis and skewness) besides the variance, our approach can be used to understand the appearance of the telegraph noise via a purely quantum microscopic theory. In the future, it will be interesting to develop theories to explain long-range (micrometer-scale) conduction through peptides/proteins and through supramolecular structures, such as coiled-coils, \(\beta\)-sheets, \(\alpha\)-helices, collagen, or elastin mimics, etc.
This is important because the molecular and organic electronics community are currently developing systems that merge conjugated molecules with amino acids, which form electronically delocalized supramolecular structures that facilitate long-range charge transport. Thus a theory that supports this will be fundamentally interesting and potentially significant for the development of bioelectronic materials. The limiting factor of our proposed framework is the rapid increase in computational complexity for systems with higher dimensions. Finally, our code is freely accessible at [https://github.com/RoisinMary/quasifree_chargetransport.git](https://github.com/RoisinMary/quasifree_chargetransport.git) and can readily run on a standard computer (e.g. we ran it on a MacBook Pro with a 2.4 GHz quad-core Intel Core i5 processor).

## 7 Acknowledgment

J.-B. Bru is supported by the Basque Government. SR, JBB and RB acknowledge support from Ikerbasque (The Basque Foundation for Science) and the Basque Government through the BERC 2022-2025 program, and by the Ministry of Science and Innovation: BCAM Severo Ochoa accreditation CEX2021-001142-S / MICIN / AEI / 10.13039/501100011033. SR further acknowledges support of the grant RTI2018-093860-B-C21, funded by (AEI/FEDER, UE), with acronym "MathNEURO". JBB meanwhile acknowledges support of the grant PID2020-112948GB-I00 funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe", as well as the COST Action CA18232 financed by the European Cooperation in Science and Technology (COST). JUA acknowledges support from the Spanish Government, grants PID2020-117281GB-I00 and PID2019-107444GA-I00, partly from the European Regional Development Fund (ERDF), and the Basque Government, grant IT1483-22.

## 8 Appendix

The quasi-free nature of the model allows a significant reduction of the dimension for the numerical calculation. In this appendix we shortly recall the necessary theory for the convenience of the reader.

### 8.1 The One-particle Picture

We use the algebraic formulation of quantum many-body problems, which is, from a conceptual point of view, the natural one, as the underlying physical system is many-body. Moreover, it has some advantageous technical aspects, both specific (like the possibility of using Bogoliubov-type inequalities in important estimates) and general ones (like the very powerful theory of KMS states). However, in the present paper we only deal with non-interacting fermions, and in this specific example one can equivalently use the one-particle picture, as explained in detail in [25, Section C.3]. For instance, the Hamiltonian defined on the fermionic Fock space \(\mathcal{F}\) (2) is the second quantization of the following one-particle Hamiltonian acting on the one-particle Hilbert space \(\mathfrak{h}\) (1): Let \(\Delta\) be the usual 1D discrete Laplacian defined on \(\ell^{2}(\mathbb{Z})\) by

\[[\Delta(\varphi)](x):=-2\varphi(x)+\sum_{z\in\mathbb{Z},\ |z|=1}\varphi(x+z),\qquad x\in\mathbb{Z},\ \varphi\in\ell^{2}(\mathbb{Z}). \tag{23}\]

We consider a conducting band \(b=1\) and an insulating band \(b=0\) as well as spinless fermions. Thus, the one-particle Hilbert space \(\mathfrak{h}\) (1) is in this case equal to

\[\mathfrak{h}:=\ell^{2}(\mathbb{Z}\times\{0,1\})\equiv\ell^{2}(\mathbb{Z})\oplus\ell^{2}(\mathbb{Z}).\]

In particular, if \(\varphi=(\varphi_{0},\varphi_{1})\in\mathfrak{h}\) then \(\varphi_{0}\) corresponds to the insulating band \(b=0\), while \(\varphi_{1}\) refers to the conducting band \(b=1\).
For any set \(\Lambda\subseteq\mathbb{Z}\), we define the orthogonal projection \(P_{\Lambda}\) from \(\ell^{2}(\mathbb{Z})\) to \(\ell^{2}(\Lambda)\). For any \(n\in\mathbb{N}\), let \(\Lambda_{n}:=\{-n,\ldots,n\}\) and fix \(L,l\in\mathbb{N}\) with \(L>l\). Then, the Hamiltonians \[H_{\mathrm{p}}:=H_{\mathrm{p},1}+H_{\mathrm{p},0}\in\mathcal{B}\left(\mathcal{F}\right),\qquad H_{\mathrm{r-p}}\in\mathcal{B}\left(\mathcal{F}\right)\qquad\text{and}\qquad H_{\mathrm{r}}\in\mathcal{B}\left(\mathcal{F}\right)\] are the second quantizations of the one-particle Hamiltonians \[h_{\mathrm{p}}:=\left(\begin{array}{cc}P_{\Lambda_{l}}&0\\ 0&P_{\Lambda_{l}}\end{array}\right)\left(\begin{array}{cc}-\mu_{\mathrm{p},0}\mathbf{1}&-\gamma\mathbf{1}\\ -\gamma\mathbf{1}&-\epsilon_{\mathrm{p}}\Delta-\mu_{\mathrm{p},1}\mathbf{1}\end{array}\right)\left(\begin{array}{cc}P_{\Lambda_{l}}&0\\ 0&P_{\Lambda_{l}}\end{array}\right),\] \[h_{\mathrm{r-p}}:=\left(\begin{array}{cc}0&0\\ 0&P_{\{-l-1,-l,l,l+1\}}\end{array}\right)\left(\begin{array}{cc}0&0\\ 0&-2\vartheta\mathbf{1}-\vartheta\Delta\end{array}\right)\left(\begin{array}{cc}0&0\\ 0&P_{\{-l-1,-l,l,l+1\}}\end{array}\right),\] \[h_{\mathrm{r}}:=\left(\begin{array}{cc}0&0\\ 0&P_{\Lambda_{L}\backslash\Lambda_{l}}\end{array}\right)\left(\begin{array}{cc}0&0\\ 0&-\epsilon_{\mathrm{r}}\Delta-\mu_{\mathrm{r}}\mathbf{1}\end{array}\right)\left(\begin{array}{cc}0&0\\ 0&P_{\Lambda_{L}\backslash\Lambda_{l}}\end{array}\right),\] respectively. All other quantities defined in Section 3, including current observables (see [25, Section C.3]), can be written as the "second quantized" version of one-particle objects, like the velocity operator in the case of currents. This is possible because all our quantities are related to operators that are quadratic in the fields. ### CAR \(C^{*}\)-Algebras In the algebraic formulation of fermion systems, the CAR \(C^{*}\)-algebra associated with a Hilbert space \(\mathcal{H}\), denoted by \(\mathrm{CAR}\left(\mathcal{H}\right)\), is the \(C^{*}\)-algebra generated by a unit \(\mathbf{1}\) and a family \(\{a(\varphi)\}_{\varphi\in\mathcal{H}}\) of elements satisfying Conditions (a)-(b): (a) The map \(\varphi\mapsto a(\varphi)^{*}\) is (complex) linear. (b) The family \(\{a(\varphi)\}_{\varphi\in\mathcal{H}}\) satisfies the Canonical Anticommutation Relations (CAR): For all \(\varphi_{1},\varphi_{2}\in\mathcal{H}\), \[a(\varphi_{1})a(\varphi_{2})+a(\varphi_{2})a(\varphi_{1})=0,\quad a(\varphi_{1})a(\varphi_{2})^{*}+a(\varphi_{2})^{*}a(\varphi_{1})=\langle\varphi_{1},\varphi_{2}\rangle_{\mathcal{H}}\mathbf{1}. \tag{24}\] If \(\mathcal{H}\) is finite-dimensional then it is well known that \(\mathrm{CAR}\left(\mathcal{H}\right)\) is \(*\)-isomorphic to the \(C^{*}\)-algebra \(\mathcal{B}(\mathcal{F}\left(\mathcal{H}\right))\) of all bounded linear operators acting on the fermionic Fock space \(\mathcal{F}\left(\mathcal{H}\right)\) constructed from \(\mathcal{H}\). In particular, \[\mathrm{CAR}\left(\mathfrak{h}_{L}\right)\equiv\mathcal{B}(\mathcal{F}_{L})\subsetneq\mathrm{CAR}\left(\mathfrak{h}\right),\qquad L\in\mathbb{N}, \tag{25}\] see (10)-(11). Note that \(\mathrm{CAR}\left(\mathcal{H}\right)\subsetneq\mathcal{B}(\mathcal{F}\left(\mathcal{H}\right))\) for infinite-dimensional Hilbert spaces \(\mathcal{H}\), making the algebraic approach more general. In particular, \(\mathrm{CAR}\left(\mathfrak{h}\right)\subsetneq\mathcal{B}(\mathcal{F})\). See, e.g., [15] for more details.
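The block formulas above translate directly into matrix code. Continuing the sketch from before (helper and parameter names are ours, and numerical values for \(\epsilon_{\mathrm{p}},\epsilon_{\mathrm{r}},\mu_{\mathrm{p},0},\mu_{\mathrm{p},1},\mu_{\mathrm{r}},\gamma,\vartheta\) must be supplied by the user):

```python
import numpy as np
# discrete_laplacian(n) is the function from the previous sketch.

def projector(L: int, sites) -> np.ndarray:
    """Diagonal projection P_Lambda onto `sites` within the box {-L, ..., L}."""
    d = np.zeros(2 * L + 1)
    for x in sites:
        d[x + L] = 1.0
    return np.diag(d)

def one_particle_hamiltonian(L, l, eps_p, eps_r, mu_p0, mu_p1, mu_r, gamma, theta):
    """Assemble h_L = h_p + h_{r-p} + h_r as a 2(2L+1) x 2(2L+1) matrix.

    Block 0 is the insulating band (b=0), block 1 the conducting band (b=1);
    `theta` stands for the coupling constant written as vartheta in the text."""
    n = 2 * L + 1
    Delta = discrete_laplacian(L)
    I = np.eye(n)
    P_l = projector(L, range(-l, l + 1))                              # P_{Lambda_l}
    P_c = projector(L, [-l - 1, -l, l, l + 1])                        # contact sites
    P_r = projector(L, [x for x in range(-L, L + 1) if abs(x) > l])   # Lambda_L \ Lambda_l

    h = np.zeros((2 * n, 2 * n))
    # h_p: two-band block on Lambda_l with interband coupling gamma.
    h[:n, :n] += P_l @ (-mu_p0 * I) @ P_l
    h[:n, n:] += P_l @ (-gamma * I) @ P_l
    h[n:, :n] += P_l @ (-gamma * I) @ P_l
    h[n:, n:] += P_l @ (-eps_p * Delta - mu_p1 * I) @ P_l
    # h_{r-p}: reservoir-peptide coupling, conducting band only.
    h[n:, n:] += P_c @ (-2 * theta * I - theta * Delta) @ P_c
    # h_r: free conducting band on the reservoirs.
    h[n:, n:] += P_r @ (-eps_r * Delta - mu_r * I) @ P_r
    return h
```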
Note that the creation and annihilation operators \(\{a_{x,b},a_{x,b}^{*}\}_{(x,b)\in\mathbb{Z}\times\{0,1\}}\) in the definitions of Hamiltonians in Section 3 correspond to \[a_{x,b}:=a\left(\mathfrak{e}_{x,b}\right)\qquad\text{and}\qquad a_{x,b}^{*}:=a\left(\mathfrak{e}_{x,b}\right)^{*} \tag{26}\] for each lattice site \(x\in\mathbb{Z}\) and band \(b\in\{0,1\}\), where \(\{\mathfrak{e}_{x,b}\}_{(x,b)\in\mathbb{Z}\times\{0,1\}}\) is the canonical orthonormal basis of \(\mathfrak{h}\equiv\ell^{2}(\mathbb{Z}\times\{0,1\})\) defined by \(\mathfrak{e}_{x,b}(y,c):=\delta_{x,y}\delta_{b,c}\) for all \(x,y\in\mathbb{Z}\) and \(b,c\in\{0,1\}\). ### Quasi-Free States Gauge-invariant quasi-free states on a CAR \(C^{*}\)-algebra \(\mathrm{CAR}\left(\mathcal{H}\right)\) are positive and normalized linear functionals \(\rho\in\mathrm{CAR}\left(\mathcal{H}\right)^{*}\) such that, for all \(N_{1},N_{2}\in\mathbb{N}\) and \(\varphi_{1},\dots,\varphi_{N_{1}+N_{2}}\in\mathcal{H}\), \[\rho\left(a(\varphi_{1})^{*}\cdots a(\varphi_{N_{1}})^{*}a(\varphi_{N_{1}+N_{2}})\cdots a(\varphi_{N_{1}+1})\right)=0 \tag{27}\] if \(N_{1}\neq N_{2}\), while in the case \(N_{1}=N_{2}\equiv N\), \[\rho\left(a(\varphi_{1})^{*}\cdots a(\varphi_{N})^{*}a(\varphi_{2N})\cdots a(\varphi_{N+1})\right)=\det\left[\rho\left(a(\varphi_{k})^{*}a(\varphi_{N+l})\right)\right]_{k,l=1}^{N}. \tag{28}\] See, e.g., [2, Definition 3.1], which refers to a more general notion of quasi-free states. The gauge-invariant property corresponds to (27), whereas [2, Definition 3.1, Condition (3.1)] only imposes the quasi-free state to be even, which is a strictly weaker property than being gauge-invariant. Quasi-free states are therefore particular states that are uniquely fixed by their two-point correlation functions. In fact, a quasi-free state \(\rho\) is uniquely determined by its so-called one-particle density matrix \(\mathrm{D}_{\rho}\in\mathcal{B}(\mathcal{H})\), which is defined through the condition \[\left\langle\varphi_{1},\mathrm{D}_{\rho}\varphi_{2}\right\rangle_{\mathcal{H}}=\rho(a^{*}\left(\varphi_{2}\right)a\left(\varphi_{1}\right)),\qquad\varphi_{1},\varphi_{2}\in\mathcal{H}. \tag{29}\] Note that \(\mathrm{D}_{\rho}\) satisfies \(0\leq\mathrm{D}_{\rho}\leq\mathbf{1}_{\mathcal{H}}\). Since the full Hamiltonian \(H_{L}\in\mathrm{CAR}\left(\mathfrak{h}_{L}\right)\equiv\mathcal{B}(\mathcal{F}_{L})\) (see (9) and (25)) is the second quantization of an explicit one-particle Hamiltonian \[h_{L}:=h_{\mathrm{p}}+h_{\mathrm{r}}+h_{\mathrm{r-p}}\] acting on the one-particle Hilbert space \(\mathfrak{h}_{L}\) (11), one proves that the Gibbs state (12) is a quasi-free state with one-particle density matrix equal to \[\mathrm{D}_{\rho}=(1+\mathrm{e}^{\beta h_{L}})^{-1}\in\mathcal{B}\left(\mathfrak{h}_{L}\right) \tag{30}\] at fixed inverse temperature \(\beta>0\). See, e.g., [24, Section 2.3]. In particular, all correlation functions of the many-fermion system can be numerically computed within the one-particle Hilbert space \(\mathfrak{h}_{L}\), thanks to (27)-(28) and (30). ### Quasi-Free Dynamics In the algebraic formulation of quantum mechanics (cf. the Heisenberg picture), dynamics occur in a CAR \(C^{*}\)-algebra \(\mathrm{CAR}\left(\mathcal{H}\right)\) via a strongly continuous group \((\tau_{t})_{t\in\mathbb{R}}\) of \(*\)-automorphisms of \(\mathrm{CAR}\left(\mathcal{H}\right)\).
A quasi-free dynamical system associated with a one-particle Hamiltonian \(h\) acting on \(\mathcal{H}\) is a strongly continuous group \((\tau_{t})_{t\in\mathbb{R}}\) of \(*\)-automorphisms of \(\mathrm{CAR}\left(\mathcal{H}\right)\) uniquely defined by the condition \[\tau_{t}(a(\varphi))=a\left(\mathrm{e}^{ith}\varphi\right),\qquad\varphi\in\mathcal{H}.\] The time evolution of a state \(\rho\) is given by \((\rho\circ\tau_{t})_{t\in\mathbb{R}}\) in the Schrödinger picture of quantum mechanics. If the dynamics \((\tau_{t})_{t\in\mathbb{R}}\) and the state \(\rho\) are both quasi-free then, for all times \(t\in\mathbb{R}\), \(\rho\circ\tau_{t}\) is a quasi-free state with one-particle density matrix equal to \[\mathrm{D}_{\rho\circ\tau_{t}}=\mathrm{e}^{-ith}\mathrm{D}_{\rho}\mathrm{e}^{ith}\in\mathcal{B}(\mathcal{H}).\] In other words, \(\mathrm{D}_{\rho\circ\tau_{t}}\) is the solution to the Liouville equation in the one-particle Hilbert space \(\mathcal{H}\). Define now the Hamiltonian \(h_{L}\left(\eta\right)\) by \[h_{L}\left(\eta\right):=h_{L}+\eta e\] for \(\eta\in\mathbb{R}\) and natural numbers \(L>l\), where \(e\) is the operator defined, for any \(\varphi\in\mathfrak{h}\) and \((x,b)\in\mathbb{Z}\times\left\{0,1\right\}\), by \[\left[e\left(\varphi\right)\right](x,b):=-\mathbf{1}\left[-L\leq x\leq-l-1\right]\varphi\left(x,b\right)+\frac{x}{l}\,\mathbf{1}\left[-l\leq x\leq l\right]\varphi\left(x,b\right)+\mathbf{1}\left[l+1\leq x\leq L\right]\varphi\left(x,b\right),\] \(\mathbf{1}\left[\mathcal{S}\right]\) being the characteristic function of a set \(\mathcal{S}\). The Hamiltonian \(h_{L}\left(\eta\right)\) can again be seen as an operator acting on the one-particle Hilbert space \(\mathfrak{h}_{L}\) (11). In the present paper, the dynamics \((\tau_{t}^{\left(\eta\right)})_{t\in\mathbb{R}}\) is defined by (14) on either \(\mathrm{CAR}\left(\mathfrak{h}\right)\subsetneq\mathcal{B}(\mathcal{F})\) or \(\mathrm{CAR}\left(\mathfrak{h}_{L}\right)\equiv\mathcal{B}(\mathcal{F}_{L})\) for natural numbers \(L>l\). Since the full Hamiltonian \(H_{L}\left(\eta\right)\in\mathcal{B}(\mathcal{F}_{L})\) (see (13)) with electric potentials is the second quantization of \(h_{L}\left(\eta\right)\), one proves that the dynamics (14) is quasi-free: \[\tau_{t}^{\left(\eta\right)}(a(\varphi)):=\mathrm{e}^{it\hbar^{-1}H_{L}\left(\eta\right)}a(\varphi)\mathrm{e}^{-it\hbar^{-1}H_{L}\left(\eta\right)}=a\left(\mathrm{e}^{ith_{L}\left(\eta\right)}\varphi\right),\qquad\varphi\in\mathfrak{h}_{L}. \tag{31}\] See, e.g., [24, Section 2.3]. In particular, the Gibbs state (12) evolves in the Schrödinger picture within the set of quasi-free states, and all physical quantities can be deduced from one-particle considerations. In this paper, we take advantage of this fact to significantly reduce the complexity of the calculations. ### Example of Current Densities Recall that elementary current observables are defined by (18), that is, \[I_{(x,y)}^{\left(b\right)}:=i\hbar^{-1}\left(a_{x,1}^{*}a_{y,b}-a_{y,b}^{*}a_{x,1}\right) \tag{32}\] for natural numbers \(L>l\), \(x,y\in\left\{-L,\ldots,L\right\}\) and \(b\in\left\{0,1\right\}\).
Therefore, by using (26) and (29)-(31), we can compute its expectation in the Gibbs state \(\rho\) at time \(t\in\mathbb{R}\) as follows: \[\rho(\tau_{t}^{\left(\eta\right)}(I_{(x,y)}^{\left(b\right)}))=2\hbar^{-1}\operatorname{Im}\left\{\rho\circ\tau_{t}^{\left(\eta\right)}(a_{y,b}^{*}a_{x,1})\right\}=2\hbar^{-1}\operatorname{Im}\left\{\rho\left(a(\mathrm{e}^{ith_{L}\left(\eta\right)}\mathfrak{e}_{y,b})^{*}a(\mathrm{e}^{ith_{L}\left(\eta\right)}\mathfrak{e}_{x,1})\right)\right\}=2\hbar^{-1}\operatorname{Im}\left\langle\mathfrak{e}_{x,1},\mathrm{e}^{-ith_{L}\left(\eta\right)}(1+\mathrm{e}^{\beta h_{L}})^{-1}\mathrm{e}^{ith_{L}\left(\eta\right)}\mathfrak{e}_{y,b}\right\rangle_{\mathfrak{h}_{L}} \tag{33}\] for natural numbers \(L>l\), \(x,y\in\left\{-L,\ldots,L\right\}\), \(b\in\left\{0,1\right\}\) and \(\eta\in\mathbb{R}\). Using this formula and an explicit formulation of the one-particle Hamiltonian \(h_{L}\left(\eta\right)\), we compute the expectation (20) of the current density observable \(\mathbb{I}\left(t,\eta\right)\) defined by (19). In order to calculate the variance \[\mathrm{Var}(\mathbb{I}\left(t,\eta\right))=\rho\left(\mathbb{I}\left(t,\eta\right)^{2}\right)-\rho\left(\mathbb{I}\left(t,\eta\right)\right)^{2} \tag{34}\] of this current, we additionally use Equation (28) in the following case: \[\rho\left(a_{x,1}^{*}a_{y,1}a_{u,1}^{*}a_{v,1}\right)=\rho\left(a_{x,1}^{*}a_{y,1}\right)\rho\left(a_{u,1}^{*}a_{v,1}\right)+\rho\left(a_{y,1}a_{u,1}^{*}\right)\rho\left(a_{x,1}^{*}a_{v,1}\right), \tag{35}\] for any \(x,y,u,v\in\mathbb{Z}\), because the Gibbs state \(\rho\) is quasi-free. ### Discussion of the Implementation Here, computations were performed to find the variance, but the code can easily be extended to compute higher moments, which could be instrumental in identifying, for example, a bimodal distribution characteristic of telegraph noise. Code that automatically implements the simplifying CAR relations significantly reduces the difficulty of computing higher-order moments. However, because of the increasing number of linear operations needed, such calculations can become expensive for large \(L\) (see Fig. 11).
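To illustrate how Eqs. (30), (31) and (33) combine in practice, the following minimal Python sketch performs the one-particle computation using the matrix conventions of the earlier sketches; it is our own illustrative code, not the implementation from the repository linked above.

```python
import numpy as np
from scipy.linalg import expm

def gibbs_density_matrix(h: np.ndarray, beta: float) -> np.ndarray:
    """One-particle density matrix D = (1 + e^{beta h})^{-1} of Eq. (30),
    via the eigendecomposition of the Hermitian one-particle Hamiltonian h."""
    eigval, eigvec = np.linalg.eigh(h)
    occ = 0.5 * (1.0 - np.tanh(0.5 * beta * eigval))  # Fermi factors, overflow-safe
    return eigvec @ np.diag(occ) @ eigvec.conj().T

def current_expectation(h_eta: np.ndarray, D: np.ndarray,
                        t: float, x: int, y: int, b: int, L: int,
                        hbar: float = 1.0) -> float:
    """Expectation (33) of the elementary current I^{(b)}_{(x,y)} at time t.

    `h_eta` is the perturbed one-particle Hamiltonian h_L(eta), while `D` is the
    Gibbs density matrix of the *unperturbed* h_L, matching Eq. (33). Indexing
    follows the earlier sketch: band b occupies block b, site x sits at x + L."""
    n = h_eta.shape[0] // 2
    U = expm(1j * t * h_eta)          # one-particle propagator e^{i t h_L(eta)}
    D_t = U.conj().T @ D @ U          # e^{-ith} D e^{ith}, Liouville evolution
    return (2.0 / hbar) * float(np.imag(D_t[n + (x + L), b * n + (y + L)]))
```

The variance (34) can then be assembled from the same evolved two-point functions through the Wick-type rule (35), and the same bookkeeping extends, as noted above, to higher moments.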
2310.05032
PASSION: Permissioned Access Control for Segmented Devices and Identity for IoT Networks
In recent years, there has been a significant proliferation of industrial Internet of Things (IoT) applications, with a wide variety of use cases being developed and put into operation. As the industrial IoT landscape expands, the establishment of secure and reliable infrastructure becomes crucial to instil trust among users and stakeholders, particularly in addressing fundamental concerns such as traceability, integrity protection, and privacy that some industries still encounter today. This paper introduces a privacy-preserving method in the industry's IoT systems using blockchain-based data access control for remote industry safety monitoring and maintaining event information confidentiality, integrity and authenticity.
Hisham Ali, Mwrwan Abubakar, Jawad Ahmad, William J. Buchanan, Zakwan Jaroucheh
2023-10-08T06:28:32Z
http://arxiv.org/abs/2310.05032v1
# PASSION: Permissioned Access Control for Segmented Devices and Identity for IoT Networks ###### Abstract In recent years, there has been a significant proliferation of industrial Internet of Things (IoT) applications, with a wide variety of use cases being developed and put into operation. As the industrial IoT landscape expands, the establishment of secure and reliable infrastructure becomes crucial to instil trust among users and stakeholders, particularly in addressing fundamental concerns such as traceability, integrity protection, and privacy that some industries still encounter today. This paper introduces a privacy-preserving method in the industry's IoT systems using blockchain-based data access control for remote industry safety monitoring and maintaining event information confidentiality, integrity and authenticity. Industry Safety, Internet of Things (IoT), Distributed Ledger (DL), Blockchain, Hyperledger Fabric. ## I Introduction Trust infrastructures within IoT networks are important in maintaining the trustworthiness of the data and of the devices used. Maintaining data privacy is challenging in centralised systems, as there is often a lack of policy-based data access control [1, 2]. By defining and enforcing access policies, only authorised entities can access specific data, enhancing confidentiality and safeguarding sensitive information from potential breaches. Such a policy-based approach allows for fine-grained control over data access, enhancing privacy protection. With blockchain solutions, we either have a permissionless approach to IoT trust, which uses a public ledger, or we can use a permissioned approach, such as with Hyperledger Fabric. Overall, permissioned ledgers can provide a selective access control mechanism where only trusted participants can access the distributed ledger, reducing the risk of malicious actors disrupting the system. The aim of this paper is to propose an integrated approach for implementing access control within segmented device and identity infrastructures for IoT devices. Its core contribution is the definition and implementation of a blockchain-based access control mechanism for clear segregation between users and IoT devices, ensuring robust security, data integrity, and controlled access through policies. This paper is organised as follows: Section 2 delves into current related works, providing context for our paper. Section 3 offers essential background information, covering Hyperledger Fabric, the MQTT protocol, and the publish-subscribe communication model. In Section 4, we present our system infrastructure design and its practical implementation. Section 5 conducts an evaluation, encompassing security considerations, data privacy preservation, data integrity, scalability, throughput, and latency. Finally, Section 6 concludes the paper, summarising our findings and providing suggestions for future research and development. ## II Related Work The problem with the centralised authentication approach is that it requires authentication data to be stored on a centralised local server, which is prone to a single point of failure [3]. Similar to our work, many other approaches have proposed blockchain-based authentication and access control methods [4]. For instance, the work presented in [4] proposed a conceptual framework aimed at establishing a data-sharing system that incorporates access control mechanisms based on blockchain technology for IoT devices.
The system employs three distinct smart contracts to facilitate the efficient administration of access control. These contracts include one for access control provisioning, one for authentication, and another for decision-making. Nevertheless, the implementation of a public blockchain like Ethereum in the proposed system will incur expenses for transaction processing. Similarly, the authors in [5] proposed a Capability-Based Access Control (CapBAC) scheme utilising the public Ethereum blockchain. The authors proposed to fix some of the BlendCAC issues using a fine-grained access control model; however, no cost or performance metrics were discussed. Furthermore, many studies have indicated that Role-Based Access Control (RBAC) exhibits limitations in terms of flexibility and scalability when confronted with the access control demands of IoT environments [6]. However, many studies have still put forth the concept of a blockchain-based Role-Based Access Control system in different IoT domains, such as in [7, 8]. Current IoT systems rely on a centralised data management model or client-server architecture to handle authentication and authorisation and to control access to IoT systems and their data. This can add extra cost to designing the security architecture of an IoT system, due to the high costs associated with the cloud service needed to validate the identity of devices and applications involved in IoT systems [9]. All information and data are gathered and shared for complete understanding, reliable delivery, and intelligent processing. This presents issues with respect to transmission costs, trust, data value, and privacy [10]. Blockchain is a decentralised, distributed ledger technology that has great potential to tackle the security, privacy, and scalability concerns of critical infrastructure in the IoT [11]. Since blockchain is considered an immutable ledger for transactions, it can track millions of IoT devices and provide highly secure communications and coordination between them [12]. To use blockchain technology to secure IoT devices, each device can have a unique address from which to send transactions. IoT objects therefore do not have to trust each other, because a consensus algorithm lets nodes connected to the blockchain work in a trusted way. IoT devices, gateways, cloud computing, and blockchain technologies form the four layers of the decentralised architecture. ## III PASSION Permissioned Approach PASSION integrates Hyperledger Fabric [13, 14] and ensures privacy and confidentiality features, enabling participants to transact securely and share sensitive information selectively. It offers a high degree of control over the network, allowing participants to set policies, define roles, and manage access to the ledger [15]. The main elements of Hyperledger Fabric that are adopted are (Figure 1): 1. Membership Service Provider (MSP): The MSP is responsible for managing the certificates and identities of network participants. 2. Certificate Authority (CA): The CA is responsible for issuing and validating digital certificates that identify network participants. 3. Client: The client application interacts with the network by submitting transactions to peers and querying the ledger. 4. Channel: Channels are private sub-networks that allow multiple parties to transact with one another in a secure and confidential way.
Channels allow a subset of nodes, through the anchor node, to link different users/participants (organisations) that compose the consortium. The ledger of a channel can be accessed only by those organisations that are part of the channel. Therefore, participants can only see the network features of the channels they belong to. _Chaincode:_ Chaincode is the smart contract, written in a programming language, that runs on the Hyperledger Fabric network. Organisations may include different nodes connected through one channel or multiple channels (multi-channel). Notably, nodes can host multiple chaincodes (multi-chaincode) to store the transaction data in an immutable ledger. 5. Peer Nodes: Peer nodes maintain copies of the ledger, execute transactions, endorse transactions, and participate in consensus. * Endorser Peers (Endorsement): In (6), a transaction proposal is sent to endorsing peers, which simulate the transaction and validate its correctness according to the smart contract. If the proposal is endorsed, the endorsing peers sign it and send it back to the client. * Orderer Peers (Ordering Service): The endorsed transactions are grouped into blocks and (7) sent to the ordering service. The ordering service receives endorsed transactions from peer nodes and orders them into a block (creates a block), which is then broadcast to all peers in the network (delivering the block to each peer node). _Consensus:_ The ordering service uses a consensus algorithm to ensure that all nodes in the network agree on the order of the blocks. This ensures that the ledger is consistent across all nodes in the network. * Validation Peers: In (8), peers validate the block and the transactions contained within it, checking the digital signatures and endorsement policies. If the block is valid, the transactions are committed to the ledger. * Update the ledger (9). ### _Hyperledger Fabric-Based IoT Architecture Components_ The architecture for Hyperledger Fabric IoT includes the following components: * _Applications:_ The applications layer includes the software and services that consume and process the data stored on the blockchain. This can include data analytics, machine learning algorithms, and other applications that provide insights or actions based on the analysis of the data. * _Application Support Layer:_ The data processing layer plays a critical role in IoT systems as it enables the extraction of meaningful insights from the large amounts of data generated by connected devices. The layer typically consists of several components, including data collection; data filtering and preprocessing; data storage; data analytics; and data visualisation and reporting. * _IoT devices/sensors:_ This layer consists of the physical devices and sensors that collect data from the environment. * _Network Layer:_ The network layer, also known as the transmission layer, is responsible for managing network connectivity-related tasks such as authentication, authorisation, accountability, and IoT transport data management. It acts as a bridge between the perception and application layers, transmitting the data collected from physical objects. The transmission medium can be either wireless or wired, and it is responsible for connecting smart devices, network devices, and networks together. Examples include: * _Gateway:_ The gateway acts as a bridge between the IoT devices and the blockchain network and is responsible for collecting data from the devices and sending it to the blockchain.
* _Blockchain network:_ This layer includes the distributed network of nodes that form the Hyperledger Fabric blockchain. The nodes are responsible for processing transactions, validating data, and reaching consensus. * _Smart contracts:_ Smart contracts are the business logic that defines how the data from the IoT devices is processed, stored, and shared on the blockchain. They can be used to define data schemas, process data, and enforce rules for accessing and sharing data. ### _MQTT_ MQTT (Message Queuing Telemetry Transport) is a lightweight and efficient messaging protocol designed for low-bandwidth, unreliable networks [16]. It enables communication between devices and applications in the Internet of Things (IoT) and other scenarios where a simple and efficient messaging system is needed. _Key features of MQTT include:_ * Publish-Subscribe Model: MQTT uses a publish-subscribe model, where devices can publish messages to topics, and other devices (subscribers) can receive those messages by subscribing to specific topics. * QoS Levels: MQTT supports three Quality of Service (QoS) levels to ensure message delivery reliability: * _QoS 0:_ At most once - Fire and forget; no acknowledgment is sent. * _QoS 1:_ At least once - Messages are guaranteed to be delivered, but duplicates may occur. * _QoS 2:_ Exactly once - Messages are ensured to be delivered only once and in the correct order. * Lightweight: MQTT is designed to be efficient and lightweight, making it suitable for resource-constrained devices and low-bandwidth networks. * Persistent Session: MQTT supports persistent sessions, allowing subscribers to receive messages sent while they were offline when they reconnect. * Retained Messages: Publishers can set messages as "retained," meaning that the last published message on a topic will be saved and delivered to new subscribers when they connect. * Scalability: MQTT is scalable and can support a large number of clients, making it suitable for various IoT and real-time messaging applications. MQTT has become widely used in IoT applications due to its simplicity, efficiency, and ability to handle unreliable network conditions. It has several implementations and is supported by many platforms and programming languages.
Fig. 1: Hyperledger Fabric-based IoT and Transaction Flow
Fig. 2: The publish-subscribe IoT communication model with an MQTT Broker
### _The publish-subscribe communication model_ The publish-subscribe communication model consists of three main components: publishers, subscribers, and a message broker (or messaging system). _Publishers_ are responsible for generating and sending messages to the messaging system, and they do so without having knowledge of the _subscribers_' identities [17]. Instead, publishers publish messages on specific topics. On the other hand, subscribers express their interest in receiving messages related to particular topics and register themselves with the messaging system accordingly. When a publisher sends a message to a topic, the message _broker_ acts as an intermediary. It receives the message and ensures that all registered subscribers interested in that topic receive the message. The message broker facilitates communication between publishers and subscribers without requiring direct interaction between them.
This decoupling allows for a flexible and scalable communication approach, making the publish-subscribe model well-suited for various applications, including real-time data streaming, Internet of Things (IoT) systems, financial services, and social media platforms. In PASSION, Hyperledger Fabric clients play the roles of subscribers and publishers, utilising public and private keys for secure interactions with the MQTT broker. Subscribers connect to the broker to subscribe to specific topics and receive data from publishers, which are resource-constrained IoT devices like sensors. Publishers authenticate with the MQTT broker to publish their sensor readings on designated topics. When establishing a connection, both subscribers and publishers receive a challenge from the smart contract, which ensures that only authorised addresses receive it. Using their private keys, they cryptographically prove their identity by signing the challenge and interacting with the smart contract. This challenge acts as a one-time password, providing secure access to the broker. By integrating Hyperledger Fabric, our system ensures a reliable and secure communication model suited for IoT and real-time data applications. ## IV Implementation In this section, we showcase the proposed infrastructure employed in our study, including the setups and decision-making processes. We also delve into the specifics of the implementation that influenced our experiments on the network model. We have carried out a real-life application, which entails utilising IoT sensors to acquire environmental data. #### IV-1 Environment Setup and Test This section includes network performance and functionality tests. A network has been set up to demonstrate the integration of blockchain and IoT, comprising three organisations: Org1, Org2 and Org3 (IoT). The application client receives camera streaming data, temperature, humidity, and gas data from the MQTT broker and updates the blockchain network accordingly. The organisation's TLS-CA server offers TLS (Transport Layer Security) to all blockchain nodes, including the ordering and CA (Certification Authority) servers and users, to secure the network connection. In addition, X.509 certificates are issued to all the members and actors in the organisation's blockchain network by the CA server. The benchmark engine then interacts with the chaincode to deploy, run, analyse, and generate network performance reports. Table I defines the network nodes present in our experimental network. Overall, Hyperledger Fabric is run within a Docker container on an Ubuntu instance. #### IV-2 Interaction System Implementation The implementation extends the asset-transfer-basic/chaincode-javascript sample, using a _Solo Ordering Service_ provided by the Hyperledger Fabric test network. The performance of the smart contract on the Fabric network is tested using Caliper. The basic workflow of this whole system is: _Implementation of smart contracts:_ We used Hyperledger Fabric smart contracts (chaincode) as a proof of concept. The Hyperledger Fabric network was selected because of the features mentioned above relating to its modular and extensible design, delivering confidentiality, scalability and privacy. We employ the JavaScript programming language to deploy smart contracts in Hyperledger Fabric systems. Finally, we test the chaincode by installing it, approving its definition, committing it to the channel, and invoking it. By executing the given chaincode, user interaction with the Hyperledger Fabric ledger is feasible.
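As a concrete illustration of the challenge-response flow described in Section III, the following is a minimal, hypothetical Python sketch of the publisher side. The broker host, topic names, and the challenge-retrieval helper are invented stand-ins, the paho-mqtt (1.x API) and cryptography libraries substitute for the client stack, and the Ed25519 key pair stands in for the client's Fabric-issued identity keys; it is not the actual PASSION implementation.

```python
import json
import paho.mqtt.client as mqtt
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def fetch_challenge_from_chaincode(client_id: str) -> bytes:
    """Hypothetical helper: in PASSION the one-time challenge is issued by the
    Fabric smart contract to the authorised address; here it is stubbed out."""
    return b"example-one-time-challenge"

private_key = Ed25519PrivateKey.generate()   # stands in for the client's identity key
client_id = "sensor-temp-01"

# Prove control of the private key by signing the chaincode-issued challenge.
challenge = fetch_challenge_from_chaincode(client_id)
signature = private_key.sign(challenge)

# Present the signed challenge as a one-time password to the broker
# (paho-mqtt 1.x constructor).
client = mqtt.Client(client_id=client_id)
client.username_pw_set(username=client_id, password=signature.hex())
client.connect("broker.example.org", 1883)

# Once authorised, publish a sensor reading on the designated topic; QoS 1
# gives at-least-once delivery, matching the reliability discussion above.
reading = {"temperature_c": 21.7, "humidity_pct": 43.0}
client.publish("plant1/env/sensor-temp-01", json.dumps(reading), qos=1)
client.disconnect()
```

The subscriber side would be symmetric: it signs its own chaincode-issued challenge before subscribing to the topics it is authorised to read.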
Notably, one node can have multiple smart contracts, making it possible to monitor different sensor readings from this node (the user). The chaincode is in charge of dealing with various data queries. As a result, the system implementation begins by defining certain chaincode operations, such as querying and retrieving data lineage. The chaincode allows an authorised user to obtain, using their identity, a URL for accessing sensor information. In other words, sensor information in the ledger is only accessible to users who have been given permission to use it, since the chaincode can only grant authority after the user has been verified. The chaincode is intended to facilitate various data and traceability processes inside the ledger and attached-chain storage.
Fig. 3: Industrial data security with an MQTT Broker, using Hyperledger Fabric, maintaining data integrity and privacy.
The proposed system's chaincode-specific operations include storing data on an item's world state, querying item checksums, retrieving an object with the relevant transaction ID, extracting the version of an object based on its transaction ID, retrieving the lineage of a data item, retrieving the history of a data object, querying the key-range of the list of items (AssetsID), retrieving a specific sensor's information, and providing a specific version of an object. Assets represent the variable value of items that may be exchanged on blockchain platforms during transaction execution. The implemented system is made up of distributed peer nodes that serve as the hub for communication among network parts, as shown in Figure 1. The suggested model's performance was tested in terms of system throughput, send rate, latency, and resource usage (memory, CPU, network). The scope of the investigation was expanded to examine the latency and scalability of different transaction loads, transaction duration (TPS), and asset batch sizes. The benchmark involves evaluating 'getAssetsFromBatch' gateway transactions for the fixed-asset smart contract; the endorsement policy and the network are implemented with LevelDB and CouchDB state databases. Fabric supports two alternatives for a key-value store, CouchDB and LevelDB, to maintain the current state. Both are key-value stores; while LevelDB is an embedded database, CouchDB uses a client-server model (accessed using a REST API over secure HTTP) and supports a document/JSON data model. Each transaction obtains a collection of assets from the world state database, comprising a random selection of available UUIDs (Universally Unique Identifiers). In the following rounds, the batch size of assets acquired from the world state database is increased under a fixed load. The measurements were carried out using a command-line interface (CLI) by configuring the Caliper benchmarking tool with the benchmark workspace, network module, and workload to monitor the system's performance. The test was carried out by simulating a specific transaction load through Org1 "User A", Org2 "User B" and IoT (Org3). The edge server saved the identities of all connected nodes and authenticated them inside a trustworthy Hyperledger Fabric environment by applying the mutual authentication mechanism. The suggested model's performance was evaluated for a variety of workloads and environmental conditions. Furthermore, a diverse set of interaction performances was observed to investigate the improvement or deterioration induced by different model parameters and setups.
We evaluated Hyperledger Fabric v2.3.0; benchmarking, real-time data reporting, and resource consumption statistics were gathered and monitored. The following steps provide examples of various functions through Hyperledger infrastructure configuration and network performance benchmarking: 1. Bring up the test network and create the channel; 2. Package and install the smart contract; 3. Approve a chaincode definition; 4. Commit the chaincode definition to the channel; 5. Invoke the chaincode; and 6. Run the Caliper benchmark and get the network performance report by monitoring IoT network latency, send rate and throughput, as shown in Figure 4.
Fig. 4: Running the Caliper benchmark and obtaining the performance report for the IoT network
## V Evaluation The Hyperledger Fabric IoT architecture can provide several benefits, such as improved security, scalability, and transparency. It can also enable the development of new business models and services that leverage the data collected from IoT devices. However, implementing this architecture requires a deep understanding of both blockchain and IoT technologies, as well as experience in integrating and deploying these technologies in a real-world setting. IoT devices generate various types of data, most of which is unstructured. For instance, cameras capture images and videos, microphones record external sounds, and sensors detect physical signals like gas, temperature and humidity, all of which are converted into digital data. These real-time data cannot be directly stored in relational databases; they must be pushed promptly to authorised users. Voice and video data are streamed, encoded, and sent to the cloud server via WiFi or 4G, resulting in the generation of a resource URL that users can use to access the data. On the other hand, for sensor data, the device sends the data to a topic through an MQTT-based service or other protocols, and the server pushes the message to the client after authorising it through the Hyperledger Fabric network. This kind of data is mainly used to control IoT devices and perform various operations. Clients can send requests to the server through a RESTful API based on HTTP(S), and the server can send control signals back to the device through MQTT or other protocols: * Data privacy preservation. Data privacy and data trading are crucial in today's digital landscape. Blockchain technologies, such as Hyperledger Fabric, enhance data privacy by securely storing and sharing information in a decentralised and immutable manner. Access control features in Hyperledger Fabric ensure that only authorised parties can access and modify data and smart contracts. In our proposed model, Hyperledger Fabric channels facilitate secure data trading between broker users with granular access control, granting specific permissions to individual users or organisations. This empowers organisations to protect sensitive information and securely trade data. * Data integrity. Ensuring data integrity is crucial for industrial safety, with goals of assurance, completeness, consistency, and dependability throughout the data life cycle. Blockchain's decentralised system, using hashed blocks, guarantees data integrity and mitigates challenges from cloud services connected to IoT devices. Configuring IoT devices as direct blockchain nodes enhances data reliability, removing human intervention and external system reliance. This approach reinforces the integrity of industrial IoT data, promoting a secure and trustworthy environment.
* Scalability, Throughput and Latency. Hyperledger Fabric's Execute-Order-Validate method, which separates transaction execution and ordering, is a key advantage. This separation boosts scalability, enhances performance, and reduces node workload. Unlike other blockchain designs, Fabric's approach introduces parallel transaction processing, addressing smart contract non-determinism. This results in higher throughput and lower latency, creating an efficient and high-performance blockchain ecosystem. In summary, this approach promotes privacy, trust, scalability, and access control in secure IoT data systems, setting the stage for secure information exchange while maintaining privacy and trust. ## VI Conclusion The PASSION approach defines a permissioned approach to access control for segmented devices and identity for IoT networks, and allows an entity to carefully control the usage of devices and data within a trusted infrastructure. As part of our future work, we plan to expand the IoT network, evaluate hardware capabilities, and explore innovative consensus methods and scalability for improved data processing and responsiveness during safety-critical events.
2305.08978
An assessment of measuring local levels of homelessness through proxy social media signals
Recent studies suggest social media activity can function as a proxy for measures of state-level public health, detectable through natural language processing. We present results of our efforts to apply this approach to estimate homelessness at the state level throughout the US during the period 2010-2019 and 2022 using a dataset of roughly 1 million geotagged tweets containing the substring ``homeless.'' Correlations between homelessness-related tweet counts and ranked per capita homelessness volume, but not general-population densities, suggest a relationship between the likelihood of Twitter users to personally encounter or observe homelessness in their everyday lives and their likelihood to communicate about it online. An increase to the log-odds of ``homeless'' appearing in an English-language tweet, as well as an acceleration in the increase in average tweet sentiment, suggest that tweets about homelessness are also affected by trends at the nation-scale. Additionally, changes to the lexical content of tweets over time suggest that reversals to the polarity of national or state-level trends may be detectable through an increase in political or service-sector language over the semantics of charity or direct appeals. An analysis of user account type also revealed changes to Twitter-use patterns by accounts authored by individuals versus entities that may provide an additional signal to confirm changes to homelessness density in a given jurisdiction. While a computational approach to social media analysis may provide a low-cost, real-time dataset rich with information about nationwide and localized impacts of homelessness and homelessness policy, we find that practical issues abound, limiting the potential of social media as a proxy to complement other measures of homelessness.
Yoshi Meke Bird, Sarah E. Grobe, Michael V. Arnold, Sean P. Rogers, Mikaela I. Fudolig, Julia Witte Zimmerman, Christopher M. Danforth, Peter Sheridan Dodds
2023-05-15T19:40:28Z
http://arxiv.org/abs/2305.08978v1
# An assessment of measuring local levels of homelessness through proxy social media signals ###### Abstract Although nearly 600,000 people experience homelessness in the United States every year, efforts to address this public health crisis are limited by the underperformance of standard methods to estimate localized and nationwide homelessness. Recent studies suggest social media activity can function as a proxy for measures of state-level public health, detectable through straightforward applications of natural language processing. We present results of our efforts to apply this approach to estimate homelessness at the state level throughout the US during the period 2010-2019 and 2022 using a dataset of roughly 1 million geotagged tweets containing the substring "homeless." Correlations between homelessness-related tweet counts and ranked per capita homelessness volume, but not general-population densities, suggest a relationship between the likelihood of Twitter users to personally encounter or observe homelessness in their everyday lives and their likelihood to communicate about it online. An increase to the log-odds of the word "homeless" appearing in an English-language tweet, as well as an acceleration in the increase in average tweet sentiment, suggest that tweets about homelessness are also affected by trends at the nation-scale. Additionally, changes to the lexical content of tweets over time suggest that reversals to the polarity of national or state-level trends may be detectable through an increase in political or service-sector language over the semantics of charity or direct appeals. Although tweet sentiment does not correlate to changes in homelessness volume, an analysis of user account type undertaken to explain nationwide sentiment dynamics revealed changes to Twitter-use patterns by accounts authored by individuals versus entities that may provide an additional signal to confirm changes to homelessness density in a given jurisdiction. While a computational approach to social media analysis may provide a low-cost, real-time dataset rich with information about nationwide and localized impacts of homelessness and homelessness policy, we find that practical issues abound, limiting the potential of social media as a proxy to complement other measures of homelessness. ## I Introduction In the United States--the world's wealthiest nation--nearly 600,000 individuals experience at least one night of homelessness every year [1]. One of the most critical social determinants of health, housing insecurity often prevents access to health-supporting resources including medical and sanitary services while at the same time increasing mental and physical stressors correlated with poor health outcomes, including higher rates of illness and death. In the US, a person experiencing homelessness is up to four times more likely to die prematurely and has a life expectancy of only 48 years [2]. For the past five years, official counts of homelessness in the US have been on the rise after years of sustained progress in decreasing the raw count of homeless households nationwide [1]. It is now more important than ever that we develop cost-effective, high-impact, rapid-turnaround interventions that are sustainable, both from a collective resources perspective and from the perspective of households moving through the service delivery system.
Efforts to address homelessness, however, have long been troubled by our collective underperformance in measuring the extent of the problem and identifying appropriate geographic allocation of resources and interventions. Since the 1970s, federal policy bodies such as the US Department of Housing and Urban Development (HUD), the US Department of Agriculture (USDA), and the US Census Bureau have attempted to develop and refine efforts to count the homeless with the stated policy goal of making homelessness "rare, brief, and non-recurring" [3]. The Point in Time (PiT) count, a national effort conducted and reported annually since 2007 by regional US service coordinating entities called Continua of Care (CoCs) in partnership with outreach volunteers, has been the federal standard for estimation of homelessness distribution [4]. The Point in Time count, however, has come under increasing scrutiny, criticized for a lack of uniformity in data collection and estimation techniques, poor design, coarse geographic scales, and coarse time scales, which make meaningful research and analysis nearly impossible [4; 5]. In response, advocates within the homelessness services world increasingly call for new methods to generate more reliable, finely-grained estimates of homelessness distributions and system flow dynamics. Accordingly, in our present study, we aim to determine whether social media analytics could serve as a comparable point-in-time and/or dynamic proxy measurement of homelessness at the state level that could provide a finer temporal resolution than the PiT's annual timeframe. ### Literature Review Recent research on homelessness and social media use has applied sentiment and content analysis to posts on homelessness from homeless versus non-homeless bloggers. In "Characterizing Homeless Discourse on Social Media," researchers reviewed Tumblr posts with hashtags #homeless, #homelessness, #poverty, #begging, and #homelessshelter, dividing users and posts into homeless versus non-homeless users and posts. Using word2vec embeddings and Latent Dirichlet Allocation (LDA) algorithms to cluster posts by semantic similarity, the authors found that content by homeless users had a more negative sentiment, used more first-person pronouns, and tended to chronicle the challenges they were experiencing in homelessness. By contrast, relevant posts by non-homeless users tended to use more abstract, depersonalized words about advocacy, acts of kindness, or the news [6]. Other research has focused on user-role identification and network analysis of homeless users' social media accounts. For example, Koepfler _et al._'s analysis of @WeAreVisible, a Twitter account encouraging content visibility by and for people experiencing homelessness, identified eleven primary user roles and analyzed the structure of the connections between and within user types [7]. The majority of homeless group participants belonged to a densely-connected cluster of users comprising around 10% of all members. The authors found that, despite the community's stated purpose, only about 4% of users in the account's network self-identified as homeless, with over half the network being tagged as "social media enthusiasts" or "do-gooders" [8]. People experiencing homelessness tended to be well-connected with one another and connected, though to a lesser extent, with other user types [9].
Koepfler _et al._ also sought to distinguish values characterizing homeless versus non-homeless users on the basis of their tweet content, finding that homeless and formerly-homeless users tended to express, among other values, helpfulness, broadmindedness, justice, equality, responsibility, and freedom more often than never-homeless users. Our concern is both, broadly, the analysis of volume, sentiment, and content of homelessness-related Twitter use and, more narrowly, its relationship to observed measures of homelessness rates within US-geotagged jurisdictions. Prior research has established correlations of tweet content, location, and sentiment with other public health metrics. For example, co-authors developed the "Lexicocalorimeter", a tool ranking states by caloric intake and output estimates assigned to US-geotagged tweets containing food, drink, and activity names. Through comparison between state-level tweet content, a range of strong, common-sense relationships emerged between aggregate caloric scores and an assortment of health and well-being indicators, such as life expectancy, obesity rates, and mental health challenges [10]. Similarly, researchers have demonstrated strong correlations between state-level expressions of happiness, measured via tweet sentiment, and well-established but resource-intensive survey measures of happiness like the Gallup well-being index [11]. Importantly, research has also suggested the capacity of social media sentiment and content analysis to serve as a real-time public health surveillance tool through geolocation analysis of tweets containing symptom-descriptive words, a use-case of which was the monitoring of intra-urban spread of dengue fever in Indonesia [12; 13]. Thus, we were motivated to perform a similar analysis of Twitter data to determine whether or not the platform could be harnessed to solve the seemingly-intractable problem of estimating homelessness--both comparative point-in-time levels across states and national and state-level trends over time. ## II Data **Tweets:** Using the Twitter API v2.0, which granted researchers full-archive access to, among other data, tweet and user account information, we queried all English-language tweets containing the word "homeless" that also contained geotag bounding box and other metadata indicating that they had originated from a location within the United States between the dates March 1, 2010, and December 31, 2022 [14]. The fields we accessed included: * _id:_ A unique numeric identifier for each tweet. * _text:_ A string of words, numbers, characters, and emojis comprising the published content. * _created_at:_ Date and time stamp of the tweet's publication. * _author_id:_ A unique numeric identifier for the user who posted the tweet. * _author:_ A dictionary of descriptive attributes of the user, including user-generated, self-descriptive text and the username. * _geo:_ A dictionary of geolocation information, including country of origin, bounding box latitude/longitude coordinates surrounding the user's exact location at time of posting, a "full_name" string descriptor of the location, and the place type corresponding to the named location (e.g., "city"). We supplemented our dataset using the Storywrangler platform [15], accessing log-odds ratio data for words (1-grams) across a random sample of 10% of all English-language tweets during the same time period.
The log-odds measure counts the number of English-language 1-grams containing the substring "homeless" as a fraction of all English-language 1-grams, transformed using the base-10 logarithm for clearer comparison across time steps. Data accessed by the Storywrangler site was made available to the University of Vermont via Gnip's Decahose application programming interface (API), an enterprise endpoint and streaming connection to Twitter data that has greatly enabled social media research and statistical analysis on a representative and robust sample of tweets. **Homelessness data:** From data.hud.gov, we downloaded the 2007-2019 and 2022 Point in Time Estimates by CoC (Continuum of Care), a collection of annual reported counts of homeless individuals and families across the US reflecting a single 24-hour window in the last week of January [16]. Over time, the PiT dataset has evolved from just 28 questions in 2008 to 574 in 2022. In order to compare data across years, we selected the variable "overall homeless", an aggregate count of all sheltered and unsheltered adults and children identified as homeless. Initially, we hypothesized that sentiment and tweet volume would correlate most strongly with unsheltered homelessness, the visibility of which can trigger strong emotions. However, unsheltered homeless counts were characterized by much narrower variance, which could make correlations harder to discern, and relationships appeared stronger between total homelessness and other variables. Accordingly, our results focus exclusively on the overall homeless variable common to all years of the PiT count dataset. **Census data:** To estimate annual per capita rates of homelessness and Twitter activity, we used the US Census Bureau's American Community Survey One-Year Estimates [17]. Within this dataset, we used only total state population estimates within each given year. To estimate the population density of homelessness (number of persons experiencing homelessness per square land-mile), we used the Land Area estimates for each US state as published by the Census Bureau in its State Area Measurements and Internal Point Coordinates dataset [18]. ### Pre-processing Our dataset of geotagged US tweets contained 923,385 messages posted from March 1, 2010, through December 31, 2022, featuring a "geo" field comprising a dictionary of information about the specific location from which the tweet was published. We used information provided in the 'geo' field to identify the state where the tweet was originally posted [19], and where the value was ambiguous (e.g., Starbucks), we applied GeoPy's Nominatim object and geolocator.reverse() function to the latitude and longitude coordinates, if available [20]. Among our tweets, only 19,461 were unable to be precisely state-labeled, or 2.1% of the dataset. We then added several fields, including "state-year," for ease of filtering. Tweets were included in our analysis if they were geolocated to one of the fifty US states. We excluded tweets from the District of Columbia and US territories such as Guam or Puerto Rico. For tweet text analysis, we next removed from the "text" field emoji, URLs, usernames (as indicated by an asperand symbol), and stop words that add no semantic meaning and, due to their frequency, dilute composite sentiment toward a neutral score (e.g., "the" and "of"), retaining only blocks of alphanumeric characters separated by spaces into single-word tokens.
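A minimal Python sketch of this kind of pre-processing (the user agent, stop-word subset, and function names are ours, and the study's actual code may differ) could look as follows:

```python
import re
import time
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="homelessness-tweet-study")  # user agent is ours

def state_from_coords(lat: float, lon: float) -> str | None:
    """Resolve an ambiguous place name to a US state via reverse geocoding."""
    location = geolocator.reverse((lat, lon), exactly_one=True)
    time.sleep(1)  # respect Nominatim's usage policy of <= 1 request per second
    address = location.raw.get("address", {}) if location else {}
    return address.get("state")

STOPWORDS = {"the", "of", "a", "an", "and", "to", "in"}  # illustrative subset

def clean_tweet(text: str) -> list[str]:
    """Strip URLs, @usernames, emoji, and non-alphanumeric characters,
    then drop stop words, retaining single-word alphanumeric tokens."""
    text = re.sub(r"https?://\S+", " ", text)   # URLs
    text = re.sub(r"@\w+", " ", text)           # usernames
    tokens = re.findall(r"[A-Za-z0-9]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]
```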
Time stamps were converted from the API's Coordinated Universal Time (UTC) measurement to the local time zone at the point of tweeting. Additionally, any hashtags in camel case were separated into their individual words (e.g., the hashtag #HomelessnessFirst would be converted into the string "homelessness first" while #Homelessnessfirst would not be affected). Finally, we processed each dataset by transforming each state's raw counts to per capita counts and per square mile densities based on the annual census population estimate for each state (see Figures 2 and 3). We then calculated annualized changes to the per capita tweet and homelessness counts by state-year and log-transformed each as appropriate.
Figure 1: **Raw counts of all US-geotagged tweets containing "homeless" by year.** Geolocation functionality first became available in November 2009 but was only slowly rolled out through the first months of 2010 to broaden its potential scope to all users and tweets, resulting in lower counts during the early years of available data.
## III Methods In our research, we consider the potential of real-time analysis of volume, content, and sentiment of homelessness-related social media communication to serve as a proxy variable estimating local homelessness rates or trends at the state level in the US. We initially hypothesized that localized changes in homelessness rates would exhibit a positive association with the volume of tweets generated within a given state, while sentiment expressed in tweets would be negatively correlated with changes in homelessness rates. Our proposed mechanism for these associations was that an increase in real-life, public encounters between the general public and individuals experiencing homelessness would result in an increase in online expressions of frustration and stigma, either against the structures that cause homelessness or toward the homeless themselves. However, social media users' abilities to perceive changes to homelessness are heterogeneous, highly variable, and context-dependent, unlike more ubiquitous phenomena previously explored, such as the sleep loss insult of daylight saving time [21], happiness [11], or caloric intake/output [10]. Accordingly, we needed to consider a range of possible variables, measures, and relationships. Testing for parametric (\(r\)) and nonparametric rank-rank (\(\rho\)) correlations (see Sec. IV.1), as well as time series cross-correlations (see Sec. IV.1.4), we analyzed relationships between and among: * Reported counts of homeless individuals, * Tweet volume, * Tweet sentiment, and * Tweet content Each variable was measured: * As raw, per capita, and ranked counts, per square land-mile densities, and annualized changes thereof, * As percentages of total nationwide measurement in a given year, * Aggregated by year across all states, by state across all years, and by state by year ("state-year"), and * With each measure additionally log-transformed as useful or appropriate. We also compared time series data illustrating the log-odds of the substring "homeless" appearing in an English-language tweet with nationwide homeless counts (see Sec. IV.1.3). Finally, we drew on techniques specific to the domain of natural language processing--sentiment analysis and allotaxonometry--to better understand the dynamics of sentiment and content expressed in homelessness-related tweets over time and at various scales (see Sections IV.2 and IV.3.2).
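As a sketch of the ranked-correlation tests described above (in Python; the column names are assumed, and SciPy's `spearmanr` handles the ranking internally):

```python
import pandas as pd
from scipy.stats import spearmanr

def yearly_rank_correlation(df: pd.DataFrame, year: int):
    """Spearman rho between per capita homelessness-related tweet rates and
    homeless-population densities across the fifty states for a single year.

    `df` is assumed to hold one row per state-year, with columns
    'year', 'tweets_per_capita', and 'homeless_per_sq_mile'."""
    d = df[df["year"] == year]
    rho, p_value = spearmanr(d["tweets_per_capita"], d["homeless_per_sq_mile"])
    return rho, p_value
```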
## IV Results

### Homelessness & Tweet Volume

#### iv.1.1 Homelessness volume through time: Counts & densities

Although homelessness counts nationwide have been on the rise since 2017 following nearly a decade of annual decreases, mean and median per capita homelessness rates by state, as well as the maximum per capita rate nationwide, actually declined from 2012-2019 (see Figure 2). Moreover, while per capita homelessness counts are characterized by a consistently small annual variance, on the order of 0.000001, the variance of estimated homelessness density is much higher, with a mean variance across states of 0.24 people experiencing homelessness per square land-mile. Nevertheless, there is much higher rank-turbulence [22] among per capita state homelessness counts than among per square mile densities: only two states were consistently ranked in the bottom ten states for per capita counts, whereas all ten of the bottom ten states for density remained the same across all years of the data. Likewise, eight of the top ten density states were invariant across all years, as compared with only six of the top ten per capita states. The variance of raw state-level tweet counts among all fifty states ranged from 3.06\(\times\)10\({}^{4}\) to 1.22\(\times\)10\({}^{7}\) per year, with state population predictably correlating to a state's raw tweet count across all years.

#### iv.1.2 Correlating homelessness density, counts with tweet counts

We found that the correlation between a state's ranked homelessness density and its ranked per capita homelessness-related Twitter activity is statistically significant across all years of data after 2010 (see Tab. 1). We also observed that ranked per capita homelessness counts correlate with ranked per capita homelessness-related tweet counts among states from 2013-2019 and again in 2022.

We considered the possibility that state population density in general--and not the density only or specifically of people experiencing homelessness--was correlated with per capita tweet rates without regard for topic, which could provide an alternative explanation for the statistical significance of the relationship between homelessness density and tweet counts. We did not find, however, that overall state general-population density (number of persons per square land-mile) correlated at statistically significant levels with homelessness-related Twitter activity volume in any year.

We propose that the likelihood of an average social media user tweeting about homelessness is greater if the probability is comparatively high that they will directly observe changes to their state's overall homeless population. This probability will depend, in part, on the size and population density distribution of their state of residence and the extent to which a raw change in homelessness counts is visible as a proportion of the nationwide count or per capita proportion of a single state's total population. For example, in California, a total homeless count increase of 21,306 between 2018 and 2019--which translates to a change of 0.00054 persons experiencing homelessness per capita and 0.137 per square mile--may be less noticeable than a decrease of just 1,597 in Massachusetts over the same period--a decrease of half as many persons per capita (0.00024) but nearly twice as many per square mile (0.20).
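The within-year rank-rank tests summarized in Table 1 can be sketched as follows, assuming a state-year table like the one above; `state_year_df` and the column names are placeholders.

```python
from scipy.stats import spearmanr

def within_year_rank_correlation(df, year, x="homeless_density", y="tweets_per_capita"):
    """Nonparametric (Spearman) rank-rank correlation across all fifty states
    in a single year, mirroring the tests reported in Table 1."""
    sub = df[df["year"] == year]
    rho, p = spearmanr(sub[x], sub[y])
    return rho, p

# e.g., rho, p = within_year_rank_correlation(state_year_df, 2012)
```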
#### iv.1.3 Tracking nationwide counts, log-odds of homelessness-related tweeting

However, the salience of homelessness as a topic of interest for an average Twitter user is not only affected by the conditions in their local environment. National trends and the attendant media attention they receive, amplified by real-time distribution available on social media, may also affect the likelihood of the 1-gram "homeless" appearing in an English-language tweet. For example, as US trends shifted from decreasing to increasing nationwide counts, the log-odds of "homeless" appearing in a tweet, which had increased on average \(6.11\times 10^{-7}\) units per year (from \(1.00\times 10^{-5}\) to \(1.43\times 10^{-5}\)) over the seven-year period 2010-2016, increased at nearly six times that rate between 2017 and 2019 (\(3.65\times 10^{-6}\) per year) [15]; see Fig. 5. This dramatic increase in homelessness-related tweets as a proportion of all Twitter content as US homelessness increased, paired with a similar acceleration in the increase to homelessness-related tweet positivity (see Sec. IV.2), strongly suggests that homelessness-related Twitter activity is affected not only by the local density experienced by a given Twitter user, but also by that user's contextualization of homelessness within broader national trends.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \multicolumn{12}{c}{**State-level general-population densities, per capita tweet rates**} \\ \hline & 2010 & 2011 & 2012 & 2013 & 2014 & 2015 & 2016 & 2017 & 2018 & 2019 & 2022 \\ \hline Spearman \(\rho\) & -0.071 & -0.019 & 0.103 & 0.066 & 0.054 & 0.083 & -0.064 & -0.097 & -0.015 & -0.064 & -0.143 \\ \(p\)-value & 0.619 & 0.896 & 0.473 & 0.645 & 0.707 & 0.563 & 0.653 & 0.497 & 0.914 & 0.658 & 0.318 \\ \hline \multicolumn{12}{c}{**State-level homeless-population densities, per capita tweet rates**} \\ \hline & 2010 & 2011 & 2012 & 2013 & 2014 & 2015 & 2016 & 2017 & 2018 & 2019 & 2022 \\ \hline Spearman \(\rho\) & 0.060 & 0.672 & 0.822 & 0.669 & 0.689 & 0.634 & 0.638 & 0.613 & 0.685 & 0.52 & 0.452 \\ \(p\)-value & 0.677 & 6.64e-8 & 1.46e-13 & 7.93e-8 & 2.13e-8 & 6.00e-7 & 4.66e-7 & 1.77e-6 & 2.87e-8 & 9.26e-5 & 8.59e-4 \\ \hline \end{tabular} \end{table} Table 1: **Spearman \(\rho\) correlation between population density and homelessness-related tweet rates.** Nonparametric, ranked correlations were found to be statistically significant with respect to homelessness densities (bottom) but not general-population densities (top).

Figure 2: **Computation of point-in-time cross-state comparative data.** To re-scale measurements across states of different size, we divided each state's homelessness counts or tweet counts by a scaling factor: each state's estimated population in the relevant year (top and bottom) or estimated land area in square miles (center). Within-year correlation coefficients show us that states with highly-ranked per capita and per square land-mile homelessness counts tend to have highly-ranked per capita homelessness-related tweet counts. The re-scaling helps us compare large and small states with the same statistical test such that state size does not account for observed correlations. See Fig. D for more details.
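A sketch of the log-odds measure and its average annual change, under the definition given in the Data section, is shown below; the regression-slope estimate is one reasonable way to compute a per-year rate and may differ from our exact procedure.

```python
import numpy as np

def log_odds(ngram_count, total_count):
    """Base-10 log of the fraction of English-language 1-grams that contain
    the substring 'homeless' (Storywrangler-style definition)."""
    return np.log10(ngram_count / total_count)

def mean_annual_change(years, values):
    """Least-squares slope: average change in the measure per year."""
    slope, _intercept = np.polyfit(years, values, 1)
    return slope

# e.g., compare mean_annual_change over 2010-2016 versus 2017-2019
```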
#### iv.1.4 Within-state time series correlations

We next sought to determine whether fluctuations within a single state across time might correlate to changes in the per capita homelessness-related tweet count within that state. While in-year changes to per capita homelessness counts (see Fig. 2) did not correlate across states to per capita tweet counts, nineteen states exhibited statistically significant nonparametric rank-rank correlations between some measure of localized homelessness (count, density, or change since the prior year) and Twitter volume (count or change) across all eleven years of available data (see Fig. 3). Among these states, five (California, Florida, Delaware, Massachusetts, and New York) were both at or above the 75\({}^{\text{th}}\) percentile of per capita homelessness densities across all ten years and also ranked in the top ten states with respect to variance in homelessness rate or density, indicating both high point-in-time measures of homelessness and longitudinal instability. An additional state (North Dakota) ranked within the top ten states with respect to variance only, likely because of low volume; one more (Washington) was consistently at or above the 75\({}^{\text{th}}\) percentile of per capita homelessness rates.

Figure 3: **Computation of within-state time series data.** We performed re-scaling as described at Fig. 2 and then performed correlation tests to determine whether, within a single state's ten years of data, higher-ranked homeless counts resulted in higher-ranked tweet counts. This might suggest that Twitter users were aware of, and responding to, larger or smaller homeless counts within their state from year to year.

#### iv.1.5 Within-state time series cross-correlations

We also examined whether time series data for states' homelessness counts exhibited cross-correlation with tweet counts after some period of lag, indicating, for example, that an increase in per capita homelessness would correlate to an increase in per capita tweets over the following \(t\) years. Accordingly, we constructed two sets of time series data for each of the 50 states, then calculated the cross-correlation coefficients for up to 9 years of lag. Fig. 10 visualizes the distribution of cross-correlation measures across all states throughout the entire time period. As expected, cross-correlations between point-in-time homelessness and tweet counts indicate that tweet volume likely reflects observations that are recent in time, i.e., within the last 1 to 2 years. The strongest cross-correlation relationships between per capita total homelessness and tweet counts were at time steps \(t=1\) and \(t=4\), with the Southern states Louisiana, Florida, South Carolina, and Virginia exhibiting strong anti-correlations and New York and Massachusetts strong positive correlations (see Tab. 2 and Fig. 10). States with cross-correlation coefficients closest to 0 at \(t=1\), on the other hand, included Arkansas, Pennsylvania, Maine, and California.
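The lagged cross-correlation computation can be sketched as follows; this simple z-scored Pearson variant is an assumption about implementation details, not a transcript of our code.

```python
import numpy as np

def lagged_cross_correlation(x, y, max_lag=9):
    """Correlation between x[t] and y[t + lag] for lag = 0..max_lag, so that a
    coefficient near 1 means y rises and falls in sync with x after `lag` years."""
    x = (np.asarray(x, float) - np.mean(x)) / np.std(x)
    y = (np.asarray(y, float) - np.mean(y)) / np.std(y)
    out = {}
    for lag in range(max_lag + 1):
        if len(x) - lag < 2:
            break
        out[lag] = float(np.corrcoef(x[: len(x) - lag], y[lag:])[0, 1])
    return out

# e.g., lagged_cross_correlation(per_capita_homeless_ts, per_capita_tweets_ts)
```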
Figure 4: **A. Raw nationwide counts and B. per capita state homelessness rate distributions, as measured in the 2010-2019 and 2022 PiTs.** Nationwide US counts of total homelessness are on the rise since the 2017 Point-in-Time Count. Roughly 75% of the per capita counts for all fifty US states in each year falls below 0.002 people experiencing homelessness per capita (or 2 for every 1,000), though a heavy-tailed distribution across all years reveals a number of states in every year experiencing a per capita count at least twice as high.

Figure 5: **A. Log-probability of the 1-gram "homeless" appearing in English-language tweets and B. BEAST trend changepoint detection visualization for month-scale sentiment data.** Drawing from a random sample of 10% of all English-language tweets, storywrangler.org provides statistics on the rank and frequency of \(n\)-grams, available at various timescales. The light blue line in the upper plot represents daily probabilities, while the dark blue represents monthly averages in the period 1/1/2010-12/31/2022. As with the tweet sentiment time series, the Rbeast package identifies time step 96 of the monthly average log-odds data, or January 2018, as a potential changepoint with an associated probability of 96.5%.

We also constructed time series for changes to per capita homelessness counts and changes to per capita tweeting. In these time series, an increase from time step \(t-1\) to time step \(t\) represents not just an increase in the overall count of homeless population per capita, but also an increase in the magnitude of population density _growth_ in two successive years. Interestingly, cross-correlation coefficients tended to be much lower for most states at the upper end of the distribution when assessing the relationship between changes to per capita homelessness rates and changes to per capita tweeting. Only two states had a cross-correlation greater than 0.75 in magnitude across any potential lag period: Connecticut (0.825 at \(t=1\)) and Massachusetts (0.796 at \(t=1\)). By contrast, geographically large states with relatively low density rankings tended to be closer to zero among the distribution of cross-correlation scores between changes in homelessness rate and changes in tweets, including Colorado, with density rankings in the range [25, 31]; Texas, [26, 31]; Utah, [40, 42]; Washington, [11, 14]; Alabama, [31, 35]; and Wyoming, [48, 49]. The strength of these two relationships, paired with Massachusetts' outlier status among nationwide homelessness density scores (consistently ranked \(1^{\text{st}}\), with Connecticut ranked from \(6^{\text{th}}\) to \(8^{\text{th}}\)), may indicate that changes to per capita tweet rates are driven by the greater likelihood of an escalation of population growth being more immediately visible to the average Twitter user in smaller, high-density states.

### Homelessness & Tweet Sentiment

#### iv.2.1 Sentiment analysis overview

Sentiment analysis is a tool widely adopted to gauge the relative positivity or negativity of a text given the sentiment expressed by its constituent pieces, or types (e.g., unique words), weighted by the frequency of each type's appearance (tokens). Though methods to compute a text's composite score vary, one approach has been to rely on sentiment measurements of a set of curated words in frequent use across diverse corpora (i.e., the "lexicon"), as averaged across scores assigned by trained reviewers who are familiar with the words and their common contexts and uses. This "bag-of-words" approach references the score for each unique word in a text that is represented in the lexicon and takes the sum of those scores weighted by the proportion of the text that is represented by each word.
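A minimal sketch of this weighted bag-of-words scoring appears below. The neutral-band exclusion and the rescaling of the 1-9 labMT range to \([-1,1]\) anticipate details described in the next paragraph; `labmt_scores` stands in for a word-to-score dictionary loaded from the published labMT dataset.

```python
def compound_sentiment(tokens, labmt_scores, neutral=(4.5, 5.5)):
    """Frequency-weighted mean labMT happiness score for a corpus, excluding
    near-neutral words, rescaled from the 1-9 labMT range to [-1, 1] via
    h -> (h - 5) / 4 (consistent with the 5.9 -> 0.225 correspondence below)."""
    scored = [labmt_scores[t] for t in tokens
              if t in labmt_scores
              and not (neutral[0] <= labmt_scores[t] <= neutral[1])]
    if not scored:
        return 0.0
    h_avg = sum(scored) / len(scored)  # token-weighted, since repeated words stay in the list
    return (h_avg - 5.0) / 4.0
```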
Though words may admittedly carry different sentiments to different individuals and across different contexts, several factors contribute to the reliability of this approach for texts, or corpora, of sufficient size: for example, the exclusion from lexicons of some words that are too context-dependent to be reliably scored (e.g., some curse words), the generally-accepted requirement that corpora must contain sufficient words to counterbalance any uncommon use of a small number of types, and the averaging of scores across many reviewers (see Refs. [23, 24] for more detail). For each state, year, month, and state-year, we aggregated all geotagged tweets containing "homeless" into a single string, discarded words with a neutral score in the range of \([4.5,5.5]\) [25], then used the Language assessment by Mechanical Turk (labMT) dictionary to determine compound sentiment scores of large tweet subgroupings (e.g., all tweets geolocated to California in a single month or year), which we then rescaled to a range of possible values of -1 to 1 [23, 24].

\begin{table} \begin{tabular}{|c|c|c|} \hline State & Coef & \(t\) \\ \hline LA & -0.923 & 1 \\ FL & -0.917 & 1 \\ TX & -0.915 & 1 \\ SC & -0.913 & 4 \\ VA & -0.912 & 1 \\ NY & 0.906 & 1 \\ OK & -0.893 & 1 \\ KY & -0.865 & 1 \\ MA & 0.852 & 1 \\ CO & -0.848 & 1 \\ \hline \end{tabular} \end{table} Table 2: **Strongest cross-correlation coefficients between per capita homelessness and tweet rates.**

Figure 6: **Nationwide US homeless counts and log-odds of the 1-gram "homeless" appearing in an English-language tweet, annual scale.** Here, log-odds of the substring "homeless" appearing in a US-geotagged, English-language tweet were calculated for each month of the period 1/1/2010-12/31/2022 and are compared to nationwide raw homeless counts. Note the scaling of the double y-axes. See also Appendix B.

#### iv.2.2 Baseline nationwide versus homelessness-related sentiment

While nearly all corpora exhibited a higher-than-average sentiment score, as predicted by the positivity bias observed by Dodds _et al._, the overall sentiment of tweets containing "homeless" was nevertheless lower than the sentiment of overall English-speaking Twitter during the same period and also lower than the sentiment of all US Twitter as documented at hedonometer.org and [https://hedonometer.org/maps.html](https://hedonometer.org/maps.html), which almost never fell below 5.9 during the period 2010-2019, rescaled to 0.225 under our calculations (see Fig. 12).

Figure 7: **Distribution of homelessness and tweet volume by state.** A. Distributions include per capita tweet rate, B. per capita homelessness rate, and C. per square land-mile homelessness density. Each box-and-whiskers plot displays the distribution of a single state's data across the years 2010-2019 and in 2022.

For the three years during which the Hedonometer provides an average sentiment score for all US-geotagged tweets (2011, 2012, and 2013), the sentiment of US-geotagged tweets containing "homeless" falls well below the reference (see Tab. 3) [27].

#### iv.2.3 Changepoint detection for national sentiment

The sentiment of US tweets containing "homeless" increased steadily throughout the years 2010-2019 at the same time that the odds of usage for our target \(n\)-gram within US-geotagged Twitter increased.
After 2017, the increase in odds and sentiment scores accelerated as the US PiT count documented a change from a nearly decade-long downward trend in overall homeless counts to what has since become 6 years of increasing homelessness (see Figs. 11 and 5). Between May and June 2017, all fifty states had published the results of their 2016 Point-in-Time Counts, and in December 2017, HUD released Part I of its Annual Homelessness Assessment Report (AHAR) to Congress, where it reported the nationwide increase for the first time [28]. Using Rbeast, a Python-compatible Bayesian estimator of abrupt change and seasonality, the trend changepoint detection function estimates a 99.7% probability of a change in sentiment trend at time step 95, or December 2017, the end of three successive months of increase in compound sentiment score [29]. Indeed, the Spearman \(\rho\) correlation coefficient between ranked raw national homelessness counts and ranked nationwide sentiment scores is statistically significant (\(\rho=-0.68\) at \(p=0.02\)). However, this correlation between sentiment and homelessness counts is significant only at the national scale; there is no year in which the correlation coefficient between ranked per capita state-level homelessness counts and ranked state-level compound sentiment scores is statistically significant. Variance across the compound sentiment scores of all 50 states, in fact, was extremely low--less than 0.001 for every year since 2011, and only 0.007 in 2010.

The increase in nationwide sentiment score is driven both by a decrease in the number of states with negative scores and also by an increase in the magnitude of positive scores among positive-sentiment states. In 2010, roughly half (23, or 46%) of the states tweeting about homelessness scored below zero on average, with a range of \([-0.4,0]\), and no states' scores ranged higher than 0.09. After 2013, no state's mean sentiment score is below zero, with the exception of two states in 2015 [30]. By 2019, the annualized compound sentiment score was greater than 0.1 for roughly two-thirds of all states.

Curious about the mechanism driving the overall positive trend in the sentiment expressed by tweets containing "homeless" even as homelessness increased nationwide, we speculated that either (a) more frequent contact with homelessness could be eliciting among average Twitter users a sense of sympathy, rather than stigma, toward the homeless, or (b) any signal of negative sentiment associated with increased homelessness generated by individual Twitter users might be drowned out by an increase in, for example, appeals and marketing communication by homeless-serving organizations. To investigate further, we began an analysis of the relationship of homelessness rates to tweet content and user type. If, as we hypothesized, homelessness advocacy, service, or policy bodies were responsible for content sentiment change over time, perhaps the proportion of sector-specific language or user accounts could further refine proxy estimates of homelessness rates across the 50 states.

Figure 8: **Per capita tweet rate distributions by year, all US states.** Each box-and-whiskers plot represents the distribution of per capita tweet rates across all fifty US states in a single year.
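The changepoint analysis above can be reproduced along the following lines with the Rbeast package; the call signature shown follows the package's documented Python interface as we understand it, and `y` is a random placeholder for the monthly sentiment series.

```python
import numpy as np
import Rbeast as rb  # pip install Rbeast

y = np.random.randn(156)  # placeholder for 156 monthly sentiment scores, 2010-2022
out = rb.beast(y, start=2010, deltat=1 / 12, season='none')  # trend-only model
rb.print(out)  # lists candidate trend changepoints and their occurrence probabilities
```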
\begin{table} \begin{tabular}{|c|c|c|c|} \hline Year & Average US Sentiment & Rescaled & Sample Average Sentiment \\ \hline 2011 & 5.98 & 0.245 & 0.032 \\ 2012 & 5.92 & 0.223 & 0.039 \\ 2013 & 5.88 & 0.220 & 0.053 \\ \hline \end{tabular} \end{table} Table 3: **Comparison of average sentiment of all US tweets versus tweets containing "homeless".** Average sentiment of a random sample of 10% of all English-language tweets from 2011-2013 was filtered for US-geotagged tweets only and published at hedonometer.org, then rescaled to a range of -1 to 1 for comparison to the average sentiment scores for tweets in the sample of all US-geotagged tweets containing "homeless".

### Homelessness & tweet content

#### iv.3.1 Corpus curation

To determine whether rates of homelessness may be distinguishable according to the content of tweets originating from each state, we compared language used in tweets originating from states consistently reporting among the lowest rates of homelessness versus tweets from states with the highest rates of homelessness. This initial analysis narrowly defined high-homelessness states as the six jurisdictions within the top ten rates across all years (California, Washington, Oregon, Nevada, New York, and Hawaii). High-density states included California, Hawaii, and New York as well, in addition to Connecticut, Massachusetts, Maryland, New Jersey, and Rhode Island. At the bottom of each scale, only Kansas and Mississippi ranked in the bottom ten per capita homelessness rates across all years of available data, as contrasted with Alaska, Idaho, Kansas, Montana, North Dakota, Nebraska, New Mexico, South Dakota, Utah, and Wyoming, ten states that remained at the bottom of density rankings consistently. We aggregated all tweets across all years from each cluster of states (i.e., high-homelessness versus low-homelessness, high-density versus low-density), then ranked the relative frequency of each word by the magnitude of the difference between the two corpora. The top 50 words occurring more frequently in either corpus relative to the other can be found in Appendix B.

#### iv.3.2 Sector-specificity and high-homelessness states

Perhaps not surprisingly, high-homelessness states, when tweeting content containing the word "homeless", had higher proportions of words common to housing and homelessness policy and services. As seen in Appendix B, ten industry-specific terms were more common to high-homelessness states, such as "housing," "employed," "preventing," "encampment," and "youth." In contrast, words more commonly associated with low-homelessness states were more likely to be generic words (e.g., "much," "think," "possible," "want") or words likely to be associated with crowdsourced and peer-to-peer fundraising ("donate," "paypal," "venmo"), rather than policy solutions.

Figure 9: **Comparison of strongly correlated (A. New York, C. Massachusetts) and anti-correlated (B. Louisiana, D. South Carolina) time series of per capita homelessness and tweet rates.** Note that, while many Southern states see anti-cross-correlation between homelessness and tweet rates due to monotonically decreasing rates of homelessness offset by one year from primarily-increasing rates of tweeting, South Carolina's anti-correlation may be an artifact of the annual oscillation of its rates.
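The corpus comparison just described--ranking words by the gap in their relative frequencies--can be sketched as follows; token lists for the two state clusters are assumed as input.

```python
from collections import Counter

def frequency_gap(tokens_a, tokens_b, top_n=50):
    """Rank words by the difference in relative frequency between two corpora."""
    fa, fb = Counter(tokens_a), Counter(tokens_b)
    na, nb = sum(fa.values()), sum(fb.values())
    gap = {w: fa[w] / na - fb[w] / nb for w in set(fa) | set(fb)}
    more_a = sorted(gap, key=gap.get, reverse=True)[:top_n]  # over-represented in A
    more_b = sorted(gap, key=gap.get)[:top_n]                # over-represented in B
    return more_a, more_b
```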
#### iv.3.3 Allotaxonometry overview

Allotaxonometry is the comparison of any two complex systems with internally diverse structures [22]. For any two text corpora, allotaxonometry with rank-turbulence divergence uses the relative frequency of a word in each respective corpus in order to identify which words contribute the most to the difference between the corpora. The large, diamond-shaped diagrams in Fig. 14 and in Appendix A, for example, are each interpreted as a double-histogram in which a word's marker is located along the central dividing line if its rank is equal in both corpora, with the highest-ranked word for both corpora being represented at the apex of the diamond. If a word appears more often in the reference (or left-hand) corpus, it will appear in the left-hand side of the histogram, as its rank will be lower in that corpus than in the right-hand/comparison corpus. The farther a word appears from the central dividing line, the more extreme the difference in its rank between the two corpora. Words that appear in one corpus but not the other are represented in isolated lines along the lower edges of either side [22, 31]. To the right of the allotaxonometric histogram is a word shift diagram, which ranks the words that contribute the most to the difference between the two corpora in decreasing order of importance of contribution [32].

Figure 10: **Distribution of cross-correlation measures between A. per capita total homelessness and tweet volume by state versus B. annualized changes to both.** In the top image, the cross-correlation coefficient of a state approaches 1 at time step \(t\) if its per capita tweet count rises and falls in sync with its per capita homeless count after a lag of \(t\) years. A cross-correlation coefficient of -1 would indicate perfect anti-correlation: one time series rises as the other falls, and vice versa. In the bottom image, we see the cross-correlation coefficients for changes to per capita tweet count and homelessness counts, rather than the counts themselves.

Figure 11: **B. Bayesian trend change point analysis of A. monthly compound sentiment of all US-originating tweets containing "homeless", 2010-2022, overlaid with histogram of magnitude of positive change in successive year, where applicable.** Vertical lines in the lower graphic identify five potential changepoints in sentiment trends detected by the Rbeast package. Note that fluctuations in the early years of the data may reflect greater measurement uncertainty, as total tweet counts prior to 2012 were extremely low. See also Fig. A5 in Appendix B.

#### iv.3.4 Sector-specificity and nationwide trends

We next generated an allotaxonometry diagram comparing tweets from 2010-2015 and from 2016-2022 [33] (see Fig. 14) to see if differences could be captured nationwide between the period of decline in homelessness and the period of nationwide increase. Although the word shift was particularly sensitive to a tuneable alpha parameter value, the words important to distinguishing tweets from the period of increasing homelessness from the period of decreasing homelessness were in every case more likely to reflect a politicization of the issue of homelessness. For example, for tweets posted prior to 2016, non-political words like "via", "look", "man", "gave", and "guy" predominate, whereas words like "housing", "trump", "veterans", "city", "vets", "homelessness", "illegals", "country", and "state" characterize the tweets posted from 2016 forward (with additional political words [34] featured, though of lesser importance) [35]. This suggests that a change to the overall direction of growth or decline in nationwide rates of homelessness may be visible in the semantic content of an aggregate corpus containing homelessness-related US-geotagged tweets.
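For reference, per-word contributions to rank-turbulence divergence can be sketched as below, following the form \(\left|r_{1}^{-\alpha}-r_{2}^{-\alpha}\right|^{1/(\alpha+1)}\) of Dodds _et al._ [22]; the treatment of words absent from one corpus is simplified relative to the published method.

```python
def rtd_contributions(freq1, freq2, alpha=1 / 3):
    """Per-word contributions to rank-turbulence divergence between two corpora,
    returned in word-shift order (largest contribution first)."""
    def ranks(freq):
        ordered = sorted(freq, key=freq.get, reverse=True)
        return {w: i + 1 for i, w in enumerate(ordered)}
    r1, r2 = ranks(freq1), ranks(freq2)
    floor1, floor2 = len(r1) + 1, len(r2) + 1  # simplified rank for missing words
    contrib = {
        w: abs(r1.get(w, floor1) ** -alpha - r2.get(w, floor2) ** -alpha) ** (1 / (alpha + 1))
        for w in set(r1) | set(r2)
    }
    return sorted(contrib.items(), key=lambda kv: -kv[1])
```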
#### iv.3.5 Sector-specificity and changes in trend polarity

We then tested whether changes in homelessness counts would be similarly visible at the state level by identifying states with variable directional trends in their per capita homelessness counts over time and visualizing allotaxonometry histograms to identify words that were important to any differences between positively-trending periods versus negatively-trending periods. We selected four states--California, Massachusetts, Washington, and Hawaii--which experienced a trend reversal with respect to homelessness volume after at least three consecutive years of monotonic growth or decline, in order to ensure a sufficiently robust sample size of tweets within each state-year for comparison. Two states featured a polarity shift from decreasing to increasing trends in annual homelessness rates (California and Washington); the others featured the reverse polarity shift, from increasing to decreasing homelessness (Massachusetts and Hawaii). For California, we aggregated tweets from 2011-2014 (decreasing) to compare against tweets from 2015-2019 and 2022 (increasing); Washington, 2010-2013 (decreasing) versus 2014-2016 and 2018 (increasing); Massachusetts, 2012-2014 (increasing) versus 2015-2017 (decreasing); and Hawaii, 2010 and 2014-2016 (increasing) versus 2017-2019 and 2022 (decreasing).

In California and Hawaii, sector- and policy-specific language was more prevalent in the period of increasing homelessness. For example, among the California 2015-2019 data, words like "housing", "homelessness", "trump", and "crisis" contribute the most to distinguishing the tweets from the California 2010-2014 set, and the word "democrats" appears only in the later years' dataset. Similarly, in Hawaii, words like "appropriation", "bill", "rights", "shelter", and "housing" all appear on the left-hand side of the diagram, which corresponds in Hawaii's case to the period of increasing homelessness rates, and similar language is absent from the right-hand side of the word shift, which depicts important words in distinguishing the period of decreasing homelessness.

Figure 12: **Compound sentiment scores across English-language tweets, 2010-2019.** Captured at hedonometer.org, this time series reflects sentiment scores at the daily scale for a random sample of 10% of all English-language tweets. Annotations provide context for daily spikes, which typically correspond to annual holidays but can also reflect significant historical events.

For Massachusetts and Washington, trends observed at the national scale were less clearly reproduced at the state level. While policy and service-sector words were equally balanced in the importance of their contribution to the semantic differences between the two periods in Massachusetts, charity references predominate in the period of decreasing homelessness. For example, in the years of increasing Massachusetts homelessness, early-year 1-grams like "veterans", "support", and "military" are counterbalanced by words like "unemployed", "homelessness", and "youth" in the latter years.
However, the Massachusetts 2015-2017 tweets are characterized by a prevalence of references to private fundraising events such as "shoozercruse", "charity", and "harbordonate", as well as words potentially related to homeless pets ("freekible", "dogs"). In Washington, by contrast, language typically associated with policy, services, and charity was scarce across both periods, and so the time frames of increasing and decreasing homelessness were not easily distinguishable.

It is worth noting that trends observed at the national scale are not present at the state level at the unit of individual years (or "state-years"). To test whether language would differ significantly between increasing and decreasing state-years, we aggregated into two separate corpora (1) all tweets from states in years \(x\) where the rate had increased from the year \(x-1\) and (2) all tweets from states in which the rate had decreased. We then generated an allotaxonometry diagram comparing the corpora. Approximately equal proportions of sector-specific terms characterize both corpora, and neither corpus was distinguished by references to charitable efforts or direct appeals. It is unclear whether the corpora's similarity is a result of the distribution of state-years from the period of nationwide increasing homelessness across both corpora, indicating that national discourse during the post-2017 era may in some cases overcome semantic differences that would otherwise be present, or because of the presence of anomalous single-year polarity changes within otherwise-monotonic, reverse-polarity trends, indicating that changes must persist several years in order to effect semantic changes in social media language.

### User Type, Tweet Volume & Content

Multiple findings of the analysis above warrant a closer inspection of the relationship between user type and tweet volume, content, and sentiment with respect to homelessness. The observation by prior literature of content and value differences between homeless and non-homeless social media users suggests that differences observed in sentiment and content may be the result of increased activity by altruistic Twitter users and/or advocate networks. Thus, we finally reviewed homelessness-related Twitter use patterns by account type as a potential means of signal detection for real-world changes to local homelessness volume. While a full examination of the impact of account type is outside the scope of our immediate investigations, we nevertheless undertook a preliminary analysis of the frequency of posts on homelessness by user account to explore, in a general way, whether unusually high-frequency posters in high-homelessness versus low-homelessness states were individuals or entities, and if the latter, what type of entities.

Figure 13: **Distribution of state compound sentiment scores by year, 2010-2022, A. with and B. without outliers.** Each box-and-whiskers plot visualizes the distribution of all compound sentiment scores for the fifty US states within a single year.

We began with a sample of tweets from the 100 state-years with the highest reported homelessness density (\(N_{h}=313,311\) from 134,616 accounts) versus the 100 state-years with the lowest reported homelessness density (\(N_{l}=19,966\) from 11,078 accounts).
Because account type is not among the metadata provided by Twitter's API, we manually labeled the accounts with the highest number of tweets in each corpus as Individual versus Entity accounts, indicating a sub-type where relevant, e.g., Agency, News, or Legal for Entities and Politician or Journalist for Individuals.

#### iv.4.1 Low- versus high-density state-years

Our first finding was that the relative prevalence of users who posted only once about homelessness ("single-post" accounts) versus users who posted more than once within the dataset ("multi-post" accounts) differed significantly by homelessness density (75.5% versus 24.5% in low-density state-years, 69.2% versus 30.8% in high-density state-years), indicating that accounts were somewhat more likely to post more than once about homelessness in high-density states. Additionally, tweets from those multi-post accounts comprised 70.3% of all tweets from high homelessness-density states, as contrasted with only 58.1% of all tweets from low-density states.

The difference in sample sizes from high-density and low-density states meant that the top 20 accounts comprised a much smaller share of all accounts in high-density states (0.01%) than in low-density states (0.18%). The ratio of the percentage represented by top 20 individual accounts to top 20 entity accounts, however, is 9:1 in low-density states, six times the 3:2 ratio observed in high-density states. Individual accounts represented a significantly greater proportion of top 20 accounts in low-density states (90%) versus high-density states (60%). The percentage of top 20 tweets represented by those individual accounts also differed significantly by density (95.4% in low-density state-years versus 70.2%). In sum, in higher-density state-years, a greater proportion of the users tweeting about homelessness are doing so more than once and generating a higher percentage of the total tweet corpus. High-frequency posters are far more likely to be individuals than entities in low-density state-years, and the proportion of tweets they generate within the top 20 user tweet corpus is also much higher in low-density state-years, suggesting a potential increase in entity-account online presence as states increase in homelessness density.

Figure 14: **Comparisons of tweets from high-homelessness versus low-homelessness states.** Allotaxonometry is a method that compares two text corpora and, using the relative frequency of a word in each respective corpus, identifies which words contribute the most to the divergence between the corpora. Words appearing to the right or left of the center line appear more frequently in the corpus associated with the side in which they appear. The word shift at right displays the words with the greatest impact on the difference between the two corpora.
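The single- versus multi-post account profile described in Sec. IV.4.1 can be computed directly from a list of account IDs (one entry per tweet), as in this minimal sketch.

```python
from collections import Counter

def post_frequency_profile(account_ids):
    """Share of single-post vs. multi-post accounts, and the share of all
    tweets generated by multi-post accounts, for one corpus of tweets."""
    per_account = Counter(account_ids)  # number of tweets per account
    n_accounts, n_tweets = len(per_account), len(account_ids)
    single = sum(1 for c in per_account.values() if c == 1)
    multi_tweets = sum(c for c in per_account.values() if c > 1)
    return {
        "pct_single_post_accounts": 100 * single / n_accounts,
        "pct_multi_post_accounts": 100 * (n_accounts - single) / n_accounts,
        "pct_tweets_from_multi_post": 100 * multi_tweets / n_tweets,
    }
```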
#### iv.4.2 Negatively- versus positively-sentimented state-years

We then reproduced this approach with a sample of tweets from the 30 negatively-sentimented state-years (\(N_{n}=5,593\), generated by 2,574 users) and a sample of tweets from the 30 most positively-sentimented state-years (\(N_{p}=35,820\), generated by 17,482 users) for comparison. Overall, single-tweet accounts comprise a much larger percentage of the accounts tweeting about homelessness across all state-years (positive and negative), but the number of tweets generated by multiple-tweet accounts represents a larger percentage of all tweets [36].

We found that the relative contribution of single-post accounts again differed significantly by corpus sentiment, though less so than by density. Single-post accounts represented 73.4% and 77.6% of all accounts in positively-sentimented versus negatively-sentimented state-years, respectively; multi-post accounts, 26.6% and 22.4%. Similarly, the proportion of tweets generated by single-post accounts versus those generated by multi-post accounts differed significantly by sentiment: 33.8% versus 37.9%, and 66.2% versus 62.1%. Despite the fact that the number of accounts represented in negatively-sentimented states is only 14.7% of the number in positively-sentimented states, tweets from the top 20 accounts comprise 24.8% of all tweets in the former and 21.3% in the latter.

The ratio of top 20 entities to individuals, expressed as percentages of all accounts, is inverted between negatively-sentimented states (3:1, or 0.58% versus 0.19%) and positively-sentimented states (1:3, or 0.03% versus 0.09%). Thus, the highest-frequency accounts tweeting about homelessness in negatively-sentimented states are more likely to be entities than individuals, while the opposite is true in positively-sentimented states. Tweets from those top 20 entity accounts comprise over 1.7 times the percentage of all tweets in negatively-sentimented states (6.7%) as they do in positively-sentimented states (3.8%), versus tweets from top 20 individual accounts (18.2% in negatively-sentimented states, 17.6% otherwise). Entity accounts dominate individual accounts among the top 20 highest-frequency users in negatively-sentimented states (75% versus 25%); the opposite is true in positively-sentimented states (25% versus 75%).

Figure 15: **Comparison of the ratios of user account types in low-density versus high-density state-years.** C. Entity accounts comprise a greater percentage of the top 20 accounts posting most frequently about homelessness in high-density versus low-density states, and A. accounts tend to tweet more than once about homelessness in high-density states.

Figure 16: **Comparison of the ratios of user account types in positively- versus negatively-sentimented state-years.** C. Entity accounts comprise a much smaller percentage of top 20 accounts posting most frequently about homelessness in positively-sentimented states. F. Tweets from entity accounts, regardless of frequency of posting, constitute a higher percentage of tweets in negatively-sentimented states as well.

Additionally, although the proportion of all tweets represented by individual top 20 high-frequency accounts differs little between negatively- and positively-sentimented state-years (18.2% versus 17.6%), the proportion of tweets generated by individuals versus entities among the top 20 accounts' tweets is lower in negatively-sentimented states (73.2% versus 82.4%). Thus, while there are more single-post accounts than multi-post accounts across both positively- and negatively-sentimented states, and more individuals than entities among the top 20 in positively-sentimented states, a handful of high-frequency accounts generate a greater percentage of tweets in the negatively-sentimented corpus.
The presence of a higher proportion of individuals among positively-sentimented high-frequency accounts, and a higher proportion of top 20 tweets being generated by those individuals, does not increase their contribution to tweets overall as much as might be expected, while a decrease in entity presence among the positive state-years yields both a lower proportion of top 20 tweets and a decreased presence among overall tweets in the corpus.

Table 5: **Tweet counts (for tweets containing "homeless") and account type classification for high-frequency users in low-density state-years.**

\begin{table} \begin{tabular}{|c|c|c|} \hline User rank & Tweet Count & Account type \\ \hline 1 & 1857 & Individual \\ 2 & 1631 & Individual \\ 3 & 867 & Individual \\ 4 & 394 & Individual \\ 5 & 374 & Entity \\ 6 & 332 & Individual \\ 7 & 327 & Agency \\ 8 & 273 & Entity \\ 9 & 261 & Individual \\ 10 & 231 & Individual \\ 11 & 221 & Agency \\ 12 & 195 & Individual \\ 13 & 168 & Entity \\ 14 & 150 & Individual \\ 15 & 120 & Individual \\ 16 & 117 & Individual \\ 17 & 70 & Individual \\ 18 & 69 & Individual \\ 19 & 63 & Individual \\ 20 & 62 & Individual \\ \hline \end{tabular} \end{table} Table 6: **Tweet counts (for tweets containing "homeless") and account type classification for high-frequency users in positively-sentimented state-years.**

Table 7: **Tweet counts (for tweets containing "homeless") and account type classification for high-frequency users in positively-sentimented state-years.**

## V Conclusions

Within a given year, several social media indicators may generate a signal that correlates to changes in a jurisdiction's rate or volume of homelessness, including tweet rates, semantic content, user behavior, and ratios of different account types tweeting about homelessness. Some statistically significant nonparametric rank-rank correlations exist between tweet volume ranked by state within a given year, for example, and those states' comparative ranked densities of homelessness. Additionally, the likelihood of tweets to contain the word "homeless", as well as the average sentiment of those tweets, is sensitive to, and revealing of, national trends. However, the variability of population density distributions between and within states translates into different probabilities of real-world experiences of, and encounters with, homelessness by the typical social media user within a given jurisdiction.
For example, correlations exist across years within certain--but not all--states between annualized tweet and homelessness values, particularly in exceptionally high-density, high-homelessness states with high variance among annual state-level estimates. More research is needed to better understand why tweet rates generated by other high-density, high-variability states do not correlate with changes to homelessness estimates by year, and why the strongest cross-correlation coefficients of tweet volume to homelessness rates are negative for many Southern states, yet positive in two New England states. Moreover, preliminary findings suggest that some retrospective or real-time trend detection may be possible through text analysis. Further research, which would require tools for geolocation and/or user-type classification, is needed to determine whether findings are generalizable at scale.

A deeper exploration of the mechanisms underlying a typical social media user's understanding about conditions within their community--not only real-world interactions with people experiencing homelessness, but also indirect sources of information, such as news and social media--is critical to modeling and predicting the relationship of some more complex public health phenomena to online behavior, particularly in cases where, as with homelessness, real-world experiences are heterogeneous and context-sensitive.

In sum, a measurement consistent with established measures like the PiT cannot be straightforwardly obtained from Twitter volume, content, or sentiment, though comparisons between and among jurisdictions may be reasonably within grasp. Through a combination of real-time analytic techniques, including computational linguistics, social media data may yet provide insight into the mechanisms translating local homelessness measurements into online behavior. The cost of Twitter API access, however, which has increased dramatically since this project began, makes real-time data acquisition cost-prohibitive for many interested stakeholders with limited resources, such as smaller advocacy organizations or watchdog groups. Extension of this research to other, less costly social media platforms is recommended (1) to ensure the robustness of these findings across fora and (2) to identify lower-cost real-time datasets that would be useful for analysis. Additionally, research at a more or less granular level--e.g., the city or county level within a single country, or at the nation level internationally--will be critical to determine whether correlations between homelessness and tweet volume persist in environments characterized by greater heterogeneity and rank-turbulence with respect to homelessness rates and density. However, the challenge of identifying valid, real-time ground-truth datasets at these other levels against which to compare social media observations remains a difficult problem, given the methods used to estimate homelessness, at least in the US.

#### v.0.1 Limitations of the study

Each dataset presented unique challenges to the validity and generalizability of our results. First and most importantly, the Point-in-Time Count relies on non-uniform data collection practices that are of variable quality, such as manual counts of unsheltered homeless individuals who are identified outdoors during a single-night annual count in the last week of January.
Because these practices cannot be expected to perform comparably at scale across jurisdictions with varying population densities, available volunteer bases, outreach project data quality, and distributions of rural/urban centers, HUD permits states to take different approaches to estimating the size of the unsheltered population, including census-based approaches, non-random sampling with extrapolation, or a combination of those approaches [37]. Moreover, even if data collection practices were standardized, the annualized time scale makes it impossible to view population flow dynamics during later months of the year and may underestimate regional needs in cold-weather states if the homeless are migrating to warmer locations seasonally. Consequently, any conclusions drawn regarding the relationship of homelessness rates to Twitter activity, even if validated against these counts, should be considered within a broader context alongside other suspected proxies of a state's actual level of homelessness, such as eviction or unemployment rates or housing affordability [38].

Secondly, geotagged English-language data represents only a small fraction of overall Twitter activity generated in the US, which may contain hidden biases based on, for example, distributed proportions of multilingual or limited-English-proficiency Twitter users. Different state or regional norms around data and privacy protection with respect to social media may also affect the proportional representation of a jurisdiction within geotagged data. It is also well-documented that Twitter users are not a representative or random sample of United States residents [11]. A random sample of all US-originating tweets with and without geotagging, filterable for multiple languages' customary analogues to the 1-gram "homeless", would be a preferable dataset, but interpolating valid user location is a difficult problem often requiring time-intensive manual coding [9], and monthly caps on the volume of tweet and user account data downloads placed this problem beyond the scope of the present research.

Relatedly, homelessness, an often-sensitive political issue, may attract a volume and sentiment of social media communication that corresponds to factors and agendas other than observed homelessness rates. Political noise may mask an otherwise-detectable signal based on average citizens' observations of homelessness in their communities. For this reason, further analysis by user type (e.g., advocacy organization, news media, private individual account) is critical to better understanding the relationship of various actors, their online speech, and aggregate measures of sentiment, volume, and content, and how these are distinguishable from speech signals generated within tweets from the average individual.

Finally, tweets were assumed to originate within the state identified by the geoid dictionary. Artificial bot amplification of tweet volume on homelessness was similarly assumed to be equally likely across all fifty states and was not specifically addressed. The impact of bots on homelessness-related Twitter communication should be an area of future research.

## VI Code and Data

The code described herein can be found in the public Google Colab folder here: [https://drive.google.com/file/d/1va4dWRZ6RPT089rcTkrE5fvK5hT0nksV/view?usp=sharing](https://drive.google.com/file/d/1va4dWRZ6RPT089rcTkrE5fvK5hT0nksV/view?usp=sharing). We accessed the Twitter dataset described above by submitting the following query parameters to Twitter API v2.0:
```
query = 'homeless place_country:US has:geo'
start_time = '2010-01-01T00:00:00Z'
end_time = '2022-12-31T23:59:00Z'
```

###### Acknowledgements.

We are grateful for support from the University of Vermont Complex Systems Center, the MassMutual Center of Excellence in Complex Systems and Data Science, and discussions with colleagues from the Computational Story Lab.
2307.00585
Monotonic convergence of positive radial solutions for general quasilinear elliptic systems
We study the asymptotic behavior of positive radial solutions for quasilinear elliptic systems that have the form \begin{equation*} \left\{ \begin{aligned} \Delta_p u &= c_1|x|^{m_1} \cdot g_1(v) \cdot |\nabla u|^{\alpha} &\quad\mbox{ in } \mathbb R^n,\\ \Delta_p v &= c_2|x|^{m_2} \cdot g_2(v) \cdot g_3(|\nabla u|) &\quad\mbox{ in } \mathbb R^n, \end{aligned} \right. \end{equation*} where $\Delta_p$ denotes the $p$-Laplace operator, $p>1$, $n\geq 2$, $c_1,c_2>0$ and $m_1, m_2, \alpha \geq 0$. For a general class of functions $g_j$ which grow polynomially, we show that every non-constant positive radial solution $(u,v)$ asymptotically approaches $(u_0,v_0) = (C_\lambda |x|^\lambda, C_\mu |x|^\mu)$ for some parameters $\lambda,\mu, C_\lambda, C_\mu>0$. In fact, the convergence is monotonic in the sense that both $u/u_0$ and $v/v_0$ are decreasing. We also obtain similar results for more general systems.
Daniel Devine, Paschalis Karageorgis
2023-07-02T15:02:20Z
http://arxiv.org/abs/2307.00585v1
# Monotonic convergence of positive radial solutions ###### Abstract. We study the asymptotic behavior of positive radial solutions for quasilinear elliptic systems that have the form \[\begin{cases}\Delta_{p}u=c_{1}|x|^{m_{1}}\cdot g_{1}(v)\cdot|\nabla u|^{ \alpha}&\text{in }\mathbb{R}^{n},\\ \Delta_{p}v=c_{2}|x|^{m_{2}}\cdot g_{2}(v)\cdot g_{3}(|\nabla u|)&\text{in } \mathbb{R}^{n},\end{cases}\] where \(\Delta_{p}\) denotes the \(p\)-Laplace operator, \(p>1\), \(n\geq 2\), \(c_{1},c_{2}>0\) and \(m_{1},m_{2},\alpha\geq 0\). For a general class of functions \(g_{j}\) which grow polynomially, we show that every non-constant positive radial solution \((u,v)\) asymptotically approaches \((u_{0},v_{0})=(C_{\lambda}|x|^{\lambda},C_{\mu}|x|^{\mu})\) for some parameters \(\lambda,\mu,C_{\lambda},C_{\mu}>0\). In fact, the convergence is monotonic in the sense that both \(u/u_{0}\) and \(v/v_{0}\) are decreasing. We also obtain similar results for more general systems. Key words and phrases:Asymptotic behavior, elliptic systems, \(p\)-Laplace operator, radial solutions 2020 Mathematics Subject Classification: 35B40, 35J47, 35J92 ## 1. Introduction We study the positive radial solutions of the quasilinear elliptic system \[\begin{cases}\Delta_{p}u=c_{1}|x|^{m_{1}}\cdot g_{1}(v)\cdot|\nabla u|^{ \alpha}&\text{in }\Omega,\\ \Delta_{p}v=c_{2}|x|^{m_{2}}\cdot g_{2}(v)\cdot g_{3}(|\nabla u|)&\text{in } \Omega,\end{cases} \tag{1.1}\] where \(\Delta_{p}u=\operatorname{div}(|\nabla u|^{p-2}\nabla u)\) denotes the \(p\)-Laplace operator, \(p>1\), \(n\geq 2\), \(c_{1},c_{2}>0\) and \(m_{1},m_{2},\alpha\geq 0\). We are mainly concerned with the asymptotic behavior of global solutions in the case \(\Omega=\mathbb{R}^{n}\), but we shall also consider local solutions over the open ball \(\Omega=B_{R}\) around the origin. For the functions \(g_{j}\), we impose some generic conditions that we outline below. These allow each \(g_{j}\) to be an arbitrary polynomial with non-negative coefficients. Semilinear elliptic systems that do not involve any gradient terms have been extensively studied in the past [5, 7, 13, 17, 18, 19] and the same is true for quasilinear elliptic systems without gradient terms [1, 2, 3, 4, 5, 6, 10]. On the other hand, systems such as (1.1) have also attracted interest in recent years [8, 11, 12, 14, 20]. Their study was initiated by Diaz, Lazzo and Schmidt [8] who concentrated on the special case \[\begin{cases}\Delta u=v&\text{in }\Omega,\\ \Delta v=|\nabla u|^{2}&\text{in }\Omega.\end{cases}\] This system arises through a prototype model for a viscous, heat-conducting fluid, and its solutions correspond to steady states of a related parabolic system that describes the unidirectional flow of the fluid. More recently, Singh [20] considered general semilinear systems of the form \[\begin{cases}\Delta u=v^{m}&\quad\text{in }\Omega,\\ \Delta v=g(|\nabla u|)&\quad\text{in }\Omega,\end{cases}\] where \(m>0\) and \(g\in C^{1}([0,\infty))\) is non-decreasing and positive in \((0,\infty)\). He found optimal conditions for the existence of positive radial solutions in either \(\Omega=B_{R}\) or \(\Omega=\mathbb{R}^{n}\), and he determined the asymptotic behavior of global solutions, assuming that \(g(s)=s^{k}\) for some \(k\geq 1\) and that \(k,m\) satisfy some additional hypotheses. The existence results of [20] were extended by Filippucci and Vinti [12] to the \(p\)-Laplace operator for any \(p\geq 2\). 
Ghergu, Giacomoni and Singh [14] then studied the case \[\begin{cases}\Delta_{p}u=v^{k_{1}}\cdot|\nabla u|^{\alpha}&\quad\text{in }\Omega,\\ \Delta_{p}v=v^{k_{2}}\cdot|\nabla u|^{k_{3}}&\quad\text{in }\Omega,\end{cases} \tag{1.2}\] where \(k_{1},k_{2},k_{3},\alpha\geq 0\). Once again, optimal results were established for the existence of positive radial solutions, but the asymptotic behavior of global solutions was now settled under the very natural assumptions \[0\leq\alpha<p-1,\qquad(p-1-\alpha)(p-1-k_{2})>k_{1}k_{3} \tag{1.3}\] which are related to the existence of solutions. Assuming that (1.3) holds, in particular, one can easily check that the system (1.2) has an explicit solution of the form \[u_{0}(r)=C_{\lambda}r^{\lambda}=C_{\lambda}|x|^{\lambda},\qquad v_{0}(r)=C_{\mu}r^{\mu}=C_{\mu}|x|^{\mu} \tag{1.4}\] for some \(\lambda,\mu,C_{\lambda},C_{\mu}>0\). According to [14, Theorem 2.3], all non-constant positive radial solutions of (1.2) in \(\Omega=\mathbb{R}^{n}\) have the same behavior at infinity, and they satisfy \[\lim_{r\to\infty}\frac{u(r)}{u_{0}(r)}=\lim_{r\to\infty}\frac{v(r)}{v_{0}(r)}=1. \tag{1.5}\]

Our first goal in this paper is to further extend [14, Theorem 2.3] to a much more general class of quasilinear elliptic systems. Instead of the system (1.2) that was considered in [14], we shall thus study the system (1.1) which contains additional factors and more general nonlinearities. When it comes to the functions \(g_{j}\), we shall assume that

* (A1) each \(g_{j}\) is differentiable and non-decreasing in \([0,\infty)\);
* (A2) each \(g_{j}\) is positive in \((0,\infty)\) and \(g_{1},g_{2}\) are increasing in \((0,\infty)\);
* (A3) there exist constants \(k_{j}\geq 0\) such that \(g_{j}(s)/s^{k_{j}}\) is non-increasing in \((0,\infty)\) with \[\lim_{s\to\infty}\frac{g_{j}(s)}{s^{k_{j}}}=1\qquad\text{for each }j=1,2,3.\] (1.6)

Needless to say, system (1.2) arises when \(g_{j}(s)=s^{k_{j}}\) for each \(j\), but our assumptions allow much more general functions. For instance, each \(g_{j}(s)\) could be any sum of non-negative powers of \(s\) with non-negative coefficients, as long as the highest power is \(s^{k_{j}}\). In all these cases, we show that non-constant positive radial solutions of (1.1) in \(\Omega=\mathbb{R}^{n}\) exhibit the same asymptotic behavior as the corresponding solutions of the limiting system \[\begin{cases}\Delta_{p}u=c_{1}|x|^{m_{1}}\cdot v^{k_{1}}\cdot|\nabla u|^{\alpha}&\quad\text{in }\Omega,\\ \Delta_{p}v=c_{2}|x|^{m_{2}}\cdot v^{k_{2}}\cdot|\nabla u|^{k_{3}}&\quad\text{in }\Omega,\end{cases} \tag{1.7}\] and that they all approach \((u_{0},v_{0})=(C_{\lambda}r^{\lambda},C_{\mu}r^{\mu})\) for some parameters \(\lambda,\mu,C_{\lambda},C_{\mu}>0\).

Our second goal in this paper is to show that the convergence (1.5) actually occurs in a monotonic fashion. In fact, we shall prove that both \(u/u_{0}\) and \(v/v_{0}\) are decreasing for any positive radial solution \((u,v)\) which satisfies (1.1) in \(\Omega=B_{R}\). This observation appears to be entirely new, in any case whatsoever, and it is a key ingredient in our approach. When it comes to the results of [14, 20], the authors established (1.5) using a nonlinear change of variables and the theory of cooperative dynamical systems. In this paper, we introduce a new linear change of variables which seems more natural, and we apply simple continuity arguments to prove the monotonicity of \(u/u_{0}\) and \(v/v_{0}\) directly.
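Before stating our main result, let us record a sketch of the elementary power-matching computation behind (1.4); this is the "easy check" mentioned above, included here for the reader's convenience. For radial functions one has \(\Delta_{p}(Cr^{\lambda})=(C\lambda)^{p-1}\left[(\lambda-1)(p-1)+n-1\right]r^{(\lambda-1)(p-1)-1}\) whenever \(C,\lambda>0\), so inserting \((u_{0},v_{0})=(C_{\lambda}r^{\lambda},C_{\mu}r^{\mu})\) in (1.7) and equating the exponents of \(r\) in the two equations gives \[(\lambda-1)(p-1)-1=m_{1}+k_{1}\mu+\alpha(\lambda-1),\qquad(\mu-1)(p-1)-1=m_{2}+k_{2}\mu+k_{3}(\lambda-1),\] which may be rewritten as the linear system \[(p-1-\alpha)(\lambda-1)=m_{1}+1+k_{1}\mu,\qquad(p-1-k_{2})\,\mu=m_{2}+p+k_{3}(\lambda-1).\] Its determinant is \((p-1-\alpha)(p-1-k_{2})-k_{1}k_{3}\), which is positive precisely because of (1.3), so \(\lambda\) and \(\mu\) are uniquely determined; the constants \(C_{\lambda},C_{\mu}\) are then found by matching the remaining prefactors.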
Let \((u,v)\) be a non-constant positive radial solution of the system (1.1), where \(n\geq 2\), \(m_{1},m_{2}\geq 0\) and \(c_{1},c_{2}>0\). Then the associated system (1.7) has a unique solution of the form \((u_{0},v_{0})=(C_{\lambda}r^{\lambda},C_{\mu}r^{\mu})\), where each of the parameters \(\lambda,\mu,C_{\lambda},C_{\mu}\) is positive._

(a) _If_ \(\Omega=B_{R}\)_, then_ \(\frac{u(r)}{u_{0}(r)}\)_,_ \(\frac{v(r)}{v_{0}(r)}\)_,_ \(\frac{u^{\prime}(r)}{u^{\prime}_{0}(r)}\)_,_ \(\frac{v^{\prime}(r)}{v^{\prime}_{0}(r)}\) _are all decreasing for each_ \(0<r<R\)_._

(b) _If_ \(\Omega=\mathbb{R}^{n}\)_, then_ \((u,v)\) _and_ \((u_{0},v_{0})\) _have the same behavior at infinity, namely_ \[\lim_{r\to\infty}\frac{u(r)}{u_{0}(r)}=\lim_{r\to\infty}\frac{v(r)}{v_{0}(r)}=\lim_{r\to\infty}\frac{u^{\prime}(r)}{u^{\prime}_{0}(r)}=\lim_{r\to\infty}\frac{v^{\prime}(r)}{v^{\prime}_{0}(r)}=1.\] (1.8)

**Remark 1.2**.: Although we have mainly focused on the system (1.1), our method of proof applies verbatim for several variations of that system. When it comes to the first equation, for instance, one may replace the factor \(g_{1}(v)\) by \(g_{1}(v)\cdot g_{4}(u)\), where \(g_{4}\) is also subject to the assumptions (A1)-(A3). When it comes to the second equation, one may similarly replace \(g_{2}(v)\) by \(g_{2}(v)\cdot g_{5}(u)\), and also include the factor \(|\nabla v|^{\beta}\), where \(\beta\geq 0\). All these variations can be treated using the same approach by merely adjusting the existence condition (1.3). For the sake of simplicity, however, we shall not bother to treat them explicitly.

As we show in Sections 3 and 4, Theorem 1.1 is actually valid for more general systems obtained by replacing the coefficients \(c_{1}|x|^{m_{1}}\), \(c_{2}|x|^{m_{2}}\) in (1.1) by arbitrary non-decreasing functions \(f_{1}(|x|)\), \(f_{2}(|x|)\). In that case, however, the asymptotic profile \((u_{0},v_{0})\) does not have the form \((C_{\lambda}|x|^{\lambda},C_{\mu}|x|^{\mu})\), and it is not necessarily an elementary function. We shall mostly concentrate on the general case (2.1) and then deduce Theorem 1.1 as a special case.

To some extent, Theorem 1.1 is reminiscent of Wang's classical result [21] regarding the positive radial solutions of the scalar equation \(-\Delta u=u^{p}\), where \(p>1\). This equation has decaying solutions of the form \(u_{0}=C_{\lambda}|x|^{\lambda}\), where \(\lambda=-2/(p-1)\), and every positive radial solution \(u\) behaves like \(u_{0}\) at infinity. However, the convergence \(u/u_{0}\to 1\) is monotonic only for supercritical powers [16, 21]. A similar result holds for the scalar equation \(\Delta^{2}u=u^{p}\) which has decaying solutions of the form \(u_{0}=C_{\lambda}|x|^{\lambda}\) with \(\lambda=-4/(p-1)\). Once again, every positive radial solution \(u\) behaves like \(u_{0}\) at infinity, but the convergence \(u/u_{0}\to 1\) is monotonic only for supercritical powers \(p\); see [9, 15] for more details. It seems quite interesting that similar results can also be obtained for systems.

The remaining sections are organized as follows. In Section 2, we introduce our general system (2.1), and we show that all positive radial solutions are increasing and convex. In Section 3, we prove a new Monotonic Comparison Theorem which is a natural extension of Theorem 1.1(a). In Section 4, we prove a new Asymptotic Comparison Theorem which similarly extends Theorem 1.1(b). The proof of Theorem 1.1 is then given in Section 5.
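To fix ideas, here is a quick sanity check of the theorem in a concrete special case; the parameter values below are ours, chosen only for illustration, and the exponents are obtained by matching powers of \(r\) in (1.7), exactly as carried out in Section 5. Take \[p=3,\qquad\alpha=0,\qquad k_{1}=k_{2}=k_{3}=1,\qquad m_{1}=m_{2}=0,\] so that (1.3) holds because \((p-1-\alpha)(p-1-k_{2})=2>1=k_{1}k_{3}\). Matching powers of \(r\) in (1.7) leads to the linear system \[(p-1-\alpha)\lambda-k_{1}\mu=p-\alpha+m_{1},\qquad(p-1-k_{2})\mu-k_{3}\lambda=p-k_{3}+m_{2},\] that is, \(2\lambda-\mu=3\) and \(\mu-\lambda=2\), whence \(\lambda=5\) and \(\mu=7\). By Theorem 1.1, every non-constant positive radial solution of (1.1) in \(\Omega=\mathbb{R}^{n}\) with these parameters behaves like \((C_{\lambda}|x|^{5},C_{\mu}|x|^{7})\) at infinity, and the convergence of the corresponding quotients is monotonic.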
## 2. Monotonicity and convexity of solutions

In this section, we study the positive radial solutions of the general system \[\begin{cases}\Delta_{p}u=f_{1}(|x|)\cdot g_{1}(v)\cdot|\nabla u|^{\alpha}&\text{in }\Omega,\\ \Delta_{p}v=f_{2}(|x|)\cdot g_{2}(v)\cdot g_{3}(|\nabla u|)&\text{in }\Omega,\end{cases} \tag{2.1}\] where \(\Omega=B_{R}\) and \(\alpha\geq 0\), for any functions \(f_{i},g_{j}\) that satisfy the following hypothesis:

(B1) each \(f_{i},g_{j}\) is continuous, non-decreasing in \([0,\infty)\) and positive in \((0,\infty)\).

Our main goal is to show that all positive radial solutions have components \(u,v\) which are increasing and convex. This observation will play a crucial role in our subsequent analysis. Although our proofs are very similar to the ones presented in [14, 20], our system (2.1) is more general, so we also include the proofs for the sake of completeness.

**Lemma 2.1** (Monotonicity).: _Assume (B1), \(n\geq 2\) and \(\alpha\geq 0\). Suppose that \((u,v)\) is a non-constant radial solution of (2.1) in \(\Omega=B_{R}\) which is positive for all \(0<r<R\). Then one has \(u^{\prime}(r)>0\) and \(v^{\prime}(r)>0\) for all \(0<r<R\)._

**Proof.** Since \((u,v)\) is a radial solution of (2.1), it must satisfy the system \[\begin{cases}\left[r^{n-1}u^{\prime}(r)|u^{\prime}(r)|^{p-2}\right]^{\prime}=r^{n-1}f_{1}(r)g_{1}(v(r))\cdot|u^{\prime}(r)|^{\alpha}&\text{for all }0<r<R,\\ \left[r^{n-1}v^{\prime}(r)|v^{\prime}(r)|^{p-2}\right]^{\prime}=r^{n-1}f_{2}(r)g_{2}(v(r))\cdot g_{3}(|u^{\prime}(r)|)&\text{for all }0<r<R,\\ u^{\prime}(0)=v^{\prime}(0)=0,\,u(r)>0,v(r)>0&\text{for all }0<r<R.\end{cases} \tag{2.2}\] Once we integrate the first equation and recall our assumption (B1), we find that \[r^{n-1}u^{\prime}(r)|u^{\prime}(r)|^{p-2}=\int_{0}^{r}s^{n-1}f_{1}(s)g_{1}(v(s))\cdot|u^{\prime}(s)|^{\alpha}\,ds>0\] for all \(0<r<R\). This gives \(u^{\prime}(r)>0\), and one may similarly argue that \(v^{\prime}(r)>0\).

**Lemma 2.2** (Nonexistence).: _Assume (B1), \(n\geq 2\) and \(\alpha\geq p-1\). Then (2.1) does not have any non-constant positive radial solutions in \(\Omega=B_{R}\) for any \(R>0\)._

**Proof.** Suppose \((u,v)\) is such a solution. By Lemma 2.1, we must then have \(u^{\prime}(r)>0\) for all \(0<r<R\). Thus, the first equation in (2.2) can be expressed in the form \[\frac{\left[r^{n-1}u^{\prime}(r)^{p-1}\right]^{\prime}}{r^{n-1}u^{\prime}(r)^{p-1}}=f_{1}(r)g_{1}(v(r))\cdot u^{\prime}(r)^{\alpha-p+1}\] for all \(0<r<R\). Integrating this equation over the interval \([0,R/2]\) now gives \[\ln\left[r^{n-1}u^{\prime}(r)^{p-1}\right]\Big|_{0}^{R/2}=\int_{0}^{R/2}f_{1}(s)g_{1}(v(s))\cdot u^{\prime}(s)^{\alpha-p+1}\,ds.\] Since \(\alpha\geq p-1\) and \(f_{1},g_{1}\) are continuous, the right hand side is obviously finite. This is a contradiction because \(u^{\prime}(0)=0\) and \(p>1\), so the left hand side is infinite.

**Lemma 2.3** (Convexity).: _Assume (B1), \(n\geq 2\) and \(0\leq\alpha<p-1\). If \((u,v)\) is a non-constant radial solution of (2.1) in \(\Omega=B_{R}\) which is positive for all \(0<r<R\), then_ \[\frac{p-1-\alpha}{n(p-1-\alpha)+\alpha}\,f_{1}(r)g_{1}(v)\leq\left[u^{\prime}(r)^{p-1-\alpha}\right]^{\prime}\leq\frac{p-1-\alpha}{p-1}\,f_{1}(r)g_{1}(v), \tag{2.3}\] \[\frac{1}{n}\,f_{2}(r)g_{2}(v)g_{3}(u^{\prime})\leq\left[v^{\prime}(r)^{p-1}\right]^{\prime}\leq f_{2}(r)g_{2}(v)g_{3}(u^{\prime}) \tag{2.4}\] _for all \(0<r<R\).
In particular, both \(u(r)\) and \(v(r)\) are convex for all \(0<r<R\)._ **Proof.** Using Lemma 2.1, we see that the system (2.2) can be written as \[\left\{\begin{aligned} &\left[u^{\prime}(r)^{p-1}\right]^{\prime}+ \frac{n-1}{r}\,u^{\prime}(r)^{p-1}=f_{1}(r)g_{1}(v(r))\cdot u^{\prime}(r)^{ \alpha}&\text{for all }0<r<R,\\ &\left[v^{\prime}(r)^{p-1}\right]^{\prime}+\frac{n-1}{r}\,v^{ \prime}(r)^{p-1}=f_{2}(r)g_{2}(v(r))\cdot g_{3}(u^{\prime}(r))& \text{for all }0<r<R,\\ & u^{\prime}(0)=v^{\prime}(0)=0,\,u(r)>0,v(r)>0& \text{for all }0<r<R.\end{aligned}\right.\] Rearranging the terms in the first equation now leads to the system \[\left\{\begin{aligned} &\left[u^{\prime}(r)^{p-1-\alpha}\right]^{ \prime}+\frac{\delta}{r}\,u^{\prime}(r)^{p-1-\alpha}=\frac{\delta}{n-1}\,f_{1} (r)g_{1}(v(r))&\text{for all }0<r<R,\\ &\left[v^{\prime}(r)^{p-1}\right]^{\prime}+\frac{n-1}{r}\,v^{ \prime}(r)^{p-1}=f_{2}(r)g_{2}(v(r))\cdot g_{3}(u^{\prime}(r))& \text{for all }0<r<R,\end{aligned}\right. \tag{2.5}\] where \(\delta\) is a positive constant that is defined by \[\delta=\frac{(n-1)(p-1-\alpha)}{p-1}>0. \tag{2.6}\] Since \(u^{\prime}(r)>0\) and \(v^{\prime}(r)>0\) for all \(0<r<R\) by Lemma 2.1, it is then clear from (2.5) that the upper bounds in (2.3) and (2.4) hold. To prove the corresponding lower bounds, we first rearrange terms and express (2.5) in the equivalent form \[\left\{\begin{aligned} &\left[r^{\delta}u^{\prime}(r)^{p-1- \alpha}\right]^{\prime}=\frac{\delta}{n-1}\,r^{\delta}f_{1}(r)g_{1}(v(r))& \text{for all }0<r<R,\\ &\left[r^{n-1}v^{\prime}(r)^{p-1}\right]^{\prime}=r^{n-1}f_{2}(r) g_{2}(v(r))\cdot g_{3}(u^{\prime}(r))&\text{for all }0<r<R.\end{aligned}\right. \tag{2.7}\] Since \(f_{1},g_{1}\) are non-decreasing by our assumption (B1) and \(v\) is increasing by Lemma 2.1, it follows by the first equation of this system that \[r^{\delta}u^{\prime}(r)^{p-1-\alpha}=\frac{\delta}{n-1}\int_{0}^{r}s^{\delta} f_{1}(s)g_{1}(v(s))\,ds\leq\frac{\delta}{n-1}\,f_{1}(r)g_{1}(v(r))\int_{0}^{r}s^{ \delta}\,ds \tag{2.8}\] for all \(0<r<R\). In other words, one has the estimate \[\frac{1}{r}\,u^{\prime}(r)^{p-1-\alpha}\leq\frac{\delta}{(n-1)(\delta+1)}\,f_{ 1}(r)g_{1}(v(r)) \tag{2.9}\] for all \(0<r<R\). Combining this estimate with the first equation in (2.5) now gives \[\left[u^{\prime}(r)^{p-1-\alpha}\right]^{\prime} \geq\frac{\delta}{n-1}\,f_{1}(r)g_{1}(v(r))-\frac{\delta^{2}}{(n -1)(\delta+1)}\,f_{1}(r)g_{1}(v(r))\] \[=\frac{p-1-\alpha}{n(p-1-\alpha)+\alpha}\,f_{1}(r)g_{1}(v(r))\] for all \(0<r<R\). This proves the lower bound in (2.3), and it also implies that \(u^{\prime}(r)\) is increasing. Turning to the second equation in (2.7), one may then argue that \[r^{n-1}v^{\prime}(r)^{p-1}\leq f_{2}(r)g_{2}(v(r))\cdot g_{3}(u^{\prime}(r)) \int_{0}^{r}s^{n-1}\,ds\] for all \(0<r<R\), in analogy with (2.8). This provides an analogue of (2.9) which combines with the second equation in (2.5) to yield the lower bound in (2.4) as before. ## 3. Monotonic comparison for general quasilinear systems In this section, we continue our study of the general system (2.1) and we establish two important comparison results. 
Our overall plan is to compare solutions \((u,v)\) of the original system (2.1) with solutions \((u_{0},v_{0})\) of the system \[\begin{cases}\Delta_{p}u_{0}=f_{1}(|x|)\cdot h_{1}(v_{0})\cdot|\nabla u_{0}|^{\alpha}&\text{in }\Omega,\\ \Delta_{p}v_{0}=f_{2}(|x|)\cdot h_{2}(v_{0})\cdot h_{3}(|\nabla u_{0}|)&\text{in }\Omega,\end{cases} \tag{3.1}\] which is obtained from (2.1) by replacing the functions \(g_{j}\) by the functions \(h_{j}\). Intuitively speaking, one should think of (3.1) as a simplified version of the original system (2.1) whose solutions are expected to exhibit the same behavior at infinity. When it comes to the functions \(f_{i}\), \(g_{j}\) and \(h_{j}\), we impose the following assumptions:

(B1) each \(f_{i},g_{j},h_{j}\) is continuous, non-decreasing in \([0,\infty)\) and positive in \((0,\infty)\);

(B2) each \(g_{j},h_{j}\) is differentiable in \([0,\infty)\) and \(g_{1},g_{2},h_{1},h_{2}\) are increasing in \((0,\infty)\);

(B3) the quotients \(Q_{j}(s,t)=g_{j}(st)/h_{j}(t)\) are non-increasing in \(t\) for all \(s\geq 1\), \(t>0\);

(B4) there exist constants \(k_{j}\geq 0\) such that \(\lim_{t\to\infty}Q_{j}(s,t)=s^{k_{j}}\) for each \(s\geq 1\);

(B5) one has \(g_{j}(t)\geq h_{j}(t)\) for all \(t>0\).

Assumption (B1) coincides with that of the previous section, but it is now also imposed on the functions \(h_{j}\). For the comparison results of this section, we study local solutions in the open ball \(\Omega=B_{R}\) without assuming (B4). In the next section, we shall then turn to global solutions in the whole space \(\Omega=\mathbb{R}^{n}\) assuming (B1)-(B4). It is easy to see that (B3) and (B4) trivially imply (B5), as they imply \(g_{j}(st)\geq s^{k_{j}}h_{j}(t)\), more generally.

**Lemma 3.1** (Comparison lemma).: _Assume (B1), (B2), (B5), \(n\geq 2\) and \(\alpha\geq 0\). Suppose that \((u,v)\) and \((u_{0},v_{0})\) are non-constant radial solutions of (2.1) and (3.1), respectively, which are positive for all \(0<r<R\). If \(u(0)>u_{0}(0)\) and \(v(0)>v_{0}(0)\), then_ \[u(r)>u_{0}(r),\quad v(r)>v_{0}(r),\quad u^{\prime}(r)>u^{\prime}_{0}(r),\quad v^{\prime}(r)>v^{\prime}_{0}(r) \tag{3.2}\] _throughout the whole interval \((0,R)\)._

**Proof.** Let us denote by \((0,R_{0})\subset(0,R)\) the maximal subinterval on which \[u(r)>u_{0}(r),\qquad v(r)>v_{0}(r)\qquad\text{for all }0<r<R_{0}. \tag{3.3}\] The function \(u\) satisfies the first equation of the system (2.7), so it satisfies \[\left[r^{\delta}u^{\prime}(r)^{p-1-\alpha}\right]^{\prime}=\frac{\delta}{n-1}\,r^{\delta}f_{1}(r)g_{1}(v(r)),\] where \(\delta>0\) is defined by (2.6), and \(p-1-\alpha>0\) by Lemma 2.2. The function \(u_{0}\) satisfies the exact same equation with \(g_{1}\) replaced by \(h_{1}\), hence \[\left[r^{\delta}(u^{\prime}(r)^{p-1-\alpha}-u^{\prime}_{0}(r)^{p-1-\alpha})\right]^{\prime}=\frac{\delta}{n-1}\,r^{\delta}f_{1}(r)\cdot\left[g_{1}(v)-h_{1}(v_{0})\right].\] Along the interval \((0,R_{0})\), we have \(v>v_{0}>0\) by (3.3), so \(g_{1}(v)>g_{1}(v_{0})\geq h_{1}(v_{0})\) by (B2) and (B5). Since the factor \(f_{1}(r)\) is positive by (B1), the right hand side is then positive, so it easily follows that \(u^{\prime}(r)>u^{\prime}_{0}(r)\) for all \(0<r<R_{0}\). Next, we turn to the functions \(v,v_{0}\).
Using the second equation in (2.7), we get \[\left[r^{n-1}(v^{\prime}(r)^{p-1}-v^{\prime}_{0}(r)^{p-1})\right]^{\prime}=r^ {n-1}f_{2}(r)\cdot\left[g_{2}(v)g_{3}(u^{\prime})-h_{2}(v_{0})h_{3}(u^{\prime} _{0})\right].\] When it comes to the interval \((0,R_{0})\), we have \(u^{\prime}>u^{\prime}_{0}\) by above, so \(g_{3}(u^{\prime})\geq g_{3}(u^{\prime}_{0})\) by (B1). In addition, \(v>v_{0}\) by (3.3), and thus \(g_{2}(v)>g_{2}(v_{0})\) by (B2). The right hand side is then positive by (B5), so it easily follows that \(v^{\prime}(r)>v^{\prime}_{0}(r)\) for all \(0<r<R_{0}\). This shows that both \(u(r)-u_{0}(r)\) and \(v(r)-v_{0}(r)\) are increasing throughout \((0,R_{0})\). As these functions are positive at \(r=0\) by assumption, they cannot possibly vanish at \(r=R_{0}\). In other words, the maximal subinterval on which (3.3) holds is the whole interval \((0,R)\), and our argument above gives \(u^{\prime}(r)>u^{\prime}_{0}(r)\) and \(v^{\prime}(r)>v^{\prime}_{0}(r)\) for all \(0<r<R\). **Theorem 3.2** (Monotonic comparison).: _Assume (B1)-(B3), (B5), \(n\geq 2\) and \(\alpha\geq 0\). Suppose that \((u,v)\) and \((u_{0},v_{0})\) are non-constant radial solutions of (2.1) and (3.1), respectively, which are positive for all \(0<r<R\). Suppose also that \(u(0),v(0)>0\) and_ \[u_{0}(0)=v_{0}(0)=0,\quad\lim_{r\to 0^{+}}\frac{u_{0}(r)}{u^{\prime}_{0}(r)}< \infty,\quad\lim_{r\to 0^{+}}\frac{v_{0}(r)}{v^{\prime}_{0}(r)}<\infty. \tag{3.4}\] _Then each of the quotients_ \[\mathcal{U}(r)=\frac{u(r)}{u_{0}(r)},\qquad\mathcal{V}(r)=\frac{v(r)}{v_{0}(r )},\qquad\mathcal{W}(r)=\frac{u^{\prime}(r)}{u^{\prime}_{0}(r)},\qquad\mathcal{ Y}(r)=\frac{v^{\prime}(r)}{v^{\prime}_{0}(r)} \tag{3.5}\] _is decreasing throughout the interval \((0,R)\)._ **Proof.** First of all, it follows from Lemmas 2.1 and 2.2 that \(\alpha<p-1\) and that \[u^{\prime}(r),\,v^{\prime}(r),\,u^{\prime}_{0}(r),\,v^{\prime}_{0}(r)>0\quad \text{for all }0<r<R.\] Recalling the first equation in (2.7), we see that the functions \(u,u_{0}\) satisfy the system \[\left\{\begin{aligned} \left[r^{\delta}u^{\prime}(r)^{p-1- \alpha}\right]^{\prime}&=\frac{\delta}{n-1}\,r^{\delta}f_{1}(r)g _{1}(v(r))&\text{for all }0<r<R,\\ \left[r^{\delta}u^{\prime}_{0}(r)^{p-1-\alpha}\right]^{\prime}& =\frac{\delta}{n-1}\,r^{\delta}f_{1}(r)h_{1}(v_{0}(r))& \text{for all }0<r<R\end{aligned}\right. \tag{3.6}\] with \(\delta>0\) defined by (2.6). We let \(A(r)=r^{\delta}u^{\prime}_{0}(r)^{p-1-\alpha}\) for simplicity and divide the last two equations. Using our definition (3.5) and our assumption (B3), we arrive at \[\frac{1}{A^{\prime}(r)}\,\left[r^{\delta}u^{\prime}(r)^{p-1-\alpha}\right]^{ \prime}=\frac{g_{1}(v)}{h_{1}(v_{0})}=\frac{g_{1}(\mathcal{V}v_{0})}{h_{1}(v_{ 0})}=Q_{1}(\mathcal{V},v_{0}).\] Since \(u^{\prime}=u^{\prime}_{0}\mathcal{W}\) by our definition (3.5), the left hand side can also be expressed as \[\frac{1}{A^{\prime}(r)}\,\left[r^{\delta}u^{\prime}_{0}(r)^{p-1-\alpha} \cdot\mathcal{W}(r)^{p-1-\alpha}\right]^{\prime}=\frac{1}{A^{\prime}(r)}\, \left[A(r)\cdot\mathcal{W}(r)^{p-1-\alpha}\right]^{\prime}.\] Once we combine the last two equations, we conclude that \(\mathcal{W}(r)\) satisfies \[\frac{A(r)}{A^{\prime}(r)}\cdot\left[\mathcal{W}(r)^{p-1-\alpha}\right]^{\prime}+ \mathcal{W}(r)^{p-1-\alpha}=Q_{1}(\mathcal{V}(r),v_{0}(r)), \tag{3.7}\] where \(A(r)=r^{\delta}u_{0}^{\prime}(r)^{p-1-\alpha}\) and \(Q_{1}\) is defined in our assumption (B3). 
Repeating the exact same argument, one may derive a similar equation for \(\mathcal{Y}(r)\), starting with the second equation of (2.7) instead of the first. This leads to the analogue \[\frac{B(r)}{B^{\prime}(r)}\cdot\left[\mathcal{Y}(r)^{p-1}\right]^{\prime}+\mathcal{Y}(r)^{p-1}=Q_{2}(\mathcal{V}(r),v_{0}(r))\cdot Q_{3}(\mathcal{W}(r),u_{0}^{\prime}(r)), \tag{3.8}\] where \(B(r)=r^{n-1}v_{0}^{\prime}(r)^{p-1}\) and \(Q_{2},Q_{3}\) are defined in our assumption (B3). We now proceed to analyze the system (3.7)-(3.8). Recalling (3.4), we note that \[\lim_{r\to 0^{+}}v_{0}(r)\mathcal{V}^{\prime}(r)=\lim_{r\to 0^{+}}\left[v^{\prime}(r)-\frac{v(r)v_{0}^{\prime}(r)}{v_{0}(r)}\right]=-v(0)\cdot\lim_{r\to 0^{+}}\frac{v_{0}^{\prime}(r)}{v_{0}(r)}<0.\] Let us denote by \((0,R_{1})\subset(0,R)\) the maximal subinterval on which \(\mathcal{V}^{\prime}(r)<0\). Along this subinterval, the right hand side of (3.7) is easily seen to be decreasing, as \(Q_{1}(s,t)\) is increasing in \(s\) by (B2) and non-increasing in \(t\) by (B3) for all \(s\geq 1\) and \(t>0\). Using the fact that \(s=\mathcal{V}(r)>1\) by Lemma 3.1, we thus have \[\frac{d}{dr}\,Q_{1}(\mathcal{V}(r),v_{0}(r))=\frac{\partial Q_{1}}{\partial s}\cdot\mathcal{V}^{\prime}(r)+\frac{\partial Q_{1}}{\partial t}\cdot v_{0}^{\prime}(r)<0\] for all \(0<r<R_{1}\) because \(\mathcal{V}^{\prime}(r)<0\) and \(v_{0}^{\prime}(r)>0\) along this interval. Returning to (3.7), we conclude that the left hand side is decreasing in \(r\), namely \[\frac{d}{dr}\left[\frac{A(r)}{A^{\prime}(r)}\cdot\left[\mathcal{W}(r)^{p-1-\alpha}\right]^{\prime}+\mathcal{W}(r)^{p-1-\alpha}\right]<0 \tag{3.9}\] for all \(0<r<R_{1}\). Noting that \(A(r)=r^{\delta}u_{0}^{\prime}(r)^{p-1-\alpha}\) is positive, we now multiply the last equation by \(A(r)\) and then rearrange terms to find that \[\frac{d}{dr}\left[A(r)\cdot\frac{A(r)}{A^{\prime}(r)}\cdot\left[\mathcal{W}(r)^{p-1-\alpha}\right]^{\prime}\right]<0\] for all \(0<r<R_{1}\). This makes the expression within the square brackets decreasing, so \[\frac{A(r)^{2}}{A^{\prime}(r)}\cdot\left[\mathcal{W}(r)^{p-1-\alpha}\right]^{\prime}<\lim_{r\to 0^{+}}\frac{A(r)^{2}}{A^{\prime}(r)}\cdot\left[\mathcal{W}(r)^{p-1-\alpha}\right]^{\prime} \tag{3.10}\] for all \(0<r<R_{1}\). To compute the limit on the right hand side, we first note that \[\lim_{r\to 0^{+}}\frac{A(r)^{2}}{A^{\prime}(r)}\cdot\left[\mathcal{W}(r)^{p-1-\alpha}\right]^{\prime}=\lim_{r\to 0^{+}}A(r)\cdot\left[Q_{1}(\mathcal{V}(r),v_{0}(r))-\mathcal{W}(r)^{p-1-\alpha}\right]=\lim_{r\to 0^{+}}\left[r^{\delta}u_{0}^{\prime}(r)^{p-1-\alpha}Q_{1}(\mathcal{V}(r),v_{0}(r))-r^{\delta}u^{\prime}(r)^{p-1-\alpha}\right]\] by (3.7) and the definition (3.5) of \(\mathcal{W}(r)\). Since \(p-1-\alpha>0\), we thus have \[\lim_{r\to 0^{+}}\frac{A(r)^{2}}{A^{\prime}(r)}\cdot\left[\mathcal{W}(r)^{p-1-\alpha}\right]^{\prime}=\lim_{r\to 0^{+}}r^{\delta}u_{0}^{\prime}(r)^{p-1-\alpha}\cdot\frac{g_{1}(v(r))}{h_{1}(v_{0}(r))}=0\] by the definition of \(Q_{1}\) and since our estimate (2.9) ensures that \[0\leq r^{\delta}u_{0}^{\prime}(r)^{p-1-\alpha}\cdot\frac{g_{1}(v(r))}{h_{1}(v_{0}(r))}\leq\frac{\delta\,r^{\delta+1}}{(n-1)(\delta+1)}\cdot f_{1}(r)g_{1}(v(r)).\] Knowing that the limit in the right hand side of (3.10) is zero, we deduce that the left hand side is negative. Since \(A^{\prime}(r)>0\) by (3.6), this implies \(\mathcal{W}^{\prime}(r)<0\) for all \(0<r<R_{1}\). Our next step is to show that \(\mathcal{U}^{\prime}(r)<0\) for all \(0<r<R_{1}\).
Recalling (3.5), we get \[\frac{u_{0}(r)}{u_{0}^{\prime}(r)}\cdot\mathcal{U}^{\prime}(r)+\mathcal{U}(r) =\frac{u^{\prime}(r)u_{0}(r)-u_{0}^{\prime}(r)u(r)}{u_{0}(r)u_{0}^{\prime}(r)} +\frac{u(r)}{u_{0}(r)}=\mathcal{W}(r). \tag{3.11}\] This function is decreasing on \((0,R_{1})\) by above, so it obviously satisfies \[\frac{d}{dr}\left[\frac{u_{0}(r)}{u_{0}^{\prime}(r)}\cdot\mathcal{U}^{\prime} (r)+\mathcal{U}(r)\right]<0\] for all \(0<r<R_{1}\). Multiplying by \(u_{0}(r)\) and rearranging terms now gives \[\frac{d}{dr}\left[u_{0}(r)\cdot\frac{u_{0}(r)}{u_{0}^{\prime}(r)}\cdot \mathcal{U}^{\prime}(r)\right]<0 \tag{3.12}\] for all \(0<r<R_{1}\). Thus, the expression in square brackets is decreasing. Since \[\lim_{r\to 0^{+}}\frac{u_{0}(r)^{2}}{u_{0}^{\prime}(r)}\cdot\mathcal{U}^{ \prime}(r)=\lim_{r\to 0^{+}}\left[\frac{u^{\prime}(r)u_{0}(r)}{u_{0}^{ \prime}(r)}-u(r)\right]=-u(0)<0\] by our assumption (3.4), it follows by the last two equations that \[\frac{u_{0}(r)^{2}}{u_{0}^{\prime}(r)}\cdot\mathcal{U}^{\prime}(r)<0\] for all \(0<r<R_{1}\). In particular, one also has \(\mathcal{U}^{\prime}(r)<0\) for all \(0<r<R_{1}\). We note that the above analysis only uses the first equation (3.7) of our system. Let us now turn to the second equation (3.8), according to which \[\frac{B(r)}{B^{\prime}(r)}\cdot\left[\mathcal{Y}(r)^{p-1}\right]^{\prime}+ \mathcal{Y}(r)^{p-1}=Q_{2}(\mathcal{V}(r),v_{0}(r))\cdot Q_{3}(\mathcal{W}(r),u_{0}^{\prime}(r)).\] Knowing that \(\mathcal{V}(r)\) and \(\mathcal{W}(r)\) are decreasing in \((0,R_{1})\), one may easily check that the right hand side is itself decreasing because of our assumptions (B2)-(B3) and since \(u_{0}^{\prime\prime}(r)>0\) by Lemma 2.3. Thus, the left hand side is decreasing as well, and we get \[\frac{d}{dr}\left[\frac{B(r)}{B^{\prime}(r)}\cdot\left[\mathcal{Y}(r)^{p-1} \right]^{\prime}+\mathcal{Y}(r)^{p-1}\right]<0\] for all \(0<r<R_{1}\), an exact analogue of (3.9). Proceeding as before, one may then multiply by the positive factor \(B(r)\) and integrate to deduce that \(\mathcal{Y}^{\prime}(r)<0\) for all \(0<r<R_{1}\). As our final step, we now relate \(\mathcal{Y}(r)\) with \(\mathcal{V}(r)\). According to (3.5), we have \[\frac{v_{0}(r)}{v_{0}^{\prime}(r)}\cdot\mathcal{V}^{\prime}(r)+\mathcal{V}(r) =\frac{v^{\prime}(r)v_{0}(r)-v_{0}^{\prime}(r)v(r)}{v_{0}^{\prime}(r)v_{0}(r)}+ \frac{v(r)}{v_{0}(r)}=\mathcal{Y}(r),\] and this provides an exact analogue of (3.11). Since the right hand side is decreasing, the same is true for the left hand side, so we obtain an exact analogue of (3.12), namely \[\frac{d}{dr}\left[v_{0}(r)\cdot\frac{v_{0}(r)}{v_{0}^{\prime}(r)}\cdot\mathcal{V }^{\prime}(r)\right]<0\] for all \(0<r<R_{1}\). On the other hand, \((0,R_{1})\) was assumed to be the maximal subinterval on which \(\mathcal{V}^{\prime}(r)\) is negative. Along this interval, the expression in square brackets is decreasing and negative, so it cannot possibly vanish at \(r=R_{1}\). This implies that \((0,R_{1})\) must be the whole interval \((0,R)\). In particular, the functions \(\mathcal{U}(r)\), \(\mathcal{V}(r)\), \(\mathcal{W}(r)\), \(\mathcal{Y}(r)\) are all decreasing throughout the whole interval \((0,R)\), and the proof is complete. ## 4. Asymptotic comparison for general quasilinear systems **Theorem 4.1** (Asymptotic Comparison).: _Assume (B1)-(B4), \(n\geq 2\) and \(\alpha\geq 0\). Suppose that \((u,v)\) and \((u_{0},v_{0})\) are non-constant radial solutions of (2.1) and (3.1), respectively, which are positive for all \(r>0\). 
If \(u(0),v(0)>0\) and \((u_{0},v_{0})\) satisfies (3.4), then_ \[\lim_{r\to\infty}\frac{u(r)}{u_{0}(r)}=\lim_{r\to\infty}\frac{v(r)}{v_{0}(r)}= \lim_{r\to\infty}\frac{u^{\prime}(r)}{u_{0}^{\prime}(r)}=\lim_{r\to\infty} \frac{v^{\prime}(r)}{v_{0}^{\prime}(r)}=1.\] **Proof.** Since (3.4) holds, one has \(u(0)>0=u_{0}(0)\) and \(v(0)>0=v_{0}(0)\), so Theorem 3.2 is applicable. Thus, the functions \(\mathcal{U}(r),\mathcal{V}(r),\mathcal{W}(r),\mathcal{Y}(r)\) defined by (3.5) are decreasing for all \(r>0\). Using Lemma 2.1 and the fact that \(\mathcal{U}(r)\) is decreasing, we get \[0>\mathcal{U}^{\prime}(r)\cdot\frac{u_{0}(r)}{u_{0}^{\prime}(r)}=\frac{u^{ \prime}(r)u_{0}(r)-u_{0}^{\prime}(r)u(r)}{u_{0}(r)u_{0}^{\prime}(r)}=\mathcal{ W}(r)-\mathcal{U}(r), \tag{4.1}\] and thus \(\mathcal{W}(r)<\mathcal{U}(r)\) for all \(r>0\). Similarly, one finds that \(\mathcal{Y}(r)<\mathcal{V}(r)\) for all \(r>0\). Let us now recall the first equation (3.7) of our system, according to which \[\frac{A(r)}{A^{\prime}(r)}\cdot\big{[}\mathcal{W}(r)^{p-1-\alpha}\big{]}^{ \prime}+\mathcal{W}(r)^{p-1-\alpha}=Q_{1}(\mathcal{V}(r),v_{0}(r))\] with \(A(r)=r^{\delta}u_{0}^{\prime}(r)^{p-1-\alpha}\) and \(Q_{1}\) as in our assumption (B3). Note that \(A^{\prime}(r)>0\) by (3.6) and that \(p-1-\alpha>0\) by Lemma 2.2. Since \(\mathcal{W}(r)\) is decreasing, it follows by the last equation and our assumptions (B3)-(B4) that \[\mathcal{W}(r)^{p-1-\alpha}>Q_{1}(\mathcal{V}(r),v_{0}(r))\geq\mathcal{V}(r)^ {k_{1}}\geq\mathcal{Y}(r)^{k_{1}}\] for all \(r>0\). Applying the same argument to the second equation (3.8), we get \[\mathcal{Y}(r)^{p-1} >Q_{2}(\mathcal{V}(r),v_{0}(r))\cdot Q_{3}(\mathcal{W}(r),u_{0}^{ \prime}(r))\] \[\geq\mathcal{V}(r)^{k_{2}}\cdot\mathcal{W}(r)^{k_{3}}\geq\mathcal{ Y}(r)^{k_{2}}\cdot\mathcal{W}(r)^{k_{3}}\] for all \(r>0\). In view of the last two estimates, one must then have \[\mathcal{Y}(r)^{(p-1-k_{2})(p-1-\alpha)}>\mathcal{W}(r)^{k_{3}(p-1-\alpha)} \geq\mathcal{Y}(r)^{k_{1}k_{3}}\] for all \(r>0\). Since \(\mathcal{Y}(r)>1\) by Lemma 3.1, however, this is only possible when \[(p-1-k_{2})(p-1-\alpha)>k_{1}k_{3}. \tag{4.2}\] Next, we analyze the behavior of solutions as \(r\to\infty\). The functions \(\mathcal{U}(r)\), \(\mathcal{V}(r)\), \(\mathcal{W}(r)\) and \(\mathcal{Y}(r)\) are all decreasing and positive, so they all attain a limit as \(r\to\infty\). Let us denote their limits by \(\mathcal{U}_{\infty}\), \(\mathcal{V}_{\infty}\), \(\mathcal{W}_{\infty}\) and \(\mathcal{Y}_{\infty}\), respectively. We must then have \[\mathcal{U}_{\infty}=\lim_{r\to\infty}\frac{u(r)}{u_{0}(r)}=\lim_{r\to\infty} \frac{u^{\prime}(r)}{u_{0}^{\prime}(r)}=\mathcal{W}_{\infty} \tag{4.3}\] by L'Hopital's rule, and similarly, \(\mathcal{V}_{\infty}=\mathcal{Y}_{\infty}\). 
According to the system (3.6), one has \[\begin{cases}(p-1-\alpha)\cdot u^{\prime}(r)^{p-2-\alpha}u^{\prime\prime}(r)= \frac{\delta}{n-1}\,f_{1}(r)g_{1}(v(r))-\frac{\delta}{r}\,u^{\prime}(r)^{p-1- \alpha},\\ (p-1-\alpha)\cdot u^{\prime}_{0}(r)^{p-2-\alpha}u^{\prime\prime}_{0}(r)= \frac{\delta}{n-1}\,f_{1}(r)h_{1}(v_{0}(r))-\frac{\delta}{r}\,u^{\prime}_{0}( r)^{p-1-\alpha}.\end{cases}\] Dividing these two equations and recalling our definition (3.5), one can thus write \[\mathcal{W}(r)^{p-2-\alpha}\,\frac{u^{\prime\prime}(r)}{u^{\prime\prime}_{0}( r)}=\frac{\frac{\delta}{n-1}\,f_{1}(r)g_{1}(v(r))\cdot u^{\prime}_{0}(r)^{ \alpha+1-p}-\frac{\delta}{r}\mathcal{W}(r)^{p-1-\alpha}}{\frac{\delta}{n-1}\, f_{1}(r)h_{1}(v_{0}(r))\cdot u^{\prime}_{0}(r)^{\alpha+1-p}-\frac{\delta}{r}}.\] Taking the limit as \(r\to\infty\) now leads to the identity \[\mathcal{W}_{\infty}^{p-2-\alpha}\,\lim_{r\to\infty}\frac{u^{\prime\prime}(r) }{u^{\prime\prime}_{0}(r)}=\lim_{r\to\infty}\frac{g_{1}(v(r))}{h_{1}(v_{0}(r) )}=\lim_{r\to\infty}Q_{1}(\mathcal{V}(r),v_{0}(r)) \tag{4.4}\] with \(Q_{1}\) as in our assumption (B3). The limit on the right hand side exists because \[\lim_{r\to\infty}Q_{1}(\mathcal{V}(r),v_{0}(r))=\lim_{r\to\infty}Q_{1}( \mathcal{V}_{\infty},v_{0}(r))=\mathcal{V}_{\infty}^{k_{1}}\] by our assumption (B4). Thus, the limit on the left hand side of (4.4) also exists. Using this fact along with L'Hopital's rule and (4.3), we see that (4.4) reduces to \[\mathcal{U}_{\infty}^{p-1-\alpha}=\mathcal{V}_{\infty}^{k_{1}}. \tag{4.5}\] This condition was derived by combining the equations (3.6) satisfied by \(u(r)\) and \(u_{0}(r)\). The exact same argument applies to \(v(r)\) and \(v_{0}(r)\), so one similarly has \[\mathcal{Y}_{\infty}^{p-2}\,\lim_{r\to\infty}\frac{v^{\prime\prime}(r)}{v^{ \prime\prime}_{0}(r)}=\lim_{r\to\infty}Q_{2}(\mathcal{V}(r),v_{0}(r))\cdot Q_{ 3}(\mathcal{W}(r),u^{\prime}_{0}(r))\] in analogy with (4.4). Since \(\mathcal{Y}_{\infty}=\mathcal{V}_{\infty}\) by above, our previous approach then gives \[\mathcal{V}_{\infty}^{p-1}=\mathcal{V}_{\infty}^{k_{2}}\cdot\mathcal{W}_{ \infty}^{k_{3}}=\mathcal{V}_{\infty}^{k_{2}}\cdot\mathcal{U}_{\infty}^{k_{3}} \tag{4.6}\] in analogy with (4.5). Once we combine (4.6) with (4.5), we find that \[\mathcal{V}_{\infty}^{(p-1-k_{2})(p-1-\alpha)}=\mathcal{U}_{\infty}^{k_{3}(p-1 -\alpha)}=\mathcal{V}_{\infty}^{k_{1}k_{3}},\] where \(\mathcal{V}_{\infty}\geq 1\) by Lemma 3.1. In view of (4.2), the only possibility is then \(\mathcal{V}_{\infty}=1\), which also implies that \(\mathcal{U}_{\infty}=1\). This is precisely the assertion of the theorem. ## 5. Proof of Theorem 1.1 First of all, we seek solutions of the associated system (1.7) that have the form \[u(r)=C_{\lambda}r^{\lambda},\qquad v(r)=C_{\mu}r^{\mu}. \tag{5.1}\] In view of the definition of the \(p\)-Laplace operator, it is easy to check that \[\Delta_{p}(C_{\lambda}r^{\lambda})=\left(\lambda C_{\lambda}\right)^{p-1} \left(\lambda(p-1)+n-p\right)\cdot r^{(p-1)\lambda-p}. 
\tag{5.2}\] Inserting this identity in (1.7) and comparing powers of \(r\), one obtains the system \[(p-1-\alpha)\lambda-k_{1}\mu =p-\alpha+m_{1},\] \[(p-1-k_{2})\mu-k_{3}\lambda =p-k_{3}+m_{2}.\] Since (1.3) holds, this system has a unique solution \((\lambda,\mu)\) which is given by \[\lambda =\frac{(p-\alpha+m_{1})(p-1-k_{2})+k_{1}(p-k_{3}+m_{2})}{(p-1-\alpha)(p-1-k_{2})-k_{1}k_{3}}>1,\] \[\mu =\frac{(p-1-\alpha)(p+m_{2})+k_{3}(1+m_{1})}{(p-1-\alpha)(p-1-k_{2})-k_{1}k_{3}}>1.\] Using (5.2) once again, we find that (5.1) is a solution of (1.7), if and only if \[(\lambda C_{\lambda})^{p-1}\cdot(\lambda(p-1)+n-p) =c_{1}C_{\mu}^{k_{1}}\cdot(\lambda C_{\lambda})^{\alpha},\] \[(\mu C_{\mu})^{p-1}\cdot(\mu(p-1)+n-p) =c_{2}C_{\mu}^{k_{2}}\cdot(\lambda C_{\lambda})^{k_{3}}.\] Let us set \(B_{\lambda}=\lambda(p-1)+n-p\) and \(B_{\mu}=\mu(p-1)+n-p\) for simplicity. Then we get the equivalent system \[(\lambda C_{\lambda})^{p-1-\alpha}=\frac{c_{1}}{\mu^{k_{1}}B_{\lambda}}\,(\mu C_{\mu})^{k_{1}},\qquad(\mu C_{\mu})^{p-1-k_{2}}=\frac{c_{2}}{\mu^{k_{2}}B_{\mu}}\,(\lambda C_{\lambda})^{k_{3}}.\] Since (1.3) holds, this system has a unique solution \(C_{\lambda},C_{\mu}\) which is given by \[(\lambda C_{\lambda})^{(p-1-\alpha)(p-1-k_{2})-k_{1}k_{3}} =\left(\frac{c_{1}}{\mu^{k_{1}}B_{\lambda}}\right)^{p-1-k_{2}}\left(\frac{c_{2}}{\mu^{k_{2}}B_{\mu}}\right)^{k_{1}},\] \[(\mu C_{\mu})^{(p-1-\alpha)(p-1-k_{2})-k_{1}k_{3}} =\left(\frac{c_{2}}{\mu^{k_{2}}B_{\mu}}\right)^{p-1-\alpha}\left(\frac{c_{1}}{\mu^{k_{1}}B_{\lambda}}\right)^{k_{3}}.\] The exact values of \(\lambda,\mu,C_{\lambda},C_{\mu}\) do not play any role in our approach, as we shall simply resort to our general Theorems 3.2 and 4.1 which apply for arbitrary solutions \((u_{0},v_{0})\) that vanish at the origin. In this case, our solution \((u_{0},v_{0})\) is given by (5.1), so \[\lim_{r\to 0^{+}}\frac{u_{0}(r)}{u_{0}^{\prime}(r)}=\lim_{r\to 0^{+}}\frac{r}{\lambda}=0,\qquad\lim_{r\to 0^{+}}\frac{v_{0}(r)}{v_{0}^{\prime}(r)}=\lim_{r\to 0^{+}}\frac{r}{\mu}=0\] and condition (3.4) holds. Let us then resort to Theorem 3.2 for the special case \[f_{i}(r)=c_{i}r^{m_{i}},\qquad h_{j}(s)=s^{k_{j}},\] where \(i=1,2\) and \(j=1,2,3\). Our assumptions (B1)-(B2) hold trivially, while \[Q_{j}(s,t)=\frac{g_{j}(st)}{h_{j}(t)}=\frac{g_{j}(st)}{(st)^{k_{j}}}\cdot s^{k_{j}}\] is non-increasing in \(t\) because of (A3). It also follows by condition (1.6) that \[\lim_{t\to\infty}Q_{j}(s,t)=\lim_{t\to\infty}\frac{g_{j}(st)}{(st)^{k_{j}}}\cdot s^{k_{j}}=s^{k_{j}}\] for each \(s>0\), so our assumptions (B1)-(B5) are all valid. Thus, part (a) of Theorem 1.1 follows by Theorem 3.2, while part (b) of Theorem 1.1 follows by Theorem 4.1.

## Acknowledgements

The first author acknowledges the financial support of The Irish Research Council Postgraduate Scholarship under grant number GOIPG/2022/469.
2308.10113
Modeling Random Networks with Heterogeneous Reciprocity
Reciprocity, or the tendency of individuals to mirror behavior, is a key measure that describes information exchange in a social network. Users in social networks tend to engage in different levels of reciprocal behavior. Differences in such behavior may indicate the existence of communities that reciprocate links at varying rates. In this paper, we develop methodology to model the diverse reciprocal behavior in growing social networks. In particular, we present a preferential attachment model with heterogeneous reciprocity that imitates the attraction users have for popular users, plus the heterogeneous nature by which they reciprocate links. We compare Bayesian and frequentist model fitting techniques for large networks, as well as computationally efficient variational alternatives. Cases where the number of communities is known and unknown are both considered. We apply the presented methods to the analysis of a Facebook wallpost network where users have non-uniform reciprocal behavior patterns. The fitted model captures the heavy-tailed nature of the empirical degree distributions in the Facebook data and identifies multiple groups of users that differ in their tendency to reply to and receive responses to wallposts.
Daniel Cirkovic, Tiandong Wang
2023-08-19T21:21:25Z
http://arxiv.org/abs/2308.10113v1
# Modeling Random Networks with Heterogeneous Reciprocity

###### Abstract

Reciprocity, or the tendency of individuals to mirror behavior, is a key measure that describes information exchange in a social network. Users in social networks tend to engage in different levels of reciprocal behavior. Differences in such behavior may indicate the existence of communities that reciprocate links at varying rates. In this paper, we develop methodology to model the diverse reciprocal behavior in growing social networks. In particular, we present a preferential attachment model with heterogeneous reciprocity that imitates the attraction users have for popular users, plus the heterogeneous nature by which they reciprocate links. We compare Bayesian and frequentist model fitting techniques for large networks, as well as computationally efficient variational alternatives. Cases where the number of communities is known and unknown are both considered. We apply the presented methods to the analysis of a Facebook wallpost network where users have non-uniform reciprocal behavior patterns. The fitted model captures the heavy-tailed nature of the empirical degree distributions in the Facebook data and identifies multiple groups of users that differ in their tendency to reply to and receive responses to wallposts.

Variational inference, Community detection, Preferential attachment, Bayesian methods

## 1 Introduction

A frequent goal in the statistical inference of social networks is to develop models that adequately capture and quantify common types of user interaction. One such feature is the propensity of users to generate links with other users that already have attracted a large number of links (Newman, 2001; Jeong et al., 2003). In order to model this "rich get richer" self-organizing feature of nodes in a growing network, Barabasi and Albert (1999) developed the preferential attachment (PA) model. The classical preferential attachment model posits that as users enter a growing network, they connect with other users with probability proportional to their degree. This simple mechanism produces power-law degree distributions, yet another feature of many real-world networks (Mislove et al., 2007). Since its inception, many generalizations of the preferential attachment model have been developed to capture more features of growing networks (Bhamidi et al., 2015; Hajek and Sankagiri, 2019; Wang and Zhang, 2022; Wang and Resnick, 2020).

Another common feature of online social networks is a significant degree of reciprocity (see Newman et al., 2002; Zlatic and Stefancic, 2011, for example). Reciprocity describes the tendency of users to reply to links and is typically measured by the proportion of reciprocal links in a network (Jiang et al., 2015). A recent study by Wang and Resnick (2022a) found that the traditional directed preferential attachment model often produces a negligible proportion of reciprocal links. Motivated by this finding, Wang and Resnick (2022c) and Cirkovic et al. (2022a) developed a preferential attachment model with reciprocity that is a more realistic choice for fitting to social networks. The model assumes that upon the generation of a link between nodes through the typical preferential attachment scheme, the users reciprocate the link with a probability \(\rho\in(0,1)\) that is common to all users in the network. The model was used to analyze a Facebook wallpost network. Although an improvement, the model of Cirkovic et al.
(2022a) fails to account for the heterogeneity of reciprocal behavior in a social network. In reality, it is naive to assume all users in a large network engage in similar levels of reciprocity. Such an assumption has caused Cirkovic et al. (2022a) to remove a subset of nodes which apparently engaged in dissimilar reciprocal behavior from their analysis of the Facebook wallpost network. Further, when a link is made between two nodes \(u\) and \(v\), it is likely that the decision of whether or not to reciprocate the link depends on the direction of the original link, \((u,v)\) or \((v,u)\). For example, a celebrity in a social network may be less likely to reply to a message sent by a fan, whereas a fan is very likely to respond to a message sent by the celebrity. Recently, Wang and Resnick (2022b) relax the assumption of having only one reciprocity parameter \(\rho\) to the case where reciprocity probabilities are different for users belonging to different communication classes. Theoretical results in Wang and Resnick (2022c) are obtained by assuming no new edge is added between existing nodes. In this paper, we consider a further generalization of the model presented in Wang and Resnick (2022b) to allow for more realistic assumptions, i.e., heterogeneous, asymmetric reciprocity as well as edges between existing nodes. We assume that each user in the network is equipped with a communication class that governs its tendency to reciprocate edges. In the network generation process, initial edges between nodes are generated via preferential attachment, while the decision to reciprocate the edge is decided by a stochastic blockmodel-like scheme. We describe three methods to fit such a model to observed networks, both when the number of communication classes is known and unknown. Specifically, we propose a fully Bayesian approach, along with variationally Bayesian and frequentist approaches. The approaches and their performance on synthetic networks are then compared through simulation studies. Finally, we reconsider the Facebook wallpost network as in Cirkovic et al. (2022a), and use the heterogeneous reciprocal preferential attachment model to glean new insights into communication patterns on Facebook.

## 2 The PA Model with Heterogeneous Reciprocity

### 2.1 The model

In this section, we present the preferential attachment model with heterogeneous reciprocity. Let \(G(n)\) be the graph after \(n\) steps and \(V(n)\) be the set of nodes in \(G(n)\). Attach to each node \(v\) a communication type \(W_{v}\), where \(\{W_{v},v\geq 1\}\) are iid random variables with \[\mathbb{P}(W_{v}=r)=\pi_{r},\qquad\text{where}\quad\sum_{r=1}^{K}\pi_{r}=1. \tag{1}\] Define the vector \(\boldsymbol{\pi}\equiv(\pi_{r})_{r}\). Let \(W(n):=\{W_{v}:v\in V(n)\}\) denote the set of group types for all nodes in \(G(n)\). Throughout we assume that the communication group of node \(v\) is generated upon creation and remains unchanged throughout the graph evolution. Also, denote the set of directed edges in \(G(n)\) by \[E(n)\subseteq\{(u,v):u,v\in V(n)\}.\] Throughout this paper, we always assume \(G(n)=(V(n),E(n),W(n))\), for \(n\geq 0\). We initialize the model with seed graph \(G(0)\). \(G(0)\) consists of \(|V(0)|\) nodes, each of which is also endowed with its own communication class randomly according to (1). The edges \(E(0)\) will have no impact on inference other than setting the initial degree distribution.
For each new edge \((u,v)\) with \(W_{u}=r,W_{v}=m\), the reciprocity mechanism adds its reciprocal counterpart \((v,u)\) instantaneously with probability \(\rho_{m,r}\in[0,1]\), for \(m,r\in\{1,2,\ldots,K\}\). Here \(\rho_{m,r}\) measures the probability of adding a reciprocal edge from a node in group \(m\) to a node in group \(r\). Note that the matrix \(\boldsymbol{\rho}:=(\rho_{m,r})_{m,r}\) is not necessarily a stochastic matrix, but can be an arbitrary matrix in \(M_{K\times K}([0,1])\), the set of all \(K\times K\) matrices with entries belonging to \([0,1]\). We now describe the evolution of the network \(G(n+1)\) from \(G(n)\). Let \(\left(D_{v}^{\text{in}}(n),D_{v}^{\text{out}}(n)\right)\) be the in- and out-degrees of node \(v\in V(n)\), and we use the convention that \(D_{v}^{\text{in}}(n)=D_{v}^{\text{out}}(n)=0\) if \(v\notin V(n)\).

1. With probability \(\alpha\in[0,1]\), add a new node \(|V(n)|+1\) with a directed edge \((|V(n)|+1,v)\), where \(v\in V(n)\) is chosen with probability \[\frac{D_{v}^{\text{in}}(n)+\delta_{\text{in}}}{\sum_{w\in V(n)}(D_{w}^{\text{in}}(n)+\delta_{\text{in}})}=\frac{D_{v}^{\text{in}}(n)+\delta_{\text{in}}}{|E(n)|+\delta_{\text{in}}|V(n)|}, \tag{2}\] where \(\delta_{\text{in}}>0\) is an offset parameter, and update the node set \(V(n+1)=V(n)\cup\{|V(n)|+1\}\) and \(W(n+1)=W(n)\cup\{W_{|V(n)|+1}\}\). The new node \(|V(n)|+1\) belongs to group \(r\) with probability \(\pi_{r}\). If node \(v\) belongs to group \(m\), then a reciprocal edge \((v,|V(n)|+1)\) is added with probability \(\rho_{m,r}\). Update the edge set as \(E(n+1)=E(n)\cup\{(|V(n)|+1,v),(v,|V(n)|+1)\}\). If the reciprocal edge is not created, set \(E(n+1)=E(n)\cup\{(|V(n)|+1,v)\}\).

2. With probability \(\beta\in[0,1-\alpha]\), generate a directed edge \((u,v)\) between two existing nodes \(u,v\in V(n)\) with probability \[\frac{D_{u}^{\text{out}}(n)+\delta_{\text{out}}}{\sum_{w\in V(n)}(D_{w}^{\text{out}}(n)+\delta_{\text{out}})}\cdot\frac{D_{v}^{\text{in}}(n)+\delta_{\text{in}}}{\sum_{w\in V(n)}(D_{w}^{\text{in}}(n)+\delta_{\text{in}})}=\frac{D_{u}^{\text{out}}(n)+\delta_{\text{out}}}{|E(n)|+\delta_{\text{out}}|V(n)|}\cdot\frac{D_{v}^{\text{in}}(n)+\delta_{\text{in}}}{|E(n)|+\delta_{\text{in}}|V(n)|}, \tag{3}\] where \(\delta_{\text{out}}>0\) is also an offset parameter. If node \(u\) belongs to group \(r\) and node \(v\) belongs to group \(m\), then a reciprocal edge \((v,u)\) is added with probability \(\rho_{m,r}\). Update the edge set as \(E(n+1)=E(n)\cup\{(u,v),(v,u)\}\). If the reciprocal edge is not created, set \(E(n+1)=E(n)\cup\{(u,v)\}\). Finally, update \(V(n+1)=V(n)\) and \(W(n+1)=W(n)\).

3. With probability \(\gamma\equiv 1-\alpha-\beta\), add a new node \(|V(n)|+1\) with a directed edge \((v,|V(n)|+1)\), where \(v\in V(n)\) is chosen with probability \[\frac{D_{v}^{\text{out}}(n)+\delta_{\text{out}}}{\sum_{w\in V(n)}(D_{w}^{\text{out}}(n)+\delta_{\text{out}})}=\frac{D_{v}^{\text{out}}(n)+\delta_{\text{out}}}{|E(n)|+\delta_{\text{out}}|V(n)|}, \tag{4}\] and update the node set \(V(n+1)=V(n)\cup\{|V(n)|+1\}\), \(W(n+1)=W(n)\cup\{W_{|V(n)|+1}\}\). The new node \(|V(n)|+1\) belongs to group \(r\) with probability \(\pi_{r}\). If node \(v\) belongs to group \(m\), then a reciprocal edge \((|V(n)|+1,v)\) is added with probability \(\rho_{r,m}\). Update the edge set as \(E(n+1)=E(n)\cup\{(v,|V(n)|+1),(|V(n)|+1,v)\}\). If the reciprocal edge is not created, set \(E(n+1)=E(n)\cup\{(v,|V(n)|+1)\}\).
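To make the three transition scenarios concrete, the following is a minimal simulation sketch (the code and all names in it are ours, not part of the model or of any released package). It samples nodes with the probabilities in (2)-(4) via the standard equivalence: with probability \(|E(n)|/(|E(n)|+\delta|V(n)|)\) take the appropriate endpoint of a uniformly chosen edge, and otherwise take a uniformly chosen node.

```python
import random

def sample_node(edges, n_nodes, delta, endpoint):
    """Draw a node w.p. proportional to degree + delta, as in (2)-(4).

    With probability |E|/(|E| + delta*|V|) return the given endpoint of a
    uniform edge (endpoint 0 = source/out-degree, 1 = target/in-degree);
    otherwise return a uniformly chosen node.
    """
    total = len(edges) + delta * n_nodes
    if random.random() < len(edges) / total:
        return random.choice(edges)[endpoint]
    return random.randrange(n_nodes)

def grow(n_steps, alpha, beta, pi, rho, delta_in, delta_out, edges, groups):
    """Evolve a seed graph G(0) = (edges, groups) for n_steps steps."""
    K = len(pi)
    for _ in range(n_steps):
        n = len(groups)
        j = random.choices([1, 2, 3], weights=[alpha, beta, 1 - alpha - beta])[0]
        if j == 1:    # new source node, existing target chosen by in-degree
            tgt = sample_node(edges, n, delta_in, endpoint=1)
            src = n
            groups.append(random.choices(range(K), weights=pi)[0])
        elif j == 2:  # both endpoints already exist
            src = sample_node(edges, n, delta_out, endpoint=0)
            tgt = sample_node(edges, n, delta_in, endpoint=1)
        else:         # existing source chosen by out-degree, new target node
            src = sample_node(edges, n, delta_out, endpoint=0)
            tgt = n
            groups.append(random.choices(range(K), weights=pi)[0])
        edges.append((src, tgt))
        # reciprocation: the target (group m) replies to the source (group r)
        # instantaneously with probability rho[m][r]
        if random.random() < rho[groups[tgt]][groups[src]]:
            edges.append((tgt, src))
    return edges, groups
```

For instance, `grow(10**4, 0.3, 0.4, [0.5, 0.5], [[0.9, 0.2], [0.1, 0.6]], 1.0, 1.0, [(0, 1)], [0, 1])` simulates \(10^{4}\) steps from a two-node seed graph with \(K=2\) communication classes.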
Let \(\{J_{k}\}\) be iid Categorical random variables that indicate under which scenario the transition from \(G(k)\) to \(G(k+1)\) has occurred. That is, \(\mathbb{P}(J_{k}=1)=\alpha\), \(\mathbb{P}(J_{k}=2)=\beta\) and \(\mathbb{P}(J_{k}=3)=1-\alpha-\beta\). At each step \(k\), we denote the outcome of the reciprocal event via \(R_{k}\) where \(R_{k}=1\) if a reciprocal edge is added and \(R_{k}=0\) otherwise.

### 2.2 Likelihood inference

Suppose we observe the evolution of the graph sequence \(\{G(k)\}_{k=0}^{n}\) so that we have the edges \(e_{k}=E(k)\setminus E(k-1)\) added at each step according to the description in Section 2.1. Here, \[e_{k}=\begin{cases}\{(s_{k},t_{k}),(t_{k},s_{k})\}&\text{if }R_{k}=1,\\ \{(s_{k},t_{k})\}&\text{if }R_{k}=0.\end{cases} \tag{5}\] Let \(\boldsymbol{\theta}=(\alpha,\beta,\delta_{\text{in}},\delta_{\text{out}})\). With these ingredients, the likelihood associated with the graph sequence \(\{G(k)\}_{k=0}^{n}\) is given by \[\begin{split}p\left((e_{k})_{k=1}^{n},W(n)\mid\boldsymbol{\theta},\boldsymbol{\pi},\boldsymbol{\rho}\right)&=\alpha^{\sum_{k=1}^{n}1_{\{J_{k}=1\}}}\beta^{\sum_{k=1}^{n}1_{\{J_{k}=2\}}}(1-\alpha-\beta)^{\sum_{k=1}^{n}1_{\{J_{k}=3\}}}\\ &\qquad\times\prod_{r=1}^{K}\pi_{r}^{\sum_{k=1}^{n}1_{\{J_{k}=1\}}1_{\{W_{s_{k}}=r\}}+\sum_{k=1}^{n}1_{\{J_{k}=3\}}1_{\{W_{t_{k}}=r\}}}\\ &\qquad\times\prod_{r=1}^{K}\prod_{m=1}^{K}\rho_{m,r}^{\sum_{k=1}^{n}1_{\{W_{s_{k}}=r\}}1_{\{W_{t_{k}}=m\}}1_{\{R_{k}=1\}}}(1-\rho_{m,r})^{\sum_{k=1}^{n}1_{\{W_{s_{k}}=r\}}1_{\{W_{t_{k}}=m\}}1_{\{R_{k}=0\}}}\\ &\equiv p((e_{k})_{k=1}^{n}\mid\boldsymbol{\theta})\times p((e_{k})_{k=1}^{n},W(n)\mid\boldsymbol{\pi},\boldsymbol{\rho}).\end{split}\] The function \(p(\cdot\mid\boldsymbol{\theta})\) collects the likelihood terms dependent on \(\boldsymbol{\theta}\) and likewise \(p(\cdot\mid\boldsymbol{\pi},\boldsymbol{\rho})\) collects the terms dependent on \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\). Such factorization implies that the estimation of the parameters \(\boldsymbol{\theta}\) and \(\boldsymbol{\pi},\boldsymbol{\rho}\) can be conducted independently. The frequentist estimation of \(\boldsymbol{\theta}\) in homogeneous reciprocal PA models has already been considered in Cirkovic et al. (2022a). These estimators are unchanged in the heterogeneous case. Naturally, the maximum likelihood estimators (MLE) for \(\alpha\) and \(\beta\) are given by \(\hat{\alpha}=n^{-1}\sum_{k=1}^{n}1_{\{J_{k}=1\}}\) and \(\hat{\beta}=n^{-1}\sum_{k=1}^{n}1_{\{J_{k}=2\}}\). The MLE for \(\delta_{\text{in}}\) satisfies \[\sum_{k=1}^{n}1_{\{J_{k}\in\{1,2\}\}}\frac{1}{D_{t_{k}}^{\text{in}}(k-1)+\hat{\delta}_{\text{in}}}-\sum_{k=1}^{n}1_{\{J_{k}\in\{1,2\}\}}\frac{|V(k-1)|}{|E(k-1)|+\hat{\delta}_{\text{in}}|V(k-1)|}=0, \tag{6}\] where (6) is obtained by setting \(\frac{\partial}{\partial\delta_{\text{in}}}\log p((e_{k})_{k=1}^{n}\mid\boldsymbol{\theta})=0\). The MLE for \(\delta_{\text{out}}\) is obtained similarly. The estimators \(\hat{\alpha}\) and \(\hat{\beta}\) are strongly consistent for \(\alpha\) and \(\beta\), while consistency for \(\hat{\delta}_{\text{in}}\) and \(\hat{\delta}_{\text{out}}\) has not yet been verified since the reciprocal component of the model interferes with traditional techniques used to analyze consistency in non-reciprocal preferential attachment models as in Wan et al. (2017). Estimation of \(\boldsymbol{\rho}\) and \(\boldsymbol{\pi}\) is considerably more involved, and will be the main focus of this paper.
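As an illustration, the score equation (6) can be solved numerically by one-dimensional root finding. Below is a minimal sketch (our code, with hypothetical input names) using `scipy.optimize.brentq`; it assumes one has recorded, for every step \(k\) with \(J_{k}\in\{1,2\}\), the quantities \(D_{t_{k}}^{\text{in}}(k-1)\), \(|V(k-1)|\) and \(|E(k-1)|\).

```python
import numpy as np
from scipy.optimize import brentq

def delta_in_mle(in_deg, n_nodes, n_edges):
    """Solve the score equation (6) for delta_in.

    in_deg[i]  = D_{t_k}^in(k-1) at the i-th step with J_k in {1, 2};
    n_nodes[i] = |V(k-1)| and n_edges[i] = |E(k-1)| at those same steps.
    """
    in_deg = np.asarray(in_deg, dtype=float)
    n_nodes = np.asarray(n_nodes, dtype=float)
    n_edges = np.asarray(n_edges, dtype=float)

    def score(delta):
        return (np.sum(1.0 / (in_deg + delta))
                - np.sum(n_nodes / (n_edges + delta * n_nodes)))

    # the bracket below is heuristic and may need widening for a given network
    return brentq(score, 1e-8, 1e6)
```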
The reciprocal component of the preferential attachment model with heterogeneous reciprocity is reminiscent of a stochastic block model. Nodes first attach via the preferential attachment rules in (2), (3) and (4), then a stochastic-block-model type mechanism dictates the reciprocal behavior. A large portion of the literature on stochastic block modeling is concerned with community detection (Bickel and Chen, 2009; Holland et al., 1983; Karrer and Newman, 2011; Zhao et al., 2011). Here we are primarily concerned with the estimation of \(\boldsymbol{\rho}\) and \(\boldsymbol{\pi}\), and consider the recovery of \(W(n)\) as a secondary goal. The optimal recovery of \(\boldsymbol{\rho}\) and \(\boldsymbol{\pi}\) hinges on the correct specification of \(K\), the number of reciprocal clusters. We will thus examine cases when \(K\) is known a priori, as well as cases where it must be inferred from the data. We also note that a minor nuisance of modeling reciprocal PA models is the observation of the random variable \(R_{k}\). Upon observing the edges \(\{(s_{k},t_{k}),(t_{k},s_{k})\}\), it is not possible to identify whether the second edge was generated under \(R_{k}=1\) or \(J_{k}=2\). Since, upon observation, the probability that the edge was generated under \(J_{k}=2\) is extremely small for large networks, we assume all such reciprocated edges are generated under \(R_{k}=1\). In real-world networks, however, time will often pass between message replies. For such networks, we will thus employ window estimators from Cirkovic et al. (2022a). We defer further discussion of window estimators to Section 6.

We will continue to consider the estimation of \(\boldsymbol{\rho}\) and \(\boldsymbol{\pi}\) based on \(p((e_{k})_{k=1}^{n},W(n)\mid\boldsymbol{\pi},\boldsymbol{\rho})\). Since \(W(n)\) is unobservable, a natural probabilistic approach would marginalize over the unobservable communication types, and form the marginal likelihood \(p((e_{k})_{k=1}^{n}\mid\boldsymbol{\pi},\boldsymbol{\rho})\). This, however, involves a sum over all latent configurations of \(W(n)\) which is analytically intractable, as well as computationally infeasible for large networks. Such difficulties encourage attempts to learn \(W(n)\) from the conditional distribution of \(W(n)\) given \((e_{k})_{k=1}^{n}\) (à la the EM algorithm; Dempster et al., 1977) and to jointly estimate \(W(n),\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\). Often, these attempts are computationally infeasible due to the lack of factorization in the conditional distribution. In the following section, we will consider both Bayesian and frequentist estimation methods for \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\) where \(K\) is known. We will first present an "ideal" fully Bayesian approach, and then move on to variationally Bayesian and frequentist approximations to that ideal. Afterwards, we will discuss how to perform model selection when \(K\) is unknown for each of these methods.

## 3 Inference for a known number of communication types

### 3.1 Bayesian inference

For Bayesian inference of the heterogeneous reciprocal PA model we follow Nowicki and Snijders (2001) and employ independent and conditionally conjugate priors \[\begin{split}&\rho_{m,r}\stackrel{\text{i.i.d.}}{\sim}\text{Beta}(a,b),\ m,r=1,\ldots,K,\\ &\boldsymbol{\pi}\sim\text{Dirichlet}(\eta,\ldots,\eta).\end{split} \tag{7}\] The prior specification (7) leads to a simple Gibbs sampler that draws approximate samples from the posterior \(p\left(\boldsymbol{\rho},\boldsymbol{\pi},W(n)\mid(e_{k})_{k=1}^{n}\right)\).
We present the Gibbs sampler as Algorithm 1. Here, \(\boldsymbol{\rho}\) and \(\boldsymbol{\pi}\) are initialized from prior draws and \(W(n)\) is initialized by drawing from \(p\left(W_{v}\mid\boldsymbol{\pi}\right)\) for \(v=1,\ldots,|V(n)|\). Although the sampler is standard, many samples are required to sufficiently explore the posterior distribution. For large networks, this can be computationally onerous, and hence we appeal to variational alternatives.

```
Input: Graph \(G(n)\), \(\#\) communication types \(K\), prior parameters \(a\), \(b\), \(\eta\), \(\#\) MCMC iterations \(M\)
Output: Approximate samples from the posterior \(p\left(\boldsymbol{\rho},\boldsymbol{\pi},W(n)\mid(e_{k})_{k=1}^{n}\right)\)
Initialize: Draw \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\) from (7), draw \(W_{v}\sim\text{Multinomial}(\boldsymbol{\pi})\) for \(v\in V(n)\)
for \(i=1\) to \(M\) do
  1. Sample \(W(n)\) from its conditional posterior
  for all \(v\in V(n)\) do
    Sample \(W_{v}\) according to
      \(P\left(W_{v}=\ell\mid\boldsymbol{\pi},\boldsymbol{\rho},(W_{u})_{u\neq v},(e_{k})_{k=1}^{n}\right)\)
        \(\propto\pi_{\ell}\prod_{m=1}^{K}\rho_{m,\ell}^{\sum_{k:s_{k}=v}1_{\{W_{t_{k}}=m\}}1_{\{R_{k}=1\}}}(1-\rho_{m,\ell})^{\sum_{k:s_{k}=v}1_{\{W_{t_{k}}=m\}}1_{\{R_{k}=0\}}}\)
        \(\times\prod_{r=1}^{K}\rho_{\ell,r}^{\sum_{k:t_{k}=v}1_{\{W_{s_{k}}=r\}}1_{\{R_{k}=1\}}}(1-\rho_{\ell,r})^{\sum_{k:t_{k}=v}1_{\{W_{s_{k}}=r\}}1_{\{R_{k}=0\}}}\)
      for \(\ell=1,\ldots,K\)
  end for
  2. Sample \(\boldsymbol{\rho}\) from its conditional posterior
  for \(m=1\) to \(K\) do
    for \(r=1\) to \(K\) do
      Sample \(\rho_{m,r}\) from
        \(\rho_{m,r}\mid\boldsymbol{\pi},W(n),(e_{k})_{k=1}^{n}\sim\text{Beta}\big(a+\sum_{k=1}^{n}1_{\{W_{s_{k}}=r\}}1_{\{W_{t_{k}}=m\}}1_{\{R_{k}=1\}},\;b+\sum_{k=1}^{n}1_{\{W_{s_{k}}=r\}}1_{\{W_{t_{k}}=m\}}1_{\{R_{k}=0\}}\big)\)
    end for
  end for
  3. Sample \(\boldsymbol{\pi}\) from its conditional posterior
    \(\boldsymbol{\pi}\mid\boldsymbol{\rho},W(n),(e_{k})_{k=1}^{n}\sim\text{Dirichlet}\left(\eta+\sum_{v\in V(n)}1_{\{W_{v}=1\}},\ldots,\eta+\sum_{v\in V(n)}1_{\{W_{v}=K\}}\right)\)
end for
```
**Algorithm 1** Gibbs sampling for heterogeneous reciprocal PA with known \(K\)
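For concreteness, a naive NumPy sketch of one sweep of Algorithm 1 is given below. This is our own illustration, not code released with the paper; the sufficient statistics are recomputed from scratch at every step, so it is meant for exposition rather than speed.

```python
import numpy as np

def gibbs_sweep(src, tgt, rec, W, pi, rho, a, b, eta, rng):
    """One sweep of Algorithm 1. src[k], tgt[k], rec[k] encode (s_k, t_k, R_k)
    as integer arrays; W holds current labels in {0, ..., K-1}; pi and rho
    are NumPy arrays of shape (K,) and (K, K)."""
    K = len(pi)
    # 1. resample each W_v from its conditional posterior
    for v in range(len(W)):
        out_k = np.flatnonzero(src == v)       # steps where v is the source
        in_k = np.flatnonzero(tgt == v)        # steps where v is the target
        m, r = W[tgt[out_k]], W[src[in_k]]     # partner labels
        logp = np.log(pi).copy()
        for ell in range(K):
            logp[ell] += np.sum(np.where(rec[out_k] == 1,
                                         np.log(rho[m, ell]),
                                         np.log1p(-rho[m, ell])))
            logp[ell] += np.sum(np.where(rec[in_k] == 1,
                                         np.log(rho[ell, r]),
                                         np.log1p(-rho[ell, r])))
        p = np.exp(logp - logp.max())
        W[v] = rng.choice(K, p=p / p.sum())
    # 2. resample each rho_{m,r} from its Beta full conditional
    for m in range(K):
        for r in range(K):
            hit = (W[tgt] == m) & (W[src] == r)
            rho[m, r] = rng.beta(a + np.sum(hit & (rec == 1)),
                                 b + np.sum(hit & (rec == 0)))
    # 3. resample pi from its Dirichlet full conditional
    pi[:] = rng.dirichlet(eta + np.bincount(W, minlength=K))
    return W, pi, rho
```

Running `gibbs_sweep` for \(M\) iterations with `rng = np.random.default_rng()` yields the approximate posterior draws produced by Algorithm 1.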
### 3.2 Variational inference

In this section, we present variational alternatives for approximating posteriors associated with the heterogeneous reciprocal PA model. The aim of variational inference is to approximate the conditional distribution of latent variables \(\mathbf{z}\) given data \(\mathbf{x}\) via a class of densities \(\mathcal{Q}\) typically chosen to circumvent computational inconveniences. If Bayesian inference is being performed, the latent variables \(\mathbf{z}\) can also encompass the model parameters (\(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\) in our setting). The variational inference procedure aims to find the density \(q^{\star}\in\mathcal{Q}\) that minimizes the Kullback-Leibler (KL) divergence from \(p(\cdot\mid\mathbf{x})\), i.e. \[q^{\star}=\operatorname*{arg\,min}_{q\in\mathcal{Q}}\operatorname{KL}\left(q(\cdot)\mid\mid p(\cdot\mid\mathbf{x})\right). \tag{8}\] We will restrict \(\mathcal{Q}\) to the mean-field family, that is, the family of densities where the components of \(\mathbf{z}\) are mutually independent. Naturally, such a restriction will prevent \(q^{\star}\) from capturing the dependence structure between the latent variables. Recently, however, some more structured, expressive families have been proposed that may improve the approximation; see for instance Yin et al. (2020). Conveniently, using the definition of the conditional density, the objective (8) can be expressed as \[\operatorname{KL}\left(q(\cdot)\mid\mid p(\cdot\mid\mathbf{x})\right)=E_{q}[\log q(\mathbf{z})]-E_{q}[\log p(\mathbf{z},\mathbf{x})]+\log p(\mathbf{x})\equiv-\text{ELBO}(q)+\log p(\mathbf{x}), \tag{9}\] so that minimizing the KL divergence from \(p(\cdot\mid\mathbf{x})\) to \(q(\cdot)\) is equivalent to maximizing the evidence lower bound (\(\text{ELBO}(q)\)) since \(\log p(\mathbf{x})\) does not depend on \(q\). For more on variational inference, see Blei et al. (2017).

#### 3.2.1 Bayesian Variational Inference

Now we consider solving the variational problem (8) for the probabilistic model presented in Section 3.1. Although we have presented a sampler in Algorithm 1 that draws approximate samples from the posterior, we aim for an estimate that sacrifices modeling the dependence in the posterior distribution in favor of computation time. Variational inference for stochastic blockmodels in the Bayesian setting was studied in Latouche et al. (2012). Following their strategy, we posit a mean-field variational family: \[q(\boldsymbol{\pi},\boldsymbol{\rho},W(n))=q(\boldsymbol{\pi})q(\boldsymbol{\rho})q(W(n))=q(\boldsymbol{\pi})\prod_{m=1}^{K}\prod_{r=1}^{K}q(\rho_{m,r})\prod_{v\in V(n)}q_{v}(W_{v}). \tag{10}\] We further assume that the variational densities have the following forms: \[q(\boldsymbol{\pi})\propto\pi_{1}^{d_{1}}\cdots\pi_{K}^{d_{K}},\ d_{1},\ldots,d_{K}\geq 0,\] \[q(\rho_{m,r})\propto\rho_{m,r}^{\omega_{m,r}}(1-\rho_{m,r})^{\xi_{m,r}},\ \omega_{m,r},\xi_{m,r}\geq 0,\ m,r=1,\ldots,K,\] \[q_{v}(W_{v})=\prod_{r=1}^{K}\tau_{v,r}^{1\{W_{v}=r\}},\ \tau_{v,r}\geq 0,\ r=1,\ldots,K,\ v=1,\ldots,|V(n)|,\] and additionally \(\sum_{r=1}^{K}\tau_{v,r}=1\) for all \(v\in V(n)\). In other words, the posterior of \(\boldsymbol{\pi}\) is approximated by a Dirichlet\((d_{1},\ldots,d_{K})\) distribution, and the component-wise posteriors of \(\boldsymbol{\rho}\) and \(W(n)\) are approximated by \(\text{Beta}(\omega_{m,r},\xi_{m,r})\) and \(\text{Multinomial}(1,(\tau_{v,r})_{r=1}^{K})\) distributions, respectively. In Algorithm 2 we present a coordinate ascent variational inference (CAVI) algorithm for optimizing the ELBO. Here, \(\psi(\cdot)\) is the digamma function. Note that in step 4 of the algorithm, we write \(\sum_{k:s_{k}=v}\equiv\sum_{k:s_{k}=v,s_{k}\neq t_{k}}\) for brevity of notation. The inclusion of self-loops makes the optimization of the ELBO much more difficult, hence their exclusion. Here, the class probabilities, \(\tau_{v,r}\), are initialized uniformly at random. We omit the calculations for the derivation of this algorithm, as they are very similar to Latouche et al. (2012). To monitor the convergence of Algorithm 2, we recommend computing the ELBO after each iteration of the CAVI algorithm and terminating the algorithm once the increase in the ELBO is less than some predetermined threshold \(\epsilon\). Specifically, if the ELBO is computed after step 2, it has the simplified form: \[\begin{split}\text{ELBO}(q)=&\log\left(\frac{\Gamma(K\eta)\prod_{r=1}^{K}\Gamma(d_{r})}{\Gamma(\sum_{r=1}^{K}d_{r})\Gamma(\eta)^{K}}\right)+\sum_{r=1}^{K}\sum_{m=1}^{K}\log\left(\frac{\Gamma(a+b)\Gamma(\omega_{m,r})\Gamma(\xi_{m,r})}{\Gamma(\omega_{m,r}+\xi_{m,r})\Gamma(a)\Gamma(b)}\right)\\ &-\sum_{v\in V(n)}\sum_{r=1}^{K}\tau_{v,r}\log\tau_{v,r}.\end{split} \tag{11}\]

```
Input: Graph \(G(n)\), \(\#\) communication types \(K\), prior parameters \(a\), \(b\), \(\eta\), tolerance \(\epsilon>0\)
Output: Variational approximation to the posterior \(q^{\star}\)
Initialize: Draw \(\tau_{v,r}\), \(r=1,\ldots,K\) uniformly at random from the \(K\)-simplex for every \(v\in V(n)\)
while the increase in \(\text{ELBO}(q)\) is greater than \(\epsilon\) do
  1. Update \(q(\boldsymbol{\pi})\)
  for \(r=1\) to \(K\) do
    \(d_{r}=\eta+\sum_{v\in V(n)}\tau_{v,r}\)
  end for
  2. Update \(q(\boldsymbol{\rho})\)
  for \(m=1\) to \(K\) do
    for \(r=1\) to \(K\) do
      \(\omega_{m,r}=a+\sum_{k=1}^{n}\tau_{s_{k},r}\tau_{t_{k},m}1_{\{R_{k}=1\}}\)
      \(\xi_{m,r}=b+\sum_{k=1}^{n}\tau_{s_{k},r}\tau_{t_{k},m}1_{\{R_{k}=0\}}\)
    end for
  end for
  3. Update \(\text{ELBO}(q)\) according to (11)
  4. Update \(q(W(n))\)
  for all \(v\in V(n)\) do
    for \(\ell=1\) to \(K\) do
      \(\tau_{v,\ell}\propto\exp\{\psi(d_{\ell})-\psi(\sum_{r=1}^{K}d_{r})\}\)
        \(\times\prod_{m=1}^{K}\exp\{\psi(\omega_{m,\ell})\sum_{k:s_{k}=v}\tau_{t_{k},m}1_{\{R_{k}=1\}}+\psi(\xi_{m,\ell})\sum_{k:s_{k}=v}\tau_{t_{k},m}1_{\{R_{k}=0\}}-\psi(\omega_{m,\ell}+\xi_{m,\ell})\sum_{k:s_{k}=v}\tau_{t_{k},m}\}\)
        \(\times\prod_{r=1}^{K}\exp\{\psi(\omega_{\ell,r})\sum_{k:t_{k}=v}\tau_{s_{k},r}1_{\{R_{k}=1\}}+\psi(\xi_{\ell,r})\sum_{k:t_{k}=v}\tau_{s_{k},r}1_{\{R_{k}=0\}}-\psi(\omega_{\ell,r}+\xi_{\ell,r})\sum_{k:t_{k}=v}\tau_{s_{k},r}\}\)
    end for
  end for
end while
```
**Algorithm 2** CAVI for heterogeneous reciprocal PA with known \(K\)
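The updates in Algorithm 2 are all available in closed form and vectorize naturally. The sketch below is our own illustration of one pass, including the ELBO (11); unlike Algorithm 2, it performs step 4 as a single batch update over all nodes rather than cycling node by node, and it assumes self-loops have already been removed.

```python
import numpy as np
from scipy.special import digamma, gammaln

def cavi_iteration(src, tgt, rec, tau, a, b, eta):
    """One pass of Algorithm 2. src, tgt, rec are length-n integer arrays
    encoding (s_k, t_k, R_k); tau is a |V| x K array of class probabilities."""
    n_nodes, K = tau.shape
    r1 = (rec == 1).astype(float)              # 1{R_k = 1}
    r0 = 1.0 - r1
    S, T = tau[src], tau[tgt]                  # rows tau_{s_k,.} and tau_{t_k,.}

    # step 1: q(pi) <- Dirichlet(d)
    d = eta + tau.sum(axis=0)

    # step 2: q(rho_{m,r}) <- Beta(omega[m,r], xi[m,r])
    omega = a + (T * r1[:, None]).T @ S        # [m,r] = a + sum_k tau_{t_k,m} tau_{s_k,r} 1{R_k=1}
    xi = b + (T * r0[:, None]).T @ S

    # step 3: ELBO, eq. (11)
    elbo = (gammaln(K * eta) + gammaln(d).sum() - gammaln(d.sum())
            - K * gammaln(eta)
            + np.sum(gammaln(a + b) + gammaln(omega) + gammaln(xi)
                     - gammaln(omega + xi) - gammaln(a) - gammaln(b))
            - np.sum(tau * np.log(tau)))

    # step 4: q(W(n)); accumulate log tau_{v,l} edge by edge
    p1, p0, ps = digamma(omega), digamma(xi), digamma(omega + xi)
    log_tau = np.tile(digamma(d) - digamma(d.sum()), (n_nodes, 1))
    # edges where v is the source enter through column l of omega/xi
    np.add.at(log_tau, src, r1[:, None] * (T @ p1) + r0[:, None] * (T @ p0) - T @ ps)
    # edges where v is the target enter through row l of omega/xi
    np.add.at(log_tau, tgt, r1[:, None] * (S @ p1.T) + r0[:, None] * (S @ p0.T) - S @ ps.T)
    log_tau -= log_tau.max(axis=1, keepdims=True)
    tau = np.exp(log_tau)
    tau /= tau.sum(axis=1, keepdims=True)
    return d, omega, xi, tau, elbo
```

Iterating `cavi_iteration` and stopping once the returned `elbo` increases by less than \(\epsilon\) mirrors the convergence monitoring recommended above.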
Specifically, if the ELBO is computed after step 2, it has the simplified form:

\[\begin{split}\text{ELBO}(q)=&\log\left(\frac{\Gamma(K\eta)\prod_{r=1}^{K}\Gamma(d_{r})}{\Gamma(\sum_{r=1}^{K}d_{r})\Gamma(\eta)^{K}}\right)+\sum_{r=1}^{K}\sum_{m=1}^{K}\log\left(\frac{\Gamma(a+b)\Gamma(\omega_{m,r})\Gamma(\xi_{m,r})}{\Gamma(\omega_{m,r}+\xi_{m,r})\Gamma(a)\Gamma(b)}\right)\\ &-\sum_{v\in V(n)}\sum_{r=1}^{K}\tau_{v,r}\log\tau_{v,r}.\end{split} \tag{11}\]

**Input:** Graph \(G(n)\), \(\#\) communication types \(K\), prior parameters \(a\), \(b\), \(\eta\), tolerance \(\epsilon>0\)

**Output:** Variational approximation to the posterior \(q^{\star}\)

**Initialize:** Draw \(\tau_{v,r}\), \(r=1,\ldots,K\), uniformly at random from the \(K\)-simplex for every \(v\in V(n)\)

**while** the increase in \(\operatorname{ELBO}(q)\) is greater than \(\epsilon\) **do**

1. Update \(q(\boldsymbol{\pi})\)

**for** \(r=1\) to \(K\) **do**

\[d_{r}=\eta+\sum_{v\in V(n)}\tau_{v,r}\]

**end for**

2. Update \(q(\boldsymbol{\rho})\)

**for** \(m=1\) to \(K\) **do**

**for** \(r=1\) to \(K\) **do**

\[\omega_{m,r}=a+\sum_{k=1}^{n}\tau_{s_{k},r}\tau_{t_{k},m}1_{\{R_{k}=1\}}\]
\[\xi_{m,r}=b+\sum_{k=1}^{n}\tau_{s_{k},r}\tau_{t_{k},m}1_{\{R_{k}=0\}}\]

**end for**

**end for**

3. Update \(\operatorname{ELBO}(q)\) according to (11)

4. Update \(q(W(n))\)

**for all** \(v\in V(n)\) **do**

**for** \(\ell=1\) to \(K\) **do**

\[\tau_{v,\ell}\propto\exp\left\{\psi\left(d_{\ell}\right)-\psi\Bigg{(}\sum_{r=1}^{K}d_{r}\Bigg{)}\right\}\times\prod_{m=1}^{K}\exp\Bigg{\{}\psi\left(\omega_{m,\ell}\right)\sum_{k:s_{k}=v}\tau_{t_{k},m}1_{\{R_{k}=1\}}+\psi\left(\xi_{m,\ell}\right)\sum_{k:s_{k}=v}\tau_{t_{k},m}1_{\{R_{k}=0\}}-\psi(\omega_{m,\ell}+\xi_{m,\ell})\sum_{k:s_{k}=v}\tau_{t_{k},m}\Bigg{\}}\times\prod_{r=1}^{K}\exp\Bigg{\{}\psi\left(\omega_{\ell,r}\right)\sum_{k:t_{k}=v}\tau_{s_{k},r}1_{\{R_{k}=1\}}+\psi\left(\xi_{\ell,r}\right)\sum_{k:t_{k}=v}\tau_{s_{k},r}1_{\{R_{k}=0\}}-\psi(\omega_{\ell,r}+\xi_{\ell,r})\sum_{k:t_{k}=v}\tau_{s_{k},r}\Bigg{\}}\]

**end for**

**end for**

**end while**

**Algorithm 2** CAVI for heterogeneous reciprocal PA with known \(K\)

#### 3.2.2 Variational Expectation Maximization

In this section we consider frequentist estimation of the PA model with heterogeneous reciprocity through a variational expectation maximization (VEM) algorithm. VEM for stochastic blockmodel data was first considered in Daudin et al. (2008), which further inspired many interesting generalizations that could enhance the reciprocal PA model (see Matias and Miele, 2017, for example). The VEM algorithm augments the traditional EM algorithm by approximating the E-step for models in which the conditional distribution of the latent variables given the observed data is computationally intractable. The VEM estimates thus serve as a computationally efficient approximation to the maximum likelihood estimates of \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\). Although a frequentist procedure, the VEM algorithm may enhance Bayesian inference of stochastic blockstructure data. For example, since the dimension of the posterior \(p\left(\boldsymbol{\pi},\boldsymbol{\rho}\mid(e_{k})_{k=1}^{n}\right)\) does not grow with the size of the data, one might expect a Bernstein-von Mises phenomenon to occur. The VEM estimates may thus approximate the posterior mean or even be leveraged to enhance posterior sampling as in Donnet and Robin (2021).
As in Section 3.2.1, we approximate the distribution of the communication types given the observed network, \(p\left(W(n)\mid\boldsymbol{\pi},\boldsymbol{\rho},(e_{k})_{k=1}^{n}\right)\), via the mean-field approximation

\[q(W(n))=\prod_{v\in V(n)}q_{v}(W_{v}).\]

Under the mean-field family assumption, the ELBO is given by

\[\begin{split}\text{ELBO}&(q,\boldsymbol{\pi},\boldsymbol{\rho})\\ =& E_{q}\left[\log p\left(W(n),(e_{k})_{k=1}^{n}\mid\boldsymbol{\pi},\boldsymbol{\rho}\right)\right]-E_{q}\left[\log q(W(n))\right]\\ =&\sum_{k=1}^{n}\sum_{r=1}^{K}\left(1_{\{J_{k}=1\}}\tau_{s_{k},r}+1_{\{J_{k}=3\}}\tau_{t_{k},r}\right)\log\pi_{r}-\sum_{v\in V(n)}\sum_{r=1}^{K}\tau_{v,r}\log\tau_{v,r}\\ &+\sum_{k=1}^{n}\sum_{r=1}^{K}\sum_{m=1}^{K}\tau_{s_{k},r}\tau_{t_{k},m}\left(1_{\{R_{k}=1\}}\log\rho_{m,r}+1_{\{R_{k}=0\}}\log(1-\rho_{m,r})\right).\end{split} \tag{12}\]

Note that from (9), maximizing (12) with respect to \(q\) (the E-step) is equivalent to minimizing the KL divergence from \(p\left(\cdot\mid\boldsymbol{\pi},\boldsymbol{\rho},(e_{k})_{k=1}^{n}\right)\) to \(q(\cdot)\), and maximizing (12) with respect to \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\) is equivalent to the M-step in the usual EM algorithm. Thus, the E-step is equivalent to performing variational inference for \(p\left(\cdot\mid\boldsymbol{\pi},\boldsymbol{\rho},(e_{k})_{k=1}^{n}\right)\) where \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\) are evaluated at their current estimates \(\hat{\boldsymbol{\pi}}_{\text{VEM}}\) and \(\hat{\boldsymbol{\rho}}_{\text{VEM}}\). The VEM algorithm for the heterogeneous reciprocal PA model is given in Algorithm 3. As in Algorithm 2, we write \(\sum_{k:s_{k}=v}\equiv\sum_{k:s_{k}=v,s_{k}\neq t_{k}}\) for ease of notation. We describe the initialization of the algorithm at the end of Appendix A in Algorithm 5. We further provide some derivations of the VEM algorithm in Appendix B. Similar types of computations can be employed to derive Algorithm 2. As in Algorithm 2, we recommend cycling through the updates of \(\hat{\tau}_{v,\ell}\) in the E-step until the ELBO no longer increases beyond a prespecified threshold \(\epsilon>0\).

## 4 Model selection for an unknown number of communication types

In this section we extend the methods discussed in Section 3 to the case where the number of communication types is not known a priori. This can be viewed as a model selection problem, where the Bayesian solution places a prior on \(K\) while the variationally Bayesian and EM algorithms aim to imitate marginal likelihood-based procedures.

**Input:** Graph \(G(n)\), \(\#\) communication types \(K\), tolerances \(\epsilon,\kappa>0\)

**Output:** Variational EM estimates \(\hat{\boldsymbol{\pi}}_{\text{VEM}}\) and \(\hat{\boldsymbol{\rho}}_{\text{VEM}}\)

**Initialize:** Draw \(\hat{\tau}_{v,r}\), \(r=1,\ldots,K\), uniformly at random from the \(K\)-simplex for every \(v\in V(n)\); run Algorithm 5 to initialize \(\hat{\boldsymbol{\pi}}_{\text{VEM}}\) and \(\hat{\boldsymbol{\rho}}_{\text{VEM}}\)

**while** at least one of the elements of \(\hat{\boldsymbol{\pi}}_{\text{VEM}}\) and \(\hat{\boldsymbol{\rho}}_{\text{VEM}}\) changes by more than \(\kappa\) in absolute value **do**

1.
**E-step**: Update \(\hat{q}\) via

**while** the increase in \(\text{ELBO}(q)\) is greater than \(\epsilon\) **do**

**for all** \(v\in V(n)\) **do**

**for** \(\ell=1\) to \(K\) **do**

\[\hat{\tau}_{v,\ell}\propto\hat{\pi}_{\ell}\prod_{m=1}^{K}\hat{\rho}_{m,\ell}^{\sum_{k:s_{k}=v}\hat{\tau}_{t_{k},m}1_{\{R_{k}=1\}}}(1-\hat{\rho}_{m,\ell})^{\sum_{k:s_{k}=v}\hat{\tau}_{t_{k},m}1_{\{R_{k}=0\}}}\times\prod_{r=1}^{K}\hat{\rho}_{\ell,r}^{\sum_{k:t_{k}=v}\hat{\tau}_{s_{k},r}1_{\{R_{k}=1\}}}(1-\hat{\rho}_{\ell,r})^{\sum_{k:t_{k}=v}\hat{\tau}_{s_{k},r}1_{\{R_{k}=0\}}}\]

**end for**

**end for**

Update \(\text{ELBO}(q)\) according to (12)

**end while**

2. **M-step**: Update \(\hat{\boldsymbol{\pi}}_{\text{VEM}}\) and \(\hat{\boldsymbol{\rho}}_{\text{VEM}}\) via

**for** \(m=1\) to \(K\) **do**

\[\hat{\pi}_{m}=\frac{1}{|V(n)|}\sum_{v\in V(n)}\hat{\tau}_{v,m}\]

**for** \(r=1\) to \(K\) **do**

\[\hat{\rho}_{m,r}=\frac{\sum_{k=1}^{n}\hat{\tau}_{s_{k},r}\hat{\tau}_{t_{k},m}1_{\{R_{k}=1\}}}{\sum_{k=1}^{n}\hat{\tau}_{s_{k},r}\hat{\tau}_{t_{k},m}}\]

**end for**

**end for**

**end while**

**Algorithm 3** VEM for heterogeneous reciprocal PA with known \(K\)

### A prior on K

This section extends the Bayesian solution in Section 3.1 to making inference on the unknown number of communication classes \(K\). In a fully Bayesian framework, \(K\) is assigned a prior and inference is made on the posterior of \(K\) given the observed data. This, however, often requires the use of complicated reversible jump MCMC (RJMCMC) algorithms to make valid posterior inference on \(K\). Generically, mixture models with a prior on the number of mixture components are known as mixture of finite mixture (MFM) models. For Bayesian MFMs, Miller and Harrison (2018) derived the Dirichlet process-like properties of MFMs and proposed a collapsed Gibbs sampler that circumvented the need for RJMCMC. Geng et al. (2019) utilized a similar collapsed Gibbs sampler for learning the number of components in a stochastic block model. Unfortunately, such collapsed Gibbs samplers require analytically marginalizing over \(K\), restricting our ability to make inference on \(\boldsymbol{\pi}\) without some ad-hoc post-processing of the posterior samples. Recently, a telescoping sampler has been developed by Fruhwirth-Schnatter et al. (2021) for MFMs that obviates the need to marginalize over \(K\). Rather, \(K\) is explicitly sampled in the scheme by distinguishing between \(K\), the number of mixture components, and \(K_{+}\), the number of _filled_ mixture components. For the heterogeneous reciprocal PA model, we adopt the prior specification in (7) and additionally let \(K-1\) follow a beta-negative-binomial (BNB) distribution with parameters \(c_{1},c_{2}\) and \(c_{3}\) as recommended by Fruhwirth-Schnatter et al. (2021). The BNB distribution is a hierarchical generalization of the Poisson, geometric, and negative-binomial distributions. If \(K-1\sim\text{BNB}(c_{1},c_{2},c_{3})\), then the probability mass function of \(K\) is given by

\[p(K)=\frac{\Gamma(c_{1}+K-1)B(c_{1}+c_{2},K-1+c_{3})}{\Gamma(c_{1})\Gamma(K)B(c_{2},c_{3})},\ K=1,2,\ldots,\]

where \(B\) denotes the beta function. As discussed in Fruhwirth-Schnatter et al. (2021), the BNB distribution allows the user to specify a heavier tail on the number of mixture components, which is essential in order for the telescoping sampler to mix well. Previous analyses in Geng et al. (2019) and Miller and Harrison (2018) specify that \(K-1\sim\text{Poisson}(1)\), which is a highly informative choice with a light tail.
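The contrast between the two priors is easy to inspect numerically. The following is a short sketch evaluating the displayed BNB pmf, here with the \(\text{BNB}(1,4,3)\) parameters used later in Section 5, against the lighter-tailed \(K-1\sim\text{Poisson}(1)\) choice; the comparison itself is ours.

```python
# Evaluate the BNB prior pmf on K from the displayed formula and compare
# its tail with the K - 1 ~ Poisson(1) prior used in earlier analyses.
import numpy as np
from scipy.special import gammaln, betaln
from scipy.stats import poisson

def log_pmf_bnb_on_K(K, c1, c2, c3):
    # log p(K) = log Gamma(c1 + K - 1) + log B(c1 + c2, K - 1 + c3)
    #          - log Gamma(c1) - log Gamma(K) - log B(c2, c3)
    return (gammaln(c1 + K - 1) + betaln(c1 + c2, K - 1 + c3)
            - gammaln(c1) - gammaln(K) - betaln(c2, c3))

K = np.arange(1, 21)
p_bnb = np.exp(log_pmf_bnb_on_K(K, 1.0, 4.0, 3.0))   # BNB(1, 4, 3)
p_poi = poisson.pmf(K - 1, 1.0)                       # K - 1 ~ Poisson(1)
print(np.column_stack([K, p_bnb, p_poi])[-5:])        # the BNB tail dominates
```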
We present the telescoping sampler for heterogeneous reciprocal PA models in Algorithm 4. For ease of notation, we do not distinguish between \(W(n)\), the communication types, and the random partition of the \(|V(n)|\) nodes into \(K_{+}\) clusters induced by \(W(n)\). However, the alternation between sampling on the parameter space of the mixture distribution and the set partition space is a key aspect that allows \(K\) to be directly sampled from the conditional posterior of \(K\) given the partition induced by \(W(n)\) (Step 3 in Algorithm 4). We refer to Fruhwirth-Schnatter et al. (2021) for more details on the telescoping sampler. Note that within the sampler, \(K\) only decreases if one of the \(K_{+}\) filled components loses all of its membership in Step 1. Thus, in order for the sampler to mix well, \(K\) must occasionally exceed \(K_{+}\), emphasizing the need for a heavier-tailed prior on \(K\). Fruhwirth-Schnatter et al. (2021) also present a dynamic mixture of finite mixtures model where the prior on \(\boldsymbol{\pi}\) is taken to be \(\text{Dirichlet}(\varphi/K,\varphi/K,\ldots,\varphi/K)\) for some \(\varphi>0\). This specification would induce a sparse mixture model where a large number of mixture components \(K\) would be fit, but a majority of them would be unfilled (Fruhwirth-Schnatter and Malsiner-Walli, 2019; Malsiner-Walli et al., 2016). In this sense, the posterior distributions of \(K\) and \(K_{+}\) would differ greatly. Though this is undesirable for learning the parameters of a mixture model, it may be useful for analyses more focused on partitioning nodes into a small number of classes with similar reciprocal behavior.

### Imitations of the marginal likelihood

In this section we review criteria for choosing the number of communication types \(K\) for the variational methods proposed in Section 3.2. A typical strategy for Bayesian model selection is choosing the model that maximizes the marginal likelihood, or the probability distribution that is obtained by integrating the likelihood over the prior distribution of the parameters. For many of the same reasons presented in Section 2.2, the marginal likelihood is not available for stochastic blockmodel data. Instead, for the Bayesian variational inference method presented in Section 3.2.1, Latouche et al. (2012) recommend employing the ELBO as the model selection criterion. From (9), it can be seen that

\[\text{ELBO}(q)=-\text{KL}\left(q(\cdot)\mid\mid p\left(\cdot\mid(e_{k})_{k=1}^{n}\right)\right)+\log p\left((e_{k})_{k=1}^{n}\right)\leq\log p\left((e_{k})_{k=1}^{n}\right).\]

That is, the ELBO lower bounds the marginal likelihood, and if the variational approximation to the posterior is good, the ELBO should approximate it. There is, however, no guarantee that the variational approximation results in a KL divergence small enough for this to be a tight approximation. Regardless, this criterion is often used in practice (Blei et al., 2017). For the VEM algorithm, Daudin et al. (2008) recommend employing the Integrated Classification Likelihood (ICL). Although the VEM algorithm is a frequentist procedure, the ICL criterion is derived by assuming a Jeffreys prior on \(\boldsymbol{\pi}\) (\(\eta=1/2\)) and further employs a BIC approximation to the distribution of \((e_{k})_{k=1}^{n}\) given \(W(n)\).
The ICL for reciprocal PA models is given by

\[\text{ICL}(K)=\log p((e_{k})_{k=1}^{n},\hat{W}(n)\mid\hat{\boldsymbol{\pi}}_{\text{VEM}},\hat{\boldsymbol{\rho}}_{\text{VEM}})-\frac{K^{2}}{2}\log n-\frac{K-1}{2}\log|V(n)|,\]

where \(\hat{W}(n)\) is the modal approximation of \(W(n)\) given by \(\hat{W}_{v}=\arg\max_{\ell=1,\ldots,K}\hat{\tau}_{v,\ell}\).

## 5 Simulation Studies

In this section, we evaluate the performance of the estimation procedures presented in Sections 3 and 4 on synthetic datasets. We evaluate the performance of estimation methods for \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\) when \(K\) is known, as well as the accuracy of the model selection criteria presented in Section 4 when \(K\) is unknown. When \(K\) is known, we employ the Monte Carlo averages of the approximate posterior samples, the posterior means of the variational densities and the variational EM estimates as point estimators of \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\) for the fully Bayesian (B), Variational Bayes (VB) and Variational EM (VEM) methods, respectively. Since the B and VB methods produce approximate posteriors, we also provide marginal coverage rates of credible intervals constructed using the element-wise 2.5% and 97.5% quantiles of the respective posteriors for \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\). In the case of known \(K\), we further provide the average Rand index for estimating \((W_{v})_{v\in V(n)}\) for each method. When \(K\) is unknown, we record the frequencies of the estimated \(K\) under each model selection criterion. We employ the posterior mode as the estimated \(K\) for the fully Bayesian method.

**Input:** Graph \(G(n)\), parameters \(a\), \(b\), \(\eta\), \(c_{1}\), \(c_{2}\), \(c_{3}\), \(K\) initial/max values \(K_{\text{init}}\), \(K_{\text{max}}\), \(\#\) MCMC iterations \(M\)

**Output:** Approximate samples from the posterior \(p\left(\boldsymbol{\rho},\boldsymbol{\pi},W(n),K\mid(e_{k})_{k=1}^{n}\right)\)

**Initialize:** Set \(K=K_{\text{init}}\), draw \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\) from (7), draw \(W_{v}\sim\text{Multinomial}(\boldsymbol{\pi})\) for \(v\in V(n)\)

**for** \(i=1\) to \(M\) **do**

1. Sample \(W(n)\) from its conditional posterior

**for all** \(v\in V(n)\) **do**

Sample \(W_{v}\) according to

\[P\left(W_{v}=\ell\mid\boldsymbol{\pi},\boldsymbol{\rho},(W_{u})_{u\neq v},(e_{k})_{k=1}^{n}\right)\propto\pi_{\ell}\prod_{m=1}^{K}\rho_{m,\ell}^{\sum_{k:s_{k}=v}1\{W_{t_{k}}=m\}1\{R_{k}=1\}}(1-\rho_{m,\ell})^{\sum_{k:s_{k}=v}1\{W_{t_{k}}=m\}1\{R_{k}=0\}}\times\prod_{r=1}^{K}\rho_{\ell,r}^{\sum_{k:t_{k}=v}1\{W_{s_{k}}=r\}1\{R_{k}=1\}}(1-\rho_{\ell,r})^{\sum_{k:t_{k}=v}1\{W_{s_{k}}=r\}1\{R_{k}=0\}},\]

for \(\ell=1,\ldots,K\)

**end for**

and determine the number of filled components \(K_{+}\). Relabel the communication classes such that the first \(K_{+}\) components are filled and the rest are empty.

2. Sample the filled components of \(\boldsymbol{\rho}\) from its conditional posterior

**for** \(m=1\) to \(K_{+}\) **do**

**for** \(r=1\) to \(K_{+}\) **do**

\[\rho_{m,r}\mid\boldsymbol{\pi},W(n),(e_{k})_{k=1}^{n}\sim\text{Beta}\Bigg{(}a+\sum_{k=1}^{n}1_{\{W_{s_{k}}=r\}}1_{\{W_{t_{k}}=m\}}1_{\{R_{k}=1\}},\ b+\sum_{k=1}^{n}1_{\{W_{s_{k}}=r\}}1_{\{W_{t_{k}}=m\}}1_{\{R_{k}=0\}}\Bigg{)},\]

**end for**

**end for**

3.
Sample \(K\) from

\[p(K\mid W(n))\propto p(K)\frac{K!}{(K-K_{+})!}\frac{\Gamma(\eta K)}{\Gamma(|V(n)|+\eta K)\Gamma(\eta)^{K_{+}}}\prod_{r=1}^{K_{+}}\Gamma\left(\sum_{v\in V(n)}1_{\{W_{v}=r\}}+\eta\right),\]

where \(K=K_{+},K_{+}+1,\ldots,K_{\text{max}}\). If \(K>K_{+}\), generate \(K-K_{+}\) empty components and fill the corresponding \(\boldsymbol{\rho}\) components with draws from the prior Beta\((a,b)\).

4. Sample \(\boldsymbol{\pi}\) from its conditional posterior

\[\boldsymbol{\pi}\mid\boldsymbol{\rho},W(n),(e_{k})_{k=1}^{n}\sim\text{Dirichlet}\left(\eta+\sum_{v\in V(n)}1_{\{W_{v}=1\}},\ldots,\eta+\sum_{v\in V(n)}1_{\{W_{v}=K\}}\right)\]

**end for**

**Algorithm 4** Telescoping sampler for heterogeneous reciprocal PA with unknown \(K\)

In each simulation, we assume non-informative priors Dirichlet\((1/2,\ldots,1/2)\) on \(\boldsymbol{\pi}\) and Beta\((1/2,1/2)\) on \(\boldsymbol{\rho}\) for the VB and B methods. Although a prior is not explicitly assumed for the VEM method, the ICL model selection criterion implicitly assumes the same prior on \(\boldsymbol{\pi}\), hence these choices are consistent. For the VEM algorithm, we terminate the E-step once either the ELBO has increased by less than \(\epsilon=0.01\) or the total number of iterations exceeds 500, and terminate the entire algorithm once the element-wise differences in the parameters fall below \(\kappa=0.01\). We also terminate the VB algorithm via the same conditions as in the E-step of the VEM algorithm. We run the fully Bayesian method for \(M=5{,}000\) MCMC samples, and discard the first half as burn-in. Further, when \(K\) is unknown, we assume a \(\text{BNB}(1,4,3)\) prior on \(K\) as recommended by Fruhwirth-Schnatter et al. (2021) and set \(K_{\text{max}}=20\). For the variational methods, we search over \(K=1,2,3,4\).

### Simulations on Pragmatic Networks

In this section we evaluate the presented model estimation and selection criteria on networks which are generated to reflect those found in real-world applications. In particular, we generate a PA network with heterogeneous reciprocity such that \(\boldsymbol{\theta}=(\alpha,\beta,\delta_{\text{in}},\delta_{\text{out}})=(0.15,0.8,1,1)\),

\[\boldsymbol{\pi}=\begin{bmatrix}0.8\\ 0.2\end{bmatrix}\qquad\text{and}\qquad\boldsymbol{\rho}=\begin{bmatrix}0.5&0.9\\ 0.05&0.2\end{bmatrix}.\]

This network generating process contains two groups, the first of which can be thought of as typical users, and the other as celebrities. Here typical users will often reciprocate the messages from celebrities, but a celebrity is far less likely to respond to a typical user. As one might expect, there are far more typical users than celebrities in this network. Table 1 displays the means and standard errors of the element-wise point estimators across the simulations, as well as the coverage of credible intervals produced from the B and VB methods. Here, the estimation procedures have virtually identical performance in terms of point estimation. Further, the coverage rates for the fully Bayesian method hover around the expected 95%, while the coverage rates for the Variational Bayes method vary across the parameters. The VB method seems to have difficulty capturing the larger reciprocity \(\rho_{12}=0.9\), as well as \(\pi_{1}=0.8\). The methods also perform similarly in terms of classification, as the average Rand indices for the communication types are 0.767, 0.767 and 0.766 for the B, VB and VEM methods, respectively.
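To fix ideas about the reciprocal component of this generating process, the following is a schematic sketch in which classes are drawn from \(\boldsymbol{\pi}\) and reciprocation flags from \(\boldsymbol{\rho}\). The uniformly random edge list is only a stand-in for the actual preferential attachment mechanism, and the indexing convention `rho[m, r]` (receiver class \(m\), sender class \(r\)) follows the Beta updates in Algorithm 1.

```python
# Schematic simulation of the reciprocal component: latent classes from pi,
# reciprocation events from rho[receiver class, sender class].
import numpy as np

rng = np.random.default_rng(1)
pi = np.array([0.8, 0.2])                      # typical users vs. celebrities
rho = np.array([[0.50, 0.90],
                [0.05, 0.20]])
n_nodes, n_edges = 1000, 5000
W = rng.choice(2, size=n_nodes, p=pi)          # latent communication types
s = rng.integers(0, n_nodes, n_edges)          # stand-in for PA attachment
t = rng.integers(0, n_nodes, n_edges)
R = rng.random(n_edges) < rho[W[t], W[s]]      # reciprocation flags
print("overall reciprocity:", R.mean())
```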
Table 2 displays the performance of the model selection criteria on the same preferential attachment model but for unknown \(K\). For the fully Bayesian method, we initialize at \(K_{\text{init}}=4\) in order to exhibit insensitivity of the telescoping sampler to initialization. Note that the ELBO and ICL select the correct number of classes for every simulated data set, while the fully Bayesian method has a slight tendency to over-select the number of classes. Clearly, however, analysis of such networks results in variational methods that perform comparably to the fully Bayesian method, at less computational cost.

We continue our simulations by evaluating the performance of the estimation procedures on 100 synthetic networks generated from a PA network with heterogeneous reciprocity such that \(\boldsymbol{\theta}=(0.15,0.8,1,1)\) but now

\[\boldsymbol{\pi}=\begin{bmatrix}0.8\\ 0.2\end{bmatrix}\qquad\text{and}\qquad\boldsymbol{\rho}=\begin{bmatrix}0.5&0\\ 0.05&0.2\end{bmatrix}.\]

Note that the only difference between this simulation set-up and the previous one is that \(\rho_{12}\) has decreased from \(0.9\) to \(0\). The inclusion of \(0\) in the \(\boldsymbol{\rho}\) matrix is motivated by the data example in Section 6, where we find a group of users that do not receive reciprocal edges. This set-up is analogous to a diagonally-dominant stochastic block model where users are likely to communicate within groups but not across groups. Table 3 displays the point estimates for all three methods, along with the coverage probabilities for the B and VB methods. With the decrease in \(\rho_{12}\), the variational methods struggle to recover \(\rho_{22}\). This is sensible since class 2 communicating with class 2 should be the least common communication type according to \(\boldsymbol{\pi}\) and, unlike the case when \(\rho_{12}=0.9\), the difference between the communication classes is not obvious. Otherwise, the estimation accuracy of the other parameters is relatively consistent across all the methods. Although coverage rates are similar to Table 1, we also observe a reduction in the coverage of \(\rho_{22}\). Evidently, equal-tailed credible intervals are a poor choice for capturing \(\rho_{12}\), and if one had prior knowledge on the behavior of \(\boldsymbol{\rho}\), a highest posterior density interval would be a sensible choice. The average Rand indices for the B, VB and VEM methods are given by \(0.763\), \(0.763\) and \(0.760\), respectively, again indicating that the methods classify similarly when the number of edges far exceeds the number of nodes.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Method & \(\pi_{1}=0.8\) & \(\rho_{11}=0.5\) & \(\rho_{12}=0.9\) & \(\rho_{21}=0.05\) & \(\rho_{22}=0.2\) \\ \hline \multicolumn{6}{c}{Mean(SE)} \\ B & \(0.803(0.003)\) & \(0.500(0.002)\) & \(0.900(0.004)\) & \(0.050(0.002)\) & \(0.198(0.010)\) \\ VB & \(0.805(0.003)\) & \(0.500(0.002)\) & \(0.896(0.004)\) & \(0.050(0.002)\) & \(0.198(0.010)\) \\ VEM & \(0.788(0.004)\) & \(0.501(0.002)\) & \(0.889(0.005)\) & \(0.052(0.002)\) & \(0.196(0.010)\) \\ \multicolumn{6}{c}{\% Coverage} \\ B & \(98\) & \(93\) & \(94\) & \(92\) & \(94\) \\ VB & \(50\) & \(90\) & \(70\) & \(87\) & \(92\) \\ \hline \end{tabular} \end{table} Table 1: Average point estimates and standard errors for 100 networks generated from a PA model with \(\boldsymbol{\theta}=(0.15,0.8,1,1)\). Coverage rates for equal-tailed credible intervals produced by the B and VB methods are also provided.
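Since the Rand index serves as the classification metric in these comparisons, a small pair-counting sketch is given below; in practice a library routine (e.g., `sklearn.metrics.rand_score`) could be substituted.

```python
# Pair-counting Rand index: the fraction of node pairs on which the
# estimated and true partitions agree (both together or both apart).
from itertools import combinations

def rand_index(labels_true, labels_est):
    agree = 0
    pairs = list(combinations(range(len(labels_true)), 2))
    for i, j in pairs:
        same_true = labels_true[i] == labels_true[j]
        same_est = labels_est[i] == labels_est[j]
        agree += (same_true == same_est)
    return agree / len(pairs)

print(rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0: agrees up to relabeling
```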
\begin{table} \begin{tabular}{c c c c c} \hline \hline Method & \multicolumn{4}{c}{\(\hat{K}\)} \\ \hline & 1 & 2 & 3 & 4 \\ B & 0 & 68 & 31 & 1 \\ VB & 0 & 100 & 0 & 0 \\ VEM & 0 & 100 & 0 & 0 \\ \hline \end{tabular} \end{table} Table 2: Estimated \(K\) from 100 networks generated from a PA model with \(\boldsymbol{\theta}=(0.15,0.8,1,1)\) and \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\) as in Table 1.

Table 4 displays the performance of the model selection criteria presented in Section 4 for the preferential attachment model as in Table 3. Again, \(K\) is initialized at \(K_{\text{init}}=4\) for the fully Bayesian method. Note that, again, the variational methods select the correct number of clusters in each simulation, while the telescoping sampler has a slight tendency to overfit.

### Comparisons to the SBM

In this section we evaluate the same estimation and model selection procedures on synthetic networks with a comparably low number of edges relative to the number of nodes. Such networks serve to highlight the additional difficulties faced by estimating reciprocal PA models compared to stochastic block models. We simulate 100 preferential attachment networks of size \(n=30{,}000\) from a PA model with \(\boldsymbol{\theta}=(\alpha,\beta,\delta_{\text{in}},\delta_{\text{out}})=(0.75,0,0.8,0.8)\) and the reciprocal component governed by

\[\boldsymbol{\pi}=\begin{bmatrix}0.6\\ 0.4\end{bmatrix}\qquad\text{and}\qquad\boldsymbol{\rho}=\begin{bmatrix}0.1&0.4\\ 0.5&0.8\end{bmatrix}.\]

Wang and Resnick (2022b) have shown that, under suitable conditions, such heterogeneous reciprocal PA models with \(\beta=0\) generate networks whose out/in-degrees exhibit a complex extremal dependence structure (see Appendix A for more details). Additionally, since \(\beta=0\), such models allow for the complete observation of the reciprocal edge events as there are no \(J_{k}=2\) edges that could be mistaken as reciprocal edges.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Method & \multicolumn{4}{c}{\(K\)} \\ \hline & 1 & 2 & 3 & 4 \\ B & 0 & 78 & 20 & 2 \\ VB & 0 & 100 & 0 & 0 \\ VEM & 0 & 100 & 0 & 0 \\ \hline \end{tabular} \end{table} Table 4: Estimated \(K\) from 100 networks generated from a PA model with \(\boldsymbol{\theta}=(0.15,0.8,1,1)\) and \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\) as in Table 3.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Method & \(\pi_{1}=0.8\) & \(\rho_{11}=0.5\) & \(\rho_{12}=0.00\) & \(\rho_{21}=0.05\) & \(\rho_{22}=0.2\) \\ \hline \multicolumn{6}{c}{Mean(SE)} \\ B & 0.802(0.003) & 0.500(0.002) & 0.001(0.001) & 0.051(0.003) & 0.201(0.019) \\ VB & 0.805(0.003) & 0.501(0.002) & 0.001(0.001) & 0.051(0.003) & 0.174(0.019) \\ VEM & 0.791(0.010) & 0.503(0.003) & 0.006(0.002) & 0.054(0.004) & 0.154(0.020) \\ \multicolumn{6}{c}{\% Coverage} \\ B & 98 & 90 & 0 & 98 & 91 \\ VB & 49 & 88 & 0 & 86 & 47 \\ \hline \end{tabular} \end{table} Table 3: Average point estimates and standard errors for 100 networks generated from a PA model with \(\boldsymbol{\theta}=(0.15,0.8,1,1)\). Coverage rates for equal-tailed credible intervals produced by the B and VB methods are also provided. Here we assume \(K\) is known.

Table 5 displays the average value of the point estimates of \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\) for each method, as well as their associated standard errors. Clearly, the fully Bayesian method outperforms both the VB and VEM methods by producing accurate point estimates with lower standard errors.
Additionally, the coverage rates for the fully Bayesian method are near the expected 95% level, while the VB method produces posteriors which do not reliably capture the true \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\). The fully Bayesian method also dominates in terms of classification, as the average Rand indices for the communication types are given by 0.590, 0.583 and 0.552 for the B, VB and VEM methods, respectively.

The superiority of the fully Bayesian method compared to the variational methods is unsurprising in this setting. Although variational methods exhibit strong point estimation for stochastic block models, estimation for PA models with heterogeneous reciprocity is an inherently harder problem. Namely, in a directed stochastic block model, each node has the opportunity to connect to every other node in the network. This results in \(m(m-1)\) potential edges among \(m\) nodes in the network. For the PA model, one expects the number of potential edges to scale linearly with the number of nodes. Thus, there is inherently less observed information that can be leveraged to learn the latent communication classes. Such a lack of information induces a multimodal ELBO, and therefore the variational methods struggle to find a global optimum. The fully Bayesian method is better able to incorporate this uncertainty since it is sampling from, not optimizing, a multimodal posterior.

Table 6 displays the performance of the model selection criteria for 100 networks generated under the same PA model. For the fully Bayesian method, we initialize at \(K_{\text{init}}=1\). Despite the poor performance of the VB method in parameter estimation, it captures the true \(K=2\) most often, indicating that the ELBO is a good model selection criterion. The VEM algorithm always chooses \(K=1\), though we expect that this is again due to the lack of information in the data. The likelihood associated with \(\boldsymbol{\pi}\) has a much larger role in the ICL for PA models than in stochastic block models. This, combined with the poor estimation of the classes for known \(K\), results in the poor performance of the ICL criterion.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Method & \(\pi_{1}=0.6\) & \(\rho_{11}=0.1\) & \(\rho_{12}=0.5\) & \(\rho_{21}=0.4\) & \(\rho_{22}=0.8\) \\ \hline \multicolumn{6}{c}{Mean(SE)} \\ B & 0.604(0.015) & 0.102(0.011) & 0.500(0.018) & 0.400(0.016) & 0.800(0.020) \\ VB & 0.587(0.028) & 0.168(0.058) & 0.523(0.035) & 0.331(0.022) & 0.718(0.082) \\ VEM & 0.599(0.073) & 0.121(0.005) & 0.441(0.091) & 0.286(0.077) & 0.686(0.074) \\ \multicolumn{6}{c}{\% Coverage} \\ B & 95 & 93 & 98 & 93 & 92 \\ VB & 19 & 0 & 6 & 11 & 0 \\ \hline \end{tabular} \end{table} Table 5: Average point estimates and standard errors for 100 networks generated from a PA model with \(\boldsymbol{\theta}=(0.75,0,0.8,0.8)\). Coverage rates for equal-tailed credible intervals produced by the B and VB methods are also provided.

## 6 Data Example

Now we apply the heterogeneous reciprocal PA model to the Facebook wall post data from KONECT analyzed in Viswanath et al. (2009) and Cirkovic et al. (2022a). The Facebook wall post data tracks a group of users in New Orleans and their wall posts from September 9th, 2004 to January 22nd, 2009. The network is temporal: when user \(u\) posts to user \(v\)'s wall, a directed edge \((u,v)\) is generated and the timestamp of the post is recorded. The full dataset consists of 876,933 wall posts and 46,952 users.
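For reference, the out/in-degree pairs displayed in Figure 1 below can be tabulated directly from the edgelist; the following is a small sketch assuming integer-coded users.

```python
# Tabulate out/in-degrees from a directed edgelist of (source, target) pairs.
import numpy as np

def degree_pairs(s, t, n_nodes):
    d_out = np.bincount(s, minlength=n_nodes)   # posts written by each user
    d_in = np.bincount(t, minlength=n_nodes)    # posts received on each wall
    return d_out, d_in

s = np.array([0, 0, 1, 2, 2, 2])
t = np.array([1, 2, 0, 0, 1, 1])
print(degree_pairs(s, t, 3))   # (array([2, 1, 3]), array([2, 3, 1]))
```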
In Figure 1, we display the out/in-degree of each user in a trimmed version of the network; we postpone the discussion of the data cleaning procedure to the following paragraph. Note that upon first observation, the degree distribution indicates the existence of two populations that exhibit differing reciprocal behavior. The first group, concentrated on the out-degree axis, mostly post on other users' walls while not receiving any posts on their own. The second group both sends and receives wall posts at a commensurate rate. Further, the marginal out/in-degree distributions exhibit power law tails, as indicated by Figure 2 where, on the log-log scale, the empirical tail functions seem to scale linearly with large degrees.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Method & \multicolumn{5}{c}{\(K\)} \\ \hline & 1 & 2 & 3 & 4 & 5 \\ B & 0 & 83 & 12 & 4 & 1 \\ VB & 0 & 95 & 5 & 0 & 0 \\ VEM & 100 & 0 & 0 & 0 & 0 \\ \hline \end{tabular} \end{table} Table 6: Estimated \(K\) from 100 networks generated from a PA model with \(\boldsymbol{\theta}=(0.75,0,0.8,0.8)\) and \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\) as in Table 5.

Figure 1: Out/in-degree plot for the Facebook wallpost data

In Cirkovic et al. (2022a), the Facebook wall post data was analyzed assuming that each user exhibited homogeneous reciprocal behavior. In that work, it was assumed that \(\boldsymbol{\pi}\equiv 1\) and \(\boldsymbol{\rho}\equiv\rho\in\mathbb{R}\). In doing so, the users concentrated on the out-degree axis in Figure 1 were excluded from the analysis, as the homogeneous model could not capture the observed heterogeneous reciprocal behavior. Additionally, by virtue of extreme value-based methods being sensitive to the choice of seed graph, Cirkovic et al. (2022a) also removed nodes that became inactive as the graph evolved, a phenomenon not modeled by the proposed PA model. The likelihood-based methodology in Cirkovic et al. (2022a) returned a homogeneous reciprocity estimate of \(\hat{\rho}=0.28\). The flexibility provided by the heterogeneous reciprocal PA model aims to capture the additional, intricate dynamics underlying the Facebook wall post data not previously considered in Cirkovic et al. (2022a).

According to the analysis of the Facebook wall post data in Viswanath et al. (2009), there is a sudden uptick in the number of wall posts from July 2008 onwards. They conjecture that this uptick is likely due to a Facebook redesign, introduced in July, that allowed users to interact with more wall posts through friend feeds. This likely results in a distributional shift in the network's evolution, and thus we discard the portion of the network observed beyond June 2008, resulting in a network with 22,286 nodes and 165,776 edges. This observation, however, may lead to additional analyses via changepoint detection (see Banerjee et al., 2018; Bhamidi et al., 2018; Cirkovic et al., 2022b, for example). Additionally, the evolution of the PA network specified in Section 2.1 posits that every new edge must attach to at least one node that was previously observed in the network evolution. In order to better adhere to this assumption, we define a sequence of networks by first selecting the node with the largest total degree and pairing it with the first node it makes a connection with to create a seed graph \(G(0)\). Then, we only retain the edges \((u,v)\) that (i) are observed after the introduction of the seed graph and (ii) satisfy \(u\in V(k-1)\) or \(v\in V(k-1)\).
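A minimal sketch of this trimming rule is given below; the seed selection is assumed to have been carried out already, and the variable names are illustrative.

```python
# Retain an edge only if at least one endpoint has already appeared,
# growing the vertex set from a two-node seed graph.
def trim_edges(edges, seed_u, seed_v):
    vertices = {seed_u, seed_v}
    kept = []
    for (u, v) in edges:                 # edges in time order, post-seed
        if u in vertices or v in vertices:
            kept.append((u, v))
            vertices.update((u, v))
        # otherwise the edge is discarded: neither endpoint is reachable yet
    return kept, vertices

edges = [(1, 2), (3, 4), (2, 3), (4, 5)]
print(trim_edges(edges, 1, 2))   # keeps (1, 2) and (2, 3); drops the rest
```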
This trimming procedure results in a connected network of 16,099 nodes and 123,920 edges that could have realistically been generated by a heterogeneous reciprocal PA model.

Figure 2: Plot of empirical tail probability function for the Facebook wallpost degrees on a log base 10 scale

The reciprocal PA model assumes that reciprocal edges \((t_{k},s_{k})\) are generated instantaneously with their parent edge \((s_{k},t_{k})\). However, in the Facebook wall post network, it is likely that in the time between reciprocated wall posts, wall posts between other users have been generated. Thus, similar to Cirkovic et al. (2022a), we employ window estimators to identify reciprocal edges. That is, if \(e_{k}=(s_{k},t_{k})\) has a reciprocal counterpart \((t_{k},s_{k})\) appearing within 24 hours, we attribute the event \(R_{k}=1\) to the edge \(e_{k}\), redefine \(e_{k}:=e_{k}\cup(t_{k},s_{k})\), and drop \((t_{k},s_{k})\) from the edgelist. This results in an edgelist that is in alignment with Section 2.1.

To conclude our exploratory data analysis, we study the tail behavior of the out/in-degrees for the trimmed Facebook network. We employ the minimum distance procedure (Clauset et al., 2009) on the total degrees to obtain a threshold beyond which a power-law tail for the in/out-degree can be safely assumed. The minimum distance procedure computes a tail threshold of 51. Note that computing the tail threshold on the total degree implicitly assumes that the out/in-degree tails have the same power-law index. We find this to be a reasonable assumption, as indicated by the similarity of the empirical tail functions in Figure 2. In fact, using a threshold of 51, the tail index estimates for the out/in-degrees are 2.212 and 2.231, respectively. Further observation of Figure 1 indicates that, beyond this threshold, there is an extremal dependence structure in the out/in-degree distribution; nodes with total degree larger than 51 tend to cluster around multiple lines through the origin. This extremal dependence structure is further analyzed in Appendix A.

We fit the VEM, VB and fully Bayesian methods to the Facebook wall post network. For the VEM algorithm, we terminate the variational E-step when the increase in the ELBO is less than \(\epsilon=0.1\) and terminate the overall algorithm once the largest absolute difference in the estimated components of \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\) between M-steps falls below \(\kappa=0.001\). For the Bayesian methods, we again assume non-informative priors on \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\). Analogous to the VEM algorithm, we terminate the VB procedure once the change in the ELBO falls below \(\epsilon=0.1\). Both the VEM and VB methods are fit for \(K=1,\ldots,10\). The telescoping sampler for the fully Bayesian method is run for \(M=100{,}000\) MCMC iterates, where the first 90,000 iterates are discarded as burn-in. Within the telescoping sampler, we set \(K_{\text{max}}=20\). The global PA parameters \(\boldsymbol{\theta}\) are estimated by maximizing the likelihood \(p(\cdot\mid\boldsymbol{\theta})\). Maximum likelihood returns \((\hat{\alpha},\hat{\beta},\hat{\delta}_{\text{in}},\hat{\delta}_{\text{out}})=(0.071,0.829,1.756,1.571)\). The small values of \(\hat{\delta}_{\text{in}}\) and \(\hat{\delta}_{\text{out}}\) indicate that preferential attachment is indeed a viable mechanism to describe how users send and receive wall posts.
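For concreteness, the following is a sketch of the 24-hour window estimator described above; timestamps are assumed to be in hours and the post list time-ordered, both assumptions of this illustration.

```python
# Flag an edge (s, t) at time x as reciprocated if (t, s) appears within
# the window, and absorb that return post so it is dropped from the list.
def window_reciprocity(edges, window=24.0):
    out, used = [], set()
    for k, (s, t, x) in enumerate(edges):            # time-ordered posts
        if k in used:
            continue
        R = 0
        for j in range(k + 1, len(edges)):
            s2, t2, x2 = edges[j]
            if x2 - x > window:
                break                                # outside the window
            if j not in used and (s2, t2) == (t, s):
                used.add(j)                          # absorb the return post
                R = 1
                break
        out.append((s, t, R))
    return out

posts = [("a", "b", 0.0), ("b", "a", 5.0), ("a", "c", 6.0), ("c", "a", 40.0)]
print(window_reciprocity(posts))  # [('a','b',1), ('a','c',0), ('c','a',0)]
```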
Analyzing the reciprocal component of the model, the VEM algorithm identifies 3 clusters, while the VB and fully Bayesian algorithms identify 6 and 11 clusters, respectively. Figure 3 displays the ICL, ELBO and posterior of \(K\) for the VEM, VB and fully Bayesian methods. The ICL criterion clearly identifies \(K=3\) as the choice that optimally balances model parsimony with fidelity to the data. Though the VB method chooses \(K=6\), we note that the ELBO for the VB method becomes very flat at \(K=4\), indicating that perhaps a simpler model may fit the data nearly as well as the model with \(K=6\) mixture components. We suspect that the fully Bayesian method overfits the number of mixture components due to model misspecification. It is unlikely that the Facebook wall post data exactly follows the specification in Section 2.1. For example, there is empirical evidence that the degree of each node may influence reciprocal behavior (Cheng et al., 2011). There is strong evidence that mixtures of finite mixtures do not reliably learn the number of mixture components under model misspecification (Cai et al., 2021; Miller and Dunson, 2018).

Figure 3: ICL, ELBO and posterior on K from the VEM, VB and fully Bayesian methods. For the VEM and VB algorithms, we consider \(K=1,\ldots,10\).

The estimates of \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\) for VEM and VB are

\[\hat{\boldsymbol{\pi}}_{\text{VEM}}=\begin{bmatrix}0.538\\ 0.251\\ 0.211\end{bmatrix},\qquad\hat{\boldsymbol{\rho}}_{\text{VEM}}=\begin{bmatrix}0.242&0.273&0.001\\ 0.597&0.650&0.001\\ 0.070&0.053&0.001\end{bmatrix} \tag{13}\]

\[\hat{\boldsymbol{\pi}}_{\text{VB}}=\begin{bmatrix}0.122\\ 0.285\\ 0.153\\ 0.060\\ 0.197\\ 0.184\end{bmatrix},\qquad\hat{\boldsymbol{\rho}}_{\text{VB}}=\begin{bmatrix}0.088&0.094&0.084&0.038&0.001&0.083\\ 0.383&0.427&0.431&0.182&0.001&0.375\\ 0.670&0.699&0.718&0.433&0.001&0.641\\ 0.467&0.464&0.499&0.214&0.002&0.437\\ 0.089&0.089&0.059&0.036&0.005&0.082\\ 0.206&0.225&0.230&0.098&0.001&0.201\end{bmatrix}\]

We also plot the marginal posteriors for \(\boldsymbol{\pi}\) and \(\boldsymbol{\rho}\) obtained by the telescoping sampler in Figure 4. Note that all three methods identify a group of nodes that receive nearly no reciprocal edges, as indicated by a column of near-zero estimates in \(\boldsymbol{\rho}\). Additionally, the telescoping sampler seems to overfit the number of clusters by producing a cluster whose mixture weight, \(\pi_{11}\), has a posterior mean of \(0.0008\). Class 11 also has marginal posteriors for \(\boldsymbol{\rho}\) that clearly have not yet mixed well. Hence, we caution against making inference on node classes that either have a small number of nodes in them, or continually drop in and out of the sampler.

Figure 5 displays the degree distribution of the trimmed Facebook wall post network, grouped by the VEM cluster estimates. The VEM algorithm clearly identifies cluster 3 as nodes that do not receive reciprocal edges. Despite cluster 2 having a heavier tail, clusters 1 and 2 tend to concentrate in similar regions of \(\mathbb{R}_{+}\). Further, the similarity of the estimates in \(\hat{\boldsymbol{\rho}}_{\text{VEM}}\) indicates that classes 1 and 2 engage in similar reciprocal behavior. These visual measures warrant further inspection of the differences between classes 1 and 2. Figure 6 displays the discrete time of the last post made by each node in the network that posts more than once. Note that nodes in class 1 are more likely to become inactive in the early period of the network evolution. These inactive nodes were noted by Cirkovic et al. (2022a) and Viswanath et al. (2009) as well.
The lighter tails of class 1 can thus be explained by the relatively short lifetimes of the nodes, as such nodes do not have enough time to send and receive wall posts. The VEM algorithm may have picked up on this inactivity by proxy. Such observations warrant an extension to a preferential attachment model that incorporates nodes that become inactive over time.

Figure 4: Marginal posteriors for \(\pi\) and \(\rho\) using the telescoping sampler.

## 7 Conclusion

In this paper, we outline a preferential attachment model with heterogeneous reciprocity, and offer three methods for fitting the model to both simulated and real-world networks. Through simulations, we find that when analyzing networks that have many edges compared to the number of nodes, the variational alternatives offer similar performance to the fully Bayesian method in terms of point estimation at less computational cost. However, the credible intervals generated by the fully Bayesian method more reliably capture the true data-generating parameters. We also compare the ability of each method to select the number of communication classes in heterogeneous reciprocal PA networks. Generally speaking, when the number of edges is again large compared to the number of nodes, all three methods consistently choose the true number of classes, with the fully Bayesian method having a slight tendency to overfit. We then showcase the ability of the heterogeneous reciprocal PA model to capture non-uniform reciprocal behavior across users in the Facebook wallpost network. The proposed model clearly offers the additional flexibility needed to model such data. Upon analyzing the Facebook wallpost network, we find that the VEM algorithm uncovered two reciprocal classes that engage in somewhat similar reciprocal behavior, though one of the classes consisted of more inactive users. The propensity of some users to become inactive in a network as it evolves over time is a common feature of many networks. In future work, we will consider extensions of the preferential attachment model that allow users to become inactive as the network grows over time.

Figure 5: Reciprocal components identified by the VEM algorithm

Figure 6: Density plots for last appearance time (by class) of each node that posted more than once in the network

The research work was partly supported by NSF Grant DMS-2210735.

## Appendix A: Statistical Tools for Multivariate Extremes

Here we detail, in a non-technical fashion, some tools used to analyze data subject to extremal observations. For more rigorous treatments, we refer to the works of Beirlant et al. (2004) and Resnick (2007). A central goal in the study of multivariate extremes is to identify how extremes cluster. In other words, if one or more components of a random vector is large, how likely is it that the other components of the random vector will also be large? For PA models with homogeneous reciprocity, Cirkovic et al. (2022a) proved that the extremal out/in-degrees tend to cluster on a line through the origin. With heterogeneous reciprocity, Wang and Resnick (2022b) proved that the model with \(\beta=0\) generates extreme out/in-degrees that concentrate on multiple lines through the origin.
An exploratory tool used to identify where such extremes cluster in \(\mathbb{R}^{2}_{+}\) is the _angular density_, a plot of the angles

\[\Theta_{r}\equiv\left\{D_{v}^{\mathrm{out}}(n)/(D_{v}^{\mathrm{out}}(n)+D_{v}^{\mathrm{in}}(n)):v\in V(n),D_{v}^{\mathrm{out}}(n)+D_{v}^{\mathrm{in}}(n)>r\right\}\]

for some large threshold \(r\). Intuitively, if the angular density concentrates mass around some point in \((0,1)\), then one would expect extremes to cluster on a line through the origin. On the other hand, if the angular density only places mass on the set \(\{0,1\}\), then the out/in-degrees are _asymptotically independent_; a large in-degree does not necessarily imply a large out-degree, and vice versa.

Figure 7 displays the angular density for the Facebook wallpost data analyzed in Section 6. Here the angular density concentrates mass on the set \(\{0.5,1\}\), indicating the existence of two extremal populations: one that has approximately equal in/out-degree, and another that has high out-degree but small in-degree. The threshold for \(\Theta_{r}\) was chosen as \(r=51\) by the minimum distance method applied to the total degrees (Clauset et al., 2009). The minimum distance method chooses a threshold that minimizes the Kolmogorov-Smirnov distance between a power-law tail and the empirical tail of the observations beyond the threshold.

Figure 7: Angular density for the Facebook wallpost data with threshold chosen via the minimum distance method.

Note that the angular density is naturally sensitive to the choice of \(r\). If \(r\) is chosen too large, some extremal features of the data may be passed over, while if \(r\) is chosen too small, the extremal behavior will be corrupted by non-extremal observations.

We now have the tools to describe the initialization of the VEM algorithm for a fixed \(K\) presented in Section 3.2.2. First, the set \(\Theta_{r}\) is constructed via threshold \(r\) chosen by the minimum distance method available in the R package igraph (Csardi et al., 2006). We then employ \(K\)-means on the set \(\Theta_{r}\) to determine an initial clustering of nodes. Note that this only clusters nodes with total degree larger than \(r\). This clustering is then used to compute empirical class probabilities \((\hat{\pi}_{r})_{r=1}^{K}\) and empirical reciprocities \((\hat{\rho}_{m,r})_{m,r=1}^{K}\). Note that \((\hat{\rho}_{m,r})_{m,r=1}^{K}\) is computed only on edges that connect nodes which both have total degree larger than \(r\). \((\hat{\pi}_{r})_{r=1}^{K}\) and \((\hat{\rho}_{m,r})_{m,r=1}^{K}\) are thus used as initial parameter values, and the initial \((\tau_{w,\ell})_{w\in V(n),\ell\in\{1,\ldots,K\}}\) are chosen according to a uniform distribution on the \(K\)-simplex. The full initialization algorithm is given in Algorithm 5.

**Input:** Graph \(G(n)\), \(\#\) communication types \(K\)

**Output:** Initial variational EM estimates \(\hat{\boldsymbol{\pi}}_{\text{VEM}}\) and \(\hat{\boldsymbol{\rho}}_{\text{VEM}}\)

1. Compute the tail threshold \(r\) according to the minimum distance procedure.

2. Construct the sets

\[\aleph_{r}=\left\{v\in V(n):D_{v}^{\text{out}}(n)+D_{v}^{\text{in}}(n)>r\right\}\]
\[\Theta_{r}=\left\{D_{v}^{\text{out}}(n)/(D_{v}^{\text{out}}(n)+D_{v}^{\text{in}}(n)):v\in\aleph_{r}\right\}\]

3. Employ \(K\)-means on \(\Theta_{r}\) to form initial communication class estimates \(\hat{W}_{v}\) for \(v\in\aleph_{r}\)

4.
Form initial VEM estimates via

**for** \(m=1\) to \(K\) **do**

\[\hat{\pi}_{m}=\frac{1}{|\aleph_{r}|}\sum_{v\in\aleph_{r}}1_{\{\hat{W}_{v}=m\}}\]

**for** \(r=1\) to \(K\) **do**

\[\hat{\rho}_{m,r}=\frac{\sum_{k:\hat{W}_{s_{k}}=r,\hat{W}_{t_{k}}=m}1_{\{R_{k}=1\}}}{\left|\left\{k:\hat{W}_{s_{k}}=r,\hat{W}_{t_{k}}=m\right\}\right|}\]

**end for**

**end for**

**Algorithm 5** Initialization of VEM for heterogeneous reciprocal PA

## Appendix B: Sample Derivations for the VEM Algorithm

In this appendix we present some sample derivations for the variational EM algorithm presented in Section 3.2.2. We note that the derivations are very similar to those of Daudin et al. (2008) and Latouche et al. (2012), though we reformulate them in our setting for convenience. The same type of calculations can be employed to derive the variational Bayes algorithm.

### Derivation of the ELBO

In this section we derive (12). Recall that we posit the mean-field variational family on the communication types \(W(n)\) given by

\[q(W(n))=\prod_{v\in V(n)}q_{v}(W_{v}).\]

Then, the ELBO is given by

\[\text{ELBO}(q,\boldsymbol{\pi},\boldsymbol{\rho})=E_{q}\left[\log p\left(W(n),(e_{k})_{k=1}^{n}\mid\boldsymbol{\pi},\boldsymbol{\rho}\right)\right]-E_{q}\left[\log q(W(n))\right].\]

Focusing on the first term, recall that the log-likelihood is given by

\[\log p\left(W(n),(e_{k})_{k=1}^{n}\mid\boldsymbol{\pi},\boldsymbol{\rho}\right)=\sum_{k=1}^{n}\sum_{r=1}^{K}\left(1_{\{J_{k}=1\}}1_{\{W_{s_{k}}=r\}}+1_{\{J_{k}=3\}}1_{\{W_{t_{k}}=r\}}\right)\log\pi_{r}+\sum_{k=1}^{n}\sum_{r=1}^{K}\sum_{m=1}^{K}1_{\{R_{k}=1\}}1_{\{W_{s_{k}}=r\}}1_{\{W_{t_{k}}=m\}}\log\rho_{m,r}+\sum_{k=1}^{n}\sum_{r=1}^{K}\sum_{m=1}^{K}1_{\{R_{k}=0\}}1_{\{W_{s_{k}}=r\}}1_{\{W_{t_{k}}=m\}}\log(1-\rho_{m,r}).\]

Taking an expectation with respect to \(q\) gives that

\[E_{q}\left[\log p\left(W(n),(e_{k})_{k=1}^{n}\mid\boldsymbol{\pi},\boldsymbol{\rho}\right)\right]=\sum_{k=1}^{n}\sum_{r=1}^{K}\left(1_{\{J_{k}=1\}}E_{q}\left[1_{\{W_{s_{k}}=r\}}\right]+1_{\{J_{k}=3\}}E_{q}\left[1_{\{W_{t_{k}}=r\}}\right]\right)\log\pi_{r}+\sum_{k=1}^{n}\sum_{r=1}^{K}\sum_{m=1}^{K}1_{\{R_{k}=1\}}E_{q}\left[1_{\{W_{s_{k}}=r\}}1_{\{W_{t_{k}}=m\}}\right]\log\rho_{m,r}+\sum_{k=1}^{n}\sum_{r=1}^{K}\sum_{m=1}^{K}1_{\{R_{k}=0\}}E_{q}\left[1_{\{W_{s_{k}}=r\}}1_{\{W_{t_{k}}=m\}}\right]\log(1-\rho_{m,r}),\]

and employing the mean-field family assumption,

\[E_{q}\left[\log p\left(W(n),(e_{k})_{k=1}^{n}\mid\boldsymbol{\pi},\boldsymbol{\rho}\right)\right]=\sum_{k=1}^{n}\sum_{r=1}^{K}\left(1_{\{J_{k}=1\}}\tau_{s_{k},r}+1_{\{J_{k}=3\}}\tau_{t_{k},r}\right)\log\pi_{r}+\sum_{k=1}^{n}\sum_{r=1}^{K}\sum_{m=1}^{K}1_{\{R_{k}=1\}}\tau_{s_{k},r}\tau_{t_{k},m}\log\rho_{m,r}+\sum_{k=1}^{n}\sum_{r=1}^{K}\sum_{m=1}^{K}1_{\{R_{k}=0\}}\tau_{s_{k},r}\tau_{t_{k},m}\log(1-\rho_{m,r}).\]

Finally, the entropy term is given by

\[E_{q}\left[\log q(W(n))\right]=E_{q}\left[\sum_{v\in V(n)}\sum_{r=1}^{K}1_{\{W_{v}=r\}}\log\tau_{v,r}\right]=\sum_{v\in V(n)}\sum_{r=1}^{K}E_{q}\left[1_{\{W_{v}=r\}}\right]\log\tau_{v,r}=\sum_{v\in V(n)}\sum_{r=1}^{K}\tau_{v,r}\log\tau_{v,r}.\]
### Derivation of the E-Step

In this section we derive the E-step of the variational EM algorithm (Step 1 of Algorithm 3). Recall that the E-step maximizes the ELBO with respect to the variational density \(q\). We perform this optimization with coordinate ascent. From Blei et al. (2017), for every \(\ell=1,\ldots,K\), the optimal \(q_{w}(W_{w})\) satisfies

\[\tau_{w,\ell}=q_{w}^{\star}\left(W_{w}=\ell\right)\propto\exp\left\{E_{q_{-w}}\left[\log p\left(W_{w}=\ell\mid(W_{v})_{v\neq w},(e_{k})_{k=1}^{n},\boldsymbol{\pi},\boldsymbol{\rho}\right)\right]\right\},\]

which, by the definition of conditional density, is proportional to

\[\exp\left\{E_{q_{-w}}\left[\log p\left(W_{w}=\ell,(W_{v})_{v\neq w},(e_{k})_{k=1}^{n}\mid\boldsymbol{\pi},\boldsymbol{\rho}\right)\right]\right\}.\]

Here, \(q_{-w}\) denotes the variational density on \((W_{v})_{v\neq w}\). Up to some constant \(C\) not depending on \(\ell\), the log-likelihood term is given by

\[\log p\left(W_{w}=\ell,(W_{v})_{v\neq w},(e_{k})_{k=1}^{n}\mid\boldsymbol{\pi},\boldsymbol{\rho}\right)=\log\pi_{\ell}+\sum_{m=1}^{K}\log\rho_{m,\ell}\sum_{k:s_{k}=w}1_{\{W_{t_{k}}=m\}}1_{\{R_{k}=1\}}+\sum_{m=1}^{K}\log(1-\rho_{m,\ell})\sum_{k:s_{k}=w}1_{\{W_{t_{k}}=m\}}1_{\{R_{k}=0\}}+\sum_{r=1}^{K}\log\rho_{\ell,r}\sum_{k:t_{k}=w}1_{\{W_{s_{k}}=r\}}1_{\{R_{k}=1\}}+\sum_{r=1}^{K}\log(1-\rho_{\ell,r})\sum_{k:t_{k}=w}1_{\{W_{s_{k}}=r\}}1_{\{R_{k}=0\}}+C,\]

and taking an expectation with respect to \(q_{-w}\) gives

\[E_{q_{-w}}\left[\log p\left(W_{w}=\ell,(W_{v})_{v\neq w},(e_{k})_{k=1}^{n}\mid\boldsymbol{\pi},\boldsymbol{\rho}\right)\right]=\log\pi_{\ell}+\sum_{m=1}^{K}\log\rho_{m,\ell}\sum_{k:s_{k}=w}\tau_{t_{k},m}1_{\{R_{k}=1\}}+\sum_{m=1}^{K}\log(1-\rho_{m,\ell})\sum_{k:s_{k}=w}\tau_{t_{k},m}1_{\{R_{k}=0\}}+\sum_{r=1}^{K}\log\rho_{\ell,r}\sum_{k:t_{k}=w}\tau_{s_{k},r}1_{\{R_{k}=1\}}+\sum_{r=1}^{K}\log(1-\rho_{\ell,r})\sum_{k:t_{k}=w}\tau_{s_{k},r}1_{\{R_{k}=0\}}+C.\]

Hence,

\[\tau_{w,\ell}=q_{w}^{\star}\left(W_{w}=\ell\right)\propto\exp\left\{E_{q_{-w}}\left[\log p\left(W_{w}=\ell,(W_{v})_{v\neq w},(e_{k})_{k=1}^{n}\mid\boldsymbol{\pi},\boldsymbol{\rho}\right)\right]\right\}\propto\pi_{\ell}\prod_{m=1}^{K}\rho_{m,\ell}^{\sum_{k:s_{k}=w}\tau_{t_{k},m}1_{\{R_{k}=1\}}}(1-\rho_{m,\ell})^{\sum_{k:s_{k}=w}\tau_{t_{k},m}1_{\{R_{k}=0\}}}\times\prod_{r=1}^{K}\rho_{\ell,r}^{\sum_{k:t_{k}=w}\tau_{s_{k},r}1_{\{R_{k}=1\}}}(1-\rho_{\ell,r})^{\sum_{k:t_{k}=w}\tau_{s_{k},r}1_{\{R_{k}=0\}}}. \tag{14}\]

Thus, in the E-step, one cycles through (14) for each \(w\in V(n)\) until the ELBO no longer meaningfully increases.
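For completeness, the following is a log-space sketch of the fixed-point update (14); the soft-count arrays follow the same conventions as the CAVI sketch in Section 3.2 and are assumptions of this illustration rather than part of the derivation.

```python
# Log-space evaluation of the E-step fixed point (14) for a single node v.
import numpy as np

def e_step_update(v, log_pi, log_rho, log_1mrho, A1, A0, B1, B0):
    # A1[v, m]: sum of tau[t_k, m] over reciprocated edges with s_k = v;
    # B1[v, r]: sum of tau[s_k, r] over reciprocated edges with t_k = v; etc.
    log_tau = (log_pi
               + A1[v] @ log_rho + A0[v] @ log_1mrho        # v as sender
               + B1[v] @ log_rho.T + B0[v] @ log_1mrho.T)   # v as receiver
    log_tau -= log_tau.max()                                # stabilize
    tau_v = np.exp(log_tau)
    return tau_v / tau_v.sum()
```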
2301.03302
A Rolling Horizon Game Considering Network Effect in Cluster Forming for Dynamic Resilient Multiagent Systems
A two-player game-theoretic problem on resilient graphs in a multiagent consensus setting is formulated. An attacker is capable of disabling some of the edges of the network, with the objective of dividing the agents into clusters by emitting jamming signals, while, in response, the defender recovers some of the edges by increasing the transmission power for the communication signals. Specifically, we consider repeated games between the attacker and the defender where the optimal strategies for the two players are derived in a rolling horizon fashion based on utility functions that take both the agents' states and the sizes of clusters (known as network effect) into account. The players' actions at each discrete-time step are constrained by their energy for transmissions of the signals, with a less strict constraint for the attacker. Necessary conditions and sufficient conditions for agent consensus are derived, which are influenced by the energy constraints. The number of clusters of agents at infinite time in the face of attacks and recoveries is also characterized. Simulation results are provided to demonstrate the effects of players' actions on the cluster forming and to illustrate the players' performance for different horizon parameters.
Yurid Nugraha, Ahmet Cetinkaya, Tomohisa Hayakawa, Hideaki Ishii, Quanyan Zhu
2023-01-09T12:44:33Z
http://arxiv.org/abs/2301.03302v1
# A Rolling Horizon Game Considering Network Effect in Cluster Forming for Dynamic Resilient Multiagent Systems

###### Abstract

A two-player game-theoretic problem on resilient graphs in a multiagent consensus setting is formulated. An attacker is capable of disabling some of the edges of the network, with the objective of dividing the agents into clusters by emitting jamming signals, while, in response, the defender recovers some of the edges by increasing the transmission power for the communication signals. Specifically, we consider repeated games between the attacker and the defender where the optimal strategies for the two players are derived in a rolling horizon fashion based on utility functions that take both the agents' states and the sizes of clusters (known as network effect) into account. The players' actions at each discrete-time step are constrained by their energy for transmissions of the signals, with a less strict constraint for the attacker. Necessary conditions and sufficient conditions for agent consensus are derived, which are influenced by the energy constraints. The number of clusters of agents at infinite time in the face of attacks and recoveries is also characterized. Simulation results are provided to demonstrate the effects of players' actions on the cluster forming and to illustrate the players' performance for different horizon parameters.

Keywords: Multiagent Systems, Cybersecurity, Game Theory, Consensus, Cluster Forming, Network Effect/Network Externality

## 1 Introduction

Applications of large-scale networked systems have rapidly grown in various areas of critical infrastructures, including power grids and transportation systems. Such systems can be considered as multiagent systems where a number of agents capable of making local decisions interact over a network and exchange information to reach a common goal [2]. While wireless communication plays an important role for the functionality of the network, it is also prone to cyber attacks initiated by malicious adversaries [11, 25]. Jamming attacks in consensus problems of multiagent systems have been studied in [3, 5, 28]. Noncooperative games between attackers and other players protecting the network are widely used to analyze security problems, including jamming attacks [12, 17] and injection attacks [18, 24, 26]. In a jamming attack formulation, it is natural to consider that the jammer/attacker has an energy constraint such that, if it is not connected to energy sources, it is impossible to attack all communication links of the network at all times [4, 5]. In the context of game-theoretical approaches, this constraint becomes important to characterize the strategic behaviors of the players [17].

When the links in the network are attacked, the agents may become disconnected from other agents, resulting in several groups of connected agents, or _clusters_. The work [13] proposed the notion of network effect/network externality, which refers to the utility of an agent in a certain cluster depending on how many other agents belong to that particular cluster. Such a concept has been used to analyze grouping of agents on, e.g., social networks and computer networks, as discussed in [16, 10].

Rolling horizon control has been used to handle systems with uncertainties. It is also studied in the context of networked control [15, 30], where there may be additional uncertainties related to communications among agents in the networks.
Rolling horizon approaches are also discussed in noncooperative security game settings in [34, 35], where horizon lengths affect the resilience of the system. Rolling horizon approaches have also been used to handle constraints in the system, e.g., for an agent with obstacle avoidance constraints [27, 14].

In this paper, we consider a security problem in a two-player game setting between an attacker, who is motivated to disrupt the communication among agents by attacking communication links, and a defender, who attempts to recover some of the attacked links. We formulate the problem based on [20, 6], which use graph connectivity to characterize the game and the players' strategies. The game in this paper is played repeatedly over discrete time in the context of multiagent consensus. As a result of these persistent attacks and recoveries, cluster forming emerges among the agents of the network under the consensus protocol, with different clusters having different agents' states. Cluster forming in multiagent systems has been studied in, e.g., [7, 29, 1], where the relations among certain agents may be hostile. In this paper, we approach clustering from a different viewpoint based on a game-theoretic formulation. Specifically, the players of the game consider the network effect/network externality [13] to form clusters among agents. Their utilities are determined by how the network is disconnected into groups of agents as well as how the players' actions affect the states of the agents at each time. Under this setting, the number and the size of the clusters are influenced by how strong the attacks are; a stronger attacker is expected to be able to separate the agents into more, smaller clusters, and vice versa.

In the resilient network setting, it is common that there exists a network manager who is aware of the incoming attack, since the agents try to communicate with their neighbor agents at all times and thus quickly notice if some of their neighbors do not send any signal. The network manager then tries to prepare a defense plan to quickly recover from such attacks and to repel subsequent attacks. From the attacker's viewpoint, it is also common that the attacker knows which edges of the network are the most vulnerable as well as how powerful the network manager is, e.g., the manager's remaining resources. Therefore, we believe that this sequential model can be applied to several real-world settings.

The main contribution of this paper is that we introduce a game played repeatedly over time to model the decision making process between the attacker and the defender in the context of network security. It is then natural to explore how these games affect the network and the state evolution of the agents. The consensus protocol is considered due to its simple characterization, where all agents should converge in the case of no attack. More specifically, in comparison to [20, 6], our contribution is threefold: (i) we introduce more options for the attacker's jamming signal strengths; (ii) the game consists of multiple attack-recovery actions, resulting in more complicated strategies; and (iii) we consider a rolling horizon approach for the players so that their strategies may be modified as they obtain new knowledge of the status of the system.
More specifically, it is now possible for the attacker to disable links with a stronger intensity of attack signals so that the defender is unable to recover those links (the decision on which edges are to be attacked with stronger attack signals is made at the same time as the decision on which edges are to be attacked with normal attack signals); this feature is motivated by [32, 33]. In practice, this is possible when the attacker emits stronger jamming signals, which take more resources and result in a much lower signal-to-interference-plus-noise ratio (SINR), so that it is not possible for the defender to recover the communication on those links with its limited recovery strength. On the other hand, we consider games consisting of multiple parts, where the players need to consider their future utilities and energy constraints when deciding their strategies at any point in time. This setting enables the players to think further ahead and prioritize their long-term payoffs, compared to a single-step case. The players recalculate and may override their strategies as time goes on, according to the rolling horizon approach. A related formulation without rolling horizon is discussed in [19], where the players are not able to change their strategies decided at earlier times.

The paper is organized as follows. In Section 2, we introduce the framework for the attack-recovery sequence, cluster forming among agents, and energy consumption models of the players. The utility functions of the games in the rolling horizon approach of the repeated games are discussed in Section 3, whereas the game structure is characterized in Section 4. In Section 5, we analyze some conditions for consensus among agents, which are related to the parameters of the underlying graph and the players' energy constraints. We continue by discussing the cluster forming of agents when consensus is not achieved in Section 6. The equilibrium characterization of the game under certain conditions is discussed in Section 7. We then provide numerical examples on consensus and cluster forming in Section 8 and conclude the paper in Section 9. The conference version of this paper appeared in [21], where we considered a more restricted situation on how often the players update their strategies.

The notations used in this paper are fairly standard. We denote by \(|\cdot|\) the cardinality of a set. The floor function and the ceiling function are denoted by \(\lfloor\cdot\rfloor\) and \(\lceil\cdot\rceil\), respectively. The sets of positive and nonnegative integers are denoted by \(\mathbb{N}\) and \(\mathbb{N}_{0}\), respectively.

## 2 Attack/Recovery Characterization for Multiagent Systems Under Consensus Dynamics

We consider a multiagent system of \(n\) agents communicating with each other in discrete time in the face of jamming attacks. The agents aim to converge to a consensus state by interacting with each other over the communication network. The network topology for normal operation is given by an undirected and connected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). The graph consists of the set \(\mathcal{V}\) of vertices representing the agents and the set \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) of edges representing the communication links. The edge connectivity [2] of the connected graph \(\mathcal{G}\) is denoted by \(\lambda\).
Each agent \(i\) has the scalar state \(x_{i}[k]\) following the discrete-time update rule at time \(k\in\mathbb{N}_{0}\) given by

\[x_{i}[k+1]=x_{i}[k]+u_{i}[k],\quad x[0]=x_{0}, \tag{1}\]

where \(u_{i}[k]\) denotes the control input applied to agent \(i\). We assume that \(u_{i}[k]\) is constructed as the weighted sum of the state differences between agent \(i\) and its neighbor agents, as commonly used in, e.g., [8], which is given by

\[u_{i}[k]=\sum_{j\in\mathcal{N}_{i}[k]}a_{ij}(x_{j}[k]-x_{i}[k]), \tag{2}\]

where \(\mathcal{N}_{i}[k]\) denotes the set of agents that can communicate with agent \(i\) at time \(k\), and \(a_{ij}\) represents the weight of edge \((i,j)\in\mathcal{E}\) such that \(\sum_{j=1,j\neq i}^{n}a_{ij}<1\), \(i\in\mathcal{V}\), to ensure that the agents achieve consensus without any attack.

We assume that jamming attacks on an edge affect the communication between the two agents connected by that attacked edge. As a result, the set \(\mathcal{N}_{i}[k]\) may change, and the resulting communication topology can be disconnected at time \(k\). Such jamming attacks are represented by the removal of edges in \(\mathcal{G}\). On the other hand, within the system there is a defender that may be capable of maintaining the communication among the agents, e.g., by asking agents to send stronger communication signals to overcome the jamming signals. This action is represented as rebuilding some of the attacked edges. From this sequence of attacks and recoveries, we characterize the attack-recovery process as a two-player game between the _attacker_ and the _defender_ in terms of the communication links in the network. In other words, the graph characterizing the networked system is _resilient_ if the group of agents is able to recover from the damage caused by the attacker. However, there may be cases where the resiliency level of the graph is reduced if the jamming signals are sufficiently strong such that the defender cannot recover. Note that to achieve consensus, the agents need _not_ be connected for _all_ time.

In this paper, we consider the case where the attacker has two types of jamming signals in terms of their strength, _strong_ and _normal_. The defender is able to recover only the edges that are attacked with normal strength. In the following subsections, we first describe the sequence of attacks and recoveries, characterize the constraints on the players' energy and computational ability that we need to impose, and explain how the objective of the problem is formulated.

### Attack-Recovery Sequence

In our setting, at each discrete time \(k\), the players (the attacker and the defender) decide to attack/recover certain edges in two stages, with the attacker acting first and then the defender. Specifically, at time \(k\) the attacker attacks \(\mathcal{G}\) by deleting the edges \(\mathcal{E}_{k}^{\mathrm{A}}\subseteq\mathcal{E}\) with normal jamming signals and \(\overline{\mathcal{E}}_{k}^{\mathrm{A}}\subseteq\mathcal{E}\) with strong jamming signals, with \(\mathcal{E}_{k}^{\mathrm{A}}\cap\overline{\mathcal{E}}_{k}^{\mathrm{A}}=\emptyset\), whereas the defender recovers \(\mathcal{E}_{k}^{\mathrm{D}}\subseteq\mathcal{E}_{k}^{\mathrm{A}}\). As mentioned earlier, the defender is not able to recover the edges attacked with strong jamming signals, i.e., \(\mathcal{E}_{k}^{\mathrm{D}}\cap\overline{\mathcal{E}}_{k}^{\mathrm{A}}=\emptyset\).
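To make the two-stage sequence concrete, the following minimal Python sketch runs one attack-recovery round followed by the consensus update (1)-(2); the topology, weights, and attack/recovery sets are illustrative assumptions, not values from the paper.

```python
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]   # underlying edge set E (assumed)
a_w = {e: 0.2 for e in edges}              # weights a_ij with sum_j a_ij < 1

def consensus_step(x, active_edges):
    """One step of x_i[k+1] = x_i[k] + sum_j a_ij (x_j[k] - x_i[k])."""
    u = np.zeros_like(x)
    for (i, j) in active_edges:
        u[i] += a_w[(i, j)] * (x[j] - x[i])
        u[j] += a_w[(i, j)] * (x[i] - x[j])
    return x + u

x = np.array([1.0, -2.0, 0.5, 3.0])
strong = {(2, 3)}      # strongly attacked edges: cannot be recovered
normal = {(1, 2)}      # normally attacked edges
recovered = {(1, 2)}   # recovered edges, a subset of the normally attacked ones
active = (set(edges) - strong - normal) | recovered   # edge set of G_k^D
x = consensus_step(x, active)
```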
Due to the attacks and then the recoveries, the network changes from \(\mathcal{G}\) to \(\mathcal{G}_{k}^{\mathrm{A}}:=(\mathcal{V},\mathcal{E}\setminus(\mathcal{E}_{k}^{\mathrm{A}}\cup\overline{\mathcal{E}}_{k}^{\mathrm{A}}))\) and further to \(\mathcal{G}_{k}^{\mathrm{D}}:=(\mathcal{V},(\mathcal{E}\setminus(\mathcal{E}_{k}^{\mathrm{A}}\cup\overline{\mathcal{E}}_{k}^{\mathrm{A}}))\cup\mathcal{E}_{k}^{\mathrm{D}})\) at time \(k\). The agents then communicate with their neighbors \(\mathcal{N}_{i}[k]\) based on this resulting graph \(\mathcal{G}_{k}^{\mathrm{D}}\). In this game, the players attempt to choose the best strategies in terms of the edges attacked/recovered, \((\overline{\mathcal{E}}_{k}^{\mathrm{A}},\mathcal{E}_{k}^{\mathrm{A}})\) and \(\mathcal{E}_{k}^{\mathrm{D}}\), to maximize their own utility functions. Here, the games are played every game period \(T\) time steps, and the \(l\)th game is defined over the horizon of \(h\) steps from time \((l-1)T\) to \((l-1)T+h-1\), with \(l\in\mathbb{N}\) and \(1\leq T\leq h\). The players make decisions in a _rolling horizon_ fashion; the optimal strategies obtained at \((l-1)T\) for future time steps may be overridden when the players recalculate their strategies at time \(lT\), when the next game starts. Fig. 1 illustrates the discussed sequence over time with \(h=8\) and \(T=4\), where the filled circles indicate the implemented strategies and the empty circles indicate the strategies of the game that are discarded.

Figure 1: Illustration of the games played over discrete time \(k\) with rolling horizon approaches by the players.

In this setting, the _horizon length_ \(h\) indicates the computational ability, i.e., how far into the future the players can plan their strategies, whereas the _game period_ \(T\leq h\) indicates the players' adaptability, i.e., how long the players apply the obtained strategies without updating (a shorter \(T\) means that a player is more adaptable). The rolling horizon game structure will be discussed in Section 4 in more detail.

### Energy Constraints

The actions of the attacker and the defender are affected by the constraints on their energy resources. It is assumed that the total supplied energy for the players increases linearly in time; furthermore, the energy consumed by the players is proportional to the number of attacked/recovered edges. Here we suppose that the players initially possess a certain amount of energy, \(\kappa^{\rm A}\) for the attacker and \(\kappa^{\rm D}\) for the defender. Moreover, the players are assumed to be able to supply energy wirelessly to devices that obstruct/retain communication signals between the agents, so that the energy supply rates to these devices are limited by the constant values \(\rho^{\rm A}\) and \(\rho^{\rm D}\) every discrete time step. These devices are supposed to have unlimited battery capacity and thus can be supplied constantly by the players at a linear rate \(\rho^{\rm A}\) or \(\rho^{\rm D}\). For the attacker, the strong attacks on \(\overline{\cal E}^{\rm A}_{k}\) take \(\overline{\beta}^{\rm A}>0\) energy per edge per unit time, whereas the normal attacks on \({\cal E}^{\rm A}_{k}\) take \(\beta^{\rm A}>0\) energy per edge, with \(\overline{\beta}^{\rm A}>\beta^{\rm A}\). The total energy used by the attacker is constrained as

\[\sum_{m=0}^{k}(\overline{\beta}^{\rm A}|\overline{\cal E}^{\rm A}_{m}|+\beta^{\rm A}|{\cal E}^{\rm A}_{m}|)\leq\kappa^{\rm A}+\rho^{\rm A}k \tag{3}\]

for any time \(k\), where \(\kappa^{\rm A}\geq\rho^{\rm A}>0\). This implies that the total energy spent by the attacker cannot exceed the available energy, characterized as the sum of the initial energy \(\kappa^{\rm A}\) and the supplied energy \(\rho^{\rm A}k\) by time \(k\). This energy constraint restricts the number of edges that the attacker can attack. Note that the attacker's available energy increases by \(\rho^{\rm A}\) at each \(k\). The condition \(\kappa^{\rm A}\geq\rho^{\rm A}\) ensures that the attacker has at least the same attack ability at time \(k=0\) as at later time steps. Fig. 2 illustrates the energy constraint of the attacker, where the dashed line with slope \(\rho^{\rm A}\) represents the total supplied energy and the filled circles indicate the total energy spent. A critical case is when \(\beta^{\rm A}<\rho^{\rm A}\), since it is then possible for the attacker to attack at least one edge at all times. This will have implications for the consensus and cluster forming of the agents, as we will discuss later.

Figure 2: Energy constraint of the attacker considered in the formulation. The dashed line represents the total supplied energy to spend. The filled circles representing the actual energy consumed by the player should be below the dashed line.
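As a quick illustration of (3), the sketch below checks whether a prescribed attack sequence is energy-feasible; all parameter values here are assumptions chosen only for the example.

```python
kappa_A, rho_A = 2.0, 1.0       # initial energy and recharge rate (assumed)
beta_A, beta_A_bar = 0.6, 1.5   # normal/strong per-edge costs, beta_A_bar > beta_A

def attacker_feasible(attack_seq):
    """attack_seq[k] = (#edges attacked normally, #edges attacked strongly).
    Returns True iff the energy constraint (3) holds at every time k."""
    spent = 0.0
    for k, (n_normal, n_strong) in enumerate(attack_seq):
        spent += beta_A * n_normal + beta_A_bar * n_strong
        if spent > kappa_A + rho_A * k:
            return False
    return True

print(attacker_feasible([(1, 0), (1, 0), (0, 1)]))   # True for these values
```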
The energy constraint for the defender is similar to (3):

\[\sum_{m=0}^{k}\beta^{\rm D}|{\cal E}^{\rm D}_{m}|\leq\kappa^{\rm D}+\rho^{\rm D}k, \tag{4}\]

with \(\kappa^{\rm D}\geq\rho^{\rm D}>0\) and \(\beta^{\rm D}>0\). Note that there is a single term on the left-hand side because there is only one type of recovery signal.

## 3 Utility Functions with Cluster Forming and Agent-group Index Considerations

In our game setting, the attacker tries to make the graph disconnected in order to separate the agents into clusters. Here, we introduce a few notions related to grouping/clustering of agents. In a given subgraph \({\cal G}^{\prime}=({\cal V},{\cal E}^{\prime})\) of \({\cal G}\), the agents may be divided into \(\overline{n}({\cal G}^{\prime})\) _groups_, with the groups \({\cal V}^{\prime}_{1},{\cal V}^{\prime}_{2},\ldots,{\cal V}^{\prime}_{\overline{n}({\cal G}^{\prime})}\) being a partition of \({\cal V}\) with \(\cup_{p=1}^{\overline{n}({\cal G}^{\prime})}{\cal V}^{\prime}_{p}={\cal V}\) and \({\cal V}^{\prime}_{p}\cap{\cal V}^{\prime}_{q}=\emptyset\) if \(p\neq q\). There is no edge connecting different groups, i.e., \(e_{i^{\prime},j^{\prime}}\notin{\cal E}^{\prime}\), \(\forall i^{\prime}\in{\cal V}^{\prime}_{p},j^{\prime}\in{\cal V}^{\prime}_{q}\). We also call each subset of agents taking the same state at infinite time a _cluster_, i.e., \(\lim_{k\rightarrow\infty}(x_{i}[k]-x_{j}[k])=0\) implies that agents \(i\) and \(j\) belong to the same cluster.

In the considered game, the attacker and the defender are concerned about the number of agents in each group. Specifically, we follow the notion of _network effect/network externality_ [13], where the utility of an agent in a certain group depends on how many other agents belong to that particular group. In the context of this game, the attacker wants to isolate agents so that fewer agents are in each group, while the defender wants as many agents as possible in the same group.
We then represent the level of grouping in the graph \(\mathcal{G}^{\prime}\) by the function \(c(\cdot)\), which we call the _agent-group index_, given by

\[c(\mathcal{G}^{\prime}):=\sum_{p=1}^{\overline{n}(\mathcal{G}^{\prime})}|\mathcal{V}_{p}^{\prime}|^{2}-|\mathcal{V}|^{2}\quad(\leq 0). \tag{5}\]

The value of \(c(\mathcal{G}^{\prime})\) is \(0\) if \(\mathcal{G}^{\prime}\) is connected, since there is only one group (i.e., \(\overline{n}(\mathcal{G}^{\prime})=1\)). A larger value (closer to \(0\)) of \(c(\mathcal{G}^{\prime})\) implies that there are fewer groups in graph \(\mathcal{G}^{\prime}\) and/or each group has more agents. The agent-group indices of some graphs are shown in Fig. 3. Here, it is interesting that \(c(\mathcal{G}_{\rm D})\) is smaller than \(c(\mathcal{G}_{\rm C})\), even though \(\mathcal{G}_{\rm C}\) has more groups. This is because the largest cluster consists of more agents in \(\mathcal{G}_{\rm C}\) than in \(\mathcal{G}_{\rm D}\). Thus, for an attacker who tries to reduce the number of agents in one cluster, \(\mathcal{G}_{\rm D}\) is preferable to \(\mathcal{G}_{\rm C}\).

In our problem setting, the players also consider the effects of their actions on the agent states when attacking/recovering. For example, the attacker may want to separate agents whose state values differ more into different groups. We specify the agents' state difference \(z_{k}\) as

\[z_{k}(\overline{\mathcal{E}}_{k}^{\rm A},\mathcal{E}_{k}^{\rm A},\mathcal{E}_{k}^{\rm D}):=x^{\rm T}[k+1]L_{\rm c}x[k+1], \tag{6}\]

with \(L_{\rm c}\), for simplicity, being the Laplacian matrix of the complete graph with \(n\) agents. That is, (6) represents the sum of squares of the state differences over all agent pairs. This implies that the state differences between any pair of agents are worth the same, and thus the players do not prioritize any connection between agents. The attacked and recovered edges \((\overline{\mathcal{E}}_{k}^{\rm A},\mathcal{E}_{k}^{\rm A},\mathcal{E}_{k}^{\rm D})\) affect \(x[k+1]\) in accordance with (1) and (2), and in turn the value of \(z_{k}\). Note that the value of \(z_{k}\) is nonincreasing over time [2] even if some agents are left disconnected from other agents under attacks. This sum-of-squares characterization of the agents' state difference is commonly used and essentially the same as in our previous work [19] for the continuous-time setting; here, we extend the formulation to comply with the discrete-time setting by considering the states one time step ahead, at \(k+1\).

Now, we combine the two measures in (5) and (6) to construct the utility functions for the game in a zero-sum manner. Specifically, for the \(l\)th game starting at time \(k=(l-1)T\), the attacker's and the defender's utility functions take account of the agent-group index \(c(\cdot)\) and the difference \(z_{k}\) of the agents' states over the horizon of \(h\) steps from time \((l-1)T\) to \((l-1)T+h-1\). With weights \(a,b\geq 0\), the utilities for the \(l\)th game, \(U_{l}^{\rm A}\) for the attacker and \(U_{l}^{\rm D}\) for the defender, are, respectively, defined by

\[U_{l}^{\rm A} :=\sum_{k=(l-1)T}^{(l-1)T+h-1}(az_{k}-bc(\mathcal{G}_{k}^{\rm D})), \tag{7}\]
\[U_{l}^{\rm D} :=-U_{l}^{\rm A}. \tag{8}\]

In our setting, both players attempt to maximize their utilities at the start of each game \(l\). The values of \(a\) and \(b\) represent the preference of the players towards either long-term agent clustering or short-term agent grouping. A higher value of \(a\) implies that the players prefer to focus on the agent states and the subsequent cluster forming, whereas a higher value of \(b\) implies that they focus more on the agent grouping. We suppose that both players know the underlying topology \(\mathcal{G}\) as well as the states of all agents \(x_{i}[k]\).
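Both ingredients of the utilities (7)-(8) are straightforward to evaluate; the sketch below computes the agent-group index (5) via connected components (networkx is used purely for convenience) and the state difference (6) via the complete-graph Laplacian \(L_{\rm c}=nI-\mathbf{1}\mathbf{1}^{\rm T}\).

```python
import numpy as np
import networkx as nx

def agent_group_index(n, active_edges):
    """c(G') = sum_p |V'_p|^2 - |V|^2, which is <= 0 and = 0 iff G' is connected."""
    g = nx.Graph()
    g.add_nodes_from(range(n))
    g.add_edges_from(active_edges)
    return sum(len(comp) ** 2 for comp in nx.connected_components(g)) - n ** 2

def state_difference(x_next):
    """z_k = x[k+1]^T L_c x[k+1]: sum of squared state differences over all pairs."""
    n = len(x_next)
    L_c = n * np.eye(n) - np.ones((n, n))   # Laplacian of the complete graph
    return float(x_next @ L_c @ x_next)

# Six agents split into groups of sizes 4 and 2: c = 4^2 + 2^2 - 6^2 = -16
print(agent_group_index(6, [(0, 1), (1, 2), (2, 3), (4, 5)]))
```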
## 4 Rolling Horizon Game Structure

We are interested in finding the subgame perfect equilibrium [9] of the game outlined in Section 3. To this end, the game is divided into several subgames/decision-making points. The subgame perfect equilibrium must be an equilibrium in every subgame. The optimal strategy of each player is obtained by using a backward induction approach, i.e., by finding the equilibrium from the smallest subgames. A tie-break occurs when the players' strategies result in the same utility. In this case, we suppose that the players choose to save their energy by attacking/recovering fewer edges, unless they have enough energy to attack/recover all edges in every subsequent step, in which case they attack/recover more edges.

Due to the nature of the rolling horizon approach, the strategies obtained from the \(l\)th game, i.e., the attacked and recovered edges, are applied only from time \((l-1)T\) to \(lT-1\). Specifically, in the \(l\)th game for time \((l-1)T\) to \((l-1)T+h-1\), the strategies of both players are denoted by \(((\overline{\mathcal{E}}^{\mathrm{A}}_{l,1},\mathcal{E}^{\mathrm{A}}_{l,1},\mathcal{E}^{\mathrm{D}}_{l,1}),\ldots,(\overline{\mathcal{E}}^{\mathrm{A}}_{l,h},\mathcal{E}^{\mathrm{A}}_{l,h},\mathcal{E}^{\mathrm{D}}_{l,h}))\), with \((\overline{\mathcal{E}}^{\mathrm{A}}_{l,\alpha},\mathcal{E}^{\mathrm{A}}_{l,\alpha},\mathcal{E}^{\mathrm{D}}_{l,\alpha})\) indicating the strategies at the \(\alpha\)th step of the \(l\)th game, with \(\alpha\in\{1,\ldots,h\}\). Note that here we show the strategies with two subscripts representing the game and the step indices along the time axis. From the above set of strategies, only \(((\overline{\mathcal{E}}^{\mathrm{A}}_{l,1},\mathcal{E}^{\mathrm{A}}_{l,1},\mathcal{E}^{\mathrm{D}}_{l,1}),\ldots,(\overline{\mathcal{E}}^{\mathrm{A}}_{l,T},\mathcal{E}^{\mathrm{A}}_{l,T},\mathcal{E}^{\mathrm{D}}_{l,T}))\) is applied. Recall that \(h\) is taken to be greater than or equal to \(T\). Therefore, for the \(l\)th game from time \((l-1)T\) to \(lT-1\), the applied strategy will be written as \(((\overline{\mathcal{E}}^{\mathrm{A}}_{(l-1)T},\mathcal{E}^{\mathrm{A}}_{(l-1)T},\mathcal{E}^{\mathrm{D}}_{(l-1)T}),\ldots,(\overline{\mathcal{E}}^{\mathrm{A}}_{lT-1},\mathcal{E}^{\mathrm{A}}_{lT-1},\mathcal{E}^{\mathrm{D}}_{lT-1})):=((\overline{\mathcal{E}}^{\mathrm{A}}_{l,1},\mathcal{E}^{\mathrm{A}}_{l,1},\mathcal{E}^{\mathrm{D}}_{l,1}),\ldots,(\overline{\mathcal{E}}^{\mathrm{A}}_{l,T},\mathcal{E}^{\mathrm{A}}_{l,T},\mathcal{E}^{\mathrm{D}}_{l,T}))\).

We look at how the optimal edges can be found through an example with \(h=2\) and \(T=1\) or \(2\).
In this case, for the \(l\)th game over times \((l-1)T\) and \((l-1)T+1\), the optimal strategies of the players are given by

\[\mathcal{E}^{\mathrm{D}*}_{l,2}(\overline{\mathcal{E}}^{\mathrm{A}}_{l,2},\mathcal{E}^{\mathrm{A}}_{l,2})\in\arg\max_{\mathcal{E}^{\mathrm{D}}_{l,2}}U^{\mathrm{D}}_{l,2}, \tag{9}\]
\[(\overline{\mathcal{E}}^{\mathrm{A}*}_{l,2}(\mathcal{E}^{\mathrm{D}}_{l,1}),\mathcal{E}^{\mathrm{A}*}_{l,2}(\mathcal{E}^{\mathrm{D}}_{l,1}))\in\arg\max_{(\overline{\mathcal{E}}^{\mathrm{A}}_{l,2},\mathcal{E}^{\mathrm{A}}_{l,2})}U^{\mathrm{A}}_{l,2}, \tag{10}\]
\[\mathcal{E}^{\mathrm{D}*}_{l,1}(\overline{\mathcal{E}}^{\mathrm{A}}_{l,1},\mathcal{E}^{\mathrm{A}}_{l,1})\in\arg\max_{\mathcal{E}^{\mathrm{D}}_{l,1}}U^{\mathrm{D}}_{l}, \tag{11}\]
\[(\overline{\mathcal{E}}^{\mathrm{A}*}_{l,1},\mathcal{E}^{\mathrm{A}*}_{l,1})\in\arg\max_{(\overline{\mathcal{E}}^{\mathrm{A}}_{l,1},\mathcal{E}^{\mathrm{A}}_{l,1})}U^{\mathrm{A}}_{l}, \tag{12}\]

where \(U^{\mathrm{A}}_{l,\alpha}\) and \(U^{\mathrm{D}}_{l,\alpha}\) are defined as the parts of \(U^{\mathrm{A}}_{l}\) and \(U^{\mathrm{D}}_{l}\), respectively, calculated from the \(\alpha\)th step to the last (\(h\)th) step of the \(l\)th game, i.e., \(U^{\mathrm{A}}_{l,\alpha}=-U^{\mathrm{D}}_{l,\alpha}:=\sum_{k=(l-1)T+\alpha-1}^{(l-1)T+h-1}(az_{k}-bc(\mathcal{G}^{\mathrm{D}}_{k}))\). In this case with \(h=2\), the functions \(U^{\mathrm{A}}_{l,2}\) and \(U^{\mathrm{D}}_{l,2}\) are based on the values of \(az_{k}\) and \(bc(\mathcal{G}^{\mathrm{D}}_{k})\) at \(k=(l-1)T+1\) only. Note that to find \((\overline{\mathcal{E}}^{\mathrm{A}*}_{l,1},\mathcal{E}^{\mathrm{A}*}_{l,1})\), one needs to obtain \(\mathcal{E}^{\mathrm{D}*}_{l,1}(\overline{\mathcal{E}}^{\mathrm{A}}_{l,1},\mathcal{E}^{\mathrm{A}}_{l,1})\) beforehand. Likewise, to find \(\mathcal{E}^{\mathrm{D}*}_{l,1}\), one needs to obtain \((\overline{\mathcal{E}}^{\mathrm{A}*}_{l,2}(\mathcal{E}^{\mathrm{D}}_{l,1}),\mathcal{E}^{\mathrm{A}*}_{l,2}(\mathcal{E}^{\mathrm{D}}_{l,1}))\). Similarly, to find \((\overline{\mathcal{E}}^{\mathrm{A}*}_{l,2},\mathcal{E}^{\mathrm{A}*}_{l,2})\), the edges \(\mathcal{E}^{\mathrm{D}*}_{l,2}(\overline{\mathcal{E}}^{\mathrm{A}}_{l,2},\mathcal{E}^{\mathrm{A}}_{l,2})\) must be obtained beforehand. Note that deriving the optimal strategies above is subject to the energy constraints (3) and (4).

For \(h>2\), the players' optimal strategies consist of \(2h\) parts similar to those in (9)-(12), with each time step consisting of two parts of strategies corresponding to the two players. They are solved by the players at every time \(k=(l-1)T\) of the \(l\)th game, \(l\in\mathbb{N}\). With \(T=h\), the players do not have a chance to override their strategies, which removes the rolling horizon aspect of the game.

We will find the optimal strategies of the players by computing all possible combinations, since the choices of edges are finite. From the optimization problems specified above, the players examine at most \(3^{|\mathcal{E}|}2^{|\mathcal{E}|}h\) combinations of attacked and recovered edges for utility evaluations, since they have to foresee the opponent's response as well. Note that the attacker has three possible actions on an edge: no attack, attack with normal signals, and attack with strong signals, whereas the defender has only two actions: recover or not recover. Here we can see that the number of computations increases exponentially with respect to the number of edges in the underlying graph.
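The exhaustive search can be sketched as follows for a single decision step: the attacker labels each edge with one of three actions and the defender best-responds over subsets of the normally attacked edges, mirroring the nesting of (9)-(12). Energy bookkeeping and the multi-step recursion are omitted, so this is a simplified illustration with a toy utility, not the full procedure.

```python
from itertools import product

def subsets(s):
    """Yield every subset of the set s."""
    s = list(s)
    for mask in range(1 << len(s)):
        yield {s[i] for i in range(len(s)) if mask >> i & 1}

def step_equilibrium(edges, utility):
    """utility(strong, normal, recovered) -> the attacker's one-step payoff.
    The defender minimizes it (zero-sum); the attacker then maximizes."""
    best = None
    for labels in product(('none', 'normal', 'strong'), repeat=len(edges)):
        strong = {e for e, s in zip(edges, labels) if s == 'strong'}
        normal = {e for e, s in zip(edges, labels) if s == 'normal'}
        reply = min(subsets(normal), key=lambda r: utility(strong, normal, r))
        value = utility(strong, normal, reply)
        if best is None or value > best[0]:
            best = (value, strong, normal, reply)
    return best

# Toy utility: strong attacks count twice, recoveries cancel normal attacks.
toy = lambda s, n, r: 2 * len(s) + len(n) - len(r)
print(step_equilibrium([(0, 1), (1, 2)], toy))
```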
To address scalability issues, we may first find the edges that are easier to attack, i.e., edges that result in the formation of new groups if attacked, and limit the strategy choices to those edges only. Our previous works [19, 20] considered related games in continuous time, where the timings for launching attack/defense actions are also part of the decision variables. This aspect complicated the formulation, making it difficult to study games over a time horizon. In this paper, we simplify the timing issue and instead introduce the rolling horizon feature. This enables the players to consider the cluster forming over a longer time range, which is especially important when consensus among agents is obstructed by adversaries.

With this rolling horizon setting, it is important for a player to know what the opponent's action at the previous step of the game was, in order to know its position in the game tree, i.e., which subgame the player is playing. For example, if the defender does not know which edges were previously attacked, then it cannot properly calculate the value of the utility function (8).

## 5 Consensus Analysis

In this section, we examine the effect of the game structure and the players' energy constraints on consensus. We begin the analysis by looking at the case of certain energy conditions of the players. Specifically, if a player has enough energy to attack/recover all edges from a certain step of the game, then it will use its energy to attack/recover as many edges as it can in the subsequent steps. We will confirm this point formally in the following.

For simplicity, we denote the total energy that the defender consumed before the \(l\)th game as \(\tilde{\beta}^{\mathrm{D}}_{l}:=\sum_{k=0}^{(l-1)T-1}\beta^{\mathrm{D}}|\mathcal{E}^{\mathrm{D}}_{k}|\) and the total energy that the defender may consume from the \(1\)st to the \(\alpha\)th step of the \(l\)th game as \(\hat{\beta}^{\mathrm{D}}_{\alpha}:=\sum_{m=1}^{\alpha}\beta^{\mathrm{D}}|\mathcal{E}^{\mathrm{D}}_{l,m}|\), where we omit the index \(l\) from the left-hand side, with a slight abuse of notation. Similarly, for the attacker we denote \(\tilde{\beta}^{\mathrm{A}}_{l}:=\sum_{m=0}^{(l-1)T-1}(\beta^{\mathrm{A}}|\mathcal{E}^{\mathrm{A}}_{m}|+\overline{\beta}^{\mathrm{A}}|\overline{\mathcal{E}}^{\mathrm{A}}_{m}|)\) and \(\hat{\beta}^{\mathrm{A}}_{\alpha}:=\sum_{m=1}^{\alpha}(\beta^{\mathrm{A}}|\mathcal{E}^{\mathrm{A}}_{l,m}|+\overline{\beta}^{\mathrm{A}}|\overline{\mathcal{E}}^{\mathrm{A}}_{l,m}|)\). We discuss in Lemma 1 (resp., Lemma 5) the optimal strategy of the defender (resp., attacker) at the \(\alpha\)th step of the game given certain energy conditions mentioned in Section 2. This characterization of the optimal strategy of the defender (resp., attacker) will be useful for obtaining the necessary (resp., sufficient) conditions for consensus not to happen.

### Necessary Conditions for not Reaching Consensus

This subsection discusses necessary conditions for the agents to be separated into different clusters for an infinitely long duration without achieving overall consensus. We first discuss the defender's optimal strategy in games with specific conditions in Lemmas 1 and 2. In Lemma 1, we state the defender's optimal strategy at any step of the \(l\)th game given a certain energy condition.
**Lemma 1**.: _If the defender's total energy \(\tilde{\beta}^{\rm D}_{l}+\hat{\beta}^{\rm D}_{\hat{\alpha}-1}\) consumed before the \(\hat{\alpha}\)th step of the \(l\)th game satisfies_

\[\tilde{\beta}^{\rm D}_{l}+\hat{\beta}^{\rm D}_{\hat{\alpha}-1}\leq\kappa^{\rm D}+\rho^{\rm D}((l-1)T+\hat{\alpha}-1)-(h-\hat{\alpha}+1)|{\cal E}|\beta^{\rm D}, \tag{13}\]

_then \({\cal E}^{\rm D*}_{l,\alpha}={\cal E}^{\rm A*}_{l,\alpha}\) for all \(\alpha\geq\hat{\alpha}\), i.e., the defender will recover all normally attacked edges from the \(\hat{\alpha}\)th step._

[Proof.] We first look at the last (\(h\)th) step of the \(l\)th game. Since the game consists of a horizon of \(h\) steps, the last step of the game corresponds to the last decision-making point, at which the players' strategies cannot influence the decisions already made in the previous steps of the same game. Hence, in the last step of the \(l\)th game the players do not save their energy by attacking/recovering fewer edges. From the defender's energy constraint (4), it is clear that at any time \(k\), the set of edges that the defender recovers is bounded as \(|{\cal E}^{\rm D}_{k}|\leq\frac{\kappa^{\rm D}+\rho^{\rm D}k-\sum_{m=0}^{k-1}\beta^{\rm D}|{\cal E}^{\rm D}_{m}|}{\beta^{\rm D}}\). Thus, at the \(h\)th step, the recovered edges satisfy \(|{\cal E}^{\rm D}_{l,h}|\leq|{\cal E}^{\rm D\prime}_{l,h}|\) with \(|{\cal E}^{\rm D\prime}_{l,h}|:=\min\{\lfloor\frac{\kappa^{\rm D}+\rho^{\rm D}((l-1)T+h-1)-(\tilde{\beta}^{\rm D}_{l}+\hat{\beta}^{\rm D}_{h-1})}{\beta^{\rm D}}\rfloor,\,|{\cal E}^{\rm A*}_{l,h}|\}\). Depending on which edges are normally attacked, the defender may not recover the maximum number \(|{\cal E}^{\rm D\prime}_{l,h}|\) of edges. If the defender's optimal strategy given normally attacked edges \({\cal E}^{\rm A}_{l,h}\) is not to recover \(|{\cal E}^{\rm D\prime}_{l,h}|\) edges, i.e., to recover fewer, then the defender would obtain more utility, \(U^{\rm D}_{l,h}(\overline{\cal E}^{\rm A}_{l,h},{\cal E}^{\rm A}_{l,h},{\cal E}^{\rm D}_{l,h})>U^{\rm D}_{l,h}(\overline{\cal E}^{\rm A}_{l,h},{\cal E}^{\rm A}_{l,h},{\cal E}^{\rm D\prime}_{l,h})\). However, under (13) with \(\hat{\alpha}=h\) the defender has sufficiently high energy, and thus the utility satisfies \(U^{\rm D}_{l,h}(\overline{\cal E}^{\rm A}_{l,h},{\cal E}^{\rm A}_{l,h},{\cal E}^{\rm D}_{l,h})>U^{\rm D}_{l,h}(\overline{\cal E}^{\rm A}_{l,h},{\cal E}^{\rm A}_{l,h},{\cal E}^{\rm A}_{l,h})=U^{\rm D}_{l,h}(\overline{\cal E}^{\rm A}_{l,h},\emptyset,\emptyset)\). It then follows that as long as the defender has enough energy, it will recover all optimal edges attacked normally at the \(h\)th step, i.e., \({\cal E}^{\rm D*}_{l,h}={\cal E}^{\rm A*}_{l,h}\).

Next, we investigate the effect of this property on the earlier steps of the \(l\)th game. Since the defender's strategy at the \(h\)th step is not affected by its strategy at the previous (i.e., \((h-1)\)th) step when \(\kappa^{\rm D}+\rho^{\rm D}((l-1)T+h-1)-(\tilde{\beta}^{\rm D}_{l}+\hat{\beta}^{\rm D}_{h-1})\geq\beta^{\rm D}|{\cal E}|\), here the defender does not need to recover fewer edges at the \((h-1)\)th step to save energy; this is because it already has enough energy to recover \({\cal E}^{\rm A*}_{l,h}\) at the \(h\)th step.
Now, we derive that if \(\kappa^{\rm D}+\rho^{\rm D}((l-1)T+h-2)-(\tilde{\beta}^{\rm D}_{l}+\hat{\beta}^{\rm D}_{h-2})\geq 2\beta^{\rm D}|{\cal E}|\) at the \((h-1)\)th step, then the defender will also recover \({\cal E}^{\rm D*}_{l,h-1}={\cal E}^{\rm A*}_{l,h-1}\). To recover all attacked edges at steps \(\alpha\geq\hat{\alpha}\), it is then sufficient that the defender's energy satisfies (13), so that \(\kappa^{\rm D}+\rho^{\rm D}((l-1)T+\alpha-1)\geq\tilde{\beta}^{\rm D}_{l}+\hat{\beta}^{\rm D}_{\alpha-1}+\beta^{\rm D}|{\cal E}|\), i.e., the worst-case scenario of the energy constraint (4) when the defender recovers all edges, is always satisfied when \(\alpha\geq\hat{\alpha}\). \(\Box\)

From the proof above, note that if the defender's strategy is _not_ to recover all normally attacked edges even if (13) is satisfied, i.e., \({\cal E}^{\rm A}_{l,\alpha}=\hat{\cal E}^{\rm A}\neq{\cal E}^{\rm D}_{l,\alpha}\), then the attacker will not attack the set of edges \(\hat{\cal E}^{\rm A}\) in the first place. This is because by attacking \(\hat{\cal E}^{\rm A}\) (and considering \({\cal E}^{\rm D}_{l,\alpha}\neq\hat{\cal E}^{\rm A}\)) the attacker's utility for step \(\alpha\geq\hat{\alpha}\) becomes \(U^{\rm A}_{l,\alpha}(\hat{\cal E}^{\rm A},{\cal E}^{\rm D}_{l,\alpha}\neq\hat{\cal E}^{\rm A})<U^{\rm A}_{l,\alpha}(\cdot,\emptyset,\emptyset)\), since \(U^{\rm D}_{l,\alpha}(\cdot,\hat{\cal E}^{\rm A},{\cal E}^{\rm D}_{l,\alpha}\neq\hat{\cal E}^{\rm A})>U^{\rm D}_{l,\alpha}(\cdot,\emptyset,\emptyset)=U^{\rm D}_{l,\alpha}(\cdot,\hat{\cal E}^{\rm A},\hat{\cal E}^{\rm A})\) and \(U^{\rm A}_{l}=-U^{\rm D}_{l}\). We also remark that, in order to derive the same optimal strategy for the defender, the quantity \((h-\hat{\alpha}+1)|{\cal E}|\) on the right-hand side of inequality (13) can be relaxed to the maximum number of edges that the attacker can attack from step \(\hat{\alpha}\) to step \(h\) given its energy condition. However, this number of edges may change every game, making the inequality complicated to express.

Lemma 2 gives an interval over which, at least once, either not attacking with normal signals or recovering a nonzero number of edges is optimal.

**Lemma 2**.: _There is at least one occurrence of either \({\cal E}^{\rm D}_{k}\neq\emptyset\) or \({\cal E}^{\rm A}_{k}=\emptyset\) every \(\lceil\frac{h|{\cal E}|\beta^{\rm D}-\rho^{\rm D}}{\rho^{\rm D}T}+1\rceil\) time steps._

[Proof.] It follows from Lemma 1 that in a game with index \(l^{\prime}\) where (13) is satisfied for \(\alpha=1\), the defender always recovers edges that are attacked normally in the 1st step, i.e., \({\cal E}^{\rm D}_{l^{\prime},1}\neq\emptyset\) if \({\cal E}^{\rm A}_{l^{\prime},1}\neq\emptyset\). We then investigate in which game inequality (13) is satisfied for \(\alpha=1\). Since the defender gains \(\rho^{\rm D}\) at every time \(k\), if \({\cal E}^{\rm D}_{k}=\emptyset\) for all \(k\in\{0,\ldots,(l^{\prime}-1)T-1\}\), then (13) at the first step of the \(l^{\prime}\)th game can be written as \(\frac{\kappa^{\mathrm{D}}+\rho^{\mathrm{D}}(l^{\prime}-1)T}{\beta^{\mathrm{D}}}\geq h|\mathcal{E}|\). With \(\kappa^{\mathrm{D}}=\rho^{\mathrm{D}}\) as a worst-case scenario, the left-hand side becomes \(\frac{\rho^{\mathrm{D}}(1+(l^{\prime}-1)T)}{\beta^{\mathrm{D}}}\), and we then obtain \(l^{\prime}\geq\lceil\frac{h|\mathcal{E}|\beta^{\mathrm{D}}-\rho^{\mathrm{D}}}{\rho^{\mathrm{D}}T}+1\rceil\).
Note that the above fact holds when the defender does not recover any edge for any \(k\in\{(j-1)(l^{\prime}-1)T,\ldots,j(l^{\prime}-1)T-1\}\), \(j\in\mathbb{N}\). If the defender recovers one or more attacked edges at some \(k\in\{0,\ldots,(l^{\prime}-1)T-1\}\), then the above result may not hold, i.e., the defender may not be able to recover all of \(\mathcal{E}_{k}^{\mathrm{A}}\). However, it follows that during the interval \(k\in\{(j-1)(l^{\prime}-1)T,\ldots,j(l^{\prime}-1)T-1\}\), either 1) the defender recovers a nonzero number of edges (\(\mathcal{E}_{k}^{\mathrm{D}}\neq\emptyset\)), or 2) the attacker attacks no edges with normal signals (\(\mathcal{E}_{k}^{\mathrm{A}}=\emptyset\)) at least once.

Lemmas 1 and 2 above imply that the defender is guaranteed to make recoveries from normal attacks at least once every certain interval. Hence, the attacker needs to attack some edges strongly to prevent the recovery in order to separate agents into different clusters, as we discuss next.

The following two results provide necessary conditions for consensus not to take place. We consider a more general condition in Proposition 3, whereas in Theorem 4 we consider a more specific situation for the utility functions that leads to a tighter condition. Recall that \(\lambda\) represents the edge connectivity of \(\mathcal{G}\).

**Proposition 3**.: _A necessary condition for consensus not to happen is \(\lfloor\rho^{\mathrm{A}}/\beta^{\mathrm{A}}\rfloor\geq\lambda\)._

[Proof.] In deriving this necessary condition, we suppose that there is no recovery by the defender at any time \(k\). Without any recovery from the defender (\(\mathcal{E}_{k}^{\mathrm{D}}=\emptyset\)), the attacker must attack at least \(\lambda\) edges with normal signals (which take less energy) at every time \(k\) to make \(\mathcal{G}_{k}^{\mathrm{D}}\) disconnected at all times. Otherwise, there will be time steps where the graph \(\mathcal{G}_{k}^{\mathrm{D}}\) is connected, which implies that consensus will still be reached. If the attacker attacks \(\lambda\) edges with normal jamming signals at all times, the energy constraint (3) becomes \((\beta^{\mathrm{A}}\lambda-\rho^{\mathrm{A}})k\leq\kappa^{\mathrm{A}}\). Thus, the condition \(\rho^{\mathrm{A}}/\beta^{\mathrm{A}}\geq\lambda\) has to be satisfied to ensure that the attacker can make \(\mathcal{G}_{k}^{\mathrm{D}}\) disconnected for all \(k\). Note that if the attacker does not have enough energy to disconnect \(\mathcal{G}_{k}^{\mathrm{D}}\) given no recovery, then it certainly cannot disconnect \(\mathcal{G}_{k}^{\mathrm{D}}\) in the face of recoveries by the defender.

We now limit the class of utility functions in (7), (8) to the case of \(b=0\) in the weights. This means that the players do not take account of the agent-group index of the graph, but only the states in consensus. In this case, the attacker may need more energy to prevent consensus, as shown in the next theorem.

**Theorem 4**.: _Suppose that \(b=0\). A necessary condition for consensus not to happen is \(\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\geq\lambda\)._

[Proof.] We prove by contraposition; specifically, we prove that consensus always happens if \(\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}<\lambda\). We first suppose that the attacker attempts to attack \(\lambda\) edges strongly at all times to disconnect the graph \(\mathcal{G}_{k}^{\mathrm{D}}\). From (3), the energy constraint of the attacker at time \(k\) becomes \((\overline{\beta}^{\mathrm{A}}\lambda-\rho^{\mathrm{A}})k\leq\kappa^{\mathrm{A}}\).
This inequality is not satisfied for sufficiently large \(k\) if \(\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}<\lambda\), since \(\overline{\beta}^{\mathrm{A}}\lambda-\rho^{\mathrm{A}}\) becomes positive and \(\kappa^{\mathrm{A}}\) is finite. Therefore, the attacker cannot attack \(\lambda\) edges strongly at all times if \(\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}<\lambda\), and is forced to disconnect the graph by attacking with normal jamming signals instead.

Next, by Lemma 2 above, we show that there exists an interval of time where the defender always recovers if there are edges attacked normally, i.e., \(\mathcal{E}_{l^{\prime}}^{\mathrm{D}}\neq\emptyset\) is optimal given that \(\mathcal{E}_{l^{\prime}}^{\mathrm{A}}\neq\emptyset\). From the definitions in (7), (8), given that \(b=0\), we can see that the defender obtains a higher utility if the agents are closer. This means that, given a nonzero number of edges to recover (at time \(jl^{\prime}T\) described above), the defender recovers the edges connecting agents whose states are further apart. Specifically, for some \(i\in\mathbb{N}\), within the interval \([jl^{\prime}T,(j+i)l^{\prime}T]\), there is a time step where \(U_{l}^{\mathrm{D}}(\mathcal{E}_{k}^{\mathrm{D}}=\mathcal{E}_{1})\geq U_{l}^{\mathrm{D}}(\mathcal{E}_{k}^{\mathrm{D}}=\mathcal{E}_{2})\), with the edges \(\mathcal{E}_{1}\) connecting agents whose states are further apart than those of the agents connected by \(\mathcal{E}_{2}\). This fact implies that, when recovering, the defender always chooses the more distant disconnected agents. Since, by communicating with the consensus protocol as in (1), the agents' states get closer, the defender will choose different edges to recover once the states of the agents connected by the recovered edges \(\mathcal{E}_{k}^{\mathrm{D}}\) become close enough.

Consequently, if \(\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}<\lambda\), then there exists \(i\in\mathbb{N}\) such that the union of graphs, i.e., the graph having the union of the edges of each graph \((\mathcal{V},\bigcup((\mathcal{E}\setminus(\overline{\mathcal{E}}_{k}^{\mathrm{A}}\cup\mathcal{E}_{k}^{\mathrm{A}}))\cup\mathcal{E}_{k}^{\mathrm{D}}))\) over the time interval \([j(l^{\prime}-1)T,(j+i)(l^{\prime}-1)T]\), becomes a connected graph, where \(l^{\prime}=\lceil\frac{h|\mathcal{E}|\beta^{\mathrm{D}}-\rho^{\mathrm{D}}}{\rho^{\mathrm{D}}T}+1\rceil\) as in Lemma 2 above. These intervals \([j(l^{\prime}-1)T,(j+i)(l^{\prime}-1)T]\) occur infinitely many times, since the defender's energy bound keeps increasing over time. It is shown in [31] that, with protocol (1), the agents achieve consensus in a time-varying graph as long as the union of the graphs over bounded time intervals is a connected graph. This implies that consensus is achieved if \((\mathcal{V},\bigcup((\mathcal{E}\setminus(\overline{\mathcal{E}}^{\mathrm{A}}_{k}\cup\mathcal{E}^{\mathrm{A}}_{k}))\cup\mathcal{E}^{\mathrm{D}}_{k}))\) is connected over such bounded intervals. Thus, if \(\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}<\lambda\), then consensus is achieved.

The result in Theorem 4 only holds for \(b=0\), since with \(b>0\) the defender may choose to recover the edges connecting agents that already have similar states in order to maximize \(c(\mathcal{G}^{\mathrm{D}}_{k})\) (instead of those connecting agents that are further apart). In such a case, the network may remain disconnected and thus the agents may converge to different states.
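The union-graph criterion invoked in this proof is easy to check computationally; the sketch below (with placeholder edge sets) tests whether the union of the realized graphs \(\mathcal{G}^{\mathrm{D}}_{k}\) over an interval is connected, which by [31] suffices for consensus under (1).

```python
import networkx as nx

def union_is_connected(n, edge_sets):
    """edge_sets: one list of active edges per time step in the interval."""
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for es in edge_sets:
        g.add_edges_from(es)
    return nx.is_connected(g)

# No single step yields a connected graph, but the union over the interval does:
print(union_is_connected(3, [[(0, 1)], [(1, 2)]]))   # True
```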
As we see from these results, the weight values affect the necessary conditions to prevent consensus, whereas the effect of the weights on the sufficient condition (discussed later) is less straightforward. The effect of the values of \(a\) and \(b\) on consensus is illustrated in Section 8.

### Sufficient Condition to Prevent Consensus

The next result provides a sufficient condition for preventing consensus. It shows that the attacker can prevent consensus if it has a sufficiently large recharge rate \(\rho^{\mathrm{A}}\) given the network topology \(\mathcal{G}\). We first state Lemma 5 about the attacker's optimal strategy under some energy conditions, similar to the discussion of the defender's case above.

**Lemma 5**.: _The attacker's optimal strategy is \(\overline{\mathcal{E}}^{\mathrm{A}*}_{l,\alpha}=\mathcal{E}\) if_

* _the attacker's recharge rate satisfies_ \(\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\geq|\mathcal{E}|\)_, or_
* _the attacker's total energy_ \(\tilde{\beta}^{\mathrm{A}}_{l}+\hat{\beta}^{\mathrm{A}}_{\alpha-1}\) _consumed before the_ \(\alpha\)_th step of the_ \(l\)_th game satisfies_
\[\tilde{\beta}^{\mathrm{A}}_{l}+\hat{\beta}^{\mathrm{A}}_{\alpha-1}\leq\kappa^{\mathrm{A}}+\rho^{\mathrm{A}}((l-1)T+\alpha-1)-(h-\alpha+1)\overline{\beta}^{\mathrm{A}}|\mathcal{E}|. \tag{14}\]

[Proof.] We first observe that at the \(h\)th step of the \(l\)th game the attacker does not save its energy by attacking fewer edges. Since \(z_{l,h}(\mathcal{E},\emptyset,\emptyset)>z_{l,h}(\overline{\mathcal{E}}^{\mathrm{A}}_{l,h},\mathcal{E}^{\mathrm{A}}_{l,h},\mathcal{E}^{\mathrm{D}}_{l,h})\) and \(c((\mathcal{V},\emptyset))\geq c((\mathcal{V},(\mathcal{E}\setminus(\overline{\mathcal{E}}^{\mathrm{A}}_{l,h}\cup\mathcal{E}^{\mathrm{A}}_{l,h}))\cup\mathcal{E}^{\mathrm{D}}_{l,h}))\) are always satisfied for any edges \(\overline{\mathcal{E}}^{\mathrm{A}}_{l,h},\mathcal{E}^{\mathrm{A}}_{l,h},\mathcal{E}^{\mathrm{D}}_{l,h}\), the function \(U^{\mathrm{A}}_{l,h}\) always attains its highest value if the attacker strongly attacks all edges \(\mathcal{E}\). It then follows that the attacker with enough energy, i.e., when \(\kappa^{\mathrm{A}}+\rho^{\mathrm{A}}((l-1)T+h-1)-(\tilde{\beta}^{\mathrm{A}}_{l}+\hat{\beta}^{\mathrm{A}}_{h-1})\geq\overline{\beta}^{\mathrm{A}}|\mathcal{E}|\) is satisfied, will choose to attack all edges with strong signals. Similar to the proof of Lemma 1, the inequalities \(z_{l,\alpha}(\mathcal{E},\emptyset,\emptyset)>z_{l,\alpha}(\overline{\mathcal{E}}^{\mathrm{A}}_{l,\alpha},\mathcal{E}^{\mathrm{A}}_{l,\alpha},\mathcal{E}^{\mathrm{D}}_{l,\alpha})\) and \(c((\mathcal{V},\emptyset))\geq c((\mathcal{V},(\mathcal{E}\setminus(\overline{\mathcal{E}}^{\mathrm{A}}_{l,\alpha}\cup\mathcal{E}^{\mathrm{A}}_{l,\alpha}))\cup\mathcal{E}^{\mathrm{D}}_{l,\alpha}))\) are always satisfied at any step \(\alpha\). Hence, the attacker will choose to attack all edges with strong signals at any step \(\alpha\) given enough energy. This can be achieved if the attacker has high enough stored energy, i.e., (14) is satisfied, or if the attacker has a high enough recharge rate, i.e., \(\rho^{\mathrm{A}}\geq\overline{\beta}^{\mathrm{A}}|\mathcal{E}|\). These conditions enable the attacker to attack all edges strongly while still satisfying the energy constraint (3) for all steps.

**Proposition 6**.: _A sufficient condition for all agents not to achieve consensus at infinite time is that the attacker's parameters satisfy \(\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\geq|\mathcal{E}|\)._

[Proof.]
By Lemma 5, the attacker always attacks all edges with strong signals at any step \(\alpha\) of a game given either a sufficient recharge rate or sufficient stored energy at the beginning of the game. Consequently, if the attacker's recharge rate satisfies \(\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\geq|\mathcal{E}|\), the attacker will attack \(\mathcal{E}\) with stronger jamming signals at all steps of all games, separating every agent at all times. As a result, there are \(n\) clusters formed, and hence, obviously, consensus is not reached.

**Remark 7**.: _Note that the necessary conditions and the sufficient condition above consider \(z_{k}=x^{\mathrm{T}}L_{\rm c}x\) in (6), which is a nonincreasing function. It is possible to consider other Laplacian matrices, e.g., the Laplacian of the underlying graph \(\mathcal{G}\); however, the function \(z_{k}\) may then no longer be nonincreasing. For example, we consider a simple path graph 1-2-3 with initial states \(x_{0}=[10,0,-5]^{\mathrm{T}}\) and the Laplacian of graph \(\mathcal{G}\) used in the state difference function \(z_{k}\). With weights \(a=1\) and \(b=0\) of the utility functions (7) and (8), and under the consensus protocol (1) and (2) with weights \(a_{12}=0.1\) and \(a_{23}=0.8\), the players' utilities in the first game with \(h=1\) are \(U^{\mathrm{A}}_{1}=-U^{\mathrm{D}}_{1}=148\) without any attacks, and \(U^{\mathrm{A}}_{1}=-U^{\mathrm{D}}_{1}=125\) if both edges are attacked. This implies that not attacking any edge may actually be optimal for the attacker even with large enough energy. As a consequence, with the Laplacian of graph \(\mathcal{G}\) considered in the state difference function \(z_{k}\), the analysis becomes more complicated and some of the theoretical results do not hold anymore, e.g., the sufficient condition in Proposition 6._

### Example on a Gap Between Necessary Condition and Sufficient Condition

In this subsection we provide an example that illustrates the gap between the necessary condition for preventing consensus in Theorem 4 and the sufficient condition in Proposition 6. Here we suppose that the defender has a very high recharge rate (i.e., \(\rho^{\mathrm{D}}\) is much larger than \(\beta^{\mathrm{D}}\)) such that it can recover any normally attacked edges at any \(k\) (note that the condition in Theorem 4 consists only of the attacker's parameters). This forces the attacker to attack with strong jamming signals to disconnect any agent. We consider a graph \(\mathcal{G}\) as in Fig. 4, with \(x[0]=[-5,0,-20,10]\), \(h=2\), and \(\kappa^{\mathrm{A}}=\rho^{\mathrm{A}}\). The weights of the utility functions are set to \(a=1\) and \(b=0\). We test various values of \(1\leq\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\leq 2\), implying that the attacker can attack one edge with strong signals at all times without running out of energy. Thus, the attacker needs to attack \(e_{12}\) (the min-cut edge of \(\mathcal{G}\)) at all times in order to prevent consensus, since it is the only edge which, if attacked, makes the graph disconnected. Note that this ratio \(1\leq\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\leq 2\) satisfies the necessary condition for preventing consensus in Theorem 4, but not the sufficient condition in Proposition 6. Specifically, in this example we test whether consensus is prevented or not for various values of \(\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\) based on the agent states at time \(k=20\).
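The described test can be sketched as follows. Since Fig. 4 is not reproduced here, the 4-node topology below (with \(e_{12}\) as its only bridge) and the uniform weights are assumptions chosen only to be consistent with the description; with \(e_{12}\) strongly attacked at every step, agent 1 stays isolated while the others converge, so consensus is prevented.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (1, 3)]   # assumed stand-in for Fig. 4; (0, 1) ~ e_12
w = 0.2                                    # assumed uniform weights a_ij
x = np.array([-5.0, 0.0, -20.0, 10.0])     # x[0] from the example

for k in range(20):
    active = [e for e in edges if e != (0, 1)]   # e_12 strongly attacked at all times
    u = np.zeros_like(x)
    for (i, j) in active:
        u[i] += w * (x[j] - x[i])
        u[j] += w * (x[i] - x[j])
    x = x + u

print(x)   # agent 1 (index 0) stays at -5; agents 2-4 approach a common value
```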
It is interesting to note from Table 1 that, even with a relatively small value of \(\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}<|\mathcal{E}|\), consensus can still be prevented by the attacker. From this example, we observe that there is a gap between the necessary condition and the sufficient condition. Note that this gap may be larger for a more connected \(\mathcal{G}\) as well as for networks consisting of more agents, where typically \(|\mathcal{E}|\gg\lambda\). Later in Section 8, we provide more detailed examples which illustrate the effect of these parameter values on consensus.

As the last result of this section, we state that for the special case of the complete graph under \(b=0\) and \(h=1\), i.e., a single-step game without rolling horizon, the condition in Theorem 4 is also sufficient, i.e., there is no gap between the necessary condition and the sufficient condition.

**Proposition 8**.: _Suppose that \(b=0\) and \(h=1\). In the complete graph \(\mathcal{G}\), a sufficient condition for consensus not to happen is \(\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\geq n-1\)._

[Proof.] With \(h=1\), the attacker will spend all of its energy at the only step of the game. With \(\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\geq n-1\), the attacker is always able to disconnect the complete graph \(\mathcal{G}\). In the complete graph \(\mathcal{G}\), every agent is connected to all other agents regardless of their states, implying that there is no agent that can be prioritized to be isolated by the attacker (different from the example above). Then, with \(b=0\), the attacker is ensured to separate the furthest agent. This implies that, at each game (and at each \(k\)), the attacker will always attack the same edges, resulting in a disconnected \(\mathcal{G}_{k}^{\mathrm{D}}\) at each time.

We note that for other classes of graphs (including other symmetric graphs such as cycle graphs or star graphs), it is more challenging to derive a tighter sufficient condition. This is because agents have direct access only to some other agents, which makes cluster forming based on the agent states more difficult.

## 6 Clustering Analysis

In this section, we derive some results on the number of formed clusters of agents at infinite time. Proposition 6 already covers the simple case: if the attacker has enough energy such that \(\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\geq|\mathcal{E}|\), then the attacker can attack all the edges of the underlying topology \(\mathcal{G}\) so that the number of clusters is \(n\) (i.e., all the agents are separated). The next result relates the attacker's cost and energy recharge rate to the maximum number of clusters that the attacker may create through jamming. In the subsequent results of this section, we suppose that \(b=0\). We first define a vector which characterizes the maximum number of clusters of \(\mathcal{G}\) given the parameters \(\rho^{\mathrm{A}}\) and \(\overline{\beta}^{\mathrm{A}}\). Specifically, we define a vector \(\Theta\in\mathbb{R}^{|\mathcal{E}|}\) with elements \(\Theta_{j}:=\max_{|\mathcal{E}^{\mathrm{A}}|=j}\overline{n}(\mathcal{V},\mathcal{E}\setminus\mathcal{E}^{\mathrm{A}})\), with \(\overline{n}(\mathcal{V},\mathcal{E}\setminus\mathcal{E}^{\mathrm{A}})\) being the number of agent groups of \((\mathcal{V},\mathcal{E}\setminus\mathcal{E}^{\mathrm{A}})\).
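As an illustration (not part of the original analysis), \(\Theta\) can be computed by brute force for small graphs by enumerating all subsets of exactly \(j\) removed edges and counting the resulting groups; networkx is assumed for component counting.

```python
from itertools import combinations
import networkx as nx

def theta(n, edges):
    """Theta_j = max number of groups of (V, E with exactly j edges removed)."""
    out = []
    for j in range(1, len(edges) + 1):
        best = 1
        for removed in combinations(edges, j):
            g = nx.Graph()
            g.add_nodes_from(range(n))
            g.add_edges_from(e for e in edges if e not in set(removed))
            best = max(best, nx.number_connected_components(g))
        out.append(best)
    return out

# For the assumed 4-node bridge graph used earlier this returns [2, 2, 3, 4],
# matching the values reported for Fig. 4 below.
print(theta(4, [(0, 1), (1, 2), (2, 3), (1, 3)]))
```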
**Proposition 9**.: _An upper bound on the number of formed clusters at infinite time is \(\Theta_{\lfloor\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\rfloor}\)._

[Proof.] The vector \(\Theta\) consists of the maximum number of formed groups \(\overline{n}(\mathcal{V},\mathcal{E}\setminus\mathcal{E}^{\mathrm{A}})\), indexed by the number of attacked edges. Since some edges need to be attacked consistently in order to divide the agents into different clusters, the number of formed clusters at infinite time is never more than the maximum number of groups at any time \(k\) given the same number of strongly attacked edges. Recall that \(\lfloor\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\rfloor\) is the maximum achievable number of edges that can be strongly attacked at all times. Given the known graph topology \(\mathcal{G}\), it then follows that \(\Theta_{\lfloor\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\rfloor}\) gives the maximum number of clusters at infinite time. \(\Box\)

We continue by addressing a special case where all the agents in the network are connected with each other.

**Corollary 10**.: _In the complete graph \(\mathcal{G}\), the attacker cannot divide the agents into more than_

\[1+\sum_{j=1}^{(n-1)}\min\Bigl\{1,\Bigl\lfloor\frac{2\rho^{\mathrm{A}}}{j\overline{\beta}^{\mathrm{A}}(2n-j-1)}\Bigr\rfloor\Bigr\} \tag{15}\]

_clusters._

[Proof.] In the complete graph, every agent is connected to all other \(n-1\) agents. From Proposition 9, we can derive the vector \(\Theta\) of the complete graph \(\mathcal{G}\) as

\[\Theta=[1,\ldots,1,2,\ldots,2,3,\ldots,n-1,n]^{\mathrm{T}},\]

where the value of the \((n-1)\)th entry is 2, the value of the \(((n-1)+(n-2))\)th entry is 3, and so on. This is because in the complete graph \(\mathcal{G}\) the attacker needs to attack \((n-1)\) edges to disconnect the graph, a further \((n-2)\) edges to make three groups of agents, a further \((n-3)\) edges to make four groups of agents, and so on, until \((n-1)+(n-2)+\cdots+1=n(n-1)/2\) edges are attacked to make \(n\) groups. The value of the \(\lfloor\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\rfloor\)th entry of this vector \(\Theta\) for the complete graph can be written as in (15). This value determines the upper bound on the number of clusters. \(\Box\)

In Proposition 9, we use the information of the graph structure to obtain the vector \(\Theta\). We remark that if the graph structure \(\mathcal{G}\) is not known, then the number of clusters at infinite time is in general upper bounded by \(\lfloor\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\rfloor+1\). This is because the attacker can continuously attack at most \(\lfloor\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\rfloor\) edges at all times, and in the most vulnerable graphs with \(\lambda=1\), i.e., tree graphs, any attacked edge will result in a new group. To illustrate the relationship between \(\Theta\) and \(\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\), we look at the graph in Fig. 4 from the last section. Here, \(\Theta=[2,2,3,4]^{\mathrm{T}}\), whereas the values of \(\lfloor\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}\rfloor+1\) are 2, 3, 4, 5 for \(\rho^{\mathrm{A}}/\overline{\beta}^{\mathrm{A}}=1\), 2, 3, and 4, respectively.
Note that for any value of \(\rho^{\mathsf{A}}/\overline{\beta}^{\mathsf{A}}\), the inequality \(\Theta_{\lfloor\rho^{\mathsf{A}}/\overline{\beta}^{\mathsf{A}}\rfloor}\leq\lfloor\rho^{\mathsf{A}}/\overline{\beta}^{\mathsf{A}}\rfloor+1\) is always satisfied, indicating that knowing the graph structure helps to better estimate the upper bound on the number of clusters.

## 7 Equilibrium Characterization

In this game the strategy choices are all finite, in the form of edges attacked and recovered. Here, we characterize the equilibrium/optimal strategies of the players in certain situations for the case where the players' horizon length is 1, so that they myopically update their strategies at every time step. In this section, we state some results for \(a=0\), i.e., when the players do not consider the agents' states but only the agent-group index in determining their strategies, so that the defender (resp., attacker) has a higher (resp., lower) utility when more agents belong to the same group. Similar to the analysis in [20], here we explore some possible optimal strategy candidates for the players in a game. However, since a game consists of several steps in this formulation, the subgame perfect equilibrium is more involved to characterize than in the case of a game consisting of one step as in [20]. In the \(\alpha\)th step of each game, there are three possibilities for the values of the function \(c(\cdot)\), as shown in Table 2 (Cases 1, 2, and 3). From this table, we characterize the optimal strategies of both players in each case:

* **Case 1:** When \(c(\mathcal{G}^{\mathrm{D}}_{l,\alpha})=c(\mathcal{G})\), the attacker's utility in one time step is determined by \(c(\mathcal{G})\) alone, which implies that the attacker should not attack any edge with either normal or strong signals, and the utilities of both players equal zero. The players' strategies in this case are called Combined Strategy 1.
* **Case 2:** When \(c(\mathcal{G}^{\mathrm{D}}_{l,\alpha})=c(\mathcal{G}^{\mathrm{A}}_{l,\alpha})\), the defender does not recover any attacked edge, whereas the attacker attacks some edges with either strong or normal signals. The players' strategies in this case are classified as Combined Strategy 2.
* **Case 3:** Here both players attack/recover a nonzero number of edges. In particular, the attacker attacks with normal signals and potentially with strong signals. The players' strategies here are called Combined Strategy 3.

\begin{table}
\begin{tabular}{|c|c|c|}
\hline Case & \(c(\mathcal{G}^{\mathrm{A}}_{l,\alpha})\) & \(c(\mathcal{G}^{\mathrm{D}}_{l,\alpha})\) \\
\hline 1 & \(c(\mathcal{G}^{\mathrm{A}}_{l,\alpha})=c(\mathcal{G})\) & \(c(\mathcal{G}^{\mathrm{D}}_{l,\alpha})=c(\mathcal{G}^{\mathrm{A}}_{l,\alpha})\) \\
\hline 2 & \(c(\mathcal{G}^{\mathrm{A}}_{l,\alpha})<c(\mathcal{G})\) & \(c(\mathcal{G}^{\mathrm{D}}_{l,\alpha})=c(\mathcal{G}^{\mathrm{A}}_{l,\alpha})\) \\
\hline 3 & \(c(\mathcal{G}^{\mathrm{A}}_{l,\alpha})<c(\mathcal{G})\) & \(c(\mathcal{G}^{\mathrm{D}}_{l,\alpha})>c(\mathcal{G}^{\mathrm{A}}_{l,\alpha})\) \\
\hline
\end{tabular}
\end{table}
Table 2: Possible cases of attack and recovery actions

We discuss the equilibrium of this game in Proposition 11 below. For simplicity, we only consider the case of \(h=1\); the case of \(h>1\) can be examined based on the characterization here for \(h=1\).

**Proposition 11**.: _The optimal strategies for the players with \(h=1\) satisfy the following:_
(1) _Combined Strategy 1 if_ \(\tilde{\beta}^{\rm A}_{l}+\beta^{\rm A}>\kappa^{\rm A}+\rho^{\rm A}(l-1)T\);

(2) _otherwise,_

(2a) _Combined Strategy 2 if either_

(2a(i)) \(\tilde{\beta}^{\rm D}_{l}+\beta^{\rm D}>\kappa^{\rm D}+\rho^{\rm D}(l-1)T\), _or_

(2a(ii)) \(\tilde{\beta}^{\rm D}_{l}+\beta^{\rm D}\leq\kappa^{\rm D}+\rho^{\rm D}(l-1)T\) _and_ \(U^{\rm A}_{l}(\overline{\cal E}^{\rm A}_{k},\emptyset,\emptyset)=\max_{\overline{\cal E}^{\rm A}_{k},{\cal E}^{\rm A}_{k},{\cal E}^{\rm D}_{k}}U^{\rm A}_{l}(\overline{\cal E}^{\rm A}_{k},{\cal E}^{\rm A}_{k},{\cal E}^{\rm D}_{k})\) _with_ \(|\overline{\cal E}^{\rm A}_{k}|=\lfloor(\kappa^{\rm A}+\rho^{\rm A}(l-1)T-\tilde{\beta}^{\rm A}_{l})/\overline{\beta}^{\rm A}\rfloor\);

(2b) _Combined Strategy 3 if_ \(\tilde{\beta}^{\rm D}_{l}+\beta^{\rm D}\leq\kappa^{\rm D}+\rho^{\rm D}(l-1)T\) _and_ \(U^{\rm A}_{l}(\overline{\cal E}^{\rm A}_{k},\emptyset,\emptyset)\neq\max_{\overline{\cal E}^{\rm A}_{k},{\cal E}^{\rm A}_{k},{\cal E}^{\rm D}_{k}}U^{\rm A}_{l}(\overline{\cal E}^{\rm A}_{k},{\cal E}^{\rm A}_{k},{\cal E}^{\rm D}_{k})\) _with \(\overline{\cal E}^{\rm A}_{k}\) as in (2a(ii))._

[Proof.] With \(a=0\), we observe that the defender always recovers from the optimal attack at the last step given sufficient energy, which implies that for \(h=1\) it always recovers if \(\tilde{\beta}^{\rm D}_{l}+\beta^{\rm D}\leq\kappa^{\rm D}+\rho^{\rm D}(l-1)T\) is satisfied. Similarly to the defender, the attacker obtains the lowest utility, i.e., zero, by not attacking in the case of \(h=1\). Therefore, the attacker will attack at least one edge as long as it has enough energy to do so. We prove each point of the proposition statement below.

**(1):** Suppose first that \(\tilde{\beta}^{\rm A}_{l}+\beta^{\rm A}>\kappa^{\rm A}+\rho^{\rm A}(l-1)T\) (point (1) of the statement) is satisfied, i.e., the attacker does not have enough energy to attack even one edge normally. In this case, Combined Strategy 1 is optimal since there is no other choice: the attacker cannot attack even one edge with normal signals. In the rest of the proof, we assume that \(\tilde{\beta}^{\rm A}_{l}+\beta^{\rm A}\leq\kappa^{\rm A}+\rho^{\rm A}(l-1)T\) is satisfied.

**(2a(i)):** We continue by providing the conditions for Combined Strategy 2. Similarly to the attacker above, we observe that the defender cannot recover any edge if \(\tilde{\beta}^{\rm D}_{l}+\beta^{\rm D}>\kappa^{\rm D}+\rho^{\rm D}(l-1)T\), implying that \(c({\cal G}^{\rm A}_{l,\alpha})<c({\cal G})\) and \(c({\cal G}^{\rm D}_{l,\alpha})=c({\cal G}^{\rm A}_{l,\alpha})\) (which corresponds to point (2a(i))).

**(2a(ii)):** Suppose now that \(\tilde{\beta}^{\rm D}_{l}+\beta^{\rm D}\leq\kappa^{\rm D}+\rho^{\rm D}(l-1)T\) is satisfied. Given enough energy for the defender, the attacker needs to attack a nonzero number of edges with strong signals to satisfy \(c({\cal G}^{\rm A}_{l,\alpha})<c({\cal G})\) and \(c({\cal G}^{\rm D}_{l,\alpha})=c({\cal G}^{\rm A}_{l,\alpha})\). In order for Combined Strategy 2 to be optimal, the attacker then needs to attack edges strongly without attacking with normal signals at all, i.e., \({\cal E}^{\rm A}_{k}=\emptyset\). Thus, \(\overline{\beta}^{\rm A}\) needs to be sufficiently low to make strong attacks feasible.
Specifically, \(U^{\rm A}_{l}(\overline{\cal E}^{\rm A}_{k},\emptyset,\emptyset)=\max_{\overline{\cal E}^{\rm A}_{k},{\cal E}^{\rm A}_{k},{\cal E}^{\rm D}_{k}}U^{\rm A}_{l}(\overline{\cal E}^{\rm A}_{k},{\cal E}^{\rm A}_{k},{\cal E}^{\rm D}_{k})\), with \(|\overline{\cal E}^{\rm A}_{k}|=\lfloor(\kappa^{\rm A}+\rho^{\rm A}(l-1)T-\tilde{\beta}^{\rm A}_{l})/\overline{\beta}^{\rm A}\rfloor\) indicating the maximum number of edges the attacker can attack strongly. This corresponds to point (2a(ii)).

**(2b):** Consequently, if \(\tilde{\beta}^{\rm D}_{l}+\beta^{\rm D}\leq\kappa^{\rm D}+\rho^{\rm D}(l-1)T\) and \(U^{\rm A}_{l}(\overline{\cal E}^{\rm A}_{k},\emptyset,\emptyset)\neq\max_{\overline{\cal E}^{\rm A}_{k},{\cal E}^{\rm A}_{k},{\cal E}^{\rm D}_{k}}U^{\rm A}_{l}(\overline{\cal E}^{\rm A}_{k},{\cal E}^{\rm A}_{k},{\cal E}^{\rm D}_{k})\) hold, then the attacker attacks a nonzero number of edges with normal signals and the defender recovers a nonzero number of edges, which implies that Combined Strategy 3 is optimal (point (2b)). \(\square\)

**Remark 12**.: _The characterization of optimal strategies in Proposition 11 also holds for a more general class of agent-group indices than \(c({\cal G}^{\prime})\) defined in (5), as long as the utility function structure (7) and (8) does not change. Specifically, it holds for those indices that belong to the class given by_

\[{\cal C}:=\{\tilde{c}:2^{\cal V}\times 2^{\cal E}\rightarrow\mathbb{R}\,:\,\tilde{c}(({\cal V},\overline{\cal E}\cup{\cal E}^{\prime}))\geq\tilde{c}(({\cal V},\overline{\cal E})),\ \forall\,\overline{\cal E},{\cal E}^{\prime}\subseteq{\cal E}\}. \tag{16}\]

_The condition \(\tilde{c}(({\cal V},\overline{\cal E}\cup{\cal E}^{\prime}))\geq\tilde{c}(({\cal V},\overline{\cal E}))\) implies that not attacking results in the maximum value of \(\tilde{c}({\cal G}^{\rm A}_{l,\alpha})\) for the attacker. Similarly, for the defender, this condition implies that not recovering, given the attacks, results in the minimum value of \(\tilde{c}({\cal G}^{\rm D}_{l,\alpha})\). This condition is necessary for ensuring the equilibrium as in Proposition 11, since it guarantees that attacking/recovering a nonzero number of edges (corresponding to Combined Strategy 3) is always optimal for the players as long as they have the energy to do so._

In general, since the cases discussed above are for one step only, for longer \(h>1\) the optimal strategies take the form of a sequence of combined strategies. For example, if \(h=3\), the sequence of optimal strategies may be {Combined Strategy 1, Combined Strategy 2, Combined Strategy 2}. On the other hand, for \(a>0\), the conditions in Proposition 11 become more complicated to characterize, since attacking more edges does not necessarily result in the highest possible utility.

## 8 Simulation Results

### 8.1 Consensus and Clustering across Parameters

Here we show how consensus varies across different weights of the utility functions and different initial states.

#### 8.1.1 Varying Weights \(a\) and \(b\)

We consider the 4-agent line/path graph 1-2-3-4 with initial states \(x_{0}=[1,0.75,0.75,-1]^{\rm T}\). The parameters are \(\beta^{\rm A}=\beta^{\rm D}=1\), \(h=\overline{\beta}^{\rm A}=2\), \(\kappa^{\rm A}=\rho^{\rm A}=2.6\), \(\rho^{\rm D}=0.3\), and \(\kappa^{\rm D}=0.8\), which satisfy the necessary condition for preventing consensus in Proposition 3, but not the sufficient condition.
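Before turning to the results, we sketch how one step of the agent dynamics can be simulated. The update form and the uniform weight below are our assumptions about the consensus dynamics (2) (uniform weights \(a_{ij}=\hat{a}\) are indeed used later in Section 8.2); this is an illustration, not the simulation code itself.

```python
import numpy as np


def consensus_step(x, active_edges, a_hat):
    """One consensus update over the surviving edges of G_k^D.

    Assumes dynamics of the common form
        x_i[k+1] = x_i[k] + a_hat * sum_{j in N_i(k)} (x_j[k] - x_i[k]),
    where N_i(k) runs over edges that are not attacked (or are recovered).
    """
    x_next = x.copy()
    for i, j in active_edges:
        x_next[i] += a_hat * (x[j] - x[i])
        x_next[j] += a_hat * (x[i] - x[j])
    return x_next


# Path graph 1-2-3-4 (0-indexed) with the edge between agents 3 and 4 attacked:
x = np.array([1.0, 0.75, 0.75, -1.0])
print(consensus_step(x, [(0, 1), (1, 2)], a_hat=0.2))  # agent 4 stays isolated
```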
With \(b=1-a\), Figs. 5 and 6 show the agent states with small \(a\) (at \(a=0.1\)) and large \(a\) (at \(a=0.9\)), respectively. Figs. 7 and 8 illustrate the status of the edges in \(\mathcal{G}_{k}^{\rm D}\) over discrete time \(k\). There, no line for the corresponding edge means that the edge is strongly attacked; likewise, dashed red lines: normally attacked; dashed black lines: recovered; solid black lines: not attacked.

#### 8.1.2 Varying Initial States

We next examine the effect of the agents' initial states on cluster forming, using a network of ten agents and the following three cases:

1. \(x_{0}=[1,0.9,0.8,0.4,0.44,0.35,0.48,0.2,0.19,0.28]^{\rm T}\),
2. \(x_{0}=[1,0.9,0.8,0.4,0.44,0.35,0.48,-0.5,-0.1,-0.2]^{\rm T}\),
3. \(x_{0}=[0.6,0.5,0.8,0.4,0.44,0.35,0.48,0.58,0.8,0.75]^{\rm T}\).

Note that in Case (1), agents 1-3 have closer initial states and are far from the other agents. Similarly, in Case (2), agents 8-10 have initial states that are different from those of the other agents. However, in Case (3), the agent states are distributed approximately evenly in the range \([0.35,0.8]\), so that it is hard for the attacker to divide them into clusters. From Fig. 11, we can see that in Case (1), agents 1-3, which have a weak connection to the other agents (only connected by one edge), are grouped together and converge to the same state. This occurs by attacking the edge connecting agents 3 and 5. On the other hand, in Fig. 12 for Case (2), agents 8-10 are separated from the others because the edge connecting agents 5 and 8 is attacked continuously. Clearly, in Cases (1) and (2) it is easier for the attacker to separate agents, since their initial states form clusters matching the network topology. In Case (3), however, the initial state values do not exhibit such properties, and as a result the states converge towards the same value, as shown in Fig. 13. In this simulation, the attacker is not able to effectively attack certain edges at all times; as a consequence, the agents are not divided into clusters and thus consensus happens. The attacker may be able to prevent consensus with a higher weight \(a\), as discussed in Section 8.1.1 above.

For obtaining Figs. 11-13, we solve combinatorial optimization problems to find the optimal strategies of the players. We remark that the computational complexity of this problem depends on the number of edges \(|\mathcal{E}|\) of \(\mathcal{G}\). We have reduced the complexity by disregarding some combinations of edges that are clearly not optimal; for example, attacking only the edge connecting agents 4 and 7 does not disconnect the graph and thus cannot be the best move for the attacker.

#### 8.1.3 Varying Energy and Cost Parameters

We continue by discussing the effect of the attacker's recharge rate \(\rho^{\rm A}\) and unit costs of attacks \(\beta^{\rm A}\) and \(\overline{\beta}^{\rm A}\) on consensus and cluster forming. Recall that in the theoretical results of Sections 5 and 6, the ratios of \(\rho^{\rm A}\) to \(\overline{\beta}^{\rm A}\) and of \(\rho^{\rm A}\) to \(\beta^{\rm A}\) are used to derive the necessary conditions and sufficient conditions for preventing consensus, as well as the upper bound on the number of clusters formed at infinite time. Assuming that \(b=0\), the number of clusters is dictated by \(\rho^{\rm A}/\overline{\beta}^{\rm A}\), as discussed in Proposition 9. We show the number of clusters over different topologies of the underlying graph \(\mathcal{G}\) in Fig. 14. We consider networks with \(n=5\), with the edges positioned to yield the most connected topology, i.e., maximum \(\lambda\), given the same number of edges \(|\mathcal{E}|\).
Note that, with \(n=5\), there are at most \(n(n-1)/2=10\) edges in the underlying graph \(\mathcal{G}\) (attained by the complete graph \(\mathcal{G}\)). We observe that with \(\rho^{\rm A}/\overline{\beta}^{\rm A}\geq|\mathcal{E}|\), the agents are divided into 5 clusters (all agents are separated), as shown in the upper left area of the figure indicated by "5" and as derived in Proposition 6, whereas in the lower right area indicated by "1" the agents converge to the same cluster. It is clear that in a more connected graph, the agents are more likely to converge to a smaller number of clusters.

### 8.2 Players' Performance Under Varying Horizon Length and Game Period

In this subsection, we evaluate the players' performance under varying horizon length \(h\) and game period \(T\). To evaluate the performance of the players, we introduce the _applied utilities_ \(\hat{U}^{\rm A}_{k}:=az_{k}(\overline{\mathcal{E}}^{\rm A*}_{k},\mathcal{E}^{\rm A*}_{k},\mathcal{E}^{\rm D*}_{k})-bc(\mathcal{G}^{\rm D*}_{k})\) and \(\hat{U}^{\rm D}_{k}:=-az_{k}(\overline{\mathcal{E}}^{\rm A*}_{k},\mathcal{E}^{\rm A*}_{k},\mathcal{E}^{\rm D*}_{k})+bc(\mathcal{G}^{\rm D*}_{k})\), with \(\mathcal{G}^{\rm D*}_{k}=(\mathcal{V},(\mathcal{E}\setminus(\overline{\mathcal{E}}^{\rm A*}_{k}\cup\mathcal{E}^{\rm A*}_{k}))\cup\mathcal{E}^{\rm D*}_{k})\). These are elements of the utility functions \(U_{l}^{\rm A}\) and \(U_{l}^{\rm D}\) corresponding to the \(\alpha\)th step, \(\alpha=(k\bmod T)+1\), of the game with index \(l=\lfloor k/T\rfloor+1\), where the obtained strategies \((\overline{\mathcal{E}}_{(l-1)T+\alpha-1}^{\rm A*},\mathcal{E}_{(l-1)T+\alpha-1}^{\rm A*},\mathcal{E}_{(l-1)T+\alpha-1}^{\rm D*})=(\overline{\mathcal{E}}_{l,\alpha}^{\rm A*},\mathcal{E}_{l,\alpha}^{\rm A*},\mathcal{E}_{l,\alpha}^{\rm D*})\) are applied. Since \(U_{l}^{\rm A}=-U_{l}^{\rm D}\), a higher applied utility for the attacker implies a lower applied utility for the defender. Note that the values of \(h\) and \(T\) are uniform among the players.

Figure 11: Agent states in Case 1

Figure 12: Agent states in Case 2

Figure 13: Agent states in Case 3

In this subsection, we consider the weight \(a_{ij}=\hat{a}\), \(\hat{a}<1/n\), in (2), which implies that different agents have different convergence speeds depending on the number of their neighbors. Furthermore, we consider various initial states \(x_{0}\) for the agents in order to more accurately evaluate the attacker's performance and the pattern of the applied utilities \(\hat{U}_{k}^{\rm A}\). We use up to 1000 randomly generated initial states in this simulation, with each agent's state ranging from \(-1\) to 1. Throughout this subsection, we use the parameters \(n=3\), \(\rho^{\rm A}=1.1\), \(\kappa^{\rm A}=7\), \(\overline{\beta}^{\rm A}=2\beta^{\rm A}=1\).

#### 8.2.1 Players' Performance Under Varying Horizon Length

Fig. 15 shows the average value of \(\sum\hat{U}_{k}^{\rm A}\) over time for different horizon lengths \(h\); in the path graph, the curves for different \(h\) lie close to each other. In the complete graph, the difference between the red and the yellow dashed lines is clearer, however, suggesting that the attacker still benefits from having \(h=3\) (compared to the very little difference in the path graph case). The attacker's different behavior for the path graph and the complete graph \(\mathcal{G}\) suggests that in a less connected graph, the effectiveness of a longer \(h\) may saturate from a lower value than in a more connected graph \(\mathcal{G}\), given the attacker's energy parameters.
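Since the mapping between the discrete time \(k\) and the game/step indices \((l,\alpha)\) used in the applied utilities above is easy to get wrong, a two-line helper (name ours) makes it explicit:

```python
def game_index(k: int, T: int) -> tuple[int, int]:
    """Map discrete time k to (game index l, step alpha within the game),
    following l = floor(k / T) + 1 and alpha = (k mod T) + 1 from the text."""
    return k // T + 1, k % T + 1


# Example: with T = 2, times k = 0, 1 belong to game l = 1 (steps 1 and 2),
# and k = 2 opens game l = 2 at step alpha = 1.
assert game_index(2, 2) == (2, 1)
```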
In general, we observe that a longer \(h\) may result in a better applied utility for the attacker over time due to its role as the leader of the game, i.e., the attacker moves first and is able to choose the strategy that minimizes the value of the defender's best response. Additionally, there is also a clear pattern in when \(\sum\hat{U}_{k}^{\text{A}}\) increases; this implies that the variation of initial states may not affect the attacker's optimal strategy, except in some cases as explained above. We also remark that the effect of different values of \(h\) is influenced by the underlying graph \(\mathcal{G}\). Specifically, in a less connected graph \(\mathcal{G}\), a very short horizon may be even more harmful than in the case of a more connected \(\mathcal{G}\). For example, in Fig. 15, the difference of \(\sum\hat{U}_{k}^{\text{A}}\) in the path graph between \(h=1\) and \(h=2\) is much more apparent than in the complete graph. The possible reason is that in the path graph it is easier for the attacker to disconnect all agents and make \(n\) groups at some time steps. Thus, with a large enough \(h\), the attacker can save enough energy to make \(n\) groups more often. On the other hand, we also observe that increasing the horizon length from \(h=2\) to \(h=3\) has a minimal effect on the attacker's utility for the path graph, indicating that increasing the horizon length past a certain value may no longer be beneficial. As we see later, a similar phenomenon also occurs for varying values of \(T\).

#### 8.2.2 Players' Performance Under Varying Game Period

We then continue by simulating the case of a varying game period \(T\) (the value of \(h\) is set to \(h=3\) for both players so that the assumption \(T\leq h\) is always satisfied). The average value of \(\sum\hat{U}_{k}^{\text{A}}\) over time is shown in Fig. 16, where in general the attacker with a shorter game period \(T\) has a higher applied utility, especially at later times, for both the path graph and the complete graph \(\mathcal{G}\). The attacker with a shorter \(T\) is more adaptive to the changes of the agents' and players' conditions. In the context of this game, the attacker with a shorter \(T\) may delay the attack further to maximize its utility later. This in turn increases the attacker's utility at later times, similar to the case of a longer \(h\) discussed above. Note that the yellow dashed and solid lines are the same as the yellow lines in Fig. 15, and we observe that the green and the purple lines do not differ as much as the red and the blue lines in Fig. 15, indicating that for the attacker, having a large value of \(T\) may not be as disadvantageous as having a short \(h\).

Table 4 shows the average number of edges attacked with normal and strong jamming signals for different values of \(h\) and \(T\). It is interesting to note that for \(h>T\), the attacker never attacks any edge with normal signals, indicating that it prefers to save its energy for more powerful attacks later. Consequently, the number of edges attacked strongly with \(h>T\) becomes larger than in the case of \(h=T\), which results in the larger applied utilities described above. We can also observe that in the case of \(h=3\) and \(T=1\), the attacker is able to strongly attack more edges on average at \(k=19\) than in the other cases of Table 4, even though at \(k=9\) it attacks slightly fewer edges than in the cases of closer values of \(h\) and \(T\).
This suggests that the attacker tends to save more of its energy in the case of a larger \(h\) and a smaller \(T\).

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\(h\)} & \multirow{2}{*}{\(T\)} & \multicolumn{2}{c|}{\(\sum_{m=0}^{k}|\mathcal{E}_{m}^{\text{A}*}|\) (Normal)} & \multicolumn{2}{c|}{\(\sum_{m=0}^{k}|\overline{\mathcal{E}}_{m}^{\text{A}*}|\) (Strong)} \\
\cline{3-6}
 & & \(k=9\) & \(k=19\) & \(k=9\) & \(k=19\) \\
\hline
1 & 1 & 7 & 16 & 5 & 6 \\
\hline
2 & 1 & 0 & 0 & 8 & 13.959 \\
\hline
\multirow{3}{*}{3} & 1 & 0 & 0 & 7.993 & 13.971 \\
\cline{2-6}
 & 2 & 0 & 0 & 8 & 13.970 \\
\cline{2-6}
 & 3 & 2.970 & 4.970 & 7.003 & 11.015 \\
\hline
\end{tabular}
\end{table}
Table 4: Average total number of edges attacked in the path graph \(\mathcal{G}\)

## 9 Conclusion

We have formulated a two-player game of cluster forming in resilient multiagent systems played over time. The players consider the impact of their actions on the future communication topology and agent states, and adjust their strategies according to a rolling horizon approach. Necessary conditions and sufficient conditions for forming clusters among agents have been derived. We have discussed the effect of the weights of the utility functions and of different initial states on cluster forming, and evaluated the effects of varying horizon length and game period on the players' performance.

Possible future extensions include the case where the players' utility functions are not zero-sum, the case where the players do not have perfect knowledge, and the setting where each agent is capable of deciding its own strategies in a decentralized way. We have also considered in [22] the case where the players' horizon lengths and game periods are not uniform. This case can be further generalized to decentralized settings where agents decide their own strategies in an asynchronous way. Furthermore, it is also interesting to consider a case where the players do not have complete knowledge of the other players. This incomplete-information version of the game is considered in [23].
2307.10796
The Post-AGB Star IRAS 07253-2001: Pulsations, Long-Term Brightness Variability and Spectral Peculiarities
The observations and comprehensive study of intermediate initial mass stars at the late stages of evolution, and after the asymptotic giant branch (AGB) in particular, are of crucial importance to identify the common properties for the stars of given group and to reveal binaries among them. This work aims to investigate photometric and spectral peculiarities of a poorly studied post-AGB candidate and infrared source IRAS 07253-2001. We present the new multicolour $UBVR_{C}I_{C}YJHK$ photometry obtained with the telescopes of the Caucasian mountain observatory and analyse it together with the data acquired by the All Sky Automated Survey for SuperNovae. We report on the detection of multiperiod brightness variability caused by pulsations. A beating of close periods, the main one of 73 days and additional ones of 68 and 70 days, leads to amplitude variations. We have also detected a long-term sine trend in brightness with a period of nearly 1800 days. We suppose it to be orbital and IRAS 07253-2001 to be binary. Based on new low-resolution spectroscopic data obtained with the 2.5-m telescope of the Caucasian mountain observatory in 2020 and 2023 in the $\lambda$3500-7500 wavelength range we have identified spectral lines and compiled a spectral atlas. We have found the [N II], [Ni II] and [S II] forbidden emission lines in the spectrum and discuss their origin. The H$\alpha$ line has a variable double-peaked emission component. We have derived preliminary estimates of the star's parameters and detected a variation of radial velocity with a peak-to-peak amplitude of about 30 km s$^{-1}$.
N. P. Ikonnikova, M. A. Burlak, A. V. Dodin, A. A. Belinski, A. M. Tatarnikov, N. A. Maslennikova, S. G. Zheltoukhov, K. E. Atapin
2023-07-20T11:57:28Z
http://arxiv.org/abs/2307.10796v1
The Post-AGB Star IRAS 07253-2001: Pulsations, Long-Term Brightness Variability and Spectral Peculiarities ###### Abstract The observations and comprehensive study of intermediate initial mass stars at the late stages of evolution, and after the asymptotic giant branch (AGB) in particular, are of crucial importance to identify the common properties for the stars of a given group and to reveal binaries among them. This work aims to investigate photometric and spectral peculiarities of a poorly studied post-AGB candidate and infrared source IRAS 07253-2001. We present the new multicolour \(UBVR_{C}I_{C}YJHK\) photometry obtained with the telescopes of the Caucasian mountain observatory and analyse it together with the data acquired by the All Sky Automated Survey for SuperNovae. We report on the detection of multiperiod brightness variability caused by pulsations. A beating of close periods, the main one of 73 days and additional ones of 68 and 70 days, leads to amplitude variations. We have also detected a long-term sine trend in brightness with a period of nearly 1800 days. We suppose it to be orbital and IRAS 07253-2001 to be binary. Based on new low-resolution spectroscopic data obtained with the 2.5-m telescope of the Caucasian mountain observatory in 2020 and 2023 in the \(\lambda\) 3500-7500 wavelength range, we have identified spectral lines and compiled a spectral atlas. We have found the [N II], [Ni II] and [S II] forbidden emission lines in the spectrum and discuss their origin. The H\(\alpha\) line has a variable double-peaked emission component. We have derived preliminary estimates of the star's parameters and detected a variation of radial velocity with a peak-to-peak amplitude of about 30 km s\({}^{-1}\). Keywords: stars: AGB and post-AGB: evolution--stars: binaries--stars: variable--stars: individual: IRAS 07253-2001 ## 1 Introduction One of the most urgent tasks in exploring the evolution of stars of intermediate initial masses (1-8 \(M_{\odot}\)) is to study these objects in the transition from the asymptotic giant branch (AGB) to planetary nebulae. During the thermally pulsing AGB phase these stars suffer large mass loss and supply the interstellar medium with nucleosynthesis products created during the stars' evolution and, thus, along with supernova remnants, stimulate the further development of their host galaxies (Iben and Renzini, 1983). As observations have shown, the vast majority of objects at the post-asymptotic giant branch (post-AGB) stage of evolution are variable stars. The type of brightness variability depends on the temperature of the star, that is, on its position on the horizontal evolutionary track. Cooler objects pulsate, and, as a rule, not with one single frequency (Sasselov, 1984; Arkhipova et al., 2010; Hrivnak et al., 2020), while the hot ones show rapid (with a characteristic time of several days or less) irregular variability, which may be due to variations in the stellar wind power, as well as to pulsations of the compact core (Handler et al., 1997; Arkhipova et al., 2013). Besides, variations of the circumstellar reddening in inhomogeneous dust shells play a significant role in the photometric variability of post-AGB stars (Arkhipova et al., 2010; Hrivnak et al., 2022). Currently, the pulsation theory of stars in the late stages of evolution is still under development.
It is rather difficult to construct such a theory because for AGB and post-AGB stars the convection processes and outflows of matter are of crucial importance, and these are hard to simulate (Fadeev, 2019). The pulsation characteristics obtained from observations for as many stars as possible provide valuable information for computing pulsation models. A considerable portion of the currently known post-AGB objects are binaries (Van Winckel, 2003, 2007). These stars are not contact systems at this stage of evolution, but they should have been subjected to strong interaction in the past, when the main star was on the AGB and had a larger size (Van Winckel, 2017), so it is important to distinguish between binary and single objects and to consider them separately when comparing with theoretical evolutionary models. Among the 209 most probable post-AGB objects presented in the catalogue of Szczerba et al. (2007), there are stars that have not been investigated well enough. One of them is the infrared (IR) source IRAS 07253-2001, which was included in the list of post-AGB candidates by Garcia-Lario et al. (1990). For the first time, the authors obtained the \(JHK\) observations for the star and constructed the energy distribution based on these data and the IRAS data in the wavelength range from 12 to 60 \(\mu\)m. In Garcia-Lario et al. (1990), the IR source was wrongly identified with the bright neighboring star HD 59049; nevertheless, the \(JHK\) observations refer to IRAS 07253-2001 and not to HD 59049. Later the object was added to the sample of possible OH/IR masers (Blommaert et al., 1993), but the 1612 MHz emission was not detected. Neither H\({}_{2}\)O (Suarez et al., 2007) nor SiO (Yoon et al., 2014) maser emission associated with IRAS 07253-2001 was found. Blommaert et al. (1993) classified IRAS 07253-2001 as an oxygen-rich (O-rich) AGB object, which was later confirmed by Suh and Hong (2017), who included the source in the catalogue of O-rich AGB objects. Reddy and Parthasarathy (1996) obtained \(BVI\) photometry and a low-resolution spectrum for IRAS 07253-2001. The authors defined the spectral class as F5 Ie and derived a satisfactory model fit to the spectral energy distribution in the wavelength region from 0.4 to 100 \(\mu\)m, which was a sum of the radiation from the photosphere with \(T_{\rm eff}=7000\) K and \(\log g=1\) and from the dust shell heated to \(T_{d}=210\) K. At the same time, the authors indicated the presence of both cold and warm dust shells in the system. According to the calculations of Reddy and Parthasarathy (1996), the star has a total \(V\) extinction of \(A_{V}=2\fm1\), a radius of \(R_{*}=54\,R_{\odot}\), is surrounded by a dust shell with \(R_{d}=1.0\times 10^{5}R_{\odot}\) and is located at a distance of \(d=10\) kpc. Suarez et al. (2006) classified IRAS 07253-2001 as an F2 supergiant based on a low-resolution spectrum. Our aim was to study the photometric behaviour of the star and its spectral peculiarities, and to determine the star's parameters based on both archival and our new data. Here we present the analysis of the photometric and spectroscopic data for the star obtained with the telescopes of the Caucasian mountain observatory of the Sternberg astronomical institute of the Lomonosov Moscow State University (CMO SAI MSU) and the photometry acquired by the All Sky Automated Survey for SuperNovae (ASAS-SN). We report on the detection of photometric and spectral variability and estimate the star's parameters.
The obtained observational data demonstrate that the equipment, weather conditions and the skills of the staff at the CMO SAI MSU are suitable for getting good-quality data for the observational project devoted to post-AGB stars and related objects, which was started on the telescopes of the Crimean astronomical station of SAI MSU more than 30 years ago.

## 2 Observations

### 2.1 \(UBVR_{C}I_{C}\) photometry

Optical photometry for the star was obtained with the 60-cm Ritchey-Chrétien telescope (RC600) at the CMO SAI MSU. The telescope is equipped with a set of photometric filters and an Andor iKon-L CCD (2048\(\times\)2048 pixels of 13.5 \(\mu\)m; the pixel scale is 0\(\farcs\)67 pixel\({}^{-1}\); the field of view is \(22^{\prime}\times 22^{\prime}\)). For a more detailed description of the telescope and instrumentation we refer to Berdnikov et al. (2020). The observations were carried out in remote control mode. We observed IRAS 07253-2001 for four seasons of visibility in 2019-2023. A complete set of exposures for each night consisted of 2-3 frames in each of the \(UBVR_{C}I_{C}\) filters. On one photometric night (January 2, 2020) we took a series of frames at close airmass for the standard field SA104 presented in Landolt (2009). Astrometry and photometry for the standard stars from the field were taken from the database of Peter Stetson1. Based on the photometry for the standard field, we derived the transformation coefficients to calibrate our instrumental photometry to the standard system. Then we selected a sample of rather bright stars of \(V=11\fm0\)-\(14\fm5\) in the vicinity of IRAS 07253-2001 and transformed their instrumental magnitudes to the standard system using the derived coefficients. Based on the reduced data for the IRAS 07253-2001 field, we chose two stars with brightness and colours close to those of IRAS 07253-2001 to be used later as comparison stars in differential photometry. According to the ASAS-SN database, the selected stars did not show variability on a timescale of about 4000 days. Fig. 1 provides a finding chart for IRAS 07253-2001 and the comparison stars. Their 2MASS designations and our derived \(UBVR_{C}I_{C}\) magnitudes are listed in Table 1.

Footnote 1: [https://www.canfar.net/storage/list/STETSON/Standards/L104](https://www.canfar.net/storage/list/STETSON/Standards/L104)

We present the resulting photometry for IRAS 07253-2001 in Table 2 (in its entirety the table is provided in electronic form: [http://lnfm1.sai.msu.ru/~davnv/iras07253/UBVRcIc.txt](http://lnfm1.sai.msu.ru/~davnv/iras07253/UBVRcIc.txt)), where for every night we list the mean time of observation and the magnitudes in each of the photometric bands averaged over 2-3 frames. Our uncertainties, defined as standard deviations for each night and averaged over all nights, are \(\Delta U=0\fm023\), \(\Delta B=0\fm007\), \(\Delta V=0\fm008\), \(\Delta R_{C}=0\fm010\), \(\Delta I_{C}=0\fm007\).

### 2.2 IR photometry

Near-IR photometry was carried out on the 2.5-m telescope of the CMO SAI MSU with the ASTRONIRCAM camera-spectrograph (Nadjip et al., 2017) during six seasons of visibility in 2018-2023. We used the dithering mode to obtain images in the \(JHK\) bands of the MKO-NIR system (Mauna Kea Observatories Near-InfraRed; Simons and Tokunaga, 2002; Tokunaga et al., 2002) and also in the \(Y\) band in 2021-2023. We took 10-15 images in each filter for each pointing. The initial processing of the raw images, described in detail in Tatarnikov et al.
(2023), included the correction for non-linearity and bad pixels, dark subtraction, flat-fielding and background subtraction. Then we performed aperture photometry. Usually we used HD 59049 (A2 III/IV) as a comparison star: it is close to IRAS 07253-2001 in IR brightness and appeared in the field of view of the camera (\(4\farcm6\times 4\farcm6\)). Its MKO-NIR magnitudes (\(Y=9\fm53\), \(J=9\fm40\), \(H=9\fm29\), \(K=9\fm28\)) were calculated from the 2MASS magnitudes according to the transformation equations given in Leggett et al. (2006). Sometimes during the 2021-2022 season HD 59049 was not caught by the camera. Then we used HD 59095 (A3 IV/V) as a comparison star. Its magnitudes (\(Y=8\fm52\), \(J=8\fm42\), \(H=8\fm37\), \(K=8\fm37\)) were derived similarly but adjusted so that the brightness differences with HD 59049 corresponded to those observed when both comparison stars came into view. Tables 3 and 4 present the resulting \(YJHK\) photometry. The magnitudes were calculated as mean values for each pointing. The errors were computed as standard deviations; they do not include the uncertainties of the comparison stars' magnitudes. Their mean values are \(\Delta Y=0\fm009\), \(\Delta J=0\fm014\), \(\Delta H=0\fm015\), \(\Delta K=0\fm013\).

### 2.3 ASAS-SN data

The observational data from the All Sky Automated Survey for SuperNovae (Kochanek et al., 2017; Shappee et al., 2014), conducted on robotic telescopes, proved very useful for the study of the star's photometric behaviour. About 920 \(V\) brightness estimates with an accuracy of \(0\fm02\) were obtained by the ASAS-SN project for IRAS 07253-2001 from February 14, 2012 to May 25, 2018 (HJD = 2455972.9-2458264.5). The \(g\)-band observations started in 2018. In this work we used the \(g\) data obtained from February 17, 2018 to April 19, 2022 (HJD = 2458226.6-2459689.6). A total of 580 \(g\) brightness estimates with an accuracy of \(0\fm02\) was obtained for this interval.

### 2.4 Spectroscopic observations

Spectroscopic observations of IRAS 07253-2001 were carried out in 2020 and 2023 on the 2.5-m telescope of the CMO SAI MSU with the new low-resolution Transient Double-beam Spectrograph (TDS) equipped with holographic gratings (Potanin et al., 2020). The detectors in use are Andor Newton 940P cameras with \(512\times 2048\) E2V CCD42-10 CCDs. A long slit of width \(1\farcs0\) was selected, which provided the best spectral resolution, but at the cost of losing some light if the seeing was worse than \(1\farcs0\). The light losses at the slit may be different for the program and standard stars due to varying seeing and the accuracy of centering the star in the slit. Therefore it is impossible to obtain absolute flux-calibrated spectra with our spectrograph when the \(1\farcs0\) slit is used. The spectra covered the range \(\lambda\) 3500-7500. The spectral resolution was 1300 for the \(\lambda\) 3500-5720 region (blue channel) and 2500 for the \(\lambda\) 5720-7500 region (red channel). The log of observations can be found in Table 5. The moments of the spectroscopic observations are marked in Fig. 4. The reduction sequence was performed using a number of self-developed Python scripts. The processing algorithm is described in Potanin et al. (2020).
Although we did not aim to derive absolute stellar fluxes, and moreover we used continuum-normalized spectra, it was nevertheless necessary to observe standard stars to eliminate small-scale features present in the spectrum due to the transmission inhomogeneities of the device and the atmospheric absorption bands. The stars from the list of spectrophotometric standards compiled at the European Southern Observatory2 were used during the observations in 2020. In 2023 we used the A0 V star HIP 38789 as a spectrophotometric standard. It is located close to IRAS 07253-2001, and its spectrum was obtained with a signal-to-noise ratio \(S/N\approx 500\) just after the object.

Footnote 2: [https://www.eso.org/sci/observing/tools/standards/spectra/stanlis.html](https://www.eso.org/sci/observing/tools/standards/spectra/stanlis.html)

\begin{table}
\begin{tabular}{c|c|c|c|c|c|c}
\hline Star & ID 2MASS & \(U\) & \(B\) & \(V\) & \(R_{C}\) & \(I_{C}\) \\
\hline St99 & 07275650-2007120 & 14.598 & 14.243 & 13.466 & 13.051 & 12.641 \\
St122 & 07271826-2009324 & 13.494 & 13.419 & 12.797 & 12.443 & 12.083 \\
\hline
\end{tabular}
\end{table}
Table 1: \(UBVR_{C}I_{C}\)-photometry for the comparison stars

Figure 1: Finding chart for the field of IRAS 07253-2001 in \(V\).

The continuum-normalized spectrum of the standard star was processed using the pySME project3 (Piskunov and Valenti, 2017; Wehrhahn et al., 2023), and we deduced the stellar parameters (the effective temperature \(T_{\rm eff}=9630\) K, the surface gravity \(\log g=3.89\), the overall metallicity \({\rm[Me/H]}=0.14\) and the microturbulence velocity \(\xi_{t}=2.0\) km s\({}^{-1}\)) which fit the spectral lines with an accuracy better than 1%. The simulated spectrum was integrated with the standard transmission curves for the filters4 and flux-calibrated based on the \(V\) photometry taken from the Simbad database, whereas the \(B\) photometry served to estimate the interstellar extinction by assuming a standard law of interstellar extinction. The resulting value \(A_{V}=0\fm21\) is in agreement with the absence of diffuse interstellar bands in the observed spectrum. This approach makes it possible to reconstruct a pixel-to-pixel transmission curve and to remove telluric absorptions from the observed spectrum, under the condition of observing at the same airmass and with the full width of the slit being filled with stellar light. The latter condition is due to the fact that non-uniform illumination of the slit affects the line profiles; the difference in the profiles of the tellurics between the object and the standard leads to their incomplete compensation when divided by the transmission curve and to the appearance of residual artefacts in the spectrum. A similar effect arises if a wavelength shift occurs between the observations of the object and the standard due to deformations of the device.

Table 3: \(JHK\)-photometry for IRAS 07253-2001 in 2018–2021

## 3 Observational Data Analysis

### 3.1 Search for periodicity

A preliminary examination of the photometric data showed that the star's brightness varies quasi-periodically with time. In order to determine a period, we used the dense data sets of the \(V\) and \(g\) photometry obtained by ASAS-SN (Fig. 2). To perform the frequency analysis we used the WINEFK program developed by V. P.
Goranskij5, which implements a discrete Fourier transform for data sets with arbitrary spacing in time (Deeming, 1975).

Table 4: \(YJHK\)-photometry for IRAS 07253-2001 in 2021–2023

\begin{table}
\begin{tabular}{c|c|c|c|c}
\hline Date & HJD 2450000+ & \(T_{\rm exp}\), s & SNR & Standard \\
\hline 2020/01/18 & 8867.4 & \(300\times 3\) & 130 & BD +25 4655 \\
2020/12/14 & 9198.5 & \(400\times 2\) & 170 & Feige 66 \\
2023/01/06 & 9951.4 & \(900\times 3\) & 190 & HIP 38789 \\
2023/01/10 & 9955.4 & \(1200\times 3\) & 340 & HIP 38789 \\
\hline
\end{tabular}
SNR is the signal-to-noise ratio for the continuum in the resulting spectrum near \(\lambda\,6000\).
\end{table}
Table 5: Log of spectroscopic observations

First, we removed the trend, which was satisfactorily fitted with a quadratic polynomial. Then we employed the Fourier transform and successive whitening, i.e., the subtraction of the phase-smoothed periodic variation from the observed light curve. As a result, three periodic components in the \(V\) light curve for 2012-2018 were identified: the primary period of \(P=73.0\) d and, after successive pre-whitening of the data, \(P=67.8\) d and \(P=43.1\) d. The \(g\) data for 2018-2022, processed similarly, yielded the primary period of \(P=73.4\) d and, after whitening was applied, the close values of \(P=69.9\) d and \(P=66.4\) d, but also \(P=44.8\) d. The amplitude spectra for the 10-100 day period range obtained from the \(V\) and \(g\) photometry, with the primary periods of \(P=73.0\) d and \(P=73.4\) d marked, are shown in Fig. 3, as well as the phase curves folded on these periods. The maximum peak-to-peak variations are \(\Delta V=0\fm3\) and \(\Delta g=0\fm35\). The detected brightness variability of IRAS 07253-2001 is typical for the F0-F8 supergiants at the post-AGB stage of evolution. Semi-regular brightness variations of these stars are characterized by small amplitudes (from \(0\fm1\) to \(0\fm6\)), periods of 30-100 days and switching between modes of close frequencies, the properties which were described in Sasselov (1984) for the UU Her type stars and confirmed later for a number of other post-AGB objects (Arkhipova et al., 1993; Hrivnak and Lu, 2000; Kiss et al., 2007). A combined study of light, colour and radial velocity curves supports the idea that these stars vary in brightness due to pulsations (Hrivnak et al., 2013, 2018).

### 3.2 Multicolour photometry analysis

The \(UBVR_{C}I_{C}\) light and \(U-B\), \(B-V\), \(R_{C}-I_{C}\) colour curves resulting from our observations performed on the RC600 telescope during four seasons of visibility in 2019-2023 are shown in Fig. 4. It is clearly seen that the star undergoes semi-regular brightness oscillations in the \(UBVR_{C}I_{C}\) bands with varying amplitude, superimposed on a long-term trend. The maximum amplitudes of the brightness variations were observed during the first two seasons: \(\Delta U=0\fm40\), \(\Delta B=0\fm35\), \(\Delta V=0\fm25\), \(\Delta R_{C}=0\fm22\) and \(\Delta I_{C}=0\fm19\). With the trend fitted by a quadratic polynomial removed, the frequency analysis yielded a primary period of \(P_{0}=73.3\) d.
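The frequency analysis and prewhitening described above can be reproduced in outline with a standard Lomb-Scargle periodogram. The sketch below is only an approximation of the WINEFK workflow (the input file name and column layout are assumed), not the code actually used:

```python
# Period search with prewhitening for unevenly sampled photometry.
import numpy as np
from astropy.timeseries import LombScargle

t, v = np.loadtxt("asassn_v.dat", usecols=(0, 1), unpack=True)  # HJD, V mag

# Remove the long-term trend with a quadratic polynomial, as in the text.
v_det = v - np.polyval(np.polyfit(t, v, 2), t)

# Search the 10-100 day period range for the primary period (~73 d).
freq = np.linspace(1 / 100.0, 1 / 10.0, 20000)
power = LombScargle(t, v_det).power(freq)
f0 = freq[np.argmax(power)]
print("P0 =", 1 / f0)

# Prewhitening: subtract the best-fit sinusoid at f0 and search the
# residuals for the close secondary periods (~68-70 d).
resid = v_det - LombScargle(t, v_det).model(t, f0)
f1 = freq[np.argmax(LombScargle(t, resid).power(freq))]
print("P1 =", 1 / f1)
```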
Using the WINEFK program, we subtracted the phase-smoothed periodic oscillation with \(P_{0}=73.3\) d from the observed light curve and found a close period \(P_{1}=69.8\) d. The values are in good agreement with the periods derived from the ASAS-SN data.

Figure 2: The \(V\) (grey dots) and \(g\) (black dots) ASAS-SN light curves spanning the intervals 2012–2018 and 2018–2022, respectively.

We obtained near-IR photometry for six seasons during 2018-2023. Figure 5 shows the \(YJH\) light and \(J-H\), \(H-K\) colour curves. The near-IR variations in the \(YJHK\) bands are about \(0\fm15\) within each season, which is larger than the observational error. The near-IR measurements are less numerous than the optical ones, and the expected amplitude of the oscillations is smaller as well, so we were not able to detect a periodic component of the brightness variations. A long-term trend can be traced well and is similar to that in the optical. During the first two seasons of optical observations, when the periodic oscillations were more prominent, a clear correlation between the brightness and the \(B-V\) and \(R_{C}-I_{C}\) colours was seen (Fig. 6): the star was redder when fainter, which is indicative of temperature changes due to pulsations. The \(U-B\) colour correlation with brightness is less pronounced for the first season (black dots) and almost absent for the second one (grey dots). One can also see that the mean brightness in all the bands was lower in 2020-2021 (grey dots), whereas the season-mean colours did not change.

### 3.3 Long-term trend of brightness

Figure 7 shows the summary \(V\) light curve incorporating the ASAS-SN and RC600 data over the 2014-2023 interval. Figure 7 implies that there is a sine wave with a rather long period. We processed the 2014-2023 \(V\) data set with WINEFK and in the 500-3000 day range we found a period of \(P=1810\pm 200\) days. Figures 8 and 9 show the phase \(VJHK\) light and \(U-B\), \(B-V\), \(R_{C}-I_{C}\), \(J-H\), \(H-K\) colour curves, respectively, folded on this period. The optical and near-IR brightness varies with phase, as does the \(J-H\) colour, whereas the \(U-B\), \(B-V\), \(R_{C}-I_{C}\) and \(H-K\) colours do not. The long-term brightness variability with a period of about 1800 days which we have found is not surprising for post-AGB objects. For example, the RV Tau stars of the RVb subtype, which are post-AGB stars as well, demonstrate a long-term modulation of the mean brightness with periods of 470-2800 days in addition to their pulsation activity (Soszynski et al., 2017). In our current understanding, this type of variability is considered to be related to binarity and the presence of a circumbinary dusty disc, which produces variable obscuration of the central source due to the orbital motion (Kiss and Bodi, 2017). Hotter post-AGB objects, which have already left the instability strip, sometimes show a long-term modulation of the mean brightness, too. V510 Pup (IRAS 08005-2356), a bipolar protoplanetary nebula with a binary central star, may serve as an example. Manick et al. (2021) discovered its brightness variability with a period of \(P=2654\pm 124\) days based on optical (\(V\)) and near-IR (\(JHKL\)) photometric data.
The authors also performed spectroscopic monitoring of the star and detected a variation of the radial velocity with the same period of \(P=2654\) days, which they adopted as the orbital period of the system. Thus, taking into consideration the fact that some post-AGB objects demonstrate a long-term periodic trend of brightness, we consider the period we have found to be orbital, and the most likely reason for this type of brightness variability with constant optical colours is the varying obscuration of the central source by large particles of the dusty disc, which produce neutral absorption during the orbital motion, as was detected for IRAS 19135+3937 (Gorlova et al. (2015) and our unpublished data).

Figure 4: Light and colour curves based on the RC600 data obtained in 2019–2023. The dashed line corresponds to a quadratic polynomial fit of the \(V\) data. Vertical line segments indicate the moments of acquiring spectra.

### 3.4 Analysis of spectroscopic data

In total, we obtained four spectra of IRAS 07253-2001 in 2020 and 2023. The best-quality spectrum (with the highest SNR) was acquired on January 10, 2023 under very good weather conditions with the longest exposures. That spectrum was mainly used for the following analysis. Figure 10 shows the continuum-normalized spectrum for that date with the identified lines marked. For line identification we used the VALD3 database (Ryabchikova et al., 2015). In addition to hydrogen lines, there are numerous absorptions of neutral and singly ionized metals: Fe I, Fe II, Mg I, Mn I, Sc II, Ni I, Si II, etc. The presence of strong lines of S I (\(\lambda\) 6744, \(\lambda\) 6749, \(\lambda\) 6757) and C I (\(\lambda\) 6010-6020, \(\lambda\) 6588, \(\lambda\) 7107-7120) is worth mentioning. We also detected the lines of \(s\)-process elements: barium Ba II (\(\lambda\) 5853, \(\lambda\) 6142, \(\lambda\) 6498), strontium Sr II (\(\lambda\) 4078 and \(\lambda\) 4215) and yttrium Y II (\(\lambda\) 4884). Broad absorptions at \(\lambda\) 5797, \(\lambda\) 6286, \(\lambda\) 6613 may be identified as diffuse interstellar bands (DIBs). In the spectrum of IRAS 07253-2001 the Ca II K line (\(\lambda\) 3933) is shallower than the blend H\(\epsilon\)+Ca II H, in contrast to what is seen in the spectra of other post-AGB stars of close spectral classes (V887 Her, V1648 Aql and V448 Lac), where these lines are quite equal in depth (Hrivnak et al. (1989); Suarez et al. (2006) and our unpublished data).

Figure 5: Near-IR light and colour curves spanning 2018–2023. The dashed line corresponds to a cubic polynomial fit of the \(J\) data.

Figure 6: The “colour–brightness” diagrams showing the 2019–2020 (black dots) and 2020–2021 (grey dots) data.

Figure 7: The \(V\) light curve based on the ASAS-SN (grey dots) and RC600 (black dots) data.

#### 3.4.1 Envelope emission lines

An important feature of the spectrum is the presence of an emission component in H\(\alpha\). The H\(\alpha\) profile is shown in Fig. 11, where we plot the spectra obtained on January 18, 2020, December 14, 2020, and January 10, 2023. We do not show the spectrum obtained on January 6, 2023, as there is almost no difference between its H\(\alpha\) profile and that of January 10, 2023 (Fig. 11). The H\(\alpha\) line demonstrates a double-peaked emission component and varies significantly with time. The times of the spectroscopic observations are marked in Fig. 4. On January 18, 2020, when the star was at a pulsation maximum with \(V=12\fm53\), the emission component was the faintest.
The next spectroscopic observation, carried out on December 14, 2020, coincided with a pulsation minimum, when the brightness was \(V=12\fm89\). At that moment the H\(\alpha\) emission in the normalized spectrum appeared considerably stronger, with the central absorption almost absent. On January 10, 2023, at \(V=12\fm66\), the emission component was double-peaked again and had an intermediate normalized intensity. The variation of the emission equivalent width can be explained if we assume that the stellar continuum varies in brightness, whereas the emission comes from a circumbinary gaseous envelope and has constant intensity. Thus, a double-peaked profile arises when the emission is superimposed on the photospheric absorption.

Figure 8: Phase light curves folded on the period of \(P=1810^{d}\) incorporating the ASAS-SN (grey dots) and CMO (black dots) data.

We did not expect to detect any forbidden lines in the spectrum of a cool star, and surprisingly we have found the emissions [N II] \(\lambda\) 6548, \(\lambda\) 6584, [S II] \(\lambda\) 6716, \(\lambda\) 6731, [Ni II] \(\lambda\) 6667, \(\lambda\) 7378, \(\lambda\) 7412 and [Fe II] \(\lambda\) 7155. As far as we know, this phenomenon has not been observed for F0-F8 post-AGB supergiants, unlike hot post-AGB stars with \(T_{\rm eff}>15\,000\) K, whose spectra are a sum of the radiation from the central star and a low-excitation gas shell, as was shown for IRAS 14331-6435 (Arkhipova et al., 2018) and IRAS 18379-1707 (Ikonnikova et al., 2020). We list the equivalent widths for [N II] \(\lambda\) 6584, [S II] \(\lambda\) 6716, \(\lambda\) 6731 and [Ni II] \(\lambda\) 6667, \(\lambda\) 7378, \(\lambda\) 7412 in Table 6. The lines being faint, the uncertainties in the measurements of the equivalent widths are as large as 15-20%. The line intensity ratios [S II] \(F(\lambda\,6716)/F(\lambda\,6731)\) and [Ni II] \(F(\lambda\,6667)/F(\lambda\,7378)\) barely depend on the electron temperature \(T_{e}\) and may be used to estimate the electron density \(N_{e}\) in the region where the emissions arise. We have rejected the [Ni II] \(\lambda\) 7412 line from the analysis, as it is located at the edge of our spectral range and is possibly distorted. Before we compare the observed intensity ratios with the predicted ones, it is necessary to correct them for reddening.

Figure 9: Phase colour curves folded on the period of \(P=1810^{d}\) based on the CMO data.

Reddy and Parthasarathy (1996) give the value \(A_{V}=0\fm90\) (or \(E(B-V)=0\fm29\)) for the interstellar extinction and \(A_{V}=2\fm1\) (or \(E(B-V)=0\fm68\)) for the total one, which includes the circumstellar part. Vickers et al. (2015) adopted \(E(B-V)=0\fm46\pm 0\fm05\). As we do not know the distance to the object (see Section 4), we cannot yet estimate \(E(B-V)\) using the interstellar extinction maps. Taking into account the ambiguity of \(E(B-V)\), we derived the intensity ratios [S II] \(F(\lambda\,6716)/F(\lambda\,6731)\approx 0.67\) and [Ni II] \(F(\lambda\,6667)/F(\lambda\,7378)\approx 0.20\) using the equivalent widths of these lines and the modelled spectral energy distribution (see Section 3.4.2 for details).
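A density diagnostic of this kind can be cross-checked with the publicly available PyNeb package. The sketch below is only an illustration of the idea; the analysis that follows relies on the Giannini et al. (2015) emissivities rather than on PyNeb:

```python
# Electron density from the reddening-corrected [S II] 6716/6731 ratio.
import pyneb as pn

s2 = pn.Atom('S', 2)
ratio = 0.67  # F(6716) / F(6731) from the text
for te in (5000, 10000, 15000):
    ne = s2.getTemDen(ratio, tem=te, wave1=6716, wave2=6731)
    print(f"T_e = {te:5d} K -> N_e ~ {ne:.0f} cm^-3")  # of order 1e3 cm^-3
```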
To estimate \(N_{e}\) for the region where the [S II] and [Ni II] emissions originate, we compared the observed ratios with the theoretical ones based on the emissivities calculated under non-LTE conditions by Giannini et al. (2015). We derived \(N_{e}=(1.5\hbox{--}2.5)\times 10^{3}\) cm\({}^{-3}\) for the [S II] zone and a significantly larger value of \(N_{e}=(1\hbox{--}3)\times 10^{6}\) cm\({}^{-3}\) for the [Ni II] region, for the temperature range \(T_{e}=5000\hbox{--}15\,000\) K. This result is in agreement with the conclusion of Bautista et al. (1996), articulated for gaseous nebulae, that the value of \(N_{e}\) derived from the [Ni II] lines appears larger than that derived from [S II]. The origin of the forbidden emission lines seen in the spectrum of IRAS 07253-2001 is still questionable. If we consider IRAS 07253-2001 to be binary, then the forbidden lines may point to the presence of a hot component in the system, most likely a white dwarf; ultraviolet observations would be needed to detect it.

#### 3.4.2 Determination of the stellar parameters

We have tried to determine the stellar parameters by comparing the observed spectrum with a synthetic one computed by the pySME program (see the reference above) using the MARCS model atmospheres (Gustafsson et al., 2008). A model with \(T_{\rm eff}=6300\pm 300\) K, \(\log g=2.0\pm 0.6\), \(\xi_{t}=4.0\pm 1.7\) km s\({}^{-1}\), [Me/H] \(=-1.2\pm 0.2\) fits the hydrogen lines and most of the metal lines well, with the exception of the Ca II lines, which appear stronger in the model spectrum than in the observed one, and of the S I and C I lines, which are enhanced in our spectrum but absent in the model one (Fig. 10). To obtain more reliable stellar parameters and to estimate the abundances in the atmosphere, high-resolution spectroscopy combined with a non-LTE approach would be needed.

\begin{table}
\begin{tabular}{c|c|c}
\hline Line & \(\lambda_{\rm lab}\), Å & \(EW\), Å \\
\hline [N II] 1F & 6583.45 & 0.057 \\
[Ni II] 2F & 6666.80 & 0.037 \\
[S II] 2F & 6716.47 & 0.024 \\
[S II] 2F & 6730.85 & 0.036 \\
[Ni II] 2F & 7377.83 & 0.221 \\
[Ni II] 2F & 7411.61 & 0.108 \\
\hline
\end{tabular}
\end{table}
Table 6: Equivalent widths of emission lines in the spectrum of IRAS 07253-2001 for January 10, 2023

Figure 11: The H\(\alpha\) profiles in the normalized spectra obtained on January 18, 2020, December 14, 2020, and January 10, 2023.

This star, with its lower metallicity along with the presence of strong enough lines of S and C, provides a good illustration of the peculiar abundances of some post-AGB stars noted in early works, e.g. Waelkens et al. (1991) for HD 52961. Thus, some A-F supergiants demonstrate almost solar abundances of C, N, O, S and Zn, but the abundances of Fe, Mg, Ca, Si, Cr and some other elements are much lower than solar. Lamers (1992) suggested that the atmospheres of post-AGB stars were originally solar in composition, but at the end of the AGB stage some heavy nuclei were captured by dust grains and blown away from the atmosphere. So, the present atmospheres of these stars are comprised of heavy-element-depleted gas. Now we are going to compare the derived parameters with the results published previously. Using a low-resolution spectrum covering \(\lambda\) 5800-8500, Reddy and Parthasarathy (1996) derived an F5 I(e) spectral class for the star. An effective temperature of \(T_{\rm eff}=7000\) K was determined based on the \(T_{\rm eff}\)-spectral class calibration proposed by Flower (1977).
Note that the Straizys (1982) calibration gives \(T_{\rm eff}=6500\) K for this spectral and luminosity class. Reddy and Parthasarathy (1996) adopted the value of \(\log g=1.0\) from the tables with the \(\log g\)-luminosity relations (Flower, 1977). IRAS 07253-2001 was assigned an F2 I spectral type by Suarez et al. (2006) based on a low-resolution spectrum (spectral dispersion of 2.47 Å pixel\({}^{-1}\)) covering \(\lambda\) 4272-6812. Molina (2018) derived \(T_{\rm eff}=7826\pm 91\) K relying on the same spectrum and the empirical relation \(T_{\rm eff}=(8114\pm 65)+(146\pm 24)({\rm Ca\,II\,K})\), which incorporates the equivalent width of Ca II K (\(\lambda\) 3933). Note that the line used was out of the observed wavelength coverage. Other parameters presented by Molina (2018), namely [Fe/H] \(=-0.81\pm 0.19\) and \(\log g=1.28\pm 0.21\), do not seem reliable either, since they were estimated from the equations given therein, and the equations include the equivalent widths of the Fe I (\(\lambda\) 4271) blend and the Fe, Ti II (\(\lambda\) 4172-4179) lines, but the measurements of these spectral features are not listed in Table 1 of Molina (2018).

#### 3.4.3 Radial velocity measurements

Since the model spectrum fits most of the lines quite well, we can use it for measuring radial velocity. For this purpose we have selected eight spectral regions where the agreement is best. All the selected areas belong to the red channel because the wavelength calibration of TDS is much more accurate for the red channel than for the blue one. Besides, a large number of sky emission lines allows us to correct the calibration shift which arises due to deformation of the spectrograph. The resulting calibration accuracy relative to sky lines is about 3 km s\({}^{-1}\). After the wavelengths are calibrated relative to sky lines, they are corrected to the solar system barycenter. In order to compute radial velocity relative to the model spectrum, the latter is convolved with a Gaussian function to fit the widths of the absorption lines. The observed spectrum in each of the selected areas is additionally normalized and scaled to best fit the continuum level and line depths of the model spectrum. The radial velocity and two scaling parameters are computed using the least-squares method. A mean radial velocity averaged over the selected areas was found to be \(V_{R}=23\pm 6\) km s\({}^{-1}\) for January 18, 2020, \(V_{R}=6\pm 6\) km s\({}^{-1}\) for December 14, 2020, and \(V_{R}=39\pm 5\) km s\({}^{-1}\) for January 6 and 10, 2023, taking into account the calibration error. The difference in radial velocity between December 2020 and January 2023 is clearly seen at first sight: the absorption lines are shifted, whereas the interstellar and forbidden lines stay in place. So, the star shows a variation of radial velocity with an amplitude of not less than 30 km s\({}^{-1}\). A typical range of radial velocity due to pulsational motions in the atmosphere of post-AGB stars is about 10 km s\({}^{-1}\) (Hrivnak et al., 2018). Therefore, the derived variation of radial velocity may reveal orbital motion; moreover, the semi-amplitude value of \(K_{1}\approx 15\) km s\({}^{-1}\) falls into the range of \(K_{1}\) for the known binary post-AGB stars (see Oomen et al. (2018) and references therein).
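The per-region fit described above lends itself to a compact implementation. Below is a minimal sketch, assuming the observed and model spectra are already on a common barycentric wavelength scale; the smoothing width and the region bookkeeping are illustrative placeholders rather than the actual pipeline.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import least_squares

C_KMS = 299792.458  # speed of light, km/s

def measure_rv(wave, flux_obs, flux_model, sigma_pix=2.0):
    """Fit one velocity and two scaling parameters in a spectral region.

    The model is convolved with a Gaussian to match the observed line
    widths, then Doppler-shifted and linearly rescaled (a*model + b) to
    fit the continuum level and line depths; v, a, b come from least squares.
    """
    smoothed = gaussian_filter1d(flux_model, sigma_pix)
    model = interp1d(wave, smoothed, bounds_error=False, fill_value=1.0)

    def residuals(params):
        v, a, b = params
        shifted = model(wave / (1.0 + v / C_KMS))  # evaluate at rest wavelengths
        return flux_obs - (a * shifted + b)

    fit = least_squares(residuals, x0=[0.0, 1.0, 0.0])
    return fit.x[0]  # radial velocity, km/s

# Mean over the eight selected regions (list of (wave, obs, model) arrays):
# v_r = np.mean([measure_rv(w, fo, fm) for w, fo, fm in regions])
```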
## Conclusion

The primary results of this work are the following. For the first time, long-term multi-colour photometry in the optical and near-IR range was acquired for the post-AGB object IRAS 07253-2001. Based on the ASAS-SN and our data, low-amplitude quasi-periodic brightness variability with the main period of about 73 days and close periods of 68 and 70 days was found. The variation of the brightness amplitude is caused by the beating of close frequencies. The colour-brightness relation shows evidence for temperature variations due to pulsations. The variability pattern and pulsational periods are in agreement with what is observed for typical post-AGB stars with F0-F8 spectral types (Arkhipova et al., 2010, 2011; Hrivnak et al., 2010, 2022). Long-term brightness variability with a period of about 1800 days was found based on the ASAS-SN and our multicolour data, and this period is likely orbital. Based on spectroscopic data of higher resolution than in previous studies, we identified spectral lines and compiled a spectral atlas. We fitted the spectrum and found the atmospheric parameters for the star: \(T_{\rm eff}=6300\pm 300\) K, \(\log g=2.0\pm 0.6\), \(\xi_{t}=4.0\pm 1.7\) km s\({}^{-1}\), [Me/H] \(=-1.2\pm 0.2\). A variation of radial velocity with an amplitude of about 30 km s\({}^{-1}\) was detected, which we interpret as evidence of binarity. The H\(\alpha\) emission component was shown to be variable. We conclude that it originates from the stellar envelope. Forbidden emission lines radiated by a gas envelope were detected in the spectrum. We suppose that they are excited by the hot star in the binary. The acquired data covering a wide wavelength range from 0.35 \(\mu\)m (the \(U\)-band) to 2.2 \(\mu\)m (the \(K\)-band) can be used later for modelling the spectral energy distribution of the star and determining the dust shell parameters. There is no doubt about the star's evolutionary status: it is surely a post-AGB supergiant. But we are not able to assess its mass by comparing its parameters (effective temperature and luminosity) with model simulations (e.g., Miller Bertolami (2016)). We hoped to constrain the luminosity using the GAIA data, but the parallax is negative in the GAIA DR2 catalogue: \(\pi=-2.2\pm 0.4\) mas (Brown et al., 2018). The value \(\pi=2.4\pm 0.4\) mas from the GAIA DR3 catalogue (Brown et al., 2021) yields a distance of \(d=452^{+167}_{-90}\) pc (Bailer-Jones et al., 2021) and a luminosity of \(L=25.6^{+22.4}_{-9.2}L_{\odot}\) (Oudmaijer et al., 2022), which is out of the range of luminosities predicted for post-AGB models, \(3000\,L_{\odot}<L<15\,000\,L_{\odot}\) (Miller Bertolami, 2016). It's worth mentioning that the parameter RUWE (Renormalised Unit Weight Error), which parametrizes the quality of the astrometric solution, is \(42.89\gg 1\) for IRAS 07253-2001, which indicates an extremely high uncertainty in the parallax and makes it very unreliable.

## Acknowledgements

This work has been supported by the M. V. Lomonosov Moscow State University Program of Development (scientific and educational school "Fundamental and applied space research"). We are grateful to the staff of the 2.5-m telescope of the CMO SAI MSU who carried out single observations, namely B. S. Safonov, O. V. Vozyakova, O. V. Egorov, V. S. Lander. S. G. Zheltoukhov and A. M. Tatarnikov acknowledge support from the Russian Scientific Foundation (grant No. 23-22-00182).

## Conflict of Interest

The authors declare that there is no conflict of interest.
2310.01254
Containment for Guarded Monotone Strict NP
Guarded Monotone Strict NP (GMSNP) extends Monotone Monadic Strict NP (MMSNP) by guarded existentially quantified predicates of arbitrary arities. We prove that the containment problem for GMSNP is decidable, hereby settling an open question of Bienvenu, ten Cate, Lutz, and Wolter, later restated by Bourhis and Lutz. Our proof of decidability also comes with a 2NEXPTIME upper bound on the complexity of the problem, which matches the lower bound for containment of MMSNP previously obtained by Bourhis and Lutz. In order to obtain these results, we significantly improve the state of knowledge of the model-theoretic properties of GMSNP. Bodirsky, Kn\"auer, and Starke previously showed that every GMSNP sentence defines a finite union of CSPs of $\omega$-categorical structures. We refine their construction by adding a restricted form of homogeneity to the properties of these structures, making the logic amenable to future complexity classifications for query evaluation using techniques developed for infinite-domain CSPs.
Alexey Barsukov, Michael Pinsker, Jakub Rydval
2023-10-02T14:41:28Z
http://arxiv.org/abs/2310.01254v3
# Containment for Binary Guarded Monotone SNP

###### Abstract.

Guarded Monotone Strict NP (GMSNP) extends the class of Monotone Monadic Strict NP (MMSNP) by allowing existentially quantified relations of arities greater than 1 but restricting them to always be guarded by input relations. The containment problem is characterized for MMSNP by the existence of a recoloring, which is a mapping between the sets of second-order variables of the two given logical sentences that satisfies some specific properties. This paper extends this characterization to GMSNP problems where the input signature consists of unary and binary relation symbols.

Funded by the European Union (ERC, POCOCOP, 101071674). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.

## 1. Introduction

The property of tuple-biconnectedness plays an essential role in subsequent statements of this paper. The procedure that transforms a sentence to a logically equivalent sentence with this property works correctly only when the input relations have arity at most 2. Other steps of the proof do not depend on the arity of input relations. Therefore, proving that "tuple-biconnectedness" can be achieved for arbitrary arities would suffice to characterize the containment for the whole class GMSNP.

_The roadmap._ Section 2 defines most of the necessary notions that are used in this paper. In Section 3, the _well-prepared_ form is described, and it is shown that w.l.o.g. one can consider only well-prepared GMSNP sentences. Section 4 explains how to construct an \(\omega\)-categorical structure whose CSP describes the given GMSNP sentence. This structure has many useful properties, as shown in Section 5. Section 6 introduces Ramsey structures and modifies the \(\omega\)-categorical template so that it satisfies the desired properties. The characterization of containment for binary GMSNP is proven in Section 7.

## 2. Preliminaries

### CSP

Everywhere in this article \(\tau\) denotes the signature of _input_ relational structures, and \(\sigma\) denotes the _existential_ signature, i.e., the set of existentially quantified relations of a sentence.
Distinct signatures of the same kind are distinguished by subscripts or superscripts. Let \(\mathbb{A}\) be a \(\tau\)-structure. The _domain_ of \(\mathbb{A}\) is denoted by \(A\). For another \(\tau\)-structure \(\mathbb{B}\), a mapping \(h\colon A\to B\) is a _homomorphism_ if, for every \(R\in\tau\) of arity \(k\) and every \(\mathbf{a}\in A^{k}\), \(\big{(}\mathbf{a}\in R^{\mathbb{A}}\implies h(\mathbf{a})\in R^{\mathbb{B}}\big{)}\). Here, \(h(\mathbf{a})\) stands for the tuple of \(B^{k}\) obtained from \(\mathbf{a}\) by applying \(h\) componentwise, i.e., \(\mathbf{a}=(a_{1},\ldots,a_{k})\implies h(\mathbf{a}):=\big{(}h(a_{1}),\ldots,h(a_{k})\big{)}\). If \(h\) is a homomorphism from \(\mathbb{A}\) to \(\mathbb{B}\), then it is denoted as \(h\colon\mathbb{A}\to\mathbb{B}\). The family of all finite relational \(\tau\)-structures that map homomorphically to \(\mathbb{A}\) is called \(\operatorname{CSP}(\mathbb{A})\). The same notation is also used for a computational problem, where, for a given input finite \(\tau\)-structure \(\mathbb{B}\), one has to output "yes" if \(\mathbb{B}\in\operatorname{CSP}(\mathbb{A})\) and "no" otherwise. A homomorphism \(h\colon\mathbb{A}\to\mathbb{B}\) is _injective/surjective_ if the mapping \(h\colon A\to B\) is injective/surjective. An injective homomorphism \(e\colon\mathbb{A}\to\mathbb{B}\) is an _embedding_ if, for every \(R\in\tau\) of arity \(k\) and every \(\mathbf{a}\in A^{k}\), \(\big{(}\mathbf{a}\in R^{\mathbb{A}}\Longleftrightarrow e(\mathbf{a})\in R^{\mathbb{B}}\big{)}\), denoted \(e\colon\mathbb{A}\hookrightarrow\mathbb{B}\). An embedding \(i\colon\mathbb{A}\hookrightarrow\mathbb{B}\) is an _isomorphism_ if it is surjective. A homomorphism \(h\colon\mathbb{A}\to\mathbb{A}\) is an _endomorphism_. An embedding \(e\colon\mathbb{A}\hookrightarrow\mathbb{A}\) is a _self-embedding_. An isomorphism \(\alpha\colon\mathbb{A}\to\mathbb{A}\) is an _automorphism_. Denote by \(\operatorname{End}(\mathbb{A}),\operatorname{Emb}(\mathbb{A}),\operatorname{Aut}(\mathbb{A})\) the sets of endomorphisms, self-embeddings and automorphisms of a structure \(\mathbb{A}\). For a \(\tau\)-structure \(\mathbb{B}\) and a subset \(A\subseteq B\), a \(\tau\)-structure \(\mathbb{A}\) is the _substructure of \(\mathbb{B}\) induced on \(A\)_ if its domain is \(A\), and, for all \(R\in\tau\) of arity \(k\), \(R^{\mathbb{A}}=R^{\mathbb{B}}\cap A^{k}\). Notice that \(\mathbb{A}\hookrightarrow\mathbb{B}\).

### Model theory

Let \(\mathbb{A}\) be a \(\tau\)-structure and \(\mathbb{A}^{\prime}\) be a \((\tau\cup\sigma)\)-structure. If \(\mathbb{A}\) and \(\mathbb{A}^{\prime}\) have the same domain and, for all \(R\in\tau\), we have \(R^{\mathbb{A}}=R^{\mathbb{A}^{\prime}}\), then \(\mathbb{A}\) is the _\(\tau\)-reduct_ of \(\mathbb{A}^{\prime}\), and \(\mathbb{A}^{\prime}\) is a _\(\sigma\)-expansion_ of \(\mathbb{A}\). Also, a \(\sigma\)-expansion \(\mathbb{A}^{\prime}\) is sometimes written in the form \((\mathbb{A};X_{1},\ldots,X_{s})\), where \(\sigma=\{X_{1},\ldots,X_{s}\}\). A structure \(\mathbb{A}\) is _homogeneous_ if, for all finite induced substructures \(\mathbb{B}\) and \(\mathbb{C}\) of \(\mathbb{A}\), if there is an isomorphism \(i\colon\mathbb{B}\to\mathbb{C}\), then there is an automorphism \(\alpha\in\operatorname{Aut}(\mathbb{A})\) such that \(\alpha\big{|}_{\mathbb{B}}=i\). If this property holds for all 1-element induced substructures, then \(\mathbb{A}\) is _1-homogeneous_.
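These notions are easy to test on small finite structures. The following sketch is our own illustration, not part of the paper: finite structures are encoded as dictionaries carrying a domain and relation sets, and membership in \(\operatorname{CSP}(\mathbb{B})\) is decided by brute force over all mappings.

```python
from itertools import product

def is_hom(h, A, B):
    """Check that h preserves every relation: a in R^A implies h(a) in R^B."""
    return all(tuple(h[x] for x in t) in B["rels"].get(R, set())
               for R, tuples in A["rels"].items() for t in tuples)

def exists_hom(A, B):
    """Brute-force search for a homomorphism A -> B; A is in CSP(B) iff one exists."""
    dom, cod = sorted(A["dom"]), sorted(B["dom"])
    for images in product(cod, repeat=len(dom)):
        if is_hom(dict(zip(dom, images)), A, B):
            return True
    return False

# The directed 3-cycle and the directed 2-cycle over the signature {E}:
C3 = {"dom": {0, 1, 2}, "rels": {"E": {(0, 1), (1, 2), (2, 0)}}}
C2 = {"dom": {0, 1}, "rels": {"E": {(0, 1), (1, 0)}}}
print(exists_hom(C3, C3))  # True: the identity mapping works
print(exists_hom(C3, C2))  # False: an odd cycle has no hom into a 2-cycle
```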
The group \(\operatorname{Aut}(\mathbb{A})\) acts on the domain \(A\) and also on its powers \(A^{n}\) in the componentwise way, for all \(n\in\mathbb{N}\). A structure \(\mathbb{A}\) is _\(\omega\)-categorical_ if \(A\) is countably infinite and if \(\operatorname{Aut}(\mathbb{A})\) is _oligomorphic_, i.e., for all \(n\in\mathbb{N}\), the action of \(\operatorname{Aut}(\mathbb{A})\) on \(A^{n}\) has finitely many orbits. Notice that, for every \(\omega\)-categorical structure \(\mathbb{A}\), the following holds: \(\overline{\operatorname{Aut}(\mathbb{A})}\subseteq\operatorname{Emb}(\mathbb{A})\subseteq\operatorname{End}(\mathbb{A}).\) If \(\operatorname{End}(\mathbb{A})=\operatorname{Emb}(\mathbb{A})\), then \(\mathbb{A}\) is a _core_. Let \(\mathbb{A}\) and \(\mathbb{B}\) be two \(\omega\)-categorical structures. A mapping \(c\colon A\to B\) is _canonical_ if, for all \(n\geq 1\) and any two \(\mathbf{a}_{1},\mathbf{a}_{2}\in A^{n}\), if there exists \(\alpha\in\operatorname{Aut}(\mathbb{A})\) such that \(\alpha(\mathbf{a}_{1})=\mathbf{a}_{2}\), then there exists \(\beta\in\operatorname{Aut}(\mathbb{B})\) such that \(\beta\big{(}c(\mathbf{a}_{1})\big{)}=c(\mathbf{a}_{2})\). That is, a canonical mapping preserves orbits of tuples. For a countable set \(A\), it is possible to order the elements of \(A\) with some one-to-one mapping \(A\to\mathbb{N}\). For a set \(A=\{a_{1},a_{2},\ldots\}\), one can define a metric \(d\colon A^{A}\times A^{A}\to\mathbb{R}\) on the set of functions \(A^{A}\) from \(A\) to itself: put \(d(f,g)=1/n\) if, for all \(i<n\), \(f(a_{i})=g(a_{i})\) and \(f(a_{n})\neq g(a_{n})\); and also put \(d(f,f)=0\). This metric induces the topology of _pointwise convergence_ on \(A^{A}\). For a set \(X\subseteq A^{A}\), a function \(f\) is called a _limit point_ of \(X\) if, for every \(\varepsilon>0\), there is \(g\in X\) such that \(d(f,g)<\varepsilon\). Denote by \(\overline{X}\) the union of \(X\) and the set of its limit points. A structure \(\mathbb{A}\) is called _model-complete_ if \(\overline{\operatorname{Aut}(\mathbb{A})}=\operatorname{Emb}(\mathbb{A})\), i.e., every self-embedding \(e\colon\mathbb{A}\hookrightarrow\mathbb{A}\) is a limit point of \(\operatorname{Aut}(\mathbb{A})\). It follows from the definitions that \(\mathbb{A}\) is a _model-complete core_ if \(\overline{\operatorname{Aut}(\mathbb{A})}=\operatorname{End}(\mathbb{A})\). Every \(\omega\)-categorical structure is hom-equivalent to a model-complete core, which is unique up to isomorphism.

### Logic

For two \(\tau\)-sentences \(\Phi\) and \(\Psi\), \(\Phi\) is _contained_ in \(\Psi\), denoted \(\Phi\subseteq\Psi\), if, for every finite \(\tau\)-structure \(\mathbb{A}\), we have that \(\mathbb{A}\models\Phi\implies\mathbb{A}\models\Psi\). Two \(\tau\)-sentences \(\Phi\) and \(\Psi\) are _logically equivalent_ if \(\Phi\subseteq\Psi\subseteq\Phi\). Two families of sentences \(\mathcal{L},\mathcal{M}\) have the same _expressive power_ if, for every \(\Phi\) in \(\mathcal{L}\), there is a logically equivalent \(\Psi\) in \(\mathcal{M}\), and vice versa. A \(\tau\)-sentence \(\Phi\) _reduces in P-time_ to a \(\tau^{\prime}\)-sentence \(\Phi^{\prime}\) if there is an algorithm \((\tilde{\cdot})\) that receives as input a \(\tau\)-structure \(\mathbb{A}\) and outputs, in time polynomial in the size of \(\mathbb{A}\), a \(\tau^{\prime}\)-structure \(\tilde{\mathbb{A}}\) such that \(\mathbb{A}\models\Phi\) if and only if \(\tilde{\mathbb{A}}\models\Phi^{\prime}\).
Two sentences \(\Phi\) and \(\Psi\) are _P-time equivalent_, denoted \(\Phi\sim_{p}\Psi\), if \(\Phi\) reduces in P-time to \(\Psi\) and vice versa. Let \(\rho\) be a finite relational signature and let \(\phi\) be a conjunction of non-negated atomic \(\rho\)-formulas. The _canonical database_ of \(\phi\) is a \(\rho\)-structure \(\mathbb{A}_{\phi}\) such that there is a one-to-one correspondence between its domain and the set of variables of \(\phi\): \(A_{\phi}=\{a_{x}\mid x\text{ is a variable of }\phi\}\); and, for each \(R\in\rho\), the tuple \((a_{x_{1}},\ldots,a_{x_{k}})\) is in \(R^{\mathbb{A}_{\phi}}\) if and only if \(\phi\) contains the atom \(R(x_{1},\ldots,x_{k})\). The conjunction \(\phi\) is called the _canonical conjunctive query_ of \(\mathbb{A}_{\phi}\). An atom \(R(\mathbf{x})\) _guards_ another atom \(S(\mathbf{y})\) if all the variables of \(\mathbf{y}\) belong to \(\mathbf{x}\). A sentence of _guarded monotone strict NP_ (GMSNP) logic has the form \(\exists X_{1}\cdots\exists X_{s}\;\forall\mathbf{x}\;\phi(\mathbf{x})\), where \(\phi\) is a conjunction \(\neg\phi_{1}\wedge\cdots\wedge\neg\phi_{m}\) of formulas \(\neg\phi_{i}:=\neg(\alpha_{i}\wedge\beta_{i})\) such that

1. each \(\alpha_{i}\) is a conjunction of non-negated \(\tau\)-atoms and non-negated \(\sigma\)-atoms,
2. each \(\beta_{i}\) is a conjunction of negated \(\sigma\)-atoms, and,
3. each atom of \(\beta_{i}\) is guarded by some atom of \(\alpha_{i}\).

Further in this paper, we consider the following fragment of GMSNP that has the same expressive power as GMSNP. Let \(\sigma\) be represented in the form \(\sigma:=\sigma_{\mathrm{mon}}\cup\bigcup_{R\in\tau}\sigma_{R}\), where each \(M\in\sigma_{\mathrm{mon}}\) is unary and each \(X\in\sigma_{R}\) has the same arity as the corresponding \(R\in\tau\). A sentence of MMSNP\({}_{2}\) logic has the form \(\exists X_{1}\cdots\exists X_{s}\exists M_{1}\cdots\exists M_{u}\;\forall\mathbf{x}\;\phi(\mathbf{x})\), where \(\phi\) is a conjunction \(\neg\phi_{1}\wedge\cdots\wedge\neg\phi_{m}\) of formulas \(\neg\phi_{i}:=\neg(\alpha_{i}\wedge\beta_{i})\) such that

1. each \(\alpha_{i}\) is a conjunction of non-negated \(\tau\)-atoms,
2. each \(\beta_{i}\) is a conjunction of non-negated \(\sigma\)-atoms and negated \(\sigma\)-atoms,
3. all relation symbols \(M_{1},\ldots,M_{u}\) are unary and belong to \(\sigma_{\mathrm{mon}}\), and,
4. for every \(R\in\tau\) and every \(X\in\sigma_{R}\), if \(\beta_{i}\) contains one of the atoms \(X(\mathbf{x})\), \(\neg X(\mathbf{x})\), then \(\alpha_{i}\) contains the atom \(R(\mathbf{x})\).

**Example 2.1**.: (No-Monochromatic-Arc-Triangle) Consider the following problem: a directed graph \(\mathbb{G}\) satisfies \(\Phi\) if and only if one can color its arcs with two colors \(B\) and \(W\) such that the resulting expansion \((\mathbb{G};B,W)\) does not contain a directed \(3\)-cycle as a subgraph in which all \(3\) arcs have the same color. This problem can be described by an MMSNP\({}_{2}\) sentence. \[\exists B,W\ \forall x,y,z\begin{pmatrix}\neg\big{(}E(x,y)\wedge B(x,y)\wedge W(x,y)\big{)}\wedge\\ \neg\big{(}E(x,y)\wedge\neg B(x,y)\wedge\neg W(x,y)\big{)}\wedge\\ \neg\big{(}E(x,y)\wedge E(y,z)\wedge E(z,x)\wedge B(x,y)\wedge B(y,z)\wedge B(z,x)\big{)}\wedge\\ \neg\big{(}E(x,y)\wedge E(y,z)\wedge E(z,x)\wedge W(x,y)\wedge W(y,z)\wedge W(z,x)\big{)}\end{pmatrix}\]
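The semantics of Example 2.1 can be checked by brute force on small digraphs. The sketch below is our own illustration, not part of the original text: it enumerates all arc 2-colorings and tests for monochromatic directed triangles.

```python
from itertools import product

def satisfies_phi(arcs):
    """Brute-force semantics of Example 2.1: is there a 2-coloring B/W of the
    arcs such that no directed 3-cycle is monochromatic?"""
    arcs = sorted(set(arcs))
    triangles = [(e1, e2, e3)
                 for e1 in arcs for e2 in arcs for e3 in arcs
                 if e1[1] == e2[0] and e2[1] == e3[0] and e3[1] == e1[0]]
    for colors in product("BW", repeat=len(arcs)):
        col = dict(zip(arcs, colors))
        # a coloring works if every directed triangle sees at least 2 colors
        if all(len({col[e1], col[e2], col[e3]}) > 1 for e1, e2, e3 in triangles):
            return True
    return False

print(satisfies_phi([(0, 1), (1, 2), (2, 0)]))  # True: color one arc W, the rest B
```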
## 3. Well-prepared form

### Hom-closed, Injective, and Pure GMSNP

The first step towards well-prepared form is to make the negated conjuncts of \(\Phi\) be closed under identifying variables within one conjunct and to require two semantic properties that will be very useful later in this paper.

**Definition 3.1**.: _A sentence \(\Phi\) in GMSNP with the first-order part \(\phi\) is hom-closed if, for every \((\tau\cup\sigma)\)-structure \(\mathbb{A}\) of size \(m\), \(\mathbb{A}\) satisfies all negated conjuncts of \(\phi\) if and only if \(\mathbb{A}\) satisfies all negated conjuncts of \(\phi\) with at most \(m\) variables._

_Hom-closed procedure._ A GMSNP sentence can be transformed to a hom-closed one by the same procedure that is used for MMSNP, see Lemma 4.4 in [1]. Let \(\neg\psi\) be some negated conjunct of \(\Phi\) that contains distinct variables \(x\) and \(y\). Add to \(\Phi\) the negated conjunct \(\neg\psi_{x\sim y}\) that is obtained from \(\psi\) by replacing each occurrence of \(y\) with \(x\). Do this for each two distinct variables of \(\neg\psi\) and repeat this for every newly added conjunct. As the number of variables decreases within each new conjunct, this process will terminate. Notice that each newly added negated conjunct keeps the sentence logically the same, so the result is logically equivalent to \(\Phi\). The next lemma directly follows from the construction.

**Lemma 3.2**.: _Let \(\Phi\) be a GMSNP \(\tau\)-sentence and let \(\Phi^{\prime}\) be a \(\tau\)-sentence that is obtained from \(\Phi\) by the "hom-closed" procedure. Then, \(\Phi^{\prime}\) is in GMSNP and is hom-closed._

A tuple \(\mathbf{t}\) is _injective_ if all its elements are pairwise distinct. Similarly, an atom \(R(\mathbf{t})\) is _injective_ if the tuple \(\mathbf{t}\) is injective.

**Definition 3.3**.: _A GMSNP \(\tau\)-sentence is injective if it accepts only \(\tau\)-structures where every relational tuple is injective._

**Definition 3.4**.: _A GMSNP \(\tau\)-sentence is pure if it accepts only \(\tau\)-structures where no relational \(\tau\)-tuple is guarded by another relational \(\tau\)-tuple._

Let \(X\subseteq Y\) be two sets, and let \(\mathbf{y}\in Y^{k}\) be some tuple of elements of \(Y\). Write \(X\subseteq\mathbf{y}\) if, for any \(x\in X\), there exists \(i\in[k]\) such that \(x=y_{i}\). Let \(\phi\) be a conjunction of \(\tau\)-atoms. A subset \(V\) of the set of variables of \(\phi\) is called _maximal_ if \(\phi\) contains an atom \(R(\mathbf{v})\) such that \(V\subseteq\mathbf{v}\) and, for all \(W\) such that \(V\varsubsetneq W\), \(\phi\) contains no atom \(R(\mathbf{w})\) such that \(W\subseteq\mathbf{w}\). For example, if \(\phi\equiv R(x,y)\wedge R(y,x)\wedge R(y,z)\), then its maximal sets are \(\{x,y\}\) and \(\{y,z\}\).

_Making a sentence injective and pure_. Let \(\Phi\) be a hom-closed GMSNP sentence. For every negated conjunction \(\neg(\alpha_{i}\wedge\beta_{i})\) of \(\Phi\) denote by \(\phi_{i}\) the part consisting of \(\tau\)-atoms, and denote by \(\psi_{i}\) the rest of the conjunction. Let \(\mathcal{V}_{i}\) be the family of all maximal sets of variables of \(\phi_{i}\). For \(V\in\mathcal{V}_{i}\), denote by \(\mathbb{F}_{i,V}\) the substructure of the canonical database of \(\phi_{i}\) that is induced on the subset \(V\). For every \(V\in\mathcal{V}_{i}\), let \(\mathcal{F}_{i,V}\) be the family of all possible \(\{<\}\)-expansions of \(\mathbb{F}_{i,V}\), where \(<\) is interpreted as a linear ordering.
For every \(\mathbb{F}\) in each such family \(\mathcal{F}_{i,V}\), associate \(\mathbb{F}\) with an \(|F|\)-ary relation symbol \(R_{\mathbb{F}}\), and denote by \(\tau_{1}\) the signature consisting of all such symbols \(R_{\mathbb{F}}\). Call a function \(c\colon\mathcal{V}_{i}\to\bigcup_{V\in\mathcal{V}_{i}}\mathcal{F}_{i,V}\) a _choice_ if, for every \(V\in\mathcal{V}_{i}\), we have that \(c(V)\in\mathcal{F}_{i,V}\). Denote by \(\mathcal{C}_{i}\) the set of all choices for \(\phi_{i}\). For every \(c\in\mathcal{C}_{i}\), denote \(\phi_{i}^{c}:=\bigwedge_{V\in\mathcal{V}_{i}}R_{c(V)}(\mathbf{v})\). Let \(R_{\mathbb{F}}(\mathbf{x}),R_{\mathbb{G}}(\mathbf{y})\) be two \(\tau_{1}\)-atoms. They are called _inconsistent_ if \(\mathbf{x}\) and \(\mathbf{y}\) have a non-empty intersection, say \(Z\), and if the identity mapping \(i\colon Z\to Z\) is not an isomorphism between the substructures of \(\mathbb{F}\) and \(\mathbb{G}\) induced on \(Z\). Otherwise, the atoms are _consistent_. Before constructing the pure and injective sentence, the initial sentence \(\Phi\) is transformed as follows. For every negated conjunct \(\neg\phi(\mathbf{x})\) of \(\Phi\) and for every \(\tau\)-atom \(R(\mathbf{y})\) such that \(\mathbf{y}\subseteq\mathbf{x}\), it is required that either \(\phi(\mathbf{x})\) already contains this atom or that \(\Phi\) contains the negated conjunct \(\neg\big{(}\phi(\mathbf{x})\wedge R(\mathbf{y})\big{)}\). If it does not hold, then this negated conjunct is added to \(\Phi\). The resulting sentence is hom-closed and logically equivalent to \(\Phi\). Therefore, one may assume w.l.o.g. that \(\Phi\) already satisfies this property. The pure and injective GMSNP sentence \(\Phi^{\prime}\) that is associated with \(\Phi\) is constructed as follows. The existential signature is the same. First, add to \(\Phi^{\prime}\) the negated conjunct \(\neg\big{(}R(\mathbf{x})\wedge S(\mathbf{y})\big{)}\) for every two \(\tau_{1}\)-atoms \(R(\mathbf{x}),S(\mathbf{y})\) such that either \(R(\mathbf{x})\) guards \(S(\mathbf{y})\) or \(R(\mathbf{x})\) and \(S(\mathbf{y})\) are inconsistent. This makes \(\Phi^{\prime}\) pure. Then replace every negated conjunct of \(\Phi\) of the form \(\neg(\phi_{i}\wedge\psi_{i})\) with the conjunction \(\bigwedge_{c\in\mathcal{C}_{i}}\neg(\phi_{i}^{c}\wedge\psi_{i})\). Finally, for all \(R\in\tau_{1}\) of arity \(\ell\) and all distinct \(i,j\in[\ell]\), add to \(\Phi^{\prime}\) the negated conjunct \(\neg R(x_{1},\ldots,x_{\ell})\), where all variables are pairwise distinct except for \(x_{i}\) and \(x_{j}\) that are equal. This makes \(\Phi^{\prime}\) injective. **Lemma 3.5**.: _Let \(\Phi\) and \(\Psi\) be hom-closed \(\tau\)-sentences in GMSNP, and let \(\Phi^{\prime}\) and \(\Psi^{\prime}\) be \(\tau_{1}\)-sentences in GMSNP that are obtained from \(\Phi\) and \(\Psi\) by applying the procedure above. Then, \(\Phi^{\prime}\) and \(\Psi^{\prime}\) both are in GMSNP, they are hom-closed, injective, pure, and \(\Phi\subseteq\Psi\) if and only if \(\Phi^{\prime}\subseteq\Psi^{\prime}\)._ Proof.: Every \(\sigma\)-atom is guarded by some \(\tau\)-atom that is contained in some maximal set. In \(\Phi^{\prime}\), for every maximal set \(V\) there is a \(\tau_{1}\)-atom \(R(\mathbf{v})\) such that \(V=\mathbf{v}\), so every \(\sigma\)-atom is guarded in \(\Phi^{\prime}\). Also, every negated conjunct that was added to satisfy "injective" and "pure" conditions does not violate the GMSNP properties. Therefore, \(\Phi^{\prime}\) and \(\Psi^{\prime}\) are in GMSNP. 
These sentences are injective because each of them contains the conjunction of negated \(\tau_{1}\)-atoms where some two variables are the same. Also, they are pure because each of them contains the conjunction over all possible pairs of atoms where one atom guards the other. Suppose that \(\Phi^{\prime}\) is not hom-closed. Then, for some \((\tau_{1}\cup\sigma)\)-structure \(\mathbb{A}\) of size \(m\), it satisfies all the conjuncts with at most \(m\) variables but, for some \(\ell>m\), there is a conjunct \(\neg\phi^{\prime}(\mathbf{x})\) with \(\ell\) variables such that, for some \(\mathbf{a}\in A^{\ell}\), we have that \(\mathbb{A}\models\phi^{\prime}(\mathbf{a})\). Choose \(\ell\) to be minimal. As \(\ell>m\), for some \(i,j\in[\ell]\) such that \(i\neq j\), \(a_{i}=a_{j}\). The structure \(\mathbb{A}\) must satisfy all negated conjuncts that reject non-injective tuples, so every tuple of \(\mathbb{A}\) is injective. Therefore, \(\mathbb{A}\) must satisfy all conjuncts that reject structures containing guarded input tuples. Suppose that \(\neg\phi^{\prime}(\mathbf{x})\) is of the form \(\neg\big{(}R(\mathbf{x}_{1})\wedge S(\mathbf{x}_{2})\big{)}\), where \(R(\mathbf{x}_{1})\) and \(S(\mathbf{x}_{2})\) are inconsistent. Notice that \(x_{i}\) and \(x_{j}\) cannot be in the same tuple \(\mathbf{x}_{1}\) or \(\mathbf{x}_{2}\) as \(\mathbb{A}\) has only injective tuples. But then \(\Phi^{\prime}\) must contain a conjunct that is obtained from \(\neg\phi^{\prime}\) by identifying \(x_{i}\) with \(x_{j}\), because the resulting atoms \(R(\mathbf{x}_{1}^{i\sim j})\) and \(S(\mathbf{x}_{2}^{i\sim j})\) are still inconsistent. The structure \(\mathbb{A}\) has to violate this conjunct, but that would contradict the minimality of \(\ell\). Therefore, \(\neg\phi^{\prime}\) must be obtained from some negated conjunct \(\neg\phi\) of \(\Phi\) by replacing its \(\tau\)-atoms with \(\tau_{1}\)-atoms that are associated with some choice function. Consider the conjunct \(\neg\phi_{x_{i}\sim x_{j}}\) of \(\Phi\), where \(x_{i}\) is identified with \(x_{j}\). Let \(\neg\phi^{\prime}_{x_{i}\sim x_{j}}\) be the corresponding conjunct of \(\Phi^{\prime}\). Let \(\mathbf{a}_{i\sim j}\) be obtained from \(\mathbf{a}\) by removing \(a_{j}\). Then \(\mathbb{A}\models\phi^{\prime}_{x_{i}\sim x_{j}}(\mathbf{a}_{i\sim j})\), which contradicts the minimality of \(\ell\). The "injective-pure" procedure induces P-time equivalence of \(\Phi\) and \(\Phi^{\prime}\) (and of \(\Psi\) and \(\Psi^{\prime}\)). Notice that it is possible to extend the definition of maximal sets to finite \(\tau\)-structures. For a \(\tau\)-structure \(\mathbb{A}\), let \(\phi_{\mathbb{A}}\) be its canonical conjunctive query. Let \(c\) be some choice function in \(\mathcal{C}_{\phi_{\mathbb{A}}}\). Then, denote by \(p(\mathbb{A})\) the canonical database of \(\phi^{c}_{\mathbb{A}}\). The existential signature \(\sigma\) is left unchanged, so there is a one-to-one correspondence between \(\sigma\)-expansions of \(\mathbb{A}\) and \(\sigma\)-expansions of \(p(\mathbb{A})\). It follows from the construction that a \(\sigma\)-expansion \(\mathbb{A}^{\sigma}\) solves \(\Phi\) if and only if the corresponding \(\sigma\)-expansion \(p(\mathbb{A})^{\sigma}\) solves \(\Phi^{\prime}\). A similar statement holds for \(\Psi\) and \(\Psi^{\prime}\). Therefore, the "injective-pure" procedure preserves containment.

**Example 3.6**.: Let \(\tau\) be a signature with a single binary relation symbol: \(\tau=\{E(\cdot,\cdot)\}\). Suppose that we apply the "injective-pure" procedure.
The isomorphism types that are associated with the relations of \(\tau_{1}\) are given in Figure 1. There is one unary type and eight binary relations. Binary \(\tau_{1}\)-atoms that do not guard each other are consistent if the tuples intersect in one element, and if this element has a loop in the first relation if and only if it has a loop in the second relation. For example, suppose that \(S_{1}\) denotes the type "\(Exy\)" in Figure 1 and \(S_{2}\) denotes the type "\(Exx,Exy\)". Then, we add to the sentence the following negated conjunct: \(\neg\big{(}S_{1}(x,y)\wedge S_{2}(y,z)\big{)}\) because these atoms are inconsistent.

Figure 1. The only isomorphism type for 1-element structures, and all isomorphism types for 2-element structures.

### Connected MMSNP2

It is convenient to think about existential relations as colors of relational tuples. The logic MMSNP\({}_{2}\), which is a fragment of GMSNP, consists of exactly those sentences where every \(\sigma\)-relational tuple coincides with some \(\tau\)-tuple. By the following result, we can assume that a GMSNP sentence is already written in the form of MMSNP\({}_{2}\).

**Theorem 3.7** (Theorem 4.3 in [10]).: GMSNP _and \(\operatorname{MMSNP}_{2}\) have the same expressive power._

_Remark 3.8_.: Notice that the \(\operatorname{MMSNP}_{2}\) sentence associated with an injective and pure GMSNP sentence must also be injective and pure. Although the "hom-closed" property may not be preserved by this transformation, one can just apply the "hom-closed" procedure one more time and get a hom-closed injective and pure sentence.

A conjunction \(\phi(\mathbf{x})\) of atomic formulas is _connected_ if the variables of \(\mathbf{x}\) cannot be split into two disjoint sets \(\mathbf{y},\mathbf{z}\) such that \(\phi(\mathbf{x})\) can be written in the form \(\psi_{1}(\mathbf{y})\wedge\psi_{2}(\mathbf{z})\). A sentence \(\Phi\) in \(\operatorname{MMSNP}_{2}\) is _connected_ if every negated conjunct of \(\Phi\) is connected. It is well-known that every \(\operatorname{MMSNP}_{2}\) sentence is logically equivalent to a disjunction of connected \(\operatorname{MMSNP}_{2}\) sentences [1]. It has to be justified that characterizing containment for connected \(\operatorname{MMSNP}_{2}\) sentences extends to all \(\operatorname{MMSNP}_{2}\) sentences. Notice that every \(\operatorname{MMSNP}_{2}\) problem can be described as some CSP.

**Theorem 3.9** (Theorem 4.18 in [1]).: _Every connected \(\operatorname{MMSNP}_{2}\) problem is \(\operatorname{CSP}(\mathbb{A})\), for some \(\omega\)-categorical structure \(\mathbb{A}\). Every problem in \(\operatorname{MMSNP}_{2}\) is the union of finitely many problems \(\operatorname{CSP}(\mathbb{A}_{1})\cup\cdots\cup\operatorname{CSP}(\mathbb{A}_{k})\), where, for \(i\in[k]\), \(\mathbb{A}_{i}\) is \(\omega\)-categorical._

Homomorphisms between \(\omega\)-categorical structures work as follows.

**Lemma 3.10** (Lemma 4.1.7 in [1]).: _An \(\omega\)-categorical structure \(\mathbb{A}\) homomorphically maps to an \(\omega\)-categorical structure \(\mathbb{B}\) if and only if every finite substructure of \(\mathbb{A}\) homomorphically maps to \(\mathbb{B}\)._

This is a well-known result about containment for CSP.

**Proposition 3.11**.: _Let \(\mathbb{A}_{1},\ldots,\mathbb{A}_{k}\) and \(\mathbb{B}_{1},\ldots,\mathbb{B}_{\ell}\) be \(\omega\)-categorical structures.
Suppose that the problem \(\operatorname{CSP}(\mathbb{A}_{1})\cup\cdots\cup\operatorname{CSP}(\mathbb{A}_{k})\) is contained in \(\operatorname{CSP}(\mathbb{B}_{1})\cup\cdots\cup\operatorname{CSP}(\mathbb{B}_{\ell})\). Then, for every \(i\in[k]\), there exists \(j\in[\ell]\) such that \(\mathbb{A}_{i}\to\mathbb{B}_{j}\)._

Proof.: Suppose that there exists \(i\in[k]\) such that, for all \(j\in[\ell]\), \(\mathbb{A}_{i}\not\to\mathbb{B}_{j}\). Then, by Lemma 3.10, there are finite substructures \(\mathbb{C}_{1},\ldots,\mathbb{C}_{\ell}\) of \(\mathbb{A}_{i}\) such that, for \(j\in[\ell]\), \(\mathbb{C}_{j}\not\to\mathbb{B}_{j}\). The disjoint union \(\mathbb{C}_{1}\sqcup\cdots\sqcup\mathbb{C}_{\ell}\) homomorphically maps to \(\mathbb{A}_{i}\); therefore, it belongs to \(\operatorname{CSP}(\mathbb{B}_{1})\cup\cdots\cup\operatorname{CSP}(\mathbb{B}_{\ell})\). Thus, for some \(j\in[\ell]\), there is a homomorphism \(\mathbb{C}_{1}\sqcup\cdots\sqcup\mathbb{C}_{\ell}\to\mathbb{B}_{j}\). This contradicts the assumption that \(\mathbb{C}_{j}\not\to\mathbb{B}_{j}\).

The following is a straightforward consequence of Theorem 3.9 and Proposition 3.11.

**Corollary 3.12**.: _Let \(\Phi:=\Phi_{1}\vee\cdots\vee\Phi_{k}\) and \(\Psi:=\Psi_{1}\vee\cdots\vee\Psi_{\ell}\) be two finite disjunctions of connected \(\operatorname{MMSNP}_{2}\) sentences. Then, \(\Phi\) is contained in \(\Psi\) if and only if, for every \(i\in[k]\), there is \(j\in[\ell]\) such that \(\Phi_{i}\) is contained in \(\Psi_{j}\)._

_Remark 3.13_.: If an injective and pure \(\operatorname{MMSNP}_{2}\) sentence \(\Phi\) is logically equivalent to a disjunction \(\Phi_{1}\vee\cdots\vee\Phi_{\ell}\) of connected \(\operatorname{MMSNP}_{2}\) sentences, then, for all \(i\in[\ell]\), \(\Phi_{i}\) is also injective and pure. If any of them is not hom-closed, then we can apply the "hom-closed" procedure one more time.

### Well-prepared \(\operatorname{MMSNP}_{2}\)

By Lemma 3.5, Theorem 3.7, and Corollary 3.12, it can be assumed that a sentence \(\Phi\) is already in connected \(\operatorname{MMSNP}_{2}\), and that it is also hom-closed, injective, and pure. The transformations of this section modify the existential signature \(\sigma\) and leave the input signature unchanged. At the end, \(\Phi\) obtains all the properties that are sufficient to characterize the containment using \(\omega\)-categorical CSPs. An MMSNP\({}_{2}\) sentence \(\Phi\) is _biconnected_ if no negated conjunct of \(\Phi\) can be written in the form \(\neg\big{(}\psi_{1}(\mathbf{x},z)\wedge\psi_{2}(z,\mathbf{y})\big{)}\), where \(\mathbf{x}\) and \(\mathbf{y}\) are disjoint.

_Biconnected procedure._ An MMSNP\({}_{2}\) sentence can be transformed to a biconnected one by the same procedure that is used for MMSNP, see Lemma 4.4 in [1]. For every negated conjunct of the form \(\neg\big{(}\psi_{1}(\mathbf{x},z)\wedge\psi_{2}(z,\mathbf{y})\big{)}\), introduce a new unary existential relation \(P\) and replace this conjunct with \(\neg\big{(}\psi_{1}(\mathbf{x},z)\wedge P(z)\big{)}\wedge\neg\big{(}\neg P(z)\wedge\psi_{2}(z,\mathbf{y})\big{)}\).

**Definition 3.14**.: _A conjunction \(\phi(\mathbf{x})\) is \(2\)-separable if its variables \(\mathbf{x}\) can be split into three disjoint tuples \(\mathbf{s},\mathbf{y},\mathbf{z}\) such that \(\mathbf{s}\) contains at most \(2\) variables and \(\phi(\mathbf{x})\) can be written in the form: \(\psi_{1}(\mathbf{s},\mathbf{y})\wedge\psi_{2}(\mathbf{s},\mathbf{z})\).
In this case, we say that \(\mathbf{s}\) separates \(\phi(\mathbf{x})\)._

**Definition 3.15**.: _A conjunction \(\phi\) is tuple-biconnected if, for every tuple \(\mathbf{s}\) that separates \(\phi\), the conjunction does not contain the atom \(R(\mathbf{s})\) for any \(R\in\tau\). A structure is tuple-biconnected if its canonical conjunctive query is._

**Definition 3.16**.: _Let \(\mathbb{A}_{1},\mathbb{A}_{2}\) be two \((\tau\cup\sigma)\)-structures that satisfy the first-order part of \(\Phi\). Suppose that, for some \(R\in\tau\), two tuples \(R(\mathbf{a}_{1})\) and \(R(\mathbf{a}_{2})\) induce substructures \(\mathbb{B}_{1}\hookrightarrow\mathbb{A}_{1},\mathbb{B}_{2}\hookrightarrow\mathbb{A}_{2}\) such that the mapping \(f\colon\mathbb{B}_{1}\to\mathbb{B}_{2}\), \(a_{i}^{1}\mapsto a_{i}^{2}\) is an isomorphism. Let \(\mathbb{C}\) be obtained by identifying each \(a_{i}^{1}\) with \(a_{i}^{2}\). If \(\mathbb{C}\) satisfies the first-order part of \(\Phi\), then \(\Phi\) is called tuple-biconnected._

_Tuple-biconnected procedure._ Consider a negated conjunct of \(\Phi\) that is \(2\)-separable, i.e., it can be written in the form \(\neg\big{(}\psi_{1}(\mathbf{s},\mathbf{x})\wedge\psi_{2}(\mathbf{s},\mathbf{y})\big{)}\) such that \(\mathbf{x}\) and \(\mathbf{y}\) are non-empty and share no variables. Then, for every such \(2\)-tuple \(\mathbf{s}\) that separates the conjunct and for every binary \(R\in\tau\), add to \(\sigma_{R}\) a new relation symbol \(P\) and add the following two conjuncts to \(\Phi\): \[\neg\big{(}\psi_{1}(\mathbf{s},\mathbf{x})\wedge R(\mathbf{s})\wedge P(\mathbf{s})\big{)}\wedge\neg\big{(}\neg P(\mathbf{s})\wedge R(\mathbf{s})\wedge\psi_{2}(\mathbf{s},\mathbf{y})\big{)}.\] After that, remove the original conjunct \(\neg\big{(}\psi_{1}(\mathbf{s},\mathbf{x})\wedge\psi_{2}(\mathbf{s},\mathbf{y})\big{)}\) from \(\Phi\) unless it is tuple-biconnected. Repeat this step for the newly added conjuncts until all remaining conjuncts are tuple-biconnected. As both tuples \(\mathbf{x},\mathbf{y}\) are non-empty, the number of variables in each new conjunct decreases. This implies that this process will eventually terminate.

**Lemma 3.17**.: _Let \(\Phi\) be a biconnected injective pure MMSNP\({}_{2}\) sentence. Then, after the "tuple-biconnected" procedure, \(\Phi\) becomes tuple-biconnected._

Proof.: Let \(\mathbb{A}_{1},\mathbb{A}_{2},\mathbb{B}_{1},\mathbb{B}_{2}\), and \(\mathbb{C}\) be as in Definition 3.16. Let \(R(\mathbf{b})=R(b_{1},b_{2})\) be the atom that induces the substructures \(\mathbb{B}_{1},\mathbb{B}_{2}\). Assume that \(\mathbb{C}\) does not satisfy the first-order part \(\theta\) of \(\Phi\), i.e., for some negated conjunct \(\neg\phi\) and for some elements \(\mathbf{c}\) of \(\mathbb{C}\), we have that \(\mathbb{C}\models\phi(\mathbf{c})\). As \(\phi\) is connected, one of \(b_{1},b_{2}\) belongs to \(\mathbf{c}\). As both \(\mathbb{A}_{1}\) and \(\mathbb{A}_{2}\) satisfy \(\theta\), the tuple \(\mathbf{c}\) has non-empty intersection with both \(A_{1}\setminus B_{1}\) and \(A_{2}\setminus B_{2}\). If \(\mathbf{c}\) does not contain some element of \(\mathbf{b}\), say \(b_{2}\), then one of \(\mathbb{A}_{1}\) and \(\mathbb{A}_{2}\) does not satisfy \(\theta\), so \(\mathbf{b}\subset\mathbf{c}\). Observe that \(\phi(\mathbf{c})\) is \(2\)-separable by \(\mathbf{b}\); then, by the "tuple-biconnected" procedure, there exist conjuncts that prevent one of \(\mathbb{A}_{1}\) and \(\mathbb{A}_{2}\) from satisfying \(\theta\).
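For intuition, the splitting step that both the "biconnected" and "tuple-biconnected" procedures rely on can be sketched over a toy encoding of conjuncts: an atom is a pair of a symbol and a variable tuple, and a negated existential atom carries a "not_" prefix. The encoding and names are our own illustrative assumptions; the sketch covers the one-variable cut of the "biconnected" procedure.

```python
from collections import defaultdict

def var_components(atoms, cut):
    """Connected components of the conjunct's variables after deleting the
    cut variables; two variables are adjacent if some atom uses both."""
    adj = defaultdict(set)
    for _, args in atoms:
        live = [v for v in args if v not in cut]
        for u in live:
            adj[u].update(live)
    comps, seen = [], set()
    for v in adj:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def biconnected_split(atoms, z, fresh="P"):
    """If z separates not(psi1(x,z) & psi2(z,y)), return the two conjuncts
    not(psi1 & P(z)) and not(not_P(z) & psi2) with a fresh unary predicate."""
    comps = var_components(atoms, {z})
    if len(comps) < 2:
        return None  # z is not a cut variable
    side1 = {a for a in atoms if set(a[1]) <= comps[0] | {z}}
    side2 = set(atoms) - side1
    return (side1 | {(fresh, (z,))}, side2 | {("not_" + fresh, (z,))})

# not(E(x,z) & E(z,y)) splits into not(E(x,z) & P(z)) and not(not_P(z) & E(z,y)):
print(biconnected_split({("E", ("x", "z")), ("E", ("z", "y"))}, "z"))
```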
**Definition 3.18**.: _An MMSNP\({}_{2}\) sentence is well-prepared if it is connected, hom-closed, injective, pure, biconnected, tuple-biconnected, and satisfies the following properties._

1. \(\sigma\)-relations partition vertices and tuples. _That is, \(\Phi\) contains the following negated conjuncts that require \(\sigma_{\mathrm{mon}}\)-colors to partition the elements of the input:_ \[\neg\big{(}\bigwedge_{M\in\sigma_{\mathrm{mon}}}\neg M(x)\big{)}\wedge\bigwedge_{M\neq M^{\prime}\in\sigma_{\mathrm{mon}}}\neg\big{(}M(x)\wedge M^{\prime}(x)\big{)}.\] _Then, for every \(R\in\tau\), \(\Phi\) contains the following negated conjuncts that require \(\sigma_{R}\)-colors to partition the \(R\)-tuples of the input:_ \[\neg\big{(}R(\mathbf{x})\wedge\bigwedge_{X\in\sigma_{R}}\neg X(\mathbf{x})\big{)}\wedge\bigwedge_{X\neq X^{\prime}\in\sigma_{R}}\neg\big{(}R(\mathbf{x})\wedge X(\mathbf{x})\wedge X^{\prime}(\mathbf{x})\big{)}.\]
2. \(\sigma_{R}\)-relations color only \(R\)-tuples. _For every \(R\in\tau\) of arity \(k\), every \(X\in\sigma_{R}\), every \(M_{1},\ldots,M_{k}\in\sigma_{\mathrm{mon}}\), every \(Y\in\tau\cup\sigma\setminus(\sigma_{\mathrm{mon}}\cup\sigma_{R})\), and every \(\mathbf{x}\) and \(\mathbf{y}\) of corresponding arities such that \(\mathbf{x}\) guards \(\mathbf{y}\), \(\Phi\) contains the conjunct_ \[\neg\big{(}R(\mathbf{x})\wedge\bigwedge_{i=1}^{k}M_{i}(x_{i})\wedge X(\mathbf{x})\wedge Y(\mathbf{y})\big{)}\] _that forbids relations not from \(\sigma_{\mathrm{mon}}\cup\sigma_{R}\) to be contained within a fully colored \(R\)-tuple. Also, for every \(R\in\tau\), every \(M_{1},\ldots,M_{k}\in\sigma_{\mathrm{mon}}\), every \(X,Y\in\sigma_{R}\), and all distinct \(\mathbf{x}\) and \(\mathbf{y}\) of corresponding arities such that \(\mathbf{x}\) guards \(\mathbf{y}\), \(\Phi\) contains the conjunct_ \[\neg\big{(}R(\mathbf{x})\wedge\bigwedge_{i=1}^{k}M_{i}(x_{i})\wedge X(\mathbf{x})\wedge Y(\mathbf{y})\big{)}\] _that requires that if a \(\sigma_{R}\)-tuple is guarded by a fully colored \(R\)-tuple, then they coincide._
3. The clauses of \(\Phi\) are fully colored. _This means that, for any negated conjunct \(\neg\phi\) of \(\Phi\) except for the conjuncts from items 1 and 2, and for any variable \(x\) of \(\phi\), there is \(M\) in \(\sigma_{\mathrm{mon}}\) such that \(\phi\) contains \(M(x)\). And, similarly, for any negated conjunct \(\neg\phi\) of \(\Phi\) except the conjuncts from items 1 and 2, and for any \(\tau\)-atom \(R(\mathbf{x})\) of \(\phi\), there is \(X\) in \(\sigma_{R}\) such that \(\phi\) contains \(X(\mathbf{x})\)._
4. Tuple-colors uniquely determine vertex colors. _Consider an input \(\tau\)-structure \(\mathbb{A}\) and a \(\sigma\)-expansion \(\mathbb{A}^{\prime}\) of it that witnesses \(\mathbb{A}\) being a YES instance. Then, if two \(R\)-tuples \((x_{1},\ldots,x_{k})\) and \((y_{1},\ldots,y_{k})\) of \(\mathbb{A}^{\prime}\) are colored with the same \(\sigma_{R}\)-relation \(X\), then, for all \(i\in[k]\), \(x_{i}\) and \(y_{i}\) are colored with the same \(\sigma_{\mathrm{mon}}\)-relation._

Call the negated conjuncts from items 1 and 2 _partitioning_. Call a \((\tau\cup\sigma)\)-structure _partitioned_ if it satisfies the partitioning conjuncts.
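On a finite \((\tau\cup\sigma)\)-structure, the partitioning conditions of items 1 and 2 amount to two counting checks. Below is a minimal sketch reusing the dictionary encoding of the homomorphism example above (again our own illustration, not part of the paper):

```python
def is_partitioned(A, sigma_mon, sigma_rel, tau):
    """Items 1 and 2 of Definition 3.18 on a finite structure: every element
    carries exactly one sigma_mon colour, and every R-tuple carries exactly
    one colour from sigma_R."""
    rels = A["rels"]
    elems_ok = all(
        sum((a,) in rels.get(M, set()) for M in sigma_mon) == 1
        for a in A["dom"])
    tuples_ok = all(
        sum(t in rels.get(X, set()) for X in sigma_rel[R]) == 1
        for R in tau for t in rels.get(R, set()))
    return elems_ok and tuples_ok

# e.g. tau = ["E"], sigma_mon = ["M1", "M2"], sigma_rel = {"E": ["X1", "X2"]}
```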
**Lemma 3.19**.: _Every connected, injective, and pure sentence in \(\mathrm{MMSNP}_{2}\) is logically equivalent to a well-prepared sentence in \(\mathrm{MMSNP}_{2}\)._

Proof.: First, we may assume that the sentence \(\Phi\) is already connected, hom-closed, injective, and pure. Then consecutively apply the "biconnected" and "tuple-biconnected" procedures. By Lemma 3.17, both of them preserve the aforementioned properties of \(\Phi\). In order to obtain the properties 1-3, one applies the following well-known procedure. First, we require that, for every \(R\in\tau\), every \(X\in\sigma_{R}\), and every negated conjunct \(\neg\phi\) of \(\Phi\), if \(\neg\phi\) contains an atom \(R(\mathbf{x})\), then it must contain either \(X(\mathbf{x})\) or \(\neg X(\mathbf{x})\). If this does not hold for some \(\neg\phi\), then replace it with the following: \(\neg\big{(}\phi\wedge X(\mathbf{x})\big{)}\wedge\neg\big{(}\phi\wedge\neg X(\mathbf{x})\big{)}\). We also require that, for every \(M\in\sigma_{\mathrm{mon}}\) and every variable \(x\) of every negated conjunct, the conjunct must contain either \(M(x)\) or \(\neg M(x)\). If not, then do a similar replacement. Then replace the existential signature \(\sigma=\sigma_{\mathrm{mon}}\cup\bigcup_{R\in\tau}\sigma_{R}\) with a new signature \(\mathbf{2}^{\sigma}:=\mathbf{2}^{\sigma_{\mathrm{mon}}}\cup\bigcup_{R\in\tau}\mathbf{2}^{\sigma_{R}}\), where every symbol of \(\mathbf{2}^{\sigma_{R}}\) is associated with some Boolean combination of symbols of \(\sigma_{R}\). Then, for every negated conjunct of \(\Phi\), replace every Boolean combination with a corresponding \(\mathbf{2}^{\sigma}\)-atom and require that the colors of \(\mathbf{2}^{\sigma}\) induce a partition of the elements and of the relational \(\tau\)-tuples of the input. This is done by adding the partitioning conjuncts. In order to obtain property 4, we replace every \(k\)-ary \(\sigma_{R}\)-relation \(X\) with \(|\sigma_{\mathrm{mon}}|^{k}\) new relations of the set \(\left\{X_{M_{1},\ldots,M_{k}}\mid M_{1},\ldots,M_{k}\in\sigma_{\mathrm{mon}}\right\}\). We require that these new \(k\)-ary relations also form a partition by applying the corresponding part of the previous procedure. We also require that they must be consistent with the old unary relations. We do that by adding to \(\Phi\) the following conjunct for every pair \(\left(X_{M_{1},\ldots,M_{k}},M\right)\in\sigma_{R}\times\sigma_{\mathrm{mon}}\) and for every \(i\in[k]\) such that \(M_{i}\neq M\): \(\neg\big{(}R(x_{1},\ldots,x_{k})\wedge X_{M_{1},\ldots,M_{k}}(x_{1},\ldots,x_{k})\wedge M(x_{i})\big{)}\). Within every negated conjunct of \(\Phi\), we replace an atom \(X(\mathbf{x})\) with \(X_{M_{1},\ldots,M_{k}}(\mathbf{x})\) if the conjunct also contains the atoms \(M_{i}(x_{i})\), for all \(i\) in \([k]\). As, for each \(x\) of \(\mathbf{x}\), there is a unique atom \(M(x)\), this operation is well-defined. Notice that, by construction, the resulting sentence is in MMSNP\({}_{2}\) and that it is logically equivalent to the original sentence. Therefore, it is also injective and pure. Every newly added conjunct is connected, so the sentence is connected. Let \(\mathbb{A}\) be some \((\tau\cup\mathbf{2}^{\sigma})\)-structure of size \(m\) that satisfies all the negated conjuncts with at most \(m\) variables. In particular, all relational tuples of \(\mathbb{A}\) are injective, and no tuple guards another tuple. Therefore, \(m\) is greater than the number of variables in any of the newly added conjuncts, so, by assumption, \(\mathbb{A}\) must satisfy all of them.
Then, as the original sentence \(\Phi\) is hom-closed, \(\mathbb{A}\) satisfies all negated conjuncts of \(\Phi\). Thus, the resulting sentence is also hom-closed. The "biconnected" and "tuple-biconnected" properties are preserved because each of the newly added conjuncts is both biconnected and tuple-biconnected.

## 4. Construction of the CSP template

From now on, \(\Phi\) is a well-prepared MMSNP\({}_{2}\) sentence. The _colored obstruction set_ for \(\Phi\) is the set \(\mathcal{F}\) of all canonical databases for formulas \(\phi\) such that \(\neg\phi\) is a negated conjunct of \(\Phi\), except for the partitioning conjuncts. Any well-prepared MMSNP\({}_{2}\) problem can be reformulated as follows. Denote by \(\mathrm{Forb}\mathcal{F}\sigma\) the decision problem where, for a finite input \(\tau\)-structure \(\mathbb{A}\), one has to decide if there is a partitioned \(\sigma\)-expansion \(\mathbb{A}^{\sigma}\) such that, for any \(\mathbb{F}\in\mathcal{F}\), \(\mathbb{F}\not\rightarrow\mathbb{A}^{\sigma}\). Denote by \(\mathbb{B}_{\mathcal{F}}^{\mathrm{css}}\) the \((\tau\cup\sigma)\)-structure that is associated with \(\mathcal{F}\) as follows.1

Footnote 1: The “CSS” superscript stands for “Cherlin-Shelah-Shi”, who are the authors of Theorem 4.1.

**Theorem 4.1** ([15]).: _Let \(\mathcal{F}\) be a finite set of finite connected \((\tau\cup\sigma)\)-structures. Then, there exists a countable model-complete \((\tau\cup\sigma)\)-structure \(\mathbb{B}_{\mathcal{F}}^{\mathrm{css}}\) such that, for any finite \((\tau\cup\sigma)\)-structure \(\mathbb{A}\), there is an embedding \(\mathbb{A}\hookrightarrow\mathbb{B}_{\mathcal{F}}^{\mathrm{css}}\) if and only if, for any \(\mathbb{F}\in\mathcal{F}\), there is no homomorphism \(\mathbb{F}\rightarrow\mathbb{A}\). The structure \(\mathbb{B}_{\mathcal{F}}^{\mathrm{css}}\) is \(\omega\)-categorical and unique up to isomorphism._

Let \(\mathbb{B}_{\mathcal{F}}^{\mbox{\tiny MCC}}\) be the \((\tau\cup\sigma)\)-structure such that \((\mathbb{B}_{\mathcal{F}}^{\mbox{\tiny MCC}},\neq)\) is the model-complete core of \((\mathbb{B}_{\mathcal{F}}^{\mathrm{css}},\neq)\).2 The following result about \(\mathbb{B}_{\mathcal{F}}^{\mbox{\tiny MCC}}\) makes the further construction clearer.

Footnote 2: The “MCC” superscript stands for “model-complete core”.

**Lemma 4.2**.: _Let \(\mathbf{a}=(a_{1},a_{2})\) be a 2-tuple of elements of \(\mathbb{B}_{\mathcal{F}}^{\mbox{\tiny MCC}}\). If, for both \(a_{1},a_{2}\), there are some \(M_{1},M_{2}\in\sigma_{\mathrm{mon}}\) such that \(\mathbb{B}_{\mathcal{F}}^{\mbox{\tiny MCC}}\models M_{1}(a_{1})\wedge M_{2}(a_{2})\), then one of the following holds._

1. _There is_ \(R\in\tau\) _and_ \(X\in\sigma_{R}\) _such that_ \(\mathbb{B}^{\mbox{\tiny MCC}}_{\mathcal{F}}\models R(\mathbf{a})\wedge X(\mathbf{a})\) _and that the substructure of_ \(\mathbb{B}^{\mbox{\tiny MCC}}_{\mathcal{F}}\) _induced by_ \(\{a_{1},a_{2}\}\) _contains no other relational tuples except for the 4 mentioned above. Tuples of this type are called_ fully colored_._
2.
_There is a subset_ \(\tau_{1}\subseteq\tau\) _such that_ \[\mathbb{B}^{\mbox{\tiny MCC}}_{\mathcal{F}}\models\bigwedge_{R\in\tau_{1}}\bigl{(}R(\mathbf{a})\wedge\bigwedge_{X\in\sigma_{R}}\neg X(\mathbf{a})\bigr{)}\wedge\bigwedge_{R^{\prime}\not\in\tau_{1}}\bigl{(}\neg R^{\prime}(\mathbf{a})\wedge\bigwedge_{X\in\sigma_{R^{\prime}}}X(\mathbf{a})\bigr{)}.\] _Otherwise, for all \(R\in\tau\), \(\mathbb{B}^{\mbox{\tiny MCC}}_{\mathcal{F}}\models R(\mathbf{a})\) and, for all \(X\in\sigma_{R}\), \(\mathbb{B}^{\mbox{\tiny MCC}}_{\mathcal{F}}\models X(\mathbf{a})\)._

Proof.: Suppose that all elements of \(\mathbf{a}\) are colored with relations from \(\sigma_{\text{mon}}\). Because \(\Phi\) is well-prepared, if \(\mathbf{a}\) induces a fully colored \(R\)-tuple, then the substructure induced on \(\{a_{1},a_{2}\}\) does not contain any other relational tuples except for \(M_{1}(a_{1}),M_{2}(a_{2}),R(\mathbf{a})\), and \(X(\mathbf{a})\). Suppose now that \(\mathbf{a}\) is not contained in a fully colored tuple. Suppose that, for some \(R\in\tau\), \(\mathbb{B}^{\mbox{\tiny MCC}}_{\mathcal{F}}\not\models R(\mathbf{a})\). Suppose also that, for some \(X\in\sigma_{R}\), \(\mathbb{B}^{\mbox{\tiny MCC}}_{\mathcal{F}}\not\models X(\mathbf{a})\). Then add \(\mathbf{a}\) to the relation \(X^{\mathbb{B}^{\mbox{\tiny MCC}}_{\mathcal{F}}}\) and denote the resulting structure by \(\mathbb{B}^{\prime}\). The identity mapping is a homomorphism from \(\mathbb{B}^{\mbox{\tiny MCC}}_{\mathcal{F}}\) to \(\mathbb{B}^{\prime}\). On the other hand, no structure of \(\mathcal{F}\) maps to \(\mathbb{B}^{\prime}\); this implies that every finite substructure of \(\mathbb{B}^{\prime}\) embeds into \(\mathbb{B}^{\mbox{\tiny MCC}}_{\mathcal{F}}\), and thus homomorphically and injectively maps to \(\mathbb{B}^{\mbox{\tiny MCC}}_{\mathcal{F}}\), so there is an injective homomorphism \(\mathbb{B}^{\prime}\rightarrow\mathbb{B}^{\mbox{\tiny MCC}}_{\mathcal{F}}\). As \((\mathbb{B}^{\mbox{\tiny MCC}}_{\mathcal{F}},\neq)\) is a core, each of its endomorphisms is an embedding, which implies that \(\mathbb{B}^{\mbox{\tiny MCC}}_{\mathcal{F}}\) already contains the tuple \(X(\mathbf{a})\). Suppose that \(\mathbb{B}^{\mbox{\tiny MCC}}_{\mathcal{F}}\not\models X(\mathbf{a})\) for all \(X\in\sigma_{R}\); then add the tuple \(R(\mathbf{a})\) and similarly prove that \(\mathbf{a}\) is already contained in \(R^{\mathbb{B}^{\mbox{\tiny MCC}}_{\mathcal{F}}}\). If some element of the tuple is uncolored with a \(\sigma_{\text{mon}}\)-relation, then one can prove analogously that \(\mathbf{a}\) belongs to any \(R\) and to any color of \(\sigma_{R}\).

Let \(\mathbb{C}_{0}\) be the substructure of \(\mathbb{B}^{\mbox{\tiny MCC}}_{\mathcal{F}}\) induced on the elements that have \(\sigma_{\text{mon}}\)-colors, i.e., all those \(x\in B^{\mbox{\tiny MCC}}_{\mathcal{F}}\) that satisfy \(\bigvee_{M\in\sigma_{\text{mon}}}M(x)\). Let \(C\) be the domain of \(\mathbb{C}_{0}\). For each \(\tau_{1}\subseteq\tau\), introduce a binary relation symbol \(S_{\tau_{1}}\). Denote by \(\mu\) the set of all such new relations: \(\mu:=\{S_{\tau_{1}}\mid\tau_{1}\subseteq\tau\}\). Let \(\mathbb{C}_{\Phi}\) be a \((\tau\cup\sigma\cup\mu)\)-structure with the same domain \(C\) as \(\mathbb{C}_{0}\), where the relation symbols are interpreted as follows.

1. For all \(R\in\tau\) of arity \(k\) and all \(\mathbf{a}\in C^{k}\), \[\mathbb{C}_{\Phi}\models R(\mathbf{a})\longleftrightarrow\mathbb{C}_{0}\models R(\mathbf{a})\wedge\bigvee_{X\in\sigma_{R}}X(\mathbf{a}).\]
2.
For all \(S_{\tau_{1}}\in\mu\) and all \(\mathbf{a}\in C^{2}\), \[\mathbb{C}_{\Phi}\models S_{\tau_{1}}(\mathbf{a})\longleftrightarrow\mathbb{C}_{0}\models\bigwedge_{R\in\tau_{1}}\bigl{(}R(\mathbf{a})\wedge\bigwedge_{X\in\sigma_{R}}\neg X(\mathbf{a})\bigr{)}\wedge\bigwedge_{R^{\prime}\not\in\tau_{1}}\bigl{(}\neg R^{\prime}(\mathbf{a})\wedge\bigwedge_{X\in\sigma_{R^{\prime}}}X(\mathbf{a})\bigr{)}.\]
3. For all \(R\in\tau\), all \(X\in\sigma_{R}\), and all \(\mathbf{a}\in C^{k}\), where \(k\) is the arity of \(R\), \[\mathbb{C}_{\Phi}\models X(\mathbf{a})\longleftrightarrow\mathbb{C}_{0}\models X(\mathbf{a})\wedge R(\mathbf{a}).\]

## 5. Properties of the CSP template

**Lemma 5.1**.: _Every relation of \(\mathbb{C}_{\Phi}\) is first-order definable in \(\mathbb{C}_{0}\) and vice versa. Moreover, \(\operatorname{Aut}(\mathbb{C}_{\Phi})\cong\operatorname{Aut}(\mathbb{C}_{0})\), where the isomorphism is the identity mapping._

Proof.: All unary \((\tau\cup\sigma)\)-relations in \(\mathbb{C}_{0}\) are the same as in \(\mathbb{C}_{\Phi}\). For all \(R\in\tau\) of arity \(2\) and all \(\mathbf{a}\in C^{2}\), \[\mathbb{C}_{0}\models R(\mathbf{a})\longleftrightarrow\mathbb{C}_{\Phi}\models R(\mathbf{a})\vee\bigvee_{\tau_{1}\subseteq\tau,R\in\tau_{1}}S_{\tau_{1}}(\mathbf{a}).\] For all \(R\in\tau\) of arity 2, all \(X\in\sigma_{R}\), and all \(\mathbf{a}\in C^{2}\), \[\mathbb{C}_{0}\models X(\mathbf{a})\longleftrightarrow\mathbb{C}_{\Phi}\models X(\mathbf{a})\vee\bigvee_{\tau_{1}\subseteq\tau,R\not\in\tau_{1}}S_{\tau_{1}}(\mathbf{a}).\] This implies that expanding \(\mathbb{C}_{0}\) and \(\mathbb{C}_{\Phi}\) by all first-order definable relations gives the same structure. It is known that expanding by first-order definable relations does not change the automorphism group, so \(\operatorname{Aut}(\mathbb{C}_{\Phi})=\operatorname{Aut}(\mathbb{C}_{0})\).

**Lemma 5.2**.: _Let \(\Phi\) be a well-prepared \(\operatorname{MMSNP}_{2}\)\(\tau\)-sentence and let \(\mathbb{A}\) be a \(\tau\)-structure. Then the following are equivalent: (1) \(\mathbb{A}\models\Phi\), (2) \(\mathbb{A}\) homomorphically and injectively maps to \(\mathbb{C}_{\Phi}^{\tau}\), and (3) \(\mathbb{A}\) homomorphically maps to \(\mathbb{C}_{\Phi}^{\tau}\)._

Proof.: (2) \(\Rightarrow\) (3) Trivial.

(3) \(\Rightarrow\) (1) In \(\mathbb{C}_{\Phi}\), every element and every \(R\)-tuple has a unique associated \(\sigma\)-color. Any homomorphism \(h\colon\mathbb{A}\to\mathbb{C}_{\Phi}^{\tau}\) is therefore also a homomorphism w.r.t. a \(\sigma\)-expansion \(h\colon\mathbb{A}^{\sigma}\to\mathbb{C}_{\Phi}\), where one assigns a relation \(M\in\sigma_{\operatorname{mon}}\) to an element \(a\in A\) if \(h(a)\in M^{\mathbb{C}_{\Phi}}\) and the relation \(X\) to an \(R\)-tuple \((a_{1},\dots,a_{k})\) if \(\big{(}h(a_{1}),\dots,h(a_{k})\big{)}\in X^{\mathbb{C}_{\Phi}}\). Therefore, no \(\mathbb{F}\in\mathcal{F}\) maps homomorphically to \(\mathbb{A}^{\sigma}\), and thus \(\mathbb{A}\models\Phi\).

(1) \(\Rightarrow\) (2) Let \(\mathbb{A}^{\sigma}\) be a \(\sigma\)-expansion such that, for all \(\mathbb{F}\in\mathcal{F}\), \(\mathbb{F}\not\to\mathbb{A}^{\sigma}\). Then, by Theorem 4.1, there is an embedding \(\mathbb{A}^{\sigma}\hookrightarrow\mathbb{B}_{\mathcal{F}}^{\textsc{css}}\). Then, there is an injective homomorphism from \(\mathbb{A}^{\sigma}\) to \(\mathbb{B}_{\mathcal{F}}^{\textsc{mcc}}\).
Then, as \(\Phi\) is well-prepared, the \(\sigma\)-colors induce partitions of the elements of \(\mathbb{A}^{\sigma}\) and of the \(R\)-tuples of \(\mathbb{A}^{\sigma}\), which means that this injective homomorphism maps \(\mathbb{A}^{\sigma}\) into the substructure of \(\mathbb{B}_{\mathcal{F}}^{\textsc{mcc}}\) denoted by \(\mathbb{C}_{\Phi}\). Therefore, there is an injective homomorphism between their \(\tau\)-reducts.

**Lemma 5.3** (Lemmas 4.17, 4.18, and Corollary 4.19 in [1]).: _Let \(\Phi\) be a well-prepared \(\operatorname{MMSNP}_{2}\) sentence with colored obstruction set \(\mathcal{F}\). Then, \(\mathbb{B}_{\mathcal{F}}^{\textsc{css}},\mathbb{B}_{\mathcal{F}}^{\textsc{mcc}}\), and \(\mathbb{C}_{0}\) are 1-homogeneous._

**Lemma 5.4**.: _Let \(\Phi\) be a well-prepared \(\operatorname{MMSNP}_{2}\) sentence with colored obstruction set \(\mathcal{F}\). Then, \(\mathbb{C}_{\Phi}\) is 1-homogeneous._

Proof.: The structures \(\mathbb{C}_{0}\) and \(\mathbb{C}_{\Phi}\) have the same domain \(C\), and, for every \(a\in C\), the substructure of \(\mathbb{C}_{0}\) induced on \(a\) is isomorphic to the substructure of \(\mathbb{C}_{\Phi}\) induced on \(a\). By Lemma 5.1, \(\operatorname{Aut}(\mathbb{C}_{0})\cong\operatorname{Aut}(\mathbb{C}_{\Phi})\), so the structure \(\mathbb{C}_{\Phi}\) is 1-homogeneous.

**Definition 5.5**.: _A \((\tau\cup\sigma)\)-structure \(\mathbb{A}\) is tuple-homogeneous if, for all \(R\in\tau\) and all fully colored \(\mathbf{b},\mathbf{c}\in R^{\mathbb{A}}\), every isomorphism \(i\) between the substructures of \(\mathbb{A}\) induced on the elements of \(\mathbf{b}\) and \(\mathbf{c}\) extends to an automorphism \(\alpha\in\operatorname{Aut}(\mathbb{A})\)._

**Lemma 5.6** (Lemma 4.14 in [1]).: _Let \(\mathcal{F}\) be a finite family of finite structures. Then, for every \(k\in\mathbb{N}\), the orbits of \(k\)-tuples in \(\mathbb{B}_{\mathcal{F}}^{\textsc{css}}\) can be defined by \(\phi_{1}\wedge\phi_{2}\), where \(\phi_{1}\) is a primitive positive formula and \(\phi_{2}\) is a conjunction of negated atomic formulas._

**Lemma 5.7**.: _Let \(\Phi\) be a well-prepared \(\operatorname{MMSNP}_{2}\) sentence with colored obstruction set \(\mathcal{F}\). Then, \(\mathbb{B}_{\mathcal{F}}^{\textsc{css}}\) is tuple-homogeneous._

Proof.: Let \(\mathbf{x}\) and \(\mathbf{y}\) in \(R^{\mathbb{B}_{\mathcal{F}}^{\textsc{css}}}\) be two fully colored tuples that induce isomorphic substructures of \(\mathbb{B}_{\mathcal{F}}^{\textsc{css}}\). Let \(\psi_{1}\) and \(\psi_{2}\) be the definitions of their orbits given by Lemma 5.6. Let \(\mathbb{A}_{1}\) and \(\mathbb{A}_{2}\) be the structures induced on the tuples \(\mathbf{x},\mathbf{y}\) and, respectively, on the elements that witness the existentially quantified variables. If \(\mathbf{x}\) and \(\mathbf{y}\) are in different orbits, then \(\mathbb{B}_{\mathcal{F}}^{\textsc{css}}\not\models\psi_{1}\wedge\psi_{2}\). Denote by \(\mathbb{C}\) the structure obtained from \(\mathbb{A}_{1}\) and \(\mathbb{A}_{2}\) by identifying the elements of \(\mathbf{x}\) with the elements of \(\mathbf{y}\) componentwise. As \(\mathbb{B}_{\mathcal{F}}^{\textsc{css}}\not\models\psi_{1}\wedge\psi_{2}\), \(\mathbb{C}\) does not map homomorphically to \(\mathbb{B}_{\mathcal{F}}^{\textsc{css}}\), which contradicts \(\Phi\) being tuple-biconnected.

**Lemma 5.8**.: _Let \(\Phi\) be a well-prepared \(\mathrm{MMSNP}_{2}\) sentence with colored obstruction set \(\mathcal{F}\).
Then, \(\mathbb{B}^{\textsc{mcc}}_{\mathcal{F}},\mathbb{C}_{0}\), and \(\mathbb{C}_{\Phi}\) are tuple-homogeneous._

Proof.: Let \(g\colon\mathbb{B}^{\textsc{mcc}}_{\mathcal{F}}\to\mathbb{B}^{\textsc{css}}_{\mathcal{F}}\) and \(f\colon\mathbb{B}^{\textsc{css}}_{\mathcal{F}}\to\mathbb{B}^{\textsc{mcc}}_{\mathcal{F}}\) be injective homomorphisms and let \(\mathbf{u},\mathbf{v}\) be relational tuples in \(R^{\mathbb{B}^{\textsc{mcc}}_{\mathcal{F}}}\) with isomorphic induced substructures. As \(\mathbb{B}^{\textsc{mcc}}_{\mathcal{F}}\) is a core, \(g(\mathbf{u})\) and \(g(\mathbf{v})\) induce isomorphic substructures too, as do \(f(g(\mathbf{u}))\) and \(f(g(\mathbf{v}))\). Then, as \(\mathbb{B}^{\textsc{css}}_{\mathcal{F}}\) is tuple-homogeneous, there is an automorphism \(\alpha\) such that \(\alpha(g(\mathbf{u}))=g(\mathbf{v})\). As \(f\circ g\) and \(f\circ\alpha\circ g\) are embeddings and as \(\mathbb{B}^{\textsc{mcc}}_{\mathcal{F}}\) is model-complete, there are automorphisms \(\beta,\gamma\) such that \(\beta(\mathbf{u})=f(\alpha(g(\mathbf{u})))=\gamma(\mathbf{v})\). Then, \(\gamma^{-1}\circ\beta\) is the desired automorphism.

Let \(\mathbf{u},\mathbf{v}\) be two relational \(R\)-tuples of \(\mathbb{C}_{0}\) that induce isomorphic substructures. Then they also induce isomorphic substructures within \(\mathbb{B}^{\textsc{mcc}}_{\mathcal{F}}\). Then there is \(\alpha\in\mathrm{Aut}(\mathbb{B}^{\textsc{mcc}}_{\mathcal{F}})\) such that \(\alpha(\mathbf{u})=\mathbf{v}\). Then the restriction \(\alpha\big{|}_{C}\) is the desired automorphism of \(\mathbb{C}_{0}\).

If two \(R\)-tuples induce isomorphic substructures in \(\mathbb{C}_{\Phi}\), then they also induce isomorphic substructures in \(\mathbb{C}_{0}\); then there is an automorphism of \(\mathbb{C}_{0}\) mapping one to the other, and it is also an automorphism of \(\mathbb{C}_{\Phi}\).

**Lemma 5.9**.: _If \(\Phi\) is well-prepared, then there is a one-to-one correspondence between orbits of elements and relational \(\tau\)-tuples of \(\mathbb{C}_{\Phi}\) and colors of \(\sigma\)._

Proof.: As \(\Phi\) is well-prepared, for all \(R\in\tau\), no \(R\)-tuple can belong to distinct \(\sigma_{R}\) relations. Now, for some \(R\in\tau\) and some \(X\in\sigma_{R}\), consider two tuples \(\mathbf{a}\) and \(\mathbf{b}\) in \(X^{\mathbb{C}_{\Phi}}\). As \(X^{\mathbb{C}_{\Phi}}\subseteq R^{\mathbb{C}_{\Phi}}\), we have that \(\mathbf{a},\mathbf{b}\in R^{\mathbb{C}_{\Phi}}\). \(\Phi\) is injective, so both \(\mathbf{a}\) and \(\mathbf{b}\) are injective tuples. \(\Phi\) is pure, so the substructures of \(\mathbb{C}_{\Phi}\) induced by the elements of \(\mathbf{a}\) and of \(\mathbf{b}\) respectively do not contain any other \(\tau\)-tuples except for \(R(\mathbf{a})\) and \(R(\mathbf{b})\). Consequently, they do not contain any other \(\sigma_{R}\)-tuples, because \(\Phi\) is in \(\mathrm{MMSNP}_{2}\). \(\Phi\) is well-prepared, so the \(\sigma_{\mathrm{mon}}\)-relations assigned to the elements of \(\mathbf{a}\) and of \(\mathbf{b}\) depend only on \(X\), i.e., for each \(M\in\sigma_{\mathrm{mon}}\) and each \(i\in[2]\), \(a_{i}\in M^{\mathbb{C}_{\Phi}}\) if and only if \(b_{i}\in M^{\mathbb{C}_{\Phi}}\). This implies that the tuples \(\mathbf{a}\) and \(\mathbf{b}\) induce isomorphic substructures. As \(\mathbb{C}_{\Phi}\) is tuple-homogeneous, we conclude that \(\mathbf{a}\) and \(\mathbf{b}\) belong to the same orbit.
Similarly, the "biconnected" property implies the one-to-one correspondence between \(1\)-element orbits of \(\mathbb{C}_{\Phi}\) and \(\sigma_{\mathrm{mon}}\)-colors as in Lemma 4.20 in [1]. ## 6. Ramsey expansions For two structures \(\mathbb{A}\) and \(\mathbb{B}\), denote by \(\binom{\mathbb{B}}{\mathbb{A}}\) the set of all embeddings from \(\mathbb{A}\) to \(\mathbb{B}\). A homogeneous structure \(\mathbb{C}\) is _Ramsey_ if, for every two finite substructures \(\mathbb{A}\) and \(\mathbb{B}\) of \(\mathbb{C}\), every \(r\in\mathbb{N}\), and every mapping \(\chi\colon\binom{\mathbb{C}}{\mathbb{A}}\to[r]\), there exists \(e\in\binom{\mathbb{C}}{\mathbb{B}}\) such that \(\chi(e\circ f)=\chi(e\circ g)\) for all \(f,g\in\binom{\mathbb{B}}{\mathbb{A}}\). An \(\omega\)-categorical structure is _Ramsey_ if its expansion by all first-order definable relations is Ramsey. **Lemma 6.1** (Corollaries 5.7 and 5.9 in [1]).: _Let \(\mathcal{F}\) be a finite set of finite \((\tau\cup\sigma)\)-structures and let \(<\) be a binary relation. Then each of the structures \(\mathbb{B}^{\textsc{css}}_{\mathcal{F}},\mathbb{B}^{\textsc{mcc}}_{\mathcal{F}}, \mathbb{C}_{0}\) has a Ramsey \(\{<\}\)-expansion, where the relation \(<\) is a linear order._ Let \((\mathbb{C}_{\Phi},<)\) be a \(\{<\}\)-expansion of \(\mathbb{C}_{\Phi}\) such that \(<^{\mathbb{C}_{\Phi}}=<^{\mathbb{C}_{0}}\). **Lemma 6.2**.: \((\mathbb{C}_{\Phi},<)\) _is Ramsey._ Proof.: Kechris, Pestov, and Todorcevic [15] proved that an ordered \(\omega\)-categorical structure is Ramsey if and only if its automorphism is extremely amenable. As \(\mathrm{Aut}(\mathbb{C}_{\Phi})=\mathrm{Aut}(\mathbb{C}_{0})\) by Lemma 5.1, we also have that \(\mathrm{Aut}(\mathbb{C}_{\Phi},<)=\mathrm{Aut}(\mathbb{C}_{0},<)\). By Lemma 6.1, \((\mathbb{C}_{\Phi},<)\) is Ramsey. **Lemma 6.3**.: \((\mathbb{C}_{\Phi},<)\) _is tuple-homogeneous._ Proof.: By Lemma 5.8, \(\mathbb{C}_{\Phi}\) is tuple-homogeneous. \((\mathbb{Q},<)\) is homogeneous. By Lemma 5.4 in [1], for every two tuples \(\mathbf{a},\mathbf{b}\) of \((\mathbb{C}_{\Phi},<)\), there is an automorphism \(\alpha\) of \((\mathbb{C}_{\Phi},<)\) such that \(\alpha(\mathbf{a})=\mathbf{b}\) if and only if there are \(\beta_{1}\in\operatorname{Aut}(\mathbb{C}_{\Phi})\) and \(\beta_{2}\in\operatorname{Aut}(\mathbb{Q},<)\) such that \(\beta_{1}(\mathbf{a})=\beta_{2}(\mathbf{a})=\mathbf{b}\). Therefore, \((\mathbb{C}_{\Phi},<)\) is tuple-homogeneous. ### Imitating linear order Although the structure \(\mathbb{C}_{\Phi}\) has a one-to-one correspondence between \(\sigma\)-colors and orbits of elements and \(\tau\)-tuples and it has a \(\{<\}\)-expansion which is Ramsey, this one-to-one correspondence does not hold after adding the linear order. Therefore, this structure needs to be slightly modified in order to obtain the desired property. Denote by \(\Phi_{<}\) a well-prepared sentence that is obtained from \(\Phi\) as follows. For every \(R\in\tau\) of arity \(2\) and every \(X\in\sigma_{R}\), replace \(X\) with \(2\) new relation symbols \(X_{<},X_{>}\), each of them is associated with a linear ordering of a \(2\)-element set. Denote this new signature by \(\sigma_{<}\). Replace every negated conjunct that contains an atom \(X(\mathbf{x})\) with \(2\) conjuncts: one has \(X_{<}(\mathbf{x})\) at the place of \(X(\mathbf{x})\), and the other has \(X_{>}(\mathbf{x})\) correspondingly. Repeat this procedure until there are no \(X\)-atoms left for each \(X\in\sigma_{R}\) and each binary \(R\in\tau\). 
Notice that the resulting sentence \(\Phi_{<}\) is logically equivalent to \(\Phi\) and that it is also well-prepared. Denote by \(\mathcal{F}^{0}_{<}\) the colored obstruction set associated with \(\Phi_{<}\).

A sequence of relational tuples \(\mathbf{s}_{1},\ldots,\mathbf{s}_{\ell}\) is a _cycle_ if there are \(\ell\) elements \(s_{1},\ldots,s_{\ell}\) such that, for \(i\in[\ell-1]\), \(\mathbf{s}_{i}\cap\mathbf{s}_{i+1}=\{s_{i}\}\), and \(\mathbf{s}_{\ell}\cap\mathbf{s}_{1}=\{s_{\ell}\}\). Denote by \(\mathcal{C}_{\infty}\) the family of all possible cycles consisting of binary fully colored \(\tau\)-relational tuples, where each tuple is colored with one \(\sigma_{<}\) relation in such a way that either all \(\sigma_{<}\) relations have the "\(<\)" subscript or all of them have the "\(>\)" subscript. The lemma below introduces a class \(\mathcal{F}_{<}\) of forbidden \((\tau\cup\sigma_{<})\)-structures that will be needed later.

**Lemma 6.4**.: _Let \(\Phi\) be well-prepared, \(\mathcal{F}\) be its colored obstruction set, and \(\mathcal{F}_{<}:=\mathcal{F}^{0}_{<}\cup\mathcal{C}_{\infty}\). Then, for every finite \(\tau\)-structure \(\mathbb{A}\) the following are equivalent: (1) \(\mathbb{A}\models\Phi\), (2) \(\mathbb{A}\in\operatorname{Forb}\mathcal{F}\sigma\), and (3) \(\mathbb{A}\in\operatorname{Forb}\mathcal{F}_{<}\sigma_{<}\)._

Proof.: \((1)\Leftrightarrow(2)\) Follows from the definition of \(\operatorname{Forb}\mathcal{F}\sigma\) and from the fact that \(\Phi\) is well-prepared.

\((1)\Leftrightarrow(3)\) If \(\mathbb{A}\models\Phi\), then introduce an arbitrary linear ordering of the elements of \(\mathbb{A}\): \((\mathbb{A},<)\); if a tuple \(R(a,b)\) was colored with \(X\), then color it with \(X_{<}\) if \(a<b\), and with \(X_{>}\) if \(a>b\). As \(<\) is a linear ordering, no structure of \(\mathcal{C}_{\infty}\) maps to this \(\sigma_{<}\)-expansion of \(\mathbb{A}\), so \(\mathbb{A}\in\operatorname{Forb}\mathcal{F}_{<}\sigma_{<}\). Conversely, if \(\mathbb{A}\in\operatorname{Forb}\mathcal{F}_{<}\sigma_{<}\), then there is a \(\sigma_{<}\)-expansion \(\mathbb{A}^{\sigma_{<}}\) such that no structure of \(\mathcal{F}_{<}\) maps to it. Then the \(\sigma\)-expansion \(\mathbb{A}^{\sigma}\) that witnesses that \(\Phi\) holds in \(\mathbb{A}\) is obtained from \(\mathbb{A}^{\sigma_{<}}\) by replacing each binary relation \(X_{<}\) and \(X_{>}\) with \(X\).

Now construct the \(\omega\)-categorical \((\tau\cup\mu\cup\sigma_{<})\)-structure \(\mathbb{C}^{<}_{\Phi}\) that is associated with \(\mathcal{F}_{<}\). Consider the structure \((\mathbb{C}_{\Phi},<)\). The domain and the \((\tau\cup\mu)\)-relations of \(\mathbb{C}^{<}_{\Phi}\) are the same as in \(\mathbb{C}_{\Phi}\). However, the \(\sigma\)-relations are replaced by \(\sigma_{<}\)-relations as follows. For every binary \(R\in\tau\) and every \(X\in\sigma_{R}\), replace the relation \(X^{\mathbb{C}_{\Phi}}\) with two relations \(X^{\mathbb{C}^{<}_{\Phi}}_{<},X^{\mathbb{C}^{<}_{\Phi}}_{>}\): for every \((a,b)\in X^{\mathbb{C}_{\Phi}}\), put \((a,b)\in X^{\mathbb{C}^{<}_{\Phi}}_{<}\) if \(a<b\) and, otherwise, put \((a,b)\in X^{\mathbb{C}^{<}_{\Phi}}_{>}\).

**Lemma 6.5**.: \((\mathbb{C}^{<}_{\Phi},<)\) _is Ramsey and there is a one-to-one correspondence between its orbits and the colors of \(\sigma_{<}\)._

Proof.: By construction, \(\operatorname{Aut}(\mathbb{C}^{<}_{\Phi},<)=\operatorname{Aut}(\mathbb{C}_{\Phi},<)\). So, by Lemma 6.2, \((\mathbb{C}^{<}_{\Phi},<)\) is Ramsey.
If \(\mathbf{a}\) and \(\mathbf{b}\) both belong to some \(X\in\sigma_{<}\) in \((\mathbb{C}_{\Phi}^{<},<)\), then they are fully colored in \(\mathbb{C}_{\Phi}^{<}\). By the construction of \(\mathbb{C}_{\Phi}^{<}\), \(a_{1}<a_{2}\) if and only if \(b_{1}<b_{2}\). Moreover, by construction, \(\mathbf{a},\mathbf{b}\in X^{\mathbb{C}_{\Phi}}\). By Lemma 5.9, \(\mathbf{a}\) and \(\mathbf{b}\) induce isomorphic substructures in \(\mathbb{C}_{\Phi}\), so they also induce isomorphic substructures in \((\mathbb{C}_{\Phi}^{<},<)\). Then, because \(\operatorname{Aut}(\mathbb{C}_{\Phi}^{<},<)=\operatorname{Aut}(\mathbb{C}_{\Phi},<)\) and by Lemma 6.3, they belong to the same orbit.

The following theorem explains why the Ramsey property is relevant.

**Theorem 6.6** ([1]).: _Let \(\mathbb{A}\) be a countable homogeneous \(\tau\)-structure which is Ramsey and let \(\mathbb{B}\) be \(\omega\)-categorical. Then, for any mapping \(f\colon A\to B\), there exists a mapping in_ \[\{\beta\circ f\circ\alpha\mid\alpha\in\operatorname{Aut}(\mathbb{A}),\beta\in\operatorname{Aut}(\mathbb{B})\}\] _that is canonical from \(\mathbb{A}\) to \(\mathbb{B}\). In particular, if there exists a homomorphism from \(\mathbb{A}\) to \(\mathbb{B}\), then there also exists a canonical homomorphism from \(\mathbb{A}\) to \(\mathbb{B}\)._

## 7. Recoloring and containment

Let \(\mathcal{F}\) and \(\mathcal{G}\) be two families of finite \((\tau\cup\sigma_{1})\)-structures and finite \((\tau\cup\sigma_{2})\)-structures respectively.

**Definition 7.1**.: _A recoloring (from \(\mathcal{F}\) to \(\mathcal{G}\)) is given by a mapping \(r\colon\sigma_{1}\to\sigma_{2}\) such that, for every \((\tau\cup\sigma_{1})\)-structure \(\mathbb{A}\), if, for some \(\mathbb{G}\in\mathcal{G}\), there is a homomorphism \(\mathbb{G}\to r(\mathbb{A})\), then, for some \(\mathbb{F}\in\mathcal{F}\), there is a homomorphism \(\mathbb{F}\to\mathbb{A}\)._

Let \(\Phi\) and \(\Psi\) be two well-prepared MMSNP\({}_{2}\) sentences and \(\mathcal{F}_{<}\), \(\mathcal{G}_{<}\) be the corresponding classes of forbidden \((\tau\cup\sigma_{\Phi})\)-structures and \((\tau\cup\sigma_{\Psi})\)-structures respectively.

**Lemma 7.2**.: _If there is a recoloring \(r\) from \(\mathcal{F}_{<}\) to \(\mathcal{G}_{<}\), then \(\Phi\) is contained in \(\Psi\)._

Proof.: Let \(\mathbb{A}\) be a "yes" instance of \(\Phi\). The goal is to show that it is also a "yes" instance of \(\Psi\). By assumption, there exists a partitioned \(\sigma_{\Phi}\)-expansion \(\mathbb{A}^{\sigma_{\Phi}}\) such that no \(\mathbb{F}\in\mathcal{F}_{<}\) maps to \(\mathbb{A}^{\sigma_{\Phi}}\). Then, as \(r\) is a recoloring, no structure of \(\mathcal{G}_{<}\) maps homomorphically to \(r(\mathbb{A}^{\sigma_{\Phi}})\). Therefore, \(\mathbb{A}\) is a "yes" instance of \(\Psi\).

**Lemma 7.3**.: _If \(\Phi\) is contained in \(\Psi\), then there is a recoloring from \(\mathcal{F}_{<}\) to \(\mathcal{G}_{<}\)._

Proof.: By Lemmas 3.10 and 5.2, there is a homomorphism \(h\colon\mathbb{C}_{\Phi}^{\tau}\to\mathbb{C}_{\Psi}^{\tau}\). By Lemma 6.5 and Theorem 6.6, \(h\) can be chosen canonical with respect to the orbits of \((\mathbb{C}_{\Phi}^{<},<)\) and \((\mathbb{C}_{\Psi}^{<},<)\), because these structures are Ramsey. By Lemma 6.5, there is a one-to-one correspondence between the relation symbols of \(\sigma_{\Phi}\) and the orbits of \(\sigma_{\Phi}\)-tuples of \((\mathbb{C}_{\Phi}^{<},<)\), and similarly for \((\mathbb{C}_{\Psi}^{<},<)\).
Therefore, \(h\) induces a well-defined mapping \(r\) from \(\sigma_{\Phi}\) to \(\sigma_{\Psi}\) as follows: \(r(X)=Y\) if, for any tuple \(\mathbf{a}\) such that \(\mathbb{C}_{\Phi}^{<}\models X(\mathbf{a})\), we have that \(\mathbb{C}_{\Psi}^{<}\models Y(h(\mathbf{a}))\). This mapping is clearly a recoloring.

**Theorem 7.4**.: _For any finite relational signature \(\tau\) with symbols of arity at most 2 and any two \(\tau\)-sentences \(\Phi\) and \(\Psi\) in_ GMSNP_, it is decidable whether \(\Phi\) is contained in \(\Psi\)._

Proof.: By Lemmas 3.2 and 3.5, one can transform \(\Phi\) and \(\Psi\) into hom-closed injective pure sentences \(\Phi_{1}\) and \(\Psi_{1}\) such that \(\Phi\subseteq\Psi\) if and only if \(\Phi_{1}\subseteq\Psi_{1}\). By Theorem 3.7, one can transform \(\Phi_{1}\) and \(\Psi_{1}\) into logically equivalent MMSNP\({}_{2}\) sentences \(\Phi_{2}\) and \(\Psi_{2}\) that can be written as disjunctions of connected MMSNP\({}_{2}\) sentences \(\Phi_{2}^{1}\vee\dots\vee\Phi_{2}^{\ell}\) and \(\Psi_{2}^{1}\vee\dots\vee\Psi_{2}^{m}\). By Corollary 3.12, \(\Phi_{2}\subseteq\Psi_{2}\) if and only if, for every \(i\in[\ell]\) there is \(j\in[m]\) such that \(\Phi_{2}^{i}\subseteq\Psi_{2}^{j}\). By Lemma 3.19, each such \(\Phi_{2}^{i}\) and \(\Psi_{2}^{j}\) can be transformed into logically equivalent well-prepared sentences \(\Phi_{3}\) and \(\Psi_{3}\) in MMSNP\({}_{2}\). Let \(\mathcal{F}_{<}\) and \(\mathcal{G}_{<}\) be the corresponding families of structures with existential signatures \(\sigma_{\Phi}\) and \(\sigma_{\Psi}\). Denote by \(\mathcal{C}_{\infty}^{\Phi}\) and \(\mathcal{C}_{\infty}^{\Psi}\) the subfamilies consisting of cycles. Notice that a mapping \(r\colon\sigma_{\Phi}\to\sigma_{\Psi}\) is a recoloring from \(\mathcal{C}_{\infty}^{\Phi}\) to \(\mathcal{C}_{\infty}^{\Psi}\) if and only if, for any two \(X,Y\in\sigma_{\Phi}\), they have the same subscript (either both \(<\) or both \(>\)) if and only if \(r(X)\) and \(r(Y)\) have the same subscript. Therefore, even though the classes \(\mathcal{C}_{\infty}^{\Phi}\) and \(\mathcal{C}_{\infty}^{\Psi}\) are countably infinite, it takes finite time to check whether a mapping \(r\) is a recoloring between them. Finally, one has to consider every possible mapping from \(\sigma_{\Phi}\) to \(\sigma_{\Psi}\) that is a recoloring from \(\mathcal{C}_{\infty}^{\Phi}\) to \(\mathcal{C}_{\infty}^{\Psi}\) and to check whether it is a recoloring from \(\mathcal{F}_{<}^{0}\) to \(\mathcal{G}_{<}^{0}\). This can be done in finite time.

## 8. Conclusion

This paper provides a proof that containment is decidable for binary GMSNP. The obvious direction for further research is, of course, to extend it to arbitrary signatures. The proof of the main result relies on results achieved for \(\omega\)-categorical structures. Another interesting direction is to give an alternative proof of the decidability of containment without dealing with infinite structures.
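To make the finite search in the proof of Theorem 7.4 concrete, here is a small sketch in Python. The encodings are hypothetical (structures as dictionaries from relation names to sets of tuples; subscripts read off as the last character of a color name), and only the explicitly described ingredients are implemented: a brute-force homomorphism test, which is the building block for checking a candidate \(r\) against the finite families \(\mathcal{F}_{<}^{0}\) and \(\mathcal{G}_{<}^{0}\), and the subscript criterion that filters out the mappings failing on the cycle families \(\mathcal{C}_{\infty}^{\Phi},\mathcal{C}_{\infty}^{\Psi}\).

```python
from itertools import product

def homomorphisms(A, B):
    """Brute force: yield every map dom(A) -> dom(B) preserving all relations.
    A structure is a dict {relation_name: set of tuples}; the domain is read
    off from the tuples (isolated elements are ignored in this sketch)."""
    dom_A = sorted({x for ts in A.values() for t in ts for x in t})
    dom_B = sorted({x for ts in B.values() for t in ts for x in t})
    for values in product(dom_B, repeat=len(dom_A)):
        h = dict(zip(dom_A, values))
        if all(tuple(h[x] for x in t) in B.get(rel, set())
               for rel, ts in A.items() for t in ts):
            yield h

# sanity check: a directed edge maps into a directed 3-cycle in 3 ways
edge = {"E": {(0, 1)}}
cycle3 = {"E": {(0, 1), (1, 2), (2, 0)}}
print(sum(1 for _ in homomorphisms(edge, cycle3)))   # prints 3

def subscript_consistent(r):
    """The cycle-family criterion from the proof: X and Y carry the same
    subscript exactly when r(X) and r(Y) do (subscript = last character)."""
    return all((x[-1] == y[-1]) == (r[x][-1] == r[y][-1])
               for x in r for y in r)

sigma_phi, sigma_psi = ["X<", "X>"], ["Y<", "Y>", "Z<", "Z>"]
candidates = [dict(zip(sigma_phi, img))
              for img in product(sigma_psi, repeat=len(sigma_phi))]
survivors = [r for r in candidates if subscript_consistent(r)]
print(len(candidates), len(survivors))   # 16 candidate maps, 8 survive
```

Each surviving mapping would then be tested, in finite time, for the recoloring property on the finite obstruction sets, using homomorphism tests of the above kind.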
2307.15836
Slipping and rolling on a rough accelerating surface
The two-dimensional motion of an object on a moving rough horizontal plane is investigated. Two cases are studied: the plane having a translational acceleration, and a rotating plane. For the first case, the motions of a point particle and a sphere are studied, and it is shown that the solution to the latter problem can be expressed in terms of the solution to the former one. Examples of constant acceleration and periodic acceleration along a fixed line, and specifically sinusoidal acceleration along a fixed line, are studied in more detail. Also a situation is investigated where the friction is anisotropic, that is the friction coefficient depends on the direction of the velocity. In this situation, there may be stick-slip motions, and these are investigated in detail. For the second case, the motion of a point particle on a rough turntable is investigated.
Mohammad Khorrami, Amir Aghamohammadi, Cina Aghamohammadi
2023-07-28T23:07:00Z
http://arxiv.org/abs/2307.15836v1
###### Abstract

The two-dimensional motion of an object on a moving rough horizontal plane is investigated. Two cases are studied: the plane having a translational acceleration, and a rotating plane. For the first case, the motions of a point particle and a sphere are studied, and it is shown that the solution to the latter problem can be expressed in terms of the solution to the former one. Examples of constant acceleration and periodic acceleration along a fixed line, and specifically sinusoidal acceleration along a fixed line, are studied in more detail. Also a situation is investigated where the friction is anisotropic, that is, the friction coefficient depends on the direction of the velocity. In this situation, there may be stick-slip motions, and these are investigated in detail. For the second case, the motion of a point particle on a rough turntable is investigated.

**Slipping and rolling on a rough accelerating surface**

M. Khorrami\({}^{1,}\)*, A. Aghamohammadi\({}^{1}\), C. Aghamohammadi\({}^{2}\)

Footnote *: Corresponding Author E-mail: [email protected]

\({}^{1}\) Department of Fundamental Physics, Faculty of Physics, Alzahra University, Tehran, Iran

\({}^{2}\) Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA

**Keywords:** slipping, rolling, friction, inertial force

## 1 Introduction

The one-dimensional motion of a sliding point particle on a rough horizontal or an inclined plane is among the standard pedagogical problems in mechanics [1, 2]. In [3, 4], the two-dimensional motion of a particle sliding on an inclined plane has been studied for a special choice of the friction coefficient. Although friction theories have been developed for centuries and numerous situations have been studied in detail, there are still many unanswered questions, both in the framework of physics and in mechanical engineering. Among these are extensions to two dimensions without any restriction on the geometry or friction, replacing the particle with an extended object, and adding an acceleration (translational or rotational) to the substrate. The two-dimensional motion of a particle sliding on a rough inclined plane has been investigated in [5]. In [6], the motion of a particle on an arbitrary surface but confined to a fixed vertical plane has been investigated. The tangential motion at the contact of two solid objects has been studied in [7], where it was shown that the friction force and the torque are inherently coupled. The example studied there is a disk sliding and spinning on a horizontal flat surface. The problem becomes easier when the moving object is a symmetric extended one; for example, a two-dimensional symmetric object such as a disk [8, 9, 10] or a hoop [10, 12], or a three-dimensional symmetric object such as a sphere or a cylinder [13, 14]. Ref. [15] is among the relatively old books in which some such problems have been discussed. The following works, and the references therein, provide some other examples. The dynamics of a steel disc spinning on a horizontal rough surface has been investigated both experimentally and theoretically in [16]. In [17], the two-dimensional motion of a generally non-circular non-uniform cylinder on a flat horizontal surface has been addressed. In [5], the equation of motion for a sphere on a rough inclined plane has been solved, using the solution of the equation of motion for a point particle sliding on the same inclined plane with a different friction coefficient.
In [18], the motion of a point particle sliding on a turntable has been studied. There are some other sources on the motion of a sphere rolling on a turntable; examples are [19, 20, 21, 22]. Some of these models have found applications as well. For example, in [23] the results of [5] have been used for the study and analysis of granular materials, including the study of rotation phenomena using particle methods.

Motions influenced by dry friction can also result in the stick-slip phenomenon. There are two modes in the stick-slip phenomenon: the stick mode occurs when there is no relative motion between the two objects in contact, and the slip mode occurs when there is relative motion. This phenomenon is frequently seen all around us. Examples are squeaking doors, and earthquakes taking place during periods of rapid slip. Stick-slip phenomena can also produce sounds through induced vibrations; examples are drawing chalk on a blackboard, or moving a violin bow. Stick-slip phenomena are also important in mechanical engineering, and many studies have been devoted to them. Examples are [24, 25, 26] and references therein. Typically, in the stick-slip phenomenon, the coefficient of static friction between the involved surfaces is greater than the coefficient of kinetic friction. But in the situation studied here this is not the case. Here a situation is studied in which two solids are directly in contact, without an intermediate lubricating fluid layer. Friction is taken to be proportional to the normal force, with the same value for the static and kinetic coefficients. Dry friction resists the relative motion of the two objects in contact.

More specifically, here the two-dimensional motion of an object on an accelerating rough horizontal plane is investigated. The plane's acceleration may be translational or rotational (produced by a turntable, for example). This paper has essentially two parts. In the first part, sections 2 and 3, the plane has a translational acceleration and the motion of a point particle or a homogeneous sphere is studied. It is shown that the solution to the latter problem can be expressed in terms of the solution to the former. Examples of constant acceleration and periodic acceleration along a fixed line, and specifically sinusoidal acceleration along a fixed line, are studied in more detail. The motion on an inclined plane is equivalent to the motion on a surface with constant acceleration, so the results obtained here for constant acceleration are expected to be related to those of [5]. But this does not apply to the results corresponding to a time-dependent acceleration. In the case of sinusoidal acceleration along a fixed line, there may be stick-slip motions, which are investigated in detail. Also a situation is investigated where the friction is anisotropic, that is, the friction coefficient depends on the direction of the velocity. This leads to a much richer dynamical behavior. The second part of the article, section 4, is on the motion of a point particle on a rough turntable. Here the evolution equation is reduced to a set of three coupled first-order differential equations. The large-time behavior of the system is studied in more detail, and some results are obtained about the dependence of the large-time behavior on the initial conditions; specifically, which initial conditions result in final rest and which result in perpetual motion. Finally, section 5 is devoted to the conclusion.
## 2 A point particle sliding on a rough accelerating surface

Consider a horizontal rough plane that is being pulled with the acceleration \(\boldsymbol{A}\). A particle of mass \(m\) moves on this plane. There is friction between the particle and the plane, with coefficient \(\mu\), which is assumed to be the same for static and kinetic friction. The equation of motion of the particle, when it is not at rest, in the non-inertial frame of the accelerating plane is \[\frac{\mathrm{d}\,\boldsymbol{v}}{\mathrm{d}\,t}=-\boldsymbol{A}-\mu\,g\,\frac{\boldsymbol{v}}{v}, \tag{1}\] where \(\boldsymbol{v}\) is the particle's velocity, \(g\) is the acceleration of gravity, and \(|\boldsymbol{v}|\) is denoted by \(v\). Taking the scalar product of this equation with \((\boldsymbol{v}/v)\), one arrives at \[\frac{\mathrm{d}\,v}{\mathrm{d}\,t}=-\boldsymbol{A}\cdot\frac{\boldsymbol{v}}{v}-\mu\,g. \tag{2}\] This time evolution has some general properties, illustrated numerically after this list:

* If the maximum of \(|\boldsymbol{A}|\) is less than \((\mu\,g)\), the right-hand side of (2) is negative and the particle's speed decreases with time, so that the particle will be at rest after a finite time.
* If for any time \(t\) there is a larger time at which \(|\boldsymbol{A}|\) is more than \((\mu\,g)\), then the particle cannot remain at rest indefinitely.
* If the direction of \(\boldsymbol{A}\) is fixed and \(|\boldsymbol{A}|\) is always larger than \((\mu\,g)\), then at sufficiently large times \(v\) is increasing with time.
* If the direction of \(\boldsymbol{A}\) is fixed, then \(|\boldsymbol{v}_{\perp}|\) decreases with time, where \(\boldsymbol{v}_{\perp}\) is the part of \(\boldsymbol{v}\) which is perpendicular to \(\boldsymbol{A}\). If \(|\boldsymbol{A}|\) has an upper bound, then \(\boldsymbol{v}_{\perp}\) tends to zero at large times, so that the motion becomes essentially one-dimensional.
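As a quick numerical illustration of the first of these properties, the following sketch (Python; forward Euler with an ad hoc step and stopping threshold, and parameter values chosen only for illustration) integrates equation (1) for a constant \(\boldsymbol{A}\) with \(|\boldsymbol{A}|<\mu\,g\).

```python
import numpy as np

mu_g = 1.0
A = np.array([-0.5, 0.0])        # |A| = 0.5 < mu*g, so the particle must stop
v = np.array([1.0, 2.0])         # v_perp here is v_y; it decays monotonically
t, dt = 0.0, 1e-4
while np.linalg.norm(v) > 1e-3:  # crude finite-time stopping criterion
    speed = np.linalg.norm(v)
    v = v + dt * (-A - mu_g * v / speed)   # equation (1)
    t += dt
print(f"the particle stops near t = {t:.2f}")
```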
### 2.1 Constant acceleration

A special case is when the acceleration is a constant vector. One can choose the axes so that \[\mathbf{A}=-A\,\hat{\mathbf{x}}, \tag{3}\] where \(A\) is positive. Denoting the angle of \(\mathbf{v}\) with the \(x\) axis by \(\theta\), \[\mathbf{v}=v\,(\hat{\mathbf{x}}\,\cos\theta+\hat{\mathbf{y}}\,\sin\theta), \tag{4}\] where \((x,y)\) are Cartesian coordinates, and defining \(\lambda\) through \[\lambda=\frac{\mu\,g}{A}, \tag{5}\] equation (1) becomes \[\frac{\mathrm{d}\,(v\,\cos\theta)}{\mathrm{d}\,t}=A\,(1-\lambda\,\cos\theta). \tag{6}\] \[\frac{\mathrm{d}\,(v\,\sin\theta)}{\mathrm{d}\,t}=-A\,\lambda\,\sin\theta. \tag{7}\] Eliminating \(t\), one arrives at \[\frac{\mathrm{d}\,(v\,\sin\theta)}{\mathrm{d}\,(v\,\cos\theta)}=-\frac{\lambda\,\sin\theta}{1-\lambda\,\cos\theta}. \tag{8}\] So, \[\frac{\mathrm{d}\,v}{v}=\frac{\lambda-\cos\theta}{\sin\theta}\,\mathrm{d}\,\theta, \tag{9}\] resulting in \[v\,(\sin\theta)\,\left(\tan\frac{\theta}{2}\right)^{-\lambda}=\text{constant}, \tag{10}\] or, \[v=v_{0}\,\frac{\sin\theta_{0}}{\sin\theta}\,\left(\frac{\tan\frac{\theta}{2}}{\tan\frac{\theta_{0}}{2}}\right)^{\lambda}. \tag{11}\] Putting this in equation (7), \[A\,\mathrm{d}\,t=-\frac{v_{0}\,\sin\theta_{0}}{\left(\tan\frac{\theta_{0}}{2}\right)^{\lambda}}\,\frac{\mathrm{d}\left[\left(\tan\frac{\theta}{2}\right)^{\lambda}\right]}{\lambda\,\sin\theta}, \tag{12}\] resulting in \[t=\frac{v_{0}\left(\lambda+\cos\theta_{0}\right)}{A\left(1-\lambda^{2}\right)}\,\left[\frac{\left(\lambda+\cos\theta\right)\,\sin\theta_{0}}{\left(\lambda+\cos\theta_{0}\right)\,\sin\theta}\,\left(\frac{\tan\frac{\theta}{2}}{\tan\frac{\theta_{0}}{2}}\right)^{\lambda}-1\right]. \tag{13}\] Using (11) and (13), one arrives at \[v=\frac{\left(1-\lambda^{2}\right)A\,t+v_{0}\left(\lambda+\cos\theta_{0}\right)}{\left(\lambda+\cos\theta\right)}. \tag{14}\] Depending on the value of \(\lambda\), different behaviors occur:

* \(\lambda=0\). The surface is frictionless. The \(y\)-component of the particle's velocity is constant, and the \(x\)-component of the particle's velocity grows linearly with time: \[v_{x}=v_{0\,x}+A\,t. \tag{15}\] \[v_{y}=v_{0\,y}. \tag{16}\] The particle's trajectory is generally a parabola.
* \(\lambda\leq 1\). The friction is not large enough to keep the particle at rest. At large times, \(\theta\) tends to zero, and the particle slides along the plane's acceleration. For the special case \(\lambda=1\), \[v\left(1+\cos\theta\right)=v_{0}\,\left(1+\cos\theta_{0}\right). \tag{17}\] At large times the particle's velocity tends to a constant: \[v_{x}(\infty)=\frac{v_{0}\,\left(1+\cos\theta_{0}\right)}{2}. \tag{18}\] \[v_{y}(\infty)=0. \tag{19}\]
* \(\lambda>1\). The friction is large enough to prevent the particle from sliding indefinitely. The particle eventually stops, at the time \(T\): \[T=\frac{v_{0}\left(\lambda+\cos\theta_{0}\right)}{A\left(\lambda^{2}-1\right)}. \tag{20}\] And as this time is approached, \(\theta\) tends to zero, so the direction of the particle's velocity approaches the \(x\) axis.

These behaviors can be visualized in figure 1, which shows the phase portrait of \(\left(v_{x},v_{y}\right)\) for different values of \(\lambda\). For \(\lambda\leq 1\), the friction is not large enough to prevent the particle from sliding: at large times, the particle slides along the \(x\) direction (the plane's acceleration). For \(\lambda>1\), the friction is large enough and the particle eventually comes to rest, after a finite time. At this point, the direction of the velocity tends to the \(x\) axis. These results are consistent with those of ref. [5].

Figure 1: The velocity phase portrait for different values of \(\lambda\). For \(\lambda\leq 1\), the friction is not large enough to prevent the particle from sliding. At large times, the particle slides along the plane's acceleration. For \(\lambda>1\), the friction is large enough and the particle eventually comes to rest, in a finite time.

### 2.2 Sinusoidal linear acceleration

Consider a case where the acceleration is sinusoidal and along a fixed line. The axes are chosen so that \[\boldsymbol{A}=-A_{0}\,\sin(\varpi\,t)\,\hat{\boldsymbol{x}}. \tag{21}\] At large times, the \(y\)-component of the velocity vanishes. If \(A_{0}\) (which is taken to be positive) is smaller than \((\mu\,g)\), \(v_{x}\) vanishes at large times as well. Here the case is studied in which \(A_{0}\) is larger than \((\mu\,g)\), so \(v_{x}\) does not tend to zero at large times. The dimensionless parameters \(\lambda_{0}\), \(\phi\), and \(\mathfrak{w}\) are defined through \[\lambda_{0}=\frac{\mu\,g}{A_{0}}. \tag{22}\] \[\phi=\varpi\,t. \tag{23}\] \[\mathfrak{w}=\frac{\varpi\,v_{x}}{A_{0}}. \tag{24}\] The assumption that \(A_{0}\) is larger than \((\mu\,g)\) means that \[\lambda_{0}<1. \tag{25}\] At large times, \(v_{y}\) vanishes. If initially \(v_{y}\) is much larger than \(v_{x}\), then initially the friction is approximately along the \(y\) axis, and is approximately constant. That produces a constant acceleration along the \(y\) axis, which decreases \(v_{y}\) linearly with time. But at some time \(v_{y}\) becomes negligible compared to \(v_{x}\). After this, the friction is nearly along the \(x\) axis, with a small \(y\)-component which is proportional to \(v_{y}\). This results in a slower decrease of \(v_{y}\), which is exponential in time.
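This two-stage decay of \(v_{y}\) can be seen in a crude simulation of equation (1). The sketch below (Python, forward Euler; the parameter values, with \(\lambda_{0}=0.5\), are illustrative only, and static sticking is not modeled in this rough sketch) prints snapshots of \(v_{x}\) and \(v_{y}\).

```python
import numpy as np

mu_g, A0, w = 1.0, 2.0, 1.0               # lambda_0 = mu*g/A0 = 0.5
v, dt = np.array([0.1, 5.0]), 1e-3        # v_y starts much larger than v_x
for step in range(40001):
    t = step * dt
    speed = np.linalg.norm(v)
    a = np.array([A0 * np.sin(w * t), 0.0])   # this is -A, along the x axis
    if speed > 1e-9:
        a = a - mu_g * v / speed              # kinetic friction, eq. (1)
    if step % 5000 == 0:
        print(f"t = {t:5.1f}   v_x = {v[0]:+.3f}   v_y = {v[1]:.4f}")
    v = v + dt * a
```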
Regarding \(v_{x}\), its large-time behavior does not depend on the initial value of \(v_{y}\), but its transient behavior does. Figure 2 shows this behavior for several initial values of \(v_{y}\).

Figure 2: At large times, the velocity becomes parallel to the vector \(\mathbf{A}\) (here parallel to the \(x\) axis). The large-time behavior of \(v_{x}\) does not depend on the initial value of \(v_{y}\), but the transient behavior of \(v_{x}\) does. The figure shows the time dependence of \(v_{x}\) for a fixed initial value of \(v_{x}\) and several initial values of \(v_{y}\).

The large-time equation for \(v_{x}\), when it is not zero, is rewritten as \[\frac{\mathrm{d}\,\mathfrak{w}}{\mathrm{d}\,\phi}=\sin\phi-\lambda_{0}\,\frac{\mathfrak{w}}{|\mathfrak{w}|}. \tag{26}\] \(|\sin\phi|\) becomes equal to \(\lambda_{0}\) at four points in each period. For \(\phi\) in \([0,2\,\pi]\), these points are denoted by \(\phi_{1}\), \(\phi_{2}\), \(\phi_{3}\), and \(\phi_{4}\): \[\phi_{1}=\sin^{-1}\lambda_{0}. \tag{27}\] \[\phi_{2}=\pi-\sin^{-1}\lambda_{0}. \tag{28}\] \[\phi_{3}=\pi+\sin^{-1}\lambda_{0}. \tag{29}\] \[\phi_{4}=2\,\pi-\sin^{-1}\lambda_{0}. \tag{30}\] If there are time intervals in which \(\mathfrak{w}\) is zero, then those time intervals would end at \(\phi_{1}\) or \(\phi_{3}\) (in the first period). Assuming that \(\mathfrak{w}\) is zero at \(\phi_{1}\), one arrives at the following expression for \(\mathfrak{w}\), for the interval in which \(\mathfrak{w}\) is positive: \[\mathfrak{w}=-\lambda_{0}\,(\phi-\phi_{1})+\cos\phi_{1}-\cos\phi. \tag{31}\] This is valid for \(\phi\) in \([\phi_{1},\tilde{\phi}_{2}]\), where \(\tilde{\phi}_{2}\) is the smallest value larger than \(\phi_{1}\) which satisfies
So whatever the friction be in the interval \([\tilde{\phi}_{2},\phi+2\,\pi]\), the integral of the frictional acceleration from \(\phi_{1}\) to \((\phi_{1}+2\,\pi)\) is negative, and as the integral of \(A\) in that interval is zero, \([\mathfrak{w}(\phi_{1}+2\,\pi)]\) would be negative. This means that if (36) is not satisfied, then there is no periodic solution for \(\mathfrak{w}\) which vanishes on some interval. (\(\mathfrak{w}\) still does vanish at some points, but not on any interval.) The friction is simply too small to do so. In that case, the large-time behavior of \(\mathfrak{w}\) is still periodic, but with the following form. \[\mathfrak{w}=\begin{cases}-\lambda_{0}\left(\phi-\tilde{\phi}_{1}\right)+\cos \tilde{\phi}_{1}-\cos\phi,&\tilde{\phi}_{1}<\phi<\tilde{\phi}_{1}+\pi\\ \lambda_{0}\left(\phi-\tilde{\phi}_{1}-\pi\right)-\cos\tilde{\phi}_{1}-\cos \phi,&\tilde{\phi}_{1}+\pi<\phi<\tilde{\phi}_{1}+2\,\pi\end{cases}, \tag{38}\] where \[\tilde{\phi}_{1}=\cos^{-1}\frac{\pi\,\lambda_{0}}{2}. \tag{39}\] And it is seen that when this behavior occurs, for which \[\lambda_{0}<\lambda_{\rm c}, \tag{40}\] then \[\tilde{\phi}_{1}>\phi_{1}. \tag{41}\] To summarize, the qualitative large-time behavior of \(\mathfrak{w}\) (hence \(v_{x}\)) depends on \(\lambda_{0}\): * \(\lambda_{0}<\lambda_{\rm c}\). In this small-friction situation, at each period \(v_{x}\) vanishes at exactly two points. * \(\lambda_{\rm c}<\lambda_{0}<1\). In this large-friction situation, each period contains two time intervals when \(v_{x}\) is zero. * \(1<\lambda_{0}\). In this very-large-friction situation, \(v_{x}\) is identically zero. Figures 3 and 4 show examples of large-time behavior of \(v_{x}\) versus time, respectively. ### More general periodic linear acceleration The arguments of subsection 2.2 can qualitatively be repeated for the case the acceleration is periodic. Consider a function \(h\) substituting the sine in the right hand side of (26). It is assumed that \(h\) is periodic with the period \((2\,\pi)\), and the maximum of \(|h|\) is \(1\). The aim is to study the large-time behavior of \(\mathfrak{w}\). If \(\lambda_{0}\) is larger than \(1\) (very large friction), the particle will always be at rest. For a generic \(h\), it is expected that when \(\lambda_{0}\) is slightly less than \(1\) (large friction) the particle is at rest for a big fraction of time, and when \(\lambda_{0}\) is very small (small friction) the velocity doesn't vanish on some interval. So there should be a \(\lambda_{\rm c}\) determining the boundary of large and small friction. The actual situation could be more complicated. There could be more that one value of \(\lambda\) at which such change-of-behaviors occur: There could be cases when the number of zero-velocity intervals in each period changes. As a simple no-so-generic example, consider \[h(\phi) =\begin{cases}1,&0<\phi<\eta\,\pi\\ 0,&\eta\,\pi<\phi<\pi\end{cases}. \tag{42}\] \[h(\pi+\phi) =-h(\phi). \tag{43}\] Where \(\eta\) is a constant between \(0\) and \(1\). It is seen that if \(\lambda_{0}\) is less than \(1\) but not very much less than \(1\), then \[\mathfrak{w} =\begin{cases}(1-\lambda_{0})\,\phi,&0<\phi<\eta\,\pi\\ (1-\lambda_{0})\,\eta\,\pi-\lambda_{0}\,(\phi-\eta\,\pi),&\eta\,\pi<\phi<(\eta \,\pi/\lambda_{0})\.\\ 0,&(\eta\,\pi/\lambda_{0})<\phi<\pi\end{cases} \tag{44}\] \[\mathfrak{w}(\pi+\phi) =-\mathfrak{w}(\phi). 
### 2.3 More general periodic linear acceleration

The arguments of subsection 2.2 can qualitatively be repeated for the case where the acceleration is periodic. Consider a function \(h\) substituting the sine on the right-hand side of (26). It is assumed that \(h\) is periodic with the period \((2\,\pi)\), and that the maximum of \(|h|\) is \(1\). The aim is to study the large-time behavior of \(\mathfrak{w}\). If \(\lambda_{0}\) is larger than \(1\) (very large friction), the particle will always be at rest. For a generic \(h\), it is expected that when \(\lambda_{0}\) is slightly less than \(1\) (large friction) the particle is at rest for a large fraction of time, and when \(\lambda_{0}\) is very small (small friction) the velocity does not vanish on any interval. So there should be a \(\lambda_{\rm c}\) determining the boundary of large and small friction. The actual situation could be more complicated: there could be more than one value of \(\lambda_{0}\) at which such changes of behavior occur, as there could be cases where the number of zero-velocity intervals in each period changes. As a simple not-so-generic example, consider \[h(\phi)=\begin{cases}1,&0<\phi<\eta\,\pi\\ 0,&\eta\,\pi<\phi<\pi\end{cases}. \tag{42}\] \[h(\pi+\phi)=-h(\phi). \tag{43}\] where \(\eta\) is a constant between \(0\) and \(1\). It is seen that if \(\lambda_{0}\) is less than \(1\) but not very much less than \(1\), then \[\mathfrak{w}=\begin{cases}(1-\lambda_{0})\,\phi,&0<\phi<\eta\,\pi\\ (1-\lambda_{0})\,\eta\,\pi-\lambda_{0}\,(\phi-\eta\,\pi),&\eta\,\pi<\phi<(\eta\,\pi/\lambda_{0})\\ 0,&(\eta\,\pi/\lambda_{0})<\phi<\pi\end{cases} \tag{44}\] \[\mathfrak{w}(\pi+\phi)=-\mathfrak{w}(\phi). \tag{45}\] The condition that \(\lambda_{0}\) _is not very much less than \(1\)_ is that \(\mathfrak{w}\) vanishes before the acceleration becomes nonzero again. That is, \(\lambda_{0}\) being larger than \(\lambda_{\mathrm{c}}\), where \[\lambda_{\mathrm{c}}=\eta. \tag{46}\] This is qualitatively similar to what was found in subsection 2.2. It is seen that for \(\eta=1\) (the acceleration never vanishes), there is no large-friction phase. And for \(\eta=0\) (no acceleration), there is no small-friction phase.

### 2.4 Anisotropic friction coefficient

Up to now, it has been assumed that the friction coefficient is isotropic; that is, \(\mu\) is a constant, in particular independent of the direction of \(\mathbf{v}\). Here a situation is studied in which this is not the case. The case of linear sinusoidal acceleration is reexamined. Again, at large times only the component of the velocity along the acceleration can be nonzero. Then, for the large-time behavior, equations similar to (21) to (24) are used, except that (22) is substituted with \[\lambda_{+}=\frac{\mu_{+}\,g}{A_{0}}. \tag{47}\] \[\lambda_{-}=\frac{\mu_{-}\,g}{A_{0}}. \tag{48}\] where \(\mu_{+}\) (\(\mu_{-}\)) corresponds to positive (negative) \(v_{x}\). \(\lambda_{+}\) and \(\lambda_{-}\) are reparametrized through \[\lambda_{+}=\lambda_{0}. \tag{49}\] \[\lambda_{-}=q\,\lambda_{0}. \tag{50}\] Then equation (26) becomes \[\frac{\mathrm{d}\,\mathfrak{w}}{\mathrm{d}\,\phi}=\sin\phi-\lambda_{0},\qquad\mathfrak{w}>0. \tag{51}\] \[\frac{\mathrm{d}\,\mathfrak{w}}{\mathrm{d}\,\phi}=\sin\phi+q\,\lambda_{0},\qquad\mathfrak{w}<0. \tag{52}\] Without loss of generality, one can take \(q\) to be not less than \(1\). Then, arguments similar to those presented for the isotropic case result in the regions, in terms of \((q,\lambda_{0})\), for the qualitative large-time behavior of \(\mathfrak{w}\) shown in figure 5. The change of behavior in this parameter space occurs on the following curves: \[\textbf{A}\quad K=\frac{\pi\,q}{q+1},\quad\frac{\sqrt{1-\lambda_{0}^{2}}}{\lambda_{0}}=\frac{K}{\sin^{2}K}-\cot K. \tag{53}\] \[\textbf{B}\quad\sqrt{1-\lambda_{0}^{2}}+\sqrt{1-(q\,\lambda_{0})^{2}}=\lambda_{0}\,[\pi+\sin^{-1}(q\,\lambda_{0})-\sin^{-1}(\lambda_{0})]. \tag{54}\] \[\textbf{C}\quad\]

Figure 5: The regions in the parameter space of the anisotropic friction, corresponding to the large-time behavior of the velocity with linear sinusoidal acceleration. The curve **B** is the dotted curve.

Figure 6: The large-time behavior of the velocity in each region in the parameter space of the anisotropic friction, corresponding to the large-time behavior of the velocity with linear sinusoidal acceleration. Each plot contains two periods.

## 3 The motion of a sphere on a rough accelerating horizontal surface

In this section, the motion of a homogeneous sphere on a rough accelerating horizontal plane is investigated. The sphere is of radius \(R\) and mass \(m\), and it can roll and slide on the plane. The equations of motion for the sphere (in the accelerated frame) are \[m\,\frac{\mathrm{d}^{2}\,\mathbf{r}}{\mathrm{d}\,t^{2}}=-m\,\mathbf{A}+\mathbf{F}, \tag{59}\] \[I\,\frac{\mathrm{d}\,\mathbf{\omega}}{\mathrm{d}\,t}=\mathbf{R}\times\mathbf{F}, \tag{60}\] where \(\mathbf{r}\) is the two-dimensional position of the center of the sphere, \(\mathbf{\omega}\) is the angular velocity of the sphere, \(I\) is the moment of inertia of the sphere, \(\mathbf{F}\) is the friction force, and \[\mathbf{R}=-R\,\hat{\mathbf{z}}.
\tag{61}\] The \(z\) axis is normal to the plane and upward. Defining \(\kappa\) through \[\kappa=\frac{I}{m\,R^{2}}, \tag{62}\] the equations of motion for the case in which the sphere slides become \[\frac{\mathrm{d}^{2}\,\mathbf{r}}{\mathrm{d}\,t^{2}}=-\mathbf{A}-\mu\,g\,\left|\frac{\mathrm{d}\,\mathbf{r}}{\mathrm{d}\,t}+\mathbf{\omega}\times\mathbf{R}\right|^{-1}\,\left(\frac{\mathrm{d}\,\mathbf{r}}{\mathrm{d}\,t}+\mathbf{\omega}\times\mathbf{R}\right). \tag{63}\] \[\kappa\,R^{2}\,\frac{\mathrm{d}\,\mathbf{\omega}}{\mathrm{d}\,t}=-\mu\,g\,\mathbf{R}\times\left\{\left|\frac{\mathrm{d}\,\mathbf{r}}{\mathrm{d}\,t}+\mathbf{\omega}\times\mathbf{R}\right|^{-1}\,\left(\frac{\mathrm{d}\,\mathbf{r}}{\mathrm{d}\,t}+\mathbf{\omega}\times\mathbf{R}\right)\right\}. \tag{64}\] So, \[\frac{\mathrm{d}\,(\mathbf{\omega}\times\mathbf{R})}{\mathrm{d}\,t}=-\frac{\mu\,g}{\kappa}\,\left|\frac{\mathrm{d}\,\mathbf{r}}{\mathrm{d}\,t}+\mathbf{\omega}\times\mathbf{R}\right|^{-1}\,\left(\frac{\mathrm{d}\,\mathbf{r}}{\mathrm{d}\,t}+\mathbf{\omega}\times\mathbf{R}\right). \tag{65}\] \[\frac{\mathrm{d}}{\mathrm{d}\,t}\,\left(\frac{\mathrm{d}\,\mathbf{r}}{\mathrm{d}\,t}+\mathbf{\omega}\times\mathbf{R}\right)=-\mathbf{A}-b\,\mu\,g\,\left|\frac{\mathrm{d}\,\mathbf{r}}{\mathrm{d}\,t}+\mathbf{\omega}\times\mathbf{R}\right|^{-1}\,\left(\frac{\mathrm{d}\,\mathbf{r}}{\mathrm{d}\,t}+\mathbf{\omega}\times\mathbf{R}\right), \tag{66}\] where \[b=1+\frac{1}{\kappa}. \tag{67}\] Using \(R\) and \((\mu\,g)\), one can make the quantities dimensionless: \[\mathbf{\rho}=\frac{\mathbf{r}}{R}. \tag{68}\] \[\tau=\sqrt{\frac{\mu\,g}{R}}\,t. \tag{69}\] \[\mathbf{\zeta}=\sqrt{\frac{R}{\mu\,g}}\,\mathbf{\omega}. \tag{70}\] \[\mathbf{u}=\frac{1}{\sqrt{\mu\,g\,R}}\,\left(\frac{\mathrm{d}\,\mathbf{r}}{\mathrm{d}\,t}+\mathbf{\omega}\times\mathbf{R}\right). \tag{71}\] \[\mathbf{\gamma}=\frac{\mathbf{A}}{\mu\,g}. \tag{72}\] Denoting differentiation with respect to \(\tau\) by a dot, and denoting \(|\mathbf{u}|\) by \(u\), the equations of motion become \[\ddot{\mathbf{\rho}}=-\mathbf{\gamma}-\frac{\mathbf{u}}{u}. \tag{73}\] \[\dot{\mathbf{u}}=-\mathbf{\gamma}-b\,\frac{\mathbf{u}}{u}. \tag{74}\] Of course one also has \[\mathbf{\zeta}=\hat{\mathbf{z}}\,\zeta_{3}+\hat{\mathbf{z}}\times(\mathbf{u}-\dot{\mathbf{\rho}}). \tag{75}\] And it is seen that \(\zeta_{3}\) is a constant and does not enter the evolution of the other parameters. Defining \(\mathbf{\nu}\) through \[\dot{\mathbf{\nu}}=\mathbf{\gamma}, \tag{76}\] one arrives at \[(\dot{\mathbf{\rho}}+\mathbf{\nu})^{\cdot}=-\frac{\mathbf{u}}{u}. \tag{77}\] \[(\mathbf{u}+\mathbf{\nu})^{\cdot}=-b\,\frac{\mathbf{u}}{u}. \tag{78}\] So, \[\dot{\mathbf{\rho}}=-\left(1-\frac{1}{b}\right)\,\mathbf{\nu}+\frac{\mathbf{u}}{b}+\mathbf{c}, \tag{79}\] where \(\mathbf{c}\) is a constant vector. So the problem of finding the velocity and the angular velocity of the sphere as functions of time is reduced to finding \(\mathbf{u}\) as a function of time. In other words, investigating the motion of a sphere on a rough accelerating surface with friction coefficient \(\mu\) is equivalent to studying the motion of a particle on a rough surface with friction coefficient \((b\,\mu)\) and the same acceleration. This equivalence can be checked numerically, as sketched below.
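As a sanity check of this reduction, the sketch below (Python, forward Euler, for a homogeneous solid sphere with \(\kappa=2/5\), so \(b=7/2\); the parameter values are arbitrary, and only the in-plane components of \(\mathbf{\omega}\) are evolved, since \(\zeta_{3}\) is constant) integrates the sphere equations (63) and (64) alongside equation (66), the particle-like evolution of the slip velocity with the effective friction \(b\,\mu\,g\), and confirms that the two slip velocities coincide.

```python
import numpy as np

kappa, mu_g, R = 0.4, 1.0, 1.0            # solid sphere: kappa = 2/5
b = 1.0 + 1.0 / kappa                     # b = 3.5, eq. (67)
A = np.array([-2.0, 0.0])                 # constant acceleration of the plane

v = np.array([1.0, 1.5])                  # velocity of the center
w = np.array([0.3, -0.2])                 # in-plane angular velocity (w_x, w_y)
cross_wR = lambda w: R * np.array([-w[1], w[0]])   # (omega x R), with R = -R z_hat
u = v + cross_wR(w)                       # slip velocity
u_red = u.copy()                          # evolved by the reduced eq. (66)
dt = 1e-4
for _ in range(5000):
    f = -mu_g * u / np.linalg.norm(u)     # kinetic friction force per unit mass
    v = v + dt * (-A + f)                                  # eq. (63)
    w = w + dt * np.array([f[1], -f[0]]) / (kappa * R)     # eq. (64)
    u = v + cross_wR(w)
    u_red = u_red + dt * (-A - b * mu_g * u_red / np.linalg.norm(u_red))
print(np.allclose(u, u_red))              # True: u follows the reduced dynamics
```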
### 3.1 Constant acceleration

Consider the special case in which \(\mathbf{A}\) is a constant vector. The axes are chosen as in (3). As noted before, the problem of finding \(\mathbf{u}\) is the same as the problem of finding \(\mathbf{v}\) for the motion of a particle (without rotation), but with \(\mu\) replaced by \((b\,\mu)\). So one can use the results for \(\mathbf{v}\) to obtain \(\mathbf{u}\). Denoting the angle of \(\mathbf{u}\) with the \(x\) axis by \(\theta\), and defining \(\lambda\) through \[\lambda=\frac{b\,\mu\,g}{A}, \tag{80}\] one arrives at \[u=u_{0}\,\frac{\sin\theta_{0}}{\sin\theta}\,\left(\frac{\tan\frac{\theta}{2}}{\tan\frac{\theta_{0}}{2}}\right)^{\lambda}. \tag{81}\] \[\tau=\frac{u_{0}\,(\lambda+\cos\theta_{0})}{1-\lambda^{2}}\,\left[\frac{(\lambda+\cos\theta)\,\sin\theta_{0}}{(\lambda+\cos\theta_{0})\,\sin\theta}\,\left(\frac{\tan\frac{\theta}{2}}{\tan\frac{\theta_{0}}{2}}\right)^{\lambda}-1\right]. \tag{82}\] \[u=\frac{(1-\lambda^{2})\,\tau+u_{0}\,(\lambda+\cos\theta_{0})}{(\lambda+\cos\theta)}. \tag{83}\]

### 3.2 Sinusoidal linear acceleration

The (dimensionless) time evolution of \(\mathbf{u}\) is the same as the (dimensionless) time evolution of \(\mathbf{v}\) as discussed in subsection 2.2, except that here \(\mu\) should be substituted with \((b\,\mu)\), so that here \[\lambda_{0}=\frac{b\,\mu\,g}{A_{0}}. \tag{84}\] Also, here a vanishing \(\mathbf{u}\) means that the sphere rolls without slipping. So the qualitative large-time behavior can be summarized as follows:

* \(\lambda_{0}<\lambda_{\mathrm{c}}\). In this small-friction situation, in each period \(u_{x}\) vanishes at exactly two points. At these points rolling occurs.
* \(\lambda_{\mathrm{c}}<\lambda_{0}<1\). In this large-friction situation, each period contains two time intervals in which \(u_{x}\) is zero. During these intervals rolling occurs.
* \(1<\lambda_{0}\). In this very-large-friction situation, \(u_{x}\) is identically zero. This means that in this case, the motion of the sphere will eventually be rolling without slipping.

Figures 3 and 4 show examples of the large-time behavior of the dimensionless \(u_{x}\) versus time, for the small-friction and the large-friction cases respectively.

### 3.3 More general periodic linear acceleration

Similarly to subsection 3.2, one can carry the arguments of subsection 2.3 over to this case, with \(\lambda_{0}\) now defined as in (84). Also, for the example of \(h\) defined by (42) and (43), the boundary of small and large friction is \(\lambda_{\mathrm{c}}\), defined in (46).

### 3.4 Anisotropic friction coefficient

Again, the (dimensionless) time evolution of \(\mathbf{u}\) is the same as the (dimensionless) time evolution of \(\mathbf{v}\) as discussed in subsection 2.4, except that here \(\mu\) should be substituted with \((b\,\mu)\), so that here \[\lambda_{0}=\frac{b\,\mu_{+}\,g}{A_{0}}. \tag{85}\]

## 4 A point particle sliding on a rough turntable

In this section the motion of a point particle on a turntable of infinite extent is investigated. It is assumed that \(\mathbf{\Omega}\), the angular velocity of the rotation of the table, is perpendicular to the plane of the table and is constant. The equation of motion in the rotating frame is \[\frac{\mathrm{d}^{2}\,\mathbf{r}}{\mathrm{d}\,t^{2}}=\Omega^{2}\,\mathbf{r}+2\,\Omega\,\frac{\mathrm{d}\,\mathbf{r}}{\mathrm{d}\,t}\times\hat{\mathbf{z}}-\mu\,g\,\left|\frac{\mathrm{d}\,\mathbf{r}}{\mathrm{d}\,t}\right|^{-1}\,\frac{\mathrm{d}\,\mathbf{r}}{\mathrm{d}\,t}. \tag{86}\] From now on, it is assumed that \(\Omega\) is positive. This is no loss of generality, as any solution to the above equation with the sign of \(\Omega\) changed is the mirror reflection of a solution of the original problem. A direct numerical look at this equation is sketched below.
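The sketch below (Python, forward Euler with ad hoc thresholds, so only qualitative; the units are chosen with \(\Omega=1\) and \(\mu\,g=1\), anticipating the dimensionless form derived next) follows two illustrative initial conditions: one ending at rest inside the disk \(r\leq 1\), and one escaping.

```python
import numpy as np

# units with Omega = 1 and mu*g = 1; w is the velocity in the rotating frame
def fate(r, w, dt=1e-4, steps=200000):
    for _ in range(steps):
        speed = np.linalg.norm(w)
        if speed < 2e-4:                 # crude "came to rest" threshold
            # it stays at rest only inside the disk r <= 1 (see eq. (139) below)
            return "rest" if np.linalg.norm(r) <= 1 else "slips again"
        acc = r + 2 * np.array([w[1], -w[0]]) - w / speed   # eq. (86)
        r, w = r + dt * w, w + dt * acc
    return f"still moving, r = {np.linalg.norm(r):.0f}"

print(fate(np.array([0.3, 0.0]), np.array([0.0, 0.1])))   # rest
print(fate(np.array([3.0, 0.0]), np.array([2.0, 0.0])))   # still moving, r = ...
```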
Using \(\Omega\) and \((\mu\,g)\), two dimensionless quantities \(\mathbf{\rho}\) and \(\tau\) are defined: \[\mathbf{\rho}=\frac{\Omega^{2}}{\mu\,g}\,\mathbf{r}. \tag{87}\] \[\tau=\Omega\,t. \tag{88}\] The equation of motion becomes \[\ddot{\mathbf{\rho}}=\mathbf{\rho}+2\,\dot{\mathbf{\rho}}\times\hat{\mathbf{z}}-\frac{\dot{\mathbf{\rho}}}{|\dot{\mathbf{\rho}}|}, \tag{89}\] where a dot means differentiation with respect to \(\tau\). One has \[\dot{\mathbf{\rho}}\cdot\ddot{\mathbf{\rho}}=\mathbf{\rho}\cdot\dot{\mathbf{\rho}}-|\dot{\mathbf{\rho}}|. \tag{90}\] \[\hat{\mathbf{z}}\cdot\mathbf{\rho}\times\ddot{\mathbf{\rho}}=-2\,\mathbf{\rho}\cdot\dot{\mathbf{\rho}}-\frac{\hat{\mathbf{z}}\cdot\mathbf{\rho}\times\dot{\mathbf{\rho}}}{|\dot{\mathbf{\rho}}|}. \tag{91}\] \[\mathbf{\rho}\cdot\ddot{\mathbf{\rho}}+\dot{\mathbf{\rho}}\cdot\dot{\mathbf{\rho}}=\mathbf{\rho}\cdot\mathbf{\rho}+2\,\hat{\mathbf{z}}\cdot\mathbf{\rho}\times\dot{\mathbf{\rho}}-\frac{\mathbf{\rho}\cdot\dot{\mathbf{\rho}}}{|\dot{\mathbf{\rho}}|}+\dot{\mathbf{\rho}}\cdot\dot{\mathbf{\rho}}. \tag{92}\] The length of \(\dot{\mathbf{\rho}}\) is denoted by \(p\): \[p=|\dot{\mathbf{\rho}}|, \tag{93}\] and \(\psi\) is defined as the angle of the position vector with respect to the velocity vector, measured counterclockwise. Further, \(\xi\) and \(\ell\) are defined as \[\xi=\rho\,\cos\psi. \tag{94}\] \[\ell=-\rho\,\sin\psi. \tag{95}\] So, \[p\,\ell=\hat{\mathbf{z}}\cdot\mathbf{\rho}\times\dot{\mathbf{\rho}}, \tag{96}\] \[p\,\xi=\mathbf{\rho}\cdot\dot{\mathbf{\rho}}. \tag{97}\] Then, using \[\mathbf{\rho}\cdot\mathbf{\rho}=\ell^{2}+\xi^{2}, \tag{98}\] one arrives at \[\dot{p}=\xi-1. \tag{99}\] \[\dot{\ell}=-\frac{\xi\,(2\,p+\ell)}{p}. \tag{100}\] \[\dot{\xi}=\frac{(p+\ell)^{2}}{p}. \tag{101}\] Equation (99) is the projection of Newton's equation along the particle's velocity (or trajectory). Equation (100) is the projection of Newton's law along the azimuthal direction (the equation of change of the angular momentum), combined with the evolution equation for \(p\). Equation (101) is the projection of Newton's law along the radial direction, combined with the evolution equation for \(p\). Equations (99), (100), and (101) are three coupled differential equations governing the evolution of \(p\), \(\ell\), and \(\xi\).

The system contains a constant of motion. One has \[\rho\,\dot{\rho}=\ell\,\dot{\ell}+\xi\,\dot{\xi}=\xi\,p=p\,(\dot{p}+1). \tag{102}\] So, \[\left(\frac{p^{2}-\rho^{2}}{2}\right)^{\cdot}=-p, \tag{103}\] or \[\frac{p^{2}-\rho^{2}}{2}+s=\text{constant}, \tag{104}\] where \(s\) is the arc-length parameter: \[\dot{s}=p. \tag{105}\] Re-dimensionalizing (104), one arrives at an equation proportional to \[\frac{m}{2}\,\left(\frac{\mathrm{d}\,\mathbf{r}}{\mathrm{d}\,t}\right)\cdot\left(\frac{\mathrm{d}\,\mathbf{r}}{\mathrm{d}\,t}\right)-\frac{m\,\Omega^{2}\,r^{2}}{2}+\mu\,m\,g\,s=\text{constant}, \tag{106}\] where \(s\) now denotes the physical arc length. The first term is the kinetic energy of the particle in the non-inertial frame of the turntable, the second term is the potential energy associated with the centrifugal force, and \((-\mu\,m\,g\,s)\) is the work done by the friction. It is noted that the Coriolis force does no work. So the above equation is the work–energy theorem in the non-inertial frame.

Expressing \(\ell\) and \(\xi\) in terms of \(\rho\) and \(\psi\), the evolution equations become \[\dot{p}=\rho\,\cos\psi-1. \tag{107}\] \[\dot{\rho}=p\,\cos\psi. \tag{108}\] \[\dot{\psi}=2-\left(\frac{p}{\rho}+\frac{\rho}{p}\right)\,\sin\psi.
One can also express \(\rho\) and \(p\) in terms of hyperbolic parameters. There are two cases. Either \[p^{2}-\rho^{2}<0, \tag{110}\] \[p=A\,\sinh\alpha, \tag{111}\] \[\rho=A\,\cosh\alpha. \tag{112}\] Then, \[\dot{A}=\sinh\alpha. \tag{113}\] \[\dot{\alpha}=\cos\psi-\frac{\cosh\alpha}{A}. \tag{114}\] \[\dot{\psi}=2\,[1-(\sin\psi)\,\coth(2\,\alpha)]. \tag{115}\] Or, \[p^{2}-\rho^{2}>0, \tag{116}\] \[p=B\,\cosh\beta, \tag{117}\] \[\rho=B\,\sinh\beta. \tag{118}\] Then, \[\dot{B}=-\cosh\beta. \tag{119}\] \[\dot{\beta}=\cos\psi+\frac{\sinh\beta}{B}. \tag{120}\] \[\dot{\psi}=2\,[1-(\sin\psi)\,\coth(2\,\beta)]. \tag{121}\] The pair \((B,\beta)\) can be related to the pair \((A,\alpha)\) through \[A=-\mathrm{i}\,B. \tag{122}\] \[\alpha=\frac{\mathrm{i}\,\pi}{2}+\beta. \tag{123}\] Consider the first case (real \(\alpha\)). The evolution of \(\psi\) has two _quasi_-fixed points: \[\psi_{1}=\sin^{-1}[\tanh(2\,\alpha)]. \tag{124}\] \[\psi_{2}=\pi-\sin^{-1}[\tanh(2\,\alpha)]. \tag{125}\] These are not actual fixed points, as \(\alpha\) is not constant. However, \(\psi_{1}\) is attractive while \(\psi_{2}\) is repulsive. It is seen that \(A\) increases indefinitely, unless \(\alpha\) tends to zero. But if \(\alpha\) is near zero, then \(\psi\) changes rapidly towards its attractive _quasi_-fixed point \(\psi_{1}\), which is near zero if \(\alpha\) is near zero. Then the evolution of \(\alpha\) becomes \[\dot{\alpha}\approx 1-\frac{1}{A}. \tag{126}\] If \[A>1, \tag{127}\] then \(\alpha\) will increase and ceases to be near zero, so the particle's speed does not vanish and \(A\) increases further. As \(A\) never decreases, it remains bigger than 1 if its initial value is bigger than 1; in that case \(A\) increases indefinitely and the particle's speed never vanishes. Assuming that (127) holds initially, one can find the asymptotic behavior of the variables at large times. Approximating \(\psi\) by its attractive _quasi_-fixed point, one arrives at \[\dot{\alpha}\approx\frac{1}{\cosh(2\,\alpha)}-\frac{\cosh\alpha}{A}. \tag{128}\] \(\alpha\) cannot remain finite, because in that case the right-hand side would eventually become positive (as \(A\) increases). So both \(A\) and \(\alpha\) should increase indefinitely. A further approximation is to take \(\alpha\) to be the _quasi_-fixed point of its evolution: \[(\cosh\alpha)\,\cosh(2\,\alpha)\approx A. \tag{129}\] For large values of \(A\), and hence \(\alpha\), this becomes \[\alpha\approx\frac{1}{3}\,\ln(4\,A). \tag{130}\] So, \[\dot{A}\approx\frac{(4\,A)^{1/3}}{2}, \tag{131}\] leading to \[A\sim\frac{2\,\tau^{3/2}}{\sqrt{27}}. \tag{132}\] \[\alpha\sim\frac{1}{2}\,\ln\frac{4\,\tau}{3}. \tag{133}\] \[\psi\sim\frac{\pi}{2}-\left(\frac{3}{\tau}\right)^{1/2}. \tag{134}\] \[p\sim\frac{2\,\tau^{2}}{9}. \tag{135}\] \[\rho\sim\frac{2\,\tau^{2}}{9}. \tag{136}\] \[s\sim\frac{2\,\tau^{3}}{27}. \tag{137}\] As a check, it is seen that \[\frac{A^{2}}{2}\sim s. \tag{138}\]
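The asymptotic forms can be probed the same way; in the sketch below (scipy assumed; the initial data are arbitrary, chosen so that \(A>1\) and hence the motion is unbounded) the printed ratios should drift toward 1 as the integration time grows, in line with (135) and (136):

```
import numpy as np
from scipy.integrate import solve_ivp

def rhs(tau, y):
    p, rho, psi = y
    return [rho * np.cos(psi) - 1.0,
            p * np.cos(psi),
            2.0 - (p / rho + rho / p) * np.sin(psi)]

# Initial data with A = sqrt(rho^2 - p^2) = sqrt(5) > 1, so p never vanishes.
sol = solve_ivp(rhs, (0.0, 400.0), [2.0, 3.0, 0.5], rtol=1e-9, atol=1e-12)
tau_f = sol.t[-1]
p_f, rho_f = sol.y[0, -1], sol.y[1, -1]
print(p_f / (2 * tau_f**2 / 9))     # eq. (135): should approach 1
print(rho_f / (2 * tau_f**2 / 9))   # eq. (136): should approach 1
```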
Now consider cases when the particle eventually comes to rest. The condition for the particle to remain at rest at \(\rho=\sigma\) is \[\sigma\leq 1. \tag{139}\] The reason is that at \(\rho>1\), the maximum of the static frictional force is less than the centrifugal force. When the particle nears rest, \(p\) tends to zero, so \(\alpha\) tends to zero. Then from the evolution equation for \(\psi\) it is seen that \(\psi\) tends to zero as well. So near rest, and shortly before it, \[\dot{\alpha}=1-\frac{1}{\sigma}+\cdots, \tag{140}\] which results in \[\alpha=\frac{(\sigma-1)\,\tau}{\sigma}+\cdots. \tag{141}\] Putting this in the evolution equation for \(\psi\), and noting that \(\alpha\) and \(\psi\) are both small, one arrives at \[\frac{\sigma-1}{\sigma}\,\frac{\mathrm{d}\,\psi}{\mathrm{d}\,\alpha}\approx 2\,\left(1-\frac{\psi}{2\,\alpha}\right), \tag{142}\] which results in \[\psi=2\,\alpha\,\left[\frac{\sigma}{2\,\sigma-1}+c\,\alpha^{(2\,\sigma-1)/(1-\sigma)}\right]+\cdots, \tag{143}\] where \(c\) is an integration constant. Similarly, the evolution equation for \(A\) becomes \[\frac{\sigma-1}{\sigma}\,\frac{\mathrm{d}\,A}{\mathrm{d}\,\alpha}\approx\alpha, \tag{144}\] resulting in \[A=\sigma+\frac{\sigma}{\sigma-1}\,\frac{\alpha^{2}}{2}+\cdots. \tag{145}\] For each value of \(\sigma\), equations (143) and (145) represent a surface \(\mathbb{S}_{\sigma}\) in the 3-dimensional parameter space \((A,\alpha,\psi)\). The envelope \(\mathbb{S}\) of these surfaces is the boundary between two regions of the parameter space: the region \(\mathbb{V}_{0}\) corresponding to the initial values which result in an eventual rest, and the region \(\mathbb{V}_{1}\) corresponding to the initial values which result in unbounded motions. To obtain the equation of \(\mathbb{S}\), one notices that the equation of \(\mathbb{S}_{\sigma}\) is just (145), with \(\psi\) being free. Hence the parametric equation for the envelope consists of (145) and its derivative with respect to the parameter \(\sigma\): \[0=1-\frac{1}{(\sigma-1)^{2}}\,\frac{\alpha^{2}}{2}+\cdots. \tag{146}\] Eliminating \(\sigma\) between this and (145), the equation for \(\mathbb{S}\) is determined as \[A=\left(1-\frac{\alpha}{\sqrt{2}}\right)^{2}+\cdots. \tag{147}\] The particle eventually comes to rest if initially the value of \(A\) is less than the right-hand side. Otherwise its motion would be unbounded. ## 5 Conclusion Even though friction-based problems in classical mechanics have been studied extensively, there is still a substantial amount of new research, with compelling results. The problems studied here were the two-dimensional motion of a particle and a homogeneous sphere on a moving rough horizontal plane. It was shown that the problem of the motion of a homogeneous sphere on a moving plane with translational acceleration is reduced to that of the motion of a point particle. Some examples were studied in more detail: constant acceleration, periodic (and specifically sinusoidal) acceleration along a fixed line, and a situation where the friction is anisotropic, which leads to a much richer dynamical behavior. The motion of a point particle on a rough turntable was also studied. It was shown that the evolution equation is reduced to a set of three coupled first-order differential equations. The large-time behavior of the system was studied in more detail, and some results were obtained about the dependence of the large-time behavior on the initial conditions; specifically, which initial conditions result in final rest and which result in perpetual motion. **Acknowledgment**: The work of M. Khorrami and A. Aghamohammadi was supported by the research council of the Alzahra University.
2310.09608
Decays of the tensor glueball in a chiral approach
Glueballs remain an experimentally undiscovered prediction of QCD. Lattice QCD predicts a spectrum of glueballs, with the tensor $(J^{PC}=2^{++})$ glueball being the second lightest, behind the scalar glueball. From an effective hadronic model based on spontaneous and explicit chiral symmetry breaking, we compute decay ratios of the tensor glueball into various meson decay channels. We find the tensor glueball to primarily decay into 2 vector mesons, dominated by $\rho\rho$ and $K^*K^*$. These results are compared to experimental data of decay rates of spin 2 mesons. Based on this comparison we make statements on the eligibility of these mesons as potential tensor glueball candidates.
Arthur Vereijken
2023-10-14T15:58:02Z
http://arxiv.org/abs/2310.09608v2
# Decays of the tensor glueball in a chiral approach ###### Abstract Glueballs remain an experimentally undiscovered prediction of QCD. Lattice QCD predicts a spectrum of glueballs, with the tensor (\(J^{PC}=2^{++}\)) glueball being the second lightest, behind the scalar glueball. From an effective hadronic model based on spontaneous and explicit chiral symmetry breaking, we compute decay ratios of the tensor glueball into various meson decay channels. We find the tensor glueball to primarily decay into 2 vector mesons, dominated by \(\rho\rho\) and \(K^{*}\bar{K}^{*}\). These results are compared to experimental data of decay rates of spin 2 mesons. Based on this comparison we make statements on the eligibility of these mesons as potential tensor glueball candidates. ## 1 Introduction The experimental verification of glueballs has been, and still is, a long-standing open issue in QCD [1]. Numerous theoretical [2; 3] and experimental [4] approaches have made headway, yet the situation is still not completely clear [5; 6; 7; 8]. The different theoretical methods agree on the mass hierarchy of the lowest-lying glueball states, with the scalar (\(J^{PC}=0^{++}\)) being the lightest and the tensor (\(J^{PC}=2^{++}\)) the second lightest glueball. In this work we will focus on the tensor glueball, for which there are many experimentally observed isoscalar-tensor candidate resonances. We will present results on the tensor glueball [10] in the extended Linear Sigma Model [11], which is an extension of earlier works on axial-tensor mesons in the same model [12]. Different glueballs have been studied before in the same type of model, such as the scalar [13] and the pseudoscalar glueball [14]. ## 2 Chiral model The meson resonances are gathered into the nonets \(V^{\mu}\) (\(J^{PC}=1^{--}\)) containing vector mesons, \(A_{1}^{\mu}\) (\(J^{PC}=1^{++}\)) containing axial-vector mesons, \(P\) (\(J^{PC}=0^{-+}\)) containing pseudoscalar mesons, \(S\) (\(J^{PC}=0^{++}\)) containing scalar mesons, \(T^{\mu\nu}\) (\(J^{PC}=2^{++}\)) containing tensor mesons, and \(A_{2}^{\mu\nu}\) (\(J^{PC}=2^{--}\)) containing axial-tensor mesons. For details on the resonance assignment of the nonets see [10; 12]. The tensor glueball itself is a flavor-blind object \(G_{2,\mu\nu}\). The chirally invariant Lagrangians relevant to us are as follows: \[\mathcal{L}_{\lambda}=\frac{\lambda}{\sqrt{6}}G_{2,\mu\nu}\Big{(}\mathrm{Tr}\Big{[}\{L^{\mu},L^{\nu}\}\Big{]}+\mathrm{Tr}\Big{[}\{R^{\mu},R^{\nu}\}\Big{]}\Big{)}\,, \tag{1}\] \[\mathcal{L}_{\alpha}=\frac{\alpha}{\sqrt{6}}G_{2,\mu\nu}\Big{(}\text{Tr}\Big{[}\Phi\mathbf{R}^{\mu\nu}\Phi^{\dagger}\Big{]}+\text{Tr}\Big{[}\Phi^{\dagger}\mathbf{L}^{\mu\nu}\Phi\Big{]}\Big{)}. \tag{2}\] These are the leading terms in the large-\(N_{c}\) expansion, where \(N_{c}\) is the number of colors of the underlying gauge group. The nonets of chiral partners are grouped together and are given by: \[L^{\mu}:=V^{\mu}+A_{1}^{\mu}\,,\quad R^{\mu}:=V^{\mu}-A_{1}^{\mu}\,,\quad\Phi=S+iP,\] \[\mathbf{L}^{\mu\nu}=T^{\mu\nu}+A_{2}^{\mu\nu}\,,\quad\mathbf{R}^{\mu\nu}=T^{\mu\nu}-A_{2}^{\mu\nu}, \tag{3}\] such that they obey the transformation rules \(L^{\mu}\to U_{L}L^{\mu}U_{L}^{\dagger}\), \(R^{\mu}\to U_{R}R^{\mu}U_{R}^{\dagger}\), \(\Phi\to U_{L}\Phi U_{R}^{\dagger}\), \(\mathbf{R}^{\mu\nu}\to U_{R}\mathbf{R}^{\mu\nu}U_{R}^{\dagger}\), \(\mathbf{L}^{\mu\nu}\to U_{L}\mathbf{L}^{\mu\nu}U_{L}^{\dagger}\) under the chiral transformations of \(U_{L}(3)\times U_{R}(3)\).
The first Lagrangian (1) models the two-body decays of the tensor glueball into 2 vector mesons, into 2 pseudoscalar mesons, and into an axial-vector and a pseudoscalar meson. The second Lagrangian (2) leads to the decay into a tensor and a pseudoscalar meson. Since the coupling constants \(\alpha\) and \(\lambda\) are not a priori known and cannot be fitted to experimental data, we are limited to computing decay ratios, separately for each Lagrangian. Lattice calculations in [15] find a tensor glueball mass of 2369 MeV. The decay ratios of the first Lagrangian with respect to \(\pi\pi\) for this mass are shown in Table 1. As evident from the results in Table 1, the 2-vector decay channel is dominant, in particular the decays into \(\rho\rho\) and \(K^{*}\bar{K}^{*}\). A similar dominance of the 2-vector channel was recently found in [16] with the holographic Witten-Sakai-Sugimoto model. ## 3 Results & Data Comparison We compare results to available data of spin-2 isoscalar resonances (\(J^{PC}=2^{++}\), I = 0) with masses of 1.9 GeV and upwards. These are the \(f_{2}(1910),f_{2}(1950),f_{2}(2010),f_{2}(2150),f_{J}(2220),f_{2}(2300)\), and the \(f_{2}(2340)\). In Table 2 decay ratios are computed and compared with PDG data [9] where available, revealing how well they fit as glueball candidates. We see that every glueball candidate other than the \(f_{2}(1950)\) has disagreement with experimental data, and that the \(f_{2}(1950)\) fits reasonably well, given uncertainties on both sides. Therefore we interpret \(f_{2}(1950)\) to be the best candidate for the lightest tensor glueball; this is not the first time it has been proposed as the tensor glueball, see e.g. [17]. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Decay Ratio & theory & Decay Ratio & theory & Decay Ratio & theory \\ \hline \hline \multicolumn{6}{|c|}{[numerical entries not recoverable from the source]} \\ \hline \end{tabular} \end{table} Table 1: Decay ratios of the tensor glueball \(G_{2}(2369)\) with respect to \(\pi\pi\), computed from Lagrangian (1). ## 4 Conclusion In this note we have computed decays of the tensor glueball using a chiral hadronic model. We find that the decay to two vectors, in particular \(\rho\rho\) and \(K^{*}\bar{K}^{*}\), is dominant. Upon comparing with experimental data, we find that the \(f_{2}(1950)\) is the most suitable candidate (even if some deviations are present) for the tensor glueball based on the known decay ratios. In the future, one should investigate why there is up to a 400 MeV mass difference between this resonance and the predicted masses from lattice methods. One possible explanation could be the role of mesonic loops and/or the mixing with nearby quark-antiquark states. ## Acknowledgements We thank Francesco Giacosa and Shahriyar Jafarzade for useful discussions.
We also acknowledge financial support from the Polish National Science Centre (NCN) via the OPUS project 2019/33/B/ST2/00613. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Resonances & Decay Ratios & PDG [9] & Model Prediction \\ \hline \hline \(f_{2}(1910)\) & \(\rho(770)\rho(770)/\omega(782)\omega(782)\) & \(2.6\pm 0.4\) & 3.1 \\ \hline \(f_{2}(1910)\) & \(f_{2}(1270)\eta/a_{2}(1320)\pi\) & \(0.09\pm 0.05\) & 0.07 \\ \hline \(f_{2}(1910)\) & \(\eta\eta/\eta\eta^{\prime}(958)\) & \(<0.05\) & \(\sim 8\) \\ \hline \(f_{2}(1910)\) & \(\omega(782)\omega(782)/\eta\eta^{\prime}(958)\) & \(2.6\pm 0.6\) & \(\sim 200\) \\ \hline \hline \(f_{2}(1950)\) & \(\eta\eta/\pi\pi\) & \(0.14\pm 0.05\) & 0.081 \\ \hline \(f_{2}(1950)\) & \(K\bar{K}/\pi\pi\) & \(\sim 0.8\) & 0.32 \\ \hline \(f_{2}(1950)\) & \(4\pi/\eta\eta\) & \(>200\) & \(>700\) \\ \hline \hline \(f_{2}(2150)\) & \(f_{2}(1270)\eta/a_{2}(1320)\pi\) & \(0.79\pm 0.11\) & 0.1 \\ \hline \(f_{2}(2150)\) & \(K\bar{K}/\eta\eta\) & \(1.28\pm 0.23\) & \(\sim 4\) \\ \hline \(f_{2}(2150)\) & \(\pi\pi/\eta\eta\) & \(<0.33\) & \(\sim 10\) \\ \hline \hline \(f_{J}(2220)\) & \(\pi\pi/K\bar{K}\) & \(1.0\pm 0.5\) & \(\sim 2.5\) \\ \hline \end{tabular} \end{table} Table 2: Decay ratios for the decay channels with available data.
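As a purely kinematic aside (not part of the chiral model itself), one can check which of the dominant channels are open at the lattice mass used above; the sketch below computes the two-body break-up momentum \(k=\sqrt{\lambda(M^{2},m_{1}^{2},m_{2}^{2})}/(2M)\) from the Källén function, using rounded PDG meson masses in MeV as an assumption:

```
import numpy as np

def breakup_momentum(M, m1, m2):
    # Kallen function lambda(M^2, m1^2, m2^2); the channel is closed if negative.
    lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    return np.sqrt(lam) / (2 * M) if lam > 0 else None

M_G = 2369.0   # tensor glueball mass from lattice QCD [15], in MeV
channels = {"pi pi": (138.0, 138.0), "eta eta": (548.0, 548.0),
            "rho rho": (775.0, 775.0), "K* K*bar": (892.0, 892.0)}
for name, (m1, m2) in channels.items():
    print(name, breakup_momentum(M_G, m1, m2))   # momentum in MeV
```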
2304.13917
Proportionally Representative Clustering
In recent years, there has been a surge in effort to formalize notions of fairness in machine learning. We focus on clustering -- one of the fundamental tasks in unsupervised machine learning. We propose a new axiom ``proportional representation fairness'' (PRF) that is designed for clustering problems where the selection of centroids reflects the distribution of data points and how tightly they are clustered together. Our fairness concept is not satisfied by existing fair clustering algorithms. We design efficient algorithms to achieve PRF both for unconstrained and discrete clustering problems. Our algorithm for the unconstrained setting is also the first known polynomial-time approximation algorithm for the well-studied Proportional Fairness (PF) axiom (Chen, Fain, Lyu, and Munagala, ICML, 2019). Our algorithm for the discrete setting also matches the best known approximation factor for PF.
Haris Aziz, Barton E. Lee, Sean Morota Chu, Jeremy Vollen
2023-04-27T02:01:24Z
http://arxiv.org/abs/2304.13917v2
# Proportionally Representative Clustering* ###### Abstract In recent years, there has been a surge in effort to formalize notions of fairness in machine learning. We focus on _clustering_--one of the fundamental tasks in unsupervised machine learning. We propose a new axiom that captures proportional representation fairness (PRF). We make a case that the concept achieves the raison d'être of several existing concepts in the literature in an arguably more convincing manner. Our fairness concept is not satisfied by existing fair clustering algorithms. We design efficient algorithms to achieve PRF both for unconstrained and discrete clustering problems. ## 1 Introduction As artificial intelligence and machine learning serve as the 'new electricity' of systems, it is becoming increasingly important to ensure that AI systems embody societal values such as privacy, transparency, and fairness. Fairness is a growing concern, as AI systems are used to make decisions that can critically affect our daily lives, finances, and careers [22]. The impulse to tackle the issue of fairness in AI is prominently reflected in several governmental policy documents (see, e.g., [23; 3; 32]), the emergence of dedicated conferences on Ethical AI, and the formation of AI ethics boards by various big tech companies. Over the past years, several developments have taken place in the theory and application of fairness in supervised learning. One prominent approach is to use information derived from labelled data in supervised learning to formalize some notions of equity within and across (pre-specified and) protected attributes (see, e.g., [29]). In contrast, in several models of unsupervised learning, protected attributes or their labelling are unknown. Fairness in unsupervised machine learning is becoming important with the growth of data collection both in public (e.g., via IoT devices) and private sectors (e.g., map services and social media). Unfettered and irresponsible use of such data has the danger of increasing inequity [45]. We focus on fairness in clustering, which is one of the most widely applied tasks in unsupervised machine learning. A (centroid selection) clustering problem has a metric space \(\mathscr{X}\) with a distance measure \(d:\mathscr{X}\times\mathscr{X}\rightarrow\mathbb{R}^{+}\cup\{0\}\), a multiset \(\mathscr{N}\subseteq\mathscr{X}\) of \(n\) data points (agents), a set \(\mathscr{M}\) of candidate centers, and a positive integer \(k\leq n\). In many clustering problems, each data point corresponds to an individual's attribute profile; thus, we refer to the data points as _agents_. The goal is to find a size-\(k\) subset of centers, \(Y\subseteq\mathscr{M}\ :\ |Y|=k\). Such _centroid selection_ problems have wide application, including to recommender systems and data compression. The problem also captures social planning scenarios where \(k\) facility locations need to be chosen to serve a set of agents. Such a (centroid selection) clustering problem comes in two versions. In the _discrete_ version, \(\mathscr{M}\) is a finite set. In the _unconstrained_ or _continuous_ setting, \(\mathscr{M}=\mathscr{X}\). We first focus on the unconstrained setting, as done by Micha and Shah [43]. Later we discuss how our concepts and ideas extend to the discrete setting. For a location \(i\) and set of locations \(S\subseteq\mathscr{M}\), we will denote \(\min_{c\in S}d(i,c)\) as \(d(i,S)\).
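To fix notation for the later sketches, \(d(i,S)\) can be computed for all agents at once; the snippet below is a minimal helper (numpy and the Euclidean metric are illustrative assumptions — the model allows any metric):

```
import numpy as np

def dist_to_set(points, S):
    """d(i, S) = min over c in S of d(i, c), for every data point i.
    points: (n, m) array of agents; S: (k, m) array of selected centers."""
    pairwise = np.linalg.norm(points[:, None, :] - S[None, :, :], axis=-1)
    return pairwise.min(axis=1)
```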
Standard clustering solutions include \(k\)-center, \(k\)-medians, and \(k\)-means.1 These standard objectives can be viewed as achieving some form of global welfare but are not well-suited for proportional representation and fairness concerns. They are also NP-hard (see, e.g., [41; 5]). Footnote 1: The \(k\)-center solution outputs a subset \(Y\in\arg\min_{W\subseteq\mathscr{M},|W|=k}\max_{i\in\mathscr{N}}d(i,W)\). The \(k\)-medians solution outputs a subset \(Y\in\arg\min_{W\subseteq\mathscr{M},|W|=k}\sum_{i\in\mathscr{N}}d(i,W)\). The \(k\)-means solution outputs a subset \(Y\in\arg\min_{W\subseteq\mathscr{M},|W|=k}\sum_{i\in\mathscr{N}}d(i,W)^{2}\). We seek a suitable concept for clustering that captures general fairness principles such as non-discrimination, equality of opportunity, and equality of outcome. In many applications, the data points correspond to real individuals or their positions or opinions. Such individuals may expect the clustering outcome to capture natural fairness requirements. In particular, they may expect an appropriate number of centers to be close enough to them. The number of clusters and their proximity would depend on how many (or what proportion of) data points are tightly clustered together. The issue of finding fair and proportionally representative cluster centers can be especially important in applications such as hiring, where the data points correspond to qualified individuals that have been short-listed for a set of positions and the centers are the selected (or successful) set of individuals. Similarly, computing a fair clustering is also meaningful in facility location, where the location of the facility should depend on the density and the number of people it is serving. In this paper, we consider the following fundamental research question. _What is a meaningful concept of proportional fairness in centroid clustering? Is such a concept guaranteed to be achievable?_ _Contributions._ In contrast to traditional optimization objectives in clustering and centroid selection, we take an axiomatic fairness perspective. We first identify a potential shortcoming of some recently introduced concepts that aim to capture proportional fairness for clustering; in particular, we show how the concepts do not capture an aspect they attempt to capture. We then propose a new axiom called _proportionally representative fairness (PRF)_, which is our central conceptual contribution. We propose PRF for both the unconstrained and discrete clustering settings. We discuss how PRF overcomes some of the drawbacks of the previously introduced concepts. In particular, it implies unanimous proportionality, which can be viewed as a minimal requirement of proportional fairness. PRF has several other desirable features: a PRF outcome is guaranteed to exist, the axiom does not require a preset specification of protected groups, and it is robust to outliers and multiplicative scaling. We show that none of the traditional clustering algorithms or new algorithms for fair clustering satisfy PRF. We design polynomial-time algorithms to achieve PRF both in the unconstrained and discrete settings.2 Finally, we experimentally compare our algorithm with two standard algorithms and show that it performs especially well when the mean square distance of the agents is calculated based on their distance from not only the closest center but also the next \(j\)-closest centers (for large \(j\)). Footnote 2: The results for unconstrained and discrete settings are incomparable in the following sense.
The guarantee of existence of a fair outcome in the discrete setting implies the existence of a fair outcome in the unconstrained setting; however, a polynomial-time algorithm for computing a fair outcome in the discrete setting does not directly yield one for the unconstrained setting. ## 2 Related Work Fairness in machine learning and computer systems is a vast topic [37]--for an overview, see Chouldechova and Roth [22] or the survey by Mehrabi et al. [42]. We focus on the fundamental learning problem of clustering, which is a well-studied problem in computer science, statistics, and operations research (see, e.g., [33]). Our approach is axiomatic, focused on the concept of proportional fairness (or representation), and does not rely on the use of a pre-specified or protected set of attributes. Thus, the most closely related papers are those of Chen et al. [19], Micha and Shah [43], Li et al. [40], and Jung et al. [38], which we review below. Chen et al. [19] proposed the concept of _proportional fairness_ (PF) for the discrete setting, which is based on the entitlement of groups of agents of large enough size. They reasoned that PF is a desirable fairness concept but it is not guaranteed to be achievable for every instance. On the other hand, they showed that outcomes satisfying reasonable approximations of PF are guaranteed to exist and can be computed in polynomial time via the 'Greedy Capture' algorithm. Micha and Shah [43] analyzed the proportional fairness axiom in the unconstrained setting and presented similar results to those obtained by Chen et al. [19]. They showed that the natural adaptation of the Greedy Capture algorithm finds an approximately PF outcome. On the other hand, they showed that checking whether a PF outcome exists is NP-hard and that the clustering returned by the Greedy Capture algorithm cannot be computed in polynomial time unless P = NP. Li et al. [40] consider the same discrete clustering setting as Chen et al. [19] and propose a stronger version of proportional fairness called _core fairness_. They present results on lower and upper approximation bounds for various restrictions of the discrete setting. Because core fairness is stronger than proportional fairness, the negative existence results of Chen et al. [19] also hold for core fairness. Jung et al. [38] consider the discrete clustering problem in which the set of agents and candidate centers coincide. For this setting, they propose a fairness concept that has a similar goal as that of Chen et al. [19]. In subsequent work, it has been referred to as _individual fairness_ [49; 17]. Individual fairness requires that, for each agent \(i\), there is a selected center at distance at most \(r(i)\) from \(i\), where \(r(i)\) is the smallest radius around \(i\) that includes \(n/k\) agents. As is the case for proportional fairness, only a constant-factor approximation can be achieved for two or more dimensions. There are also several concepts and algorithms in clustering where the protected groups are pre-specified based on protected attributes [1; 2; 13; 12; 39; 20; 21; 30]. In contrast, we are seeking fairness properties that apply to arbitrarily defined groups of all possible sizes. In our proposed axiom, these groups are _endogenously_ determined from data points; in some real-world settings, this is a legal requirement since it can be forbidden by law to have decision-making processes use certain attributes as inputs.
Some recent papers also use randomization to achieve probabilistic fairness objectives [16; 31]. We do not explore randomization since our proposed axiom is defined on the (ex-post) clustering outcome and can be achieved via deterministic algorithms. Fairness and representation are important concerns in economic design [44]. Issues related to proportional representation and fairness have been considered in many settings, including apportionment [47; 15], participatory budgeting [9], portioning [7; 4], probabilistic voting [10], and multi-winner voting [34; 8; 27; 28; 48]. Group fairness has also been examined in the context of resource allocation [14; 24; 11]. This literature has largely focused on settings where agents have _ordinal_ preferences over candidates. Our fairness (or proportional representation) axioms take into account the additional cardinal information available when the candidates belong to a general metric space. Hence, our axioms formalize a notion of proportional representation for \(m\)-dimensional Euclidean space with \(m>2\). The standard fairness axioms such as proportionality of solid coalitions for ordinal preferences [8] are too weak when applied to our spatial context. Naively applying them to our spatial problem also results in a reduction from our unconstrained problem to a multiwinner voting problem with an infinite number of candidates. The clustering problem has natural connections with multi-winner voting [34; 35; 46]. Agents and centers in the clustering context correspond to voters and candidates in multi-winner voting. Each agent's (voter's) preference over centers (candidates) can then be derived using the distance metric, with closer centers being more preferred. With this view, multi-winner voting rules imply clustering solutions. For example, the Chamberlin-Courant rule takes agents' Borda scores based on their distances from candidates and then implements the \(k\)-median objective. Relatedly, clustering can also be viewed as a facility location problem [36], where multiple facilities (centers) are to be located in a metric space; for a recent overview, see the survey by Chan et al. [18]. ## 3 Limitation of Existing Proportional Fairness Concepts Proportional fairness for clustering has been considered in a series of recent papers. Chen et al. [19] first proposed a proportional fairness axiom that is based on the idea that more dense groups of data points require more centers--in their words, _"any subset of at least \(r\lceil n/k\rceil\) individuals is entitled to choose \(r\) centers"_. Their proposed formal specification of proportional fairness requires the clustering solution to be such that there is no set of agents of size at least \(n/k\) that can find an unselected center that is "better" (i.e., located closer than the closest selected center) for each of them. The concept was studied in detail for the discrete and unconstrained settings by Chen et al. [19] and Micha and Shah [43], respectively. **Definition 1** (Proportional Fairness [19]).: \(X\subseteq\mathscr{M}\) _with \(|X|=k\) satisfies proportional fairness if \(\forall S\subseteq\mathscr{N}\) with \(|S|\geq\lceil\frac{n}{k}\rceil\) and for all \(c\in\mathscr{M}\), there exists \(i\in S\) with \(d(i,c)\geq d(i,X)\)._ The idea of proportional fairness was further strengthened to core fairness, which requires that there is no set of agents of size at least \(n/k\) for which some unselected center is better in aggregate.
**Definition 2** (Core Fairness [40]).: \(X\subseteq\mathscr{M}\) _with \(|X|=k\) is core fair if \(\forall S\subseteq\mathscr{N}\) with \(|S|\geq\lceil\frac{n}{k}\rceil\) and for all \(c\in\mathscr{M}\), \(\sum_{i\in S}d(i,c)\geq\sum_{i\in S}d(i,X)\)._ One rationale for proportional fairness is explained in the following example, which was first presented by Micha and Shah [43] and then reused by Li et al. [40]. We reuse the same example. **Example 1**.: _Consider a set of data points/agents. The agents are divided into 11 clusters, each of which is densely packed. One cluster of agents has size 10,000. The other 10 clusters have sizes 100 each. The big cluster is very far from the smaller clusters. The small clusters are relatively near to each other._ Micha and Shah [43] and Li et al. [40] are of the view that any set of centers that satisfies a reasonable notion of proportional fairness would place 10 centers for the big cluster and 1 center serving the small clusters. Next, we point out that proportional fairness and core fairness do not necessarily satisfy this minimal requirement, which we formalize via the _unanimous proportionality_ axiom below. **Definition 3** (Unanimous Proportionality (UP)).: \(X\subseteq\mathscr{M}\) _with \(|X|=k\) satisfies unanimous proportionality if for all \(S\subseteq\mathscr{N}\) with \(|S|\geq\ell\lceil\frac{n}{k}\rceil\) such that each agent in \(S\) has the same location \(x\in\mathscr{X}\), the \(\ell\) possible locations closest to \(x\) are selected._ UP captures a principle that Chen et al. [19] were seeking, which requires that _"any subset of at least \(r\lceil n/k\rceil\) individuals is entitled to choose \(r\) centers"_.3 The following example shows that proportional fairness [19] and core fairness [40] do not imply unanimous proportionality. Similarly, it can be shown that the fairness concept proposed by Jung et al. [38] does not imply unanimous proportionality. Footnote 3: In the term unanimous proportionality, unanimous refers to the condition where a set of agents have the same location, and proportionality refers to the requirement that such agents get an appropriate number of centers in proportion to the number of agents. **Example 2** (Limitation of proportional fairness and core fairness).: _Suppose agents are located in the \([0,1]\) interval. 10,000 agents are at point 0 and 1,000 agents are at point 1. Suppose we want to find \(k=11\) centers. UP requires that 10 centers are at or just around point 0 and 1 center is at point 1. However, placing 1 center at point 0 and 10 centers at point 1 satisfies the proportional fairness concept of Chen et al. [19] and also the core fairness concept of Li et al. [40]._ The example above shows that there is a need to formalize new concepts that capture proportionally representative fairness. The property should imply unanimous proportionality but should also be meaningful if no two agents / data points completely coincide in their location. One reason why proportional fairness and core fairness do not capture UP is that they make an implicit assumption that may be suitable in some contexts of facility location but possibly not in the context of certain unsupervised learning problems. The assumption is that an agent _only_ cares about the distance from the nearest center and not about how many centers are nearby. Another issue is that a PF outcome may not exist; we recall an example presented by Micha and Shah [43] that shows this.
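Definition 1 and Example 2 can be made concrete with a small checker; the sketch below uses an equivalent reformulation — an outcome \(X\) violates PF exactly when some candidate \(c\) is strictly closer than \(X\) for at least \(\lceil n/k\rceil\) agents (take \(S\) to be exactly those agents) — and a finite grid stands in for the continuous candidate space, which is an illustrative assumption:

```
import math
import numpy as np

def pf_violated(points, X, candidates, k):
    """Definition 1 fails iff some candidate c is strictly closer than X
    for at least ceil(n/k) agents."""
    n = len(points)
    d_to_X = np.linalg.norm(points[:, None, :] - X[None, :, :], axis=-1).min(axis=1)
    for c in candidates:
        if (np.linalg.norm(points - c, axis=-1) < d_to_X).sum() >= math.ceil(n / k):
            return True
    return False

# Example 2 exactly as stated: 10,000 agents at 0, 1,000 at 1, k = 11.
points = np.vstack([np.zeros((10000, 1)), np.ones((1000, 1))])
X_bad = np.array([[0.0]] + [[1.0]] * 10)        # 1 center at 0, 10 centers at 1
grid = np.linspace(0.0, 1.0, 101)[:, None]      # finite stand-in for continuous M
print(pf_violated(points, X_bad, grid, k=11))   # False: PF holds, yet UP fails
```

Every agent sits exactly on a selected center, so no candidate can be strictly closer and the checker reports no violation — which is precisely the limitation the example illustrates.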
We will use the same example to illustrate our new fairness concept. **Example 3**.: _Consider Figure 1. For the 6 agents and \(k=3\), it has been shown by Micha and Shah [43] that a PF outcome may not exist._ ## 4 Fairness for the Unconstrained Setting We propose a new concept, _Proportionally Representative Fairness (PRF)_, that captures proportional representation fairness concerns. The intuition behind the concept is based on two natural principles: (1) a set of agents that is large enough deserves to have a proportional number of centers 'near' it, and (2) the required proximity of the nearby centers to a subset of agents should also depend on how densely that subset of agents is packed together. We use these two principles to propose a new fairness concept. **Definition 4** (Proportionally Representative Fairness (PRF) for Unconstrained Clustering).: _An outcome \(X\subseteq\mathscr{M}\) with \(|X|=k\) satisfies Proportionally Representative Fairness (PRF) if the following holds. For any set of agents \(S\subseteq\mathscr{N}\) of size at least \(|S|\geq\ell n/k\) such that the maximum distance between pairs of agents in \(S\) is \(y\), the following holds: \(|\{c\in X:\exists i\in S\text{ s.t. }d(i,c)\leq y\}|\geq\ell\)._ PRF has the following features: it implies unanimous proportionality; it is oblivious to and does not depend on the specification of protected groups or sensitive attributes; it is robust to outliers in the data, since the fairness requirements are for groups of points that are sizable enough; and it is scale invariant under multiplication of distances (PRF outcomes are preserved if all the distances are multiplied by a constant). Next, let us illustrate an example where proportional fairness is not achievable but one of the most natural clustering outcomes satisfies PRF. **Example 4** (Requirements of PRF).: _We revisit Example 3 to illustrate the requirements that PRF imposes. For each set \(S\) of size at least \(\ell\lceil\frac{n}{k}\rceil=2\ell\), PRF requires \(\ell\) centers in the relevant neighborhood of agents in \(S\) (see Figure 2). Therefore, for agents 1 and 2, PRF requires that one selected candidate center should be in one of the circles around 1 or 2. For the set of agents \(\{1,2,3\}\), PRF also requires that one candidate center should be in one of the circles around 1, 2 or 3. A natural solution is to have one center located between agents 1, 2, and 3, another center located between agents 4, 5, and 6, and the final center located somewhere not too far from all agents (e.g., between the two groups of agents). Such solutions are intuitively reasonable and fair and, furthermore, satisfy PRF; however, as mentioned in Example 3, this instance does not admit a proportional fairness solution._ Next, we highlight that the well-known \(k\)-means or \(k\)-median solutions do not necessarily satisfy PRF. Example 5 is adapted from an example by Chen et al. [19]. **Example 5** (\(k\)-means does not satisfy PRF).: _Consider Figure 3, in which there are \(n/3\) agents / data points uniformly distributed on the perimeter of each of the three circles. If \(k=3\), a \(k\)-means or \(k\)-median solution will place one center close to the small circles and two centers inside the big circle. In contrast, PRF requires that each of the circles gets its own center facility._ PRF is a property that requires conditions on an exponential number of subsets of agents.
For each subset of agents, it enforces representational constraints pertaining to an infinite number of neighborhood distances. At face value, it is not clear whether an outcome satisfying PRF exists, as proportional fairness cannot always be guaranteed [43]. Secondly, even if a PRF outcome is guaranteed to exist, the complex constraints seem computationally challenging. Perhaps surprisingly, we show that not only is a PRF outcome guaranteed to exist, it can be computed in polynomial time.

Figure 1: An example instance with 6 agents and \(k=3\) for which a PF outcome does not exist.

Figure 3: The \(k\)-means solution may not satisfy PRF.

The algorithm intuitively works as follows. Firstly, instead of considering infinitely many possible centers, we restrict our attention to the \(n\) candidate centers that coincide with the agents' locations. Each agent is given an initial weight of \(1\). This weight dynamically decreases over the course of the algorithm. The algorithm can be viewed as gradually expanding the neighborhoods of each of the agents.4 Instead of continuously expanding the neighborhood, we only consider at most \(n^{2}\) possible values of radii for which the neighborhoods of the agents are considered. If there exists some location in the candidate set that is in the intersection of the neighborhoods of a group of agents with total weight at least \(n/k\), we select one such candidate. The process is continued until \(k\) centers are chosen. The algorithm is formally specified as Algorithm 1. Footnote 4: The idea of expanding neighborhoods is used in other algorithms in the literature, such as the Greedy Capture algorithm studied by Micha and Shah [43] for the unconstrained setting, which is a continuous algorithm that grows neighborhoods around each possible point. Unlike our algorithm, it does not involve reweighting of agents. More importantly, Greedy Capture does not satisfy PRF. **Lemma 1**.: _Algorithm 1 terminates in polynomial time \(O(n^{4}k^{2})\) and returns \(k\) centers._ Proof.: We prove by induction that if the number of candidates selected is less than \(k\), then there is a way to select at least one more center. If the number of candidates selected is less than \(k\), there is still an aggregate weight of at least \(n/k\) on the set of all agents. These agents can get a candidate selected from their neighborhoods if the neighborhood radius is large enough. In particular, for \(d_{|D|}=\max_{i,j\in N}d(i,j)\), some unselected candidate can be selected. Note that for a given radius \(d_{j}\), at most \(k\) candidates can be selected. If no more candidates can be selected for a given \(d_{j}\), the next radius \(d_{j+1}\) is considered; there are \(O(n^{2})\) different distances to be considered. Next, we bound the running time. The distances are sorted in \(O(n^{2}\log(n^{2}))\) time. For each distance that we consider, we check for each of the \(kn\) locations whether they get sufficient support of \(n/k\). It takes time \(kn^{2}\) to check if some location has support \(n/k\) for the current neighborhood distance. If no location has enough support, we move to the next distance. If some location has enough support, we need to select one less location. Therefore, we need to check whether some location has enough support at most \(\max(k,n^{2}k)\) times. Therefore the running time is \(O(n^{2}\log(n^{2}))+O((kn^{2})(n^{2}k))=O(n^{4}k^{2})\).
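Since the selection rule is only described in prose here, a compact executable sketch may help. It is a sketch only (Python with numpy is an assumption): passing the agent locations themselves as the candidate set realizes Algorithm 1 (cf. Remark 1 below), while an arbitrary finite candidate set realizes Algorithm 2 of the next section; and since the pseudocode leaves open exactly how the \(n/k\) weight decrease is distributed within the supporting agents, proportional charging is used as one reasonable choice:

```
import numpy as np

def prf_select(points, candidates, k):
    """Greedy reweighting selection: sweep the sorted pairwise distances as
    neighborhood radii; whenever some unselected candidate's neighborhood
    carries agent weight >= n/k, select it and remove n/k weight in total
    from the supporting agents."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - candidates[None, :, :], axis=-1)
    radii = np.unique(dist)              # the sorted distance set D
    w = np.ones(n)                       # each agent starts with weight 1
    available = list(range(len(candidates)))
    chosen, j, tol = [], 0, 1e-9
    while len(chosen) < k:
        support = {c: w[dist[:, c] <= radii[j]].sum() for c in available}
        heavy = [c for c in available if support[c] >= n / k - tol]
        if not heavy:
            j += 1                       # no candidate qualifies: grow the radius
            continue
        c_star = max(heavy, key=lambda c: support[c])
        chosen.append(c_star)
        available.remove(c_star)
        mask = dist[:, c_star] <= radii[j]             # the supporting agents N'
        w[mask] -= w[mask] * (n / k) / w[mask].sum()   # charge n/k in total
    return candidates[chosen]

# Unconstrained use (Algorithm 1): candidates coincide with the agent locations.
pts = np.random.default_rng(0).normal(size=(60, 2))
print(prf_select(pts, pts, k=3))
```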
**Lemma 2**.: _Algorithm 1 finds a set of \(k\) centers that satisfies PRF._ Proof.: Suppose for contradiction that the algorithm finds a set of \(k\) centers \(W\) that violates PRF. In that case, there exists a set of agents \(S\subseteq\mathscr{N}\) of size at least \(|S|\geq\ell n/k\) such that the maximum distance between pairs of agents in \(S\) is \(y\) but the number of locations that are within \(y\) of some agent in \(S\) is at most \(\ell-1\). In that case, consider the snapshot of the algorithm when the neighborhood distance \(y=\max_{i,j\in S}d(i,j)\) is considered. At this point, \(|\{c\in X:\exists i\in S\text{ s.t. }d(i,c)\leq y\}|\leq\ell-1\). Since agents in \(\mathscr{N}\) have only used their weight towards selecting locations within \(y\) up till this point, it follows that \(\sum_{i\in S}w_{i}\geq\ell\frac{n}{k}-(\ell-1)\frac{n}{k}=\frac{n}{k}\). It follows that the agents in \(S\) still have total weight at least \(\frac{n}{k}\) to select one more location that is at distance at most \(y\) from some agent in \(S\). Hence, at this stage, when the neighborhood distance is \(y\), the algorithm would have selected at least one more center within distance \(y\) than in the outcome \(W\), a contradiction. ## 5 Fairness for the Discrete Setting We presented PRF as a desirable concept for the unconstrained setting. It is not immediate how to adapt the concept to the discrete setting. Applying the unconstrained PRF definition to the discrete setting leads to issues of non-existence. On the other hand, some natural attempts to account for the coarseness of the candidate space, \(\mathscr{M}\), can lead to a concept that is too weak. For example, one alternative is to require that an outcome \(X\) be such that, for any set of agents \(S\subseteq\mathscr{N}\) of size at least \(|S|\geq\ell n/k\) such that the maximum distance between pairs of agents in \(S\) is \(y\), the following holds: \(|\{c\in X:\exists i\in S\text{ s.t. }d(i,c)\leq y\}|\geq\min\{\ell,|\cup_{j\in S}B_{y}(j)\cap\mathscr{M}|\}\), where \(B_{r}(i)\) denotes a ball of radius \(r\) around agent \(i\). Alternatively, we could replace the right-hand side of the final inequality with \(\min\{\ell,|\cap_{i\in S}B_{y}(i)\cap\mathscr{M}|\}\). Both of these versions are too weak. To see this, suppose \(k=1\), all agents are located at \(0\), and \(\mathscr{M}=(\ldots,-2,-1,1,2,3,\ldots)\); then neither version places any restriction on the facility location, and neither even implies UP. To resolve these issues, we need to take into account that the nearest candidate locations may be very far from a subset of agents. We adapt our PRF concept for the discrete setting via a careful adaptation of the PRF concept for the unconstrained setting. **Definition 5** (Proportionally Representative Fairness (PRF) for Discrete Clustering).: _An outcome \(X\subseteq\mathscr{M}\) with \(|X|=k\) satisfies PRF if the following holds. For any set of agents \(S\subseteq\mathscr{N}\) of size at least \(|S|\geq\ell n/k\), if there are \(\ell^{\prime}\leq\ell\) candidates from \(\mathscr{M}\) that are at distance at most \(y\) from all agents in \(S\), the following holds: \(|\{c\in X:\exists i\in S\text{ s.t. }d(i,c)\leq y\}|\geq\ell^{\prime}\)._ For the discrete setting, we propose a new algorithm (Algorithm 2). Algorithm 2 is similar to Algorithm 1, which was designed for the unconstrained setting. Similar to Algorithm 1, Algorithm 2 terminates in polynomial time and returns \(k\) centers (Lemma 3), and its output satisfies PRF (Lemma 4). The proofs of these lemmas are similar to those of Lemmas 1 and 2.
```
Input: metric space \(\mathscr{X}\) with a distance measure \(d:\mathscr{X}\times\mathscr{X}\rightarrow\mathbb{R}^{+}\cup\{0\}\), a finite multiset \(\mathscr{N}\subseteq\mathscr{X}\) of \(n\) data points (agents), a finite set of candidate locations \(\mathscr{M}\), and positive integer \(k\).
Output: A multiset of \(k\) centers
\(w_{i}\longleftarrow 1\) for each \(i\in\mathscr{N}\)
Consider the set \(D=\{d(i,c)\mid i\in\mathscr{N},c\in\mathscr{M}\}\). Order the entries in \(D\) as \(d_{1}\leq d_{2}\leq\cdots\leq d_{|D|}\).
\(j\longleftarrow 1\)
\(W\longleftarrow\emptyset\)
while \(|W|<k\) do
  \(C^{*}=\{c\in\mathscr{M}\mid\sum_{i\in\mathscr{N}\mid d(i,c)\leq d_{j}}w_{i}\geq n/k\}\)
  if \(C^{*}=\emptyset\) then
    \(j\longleftarrow j+1\)
  else
    Select some candidate \(c^{*}\) from \(C^{*}\) such that \(c^{*}=\arg\max_{c^{\prime}\in C^{*}}\sum_{i\in\mathscr{N}\mid d(i,c^{\prime})\leq d_{j}}w_{i}\)
    \(W\longleftarrow W\cup\{c^{*}\}\); \(\mathscr{M}\longleftarrow\mathscr{M}\setminus\{c^{*}\}\)
    \(N^{\prime}\longleftarrow\{i\in\mathscr{N}\,:\,d(i,c^{*})\leq d_{j}\}\)
    Modify the weights of agents in \(N^{\prime}\) so the total weight of agents in \(N^{\prime}\), i.e., \(\sum_{i\in N^{\prime}}w_{i}\), decreases by exactly \(n/k\).
Return \(W\)
```
**Algorithm 2** Algorithm for Discrete Clustering **Lemma 3**.: _Algorithm 2 terminates in polynomial time \(O({|\mathscr{M}|}^{2}n^{2})\) and returns \(k\) centers._ Proof.: We prove by induction that if the number of candidates selected is less than \(k\), then there is a way to select at least one more center. If the number of candidates selected is less than \(k\), there is still an aggregate weight of at least \(n/k\) on the set of all agents. These agents can get a candidate selected from their neighborhood if the neighborhood distance is large enough. In particular, for \(d_{|D|}=\max_{i\in N,c\in\mathscr{M}}d(i,c)\), some unselected candidate from \(\mathscr{M}\) can be selected. Note that for a given radius \(d_{j}\), at most \(k\) candidates can be selected. If no more candidates can be selected for a given \(d_{j}\), the next distance \(d_{j+1}\) is considered. Next, we bound the running time. There are \((n\cdot{|\mathscr{M}|})\) different distances to be considered. These distances are sorted in \(O(n\cdot{|\mathscr{M}|}\log(n\cdot{|\mathscr{M}|}))\) time. For each distance that we consider, we check for each of the \({|\mathscr{M}|}\) locations whether they get sufficient support of \(n/k\). It takes time \({|\mathscr{M}|}\cdot n\) to check if some location has support \(n/k\) for the current neighborhood distance. If no location has enough support, we move to the next distance. If some location has enough support, we need to select one less location. Therefore, we need to check whether some location has enough support at most \(\max(k,n\cdot{|\mathscr{M}|})\) times. Therefore the running time is \(O(n\cdot{|\mathscr{M}|}\log(n\cdot{|\mathscr{M}|}))+O(({|\mathscr{M}|}\cdot n)(k+n\cdot{|\mathscr{M}|}))=O({|\mathscr{M}|}^{2}n^{2})\). The running time analysis is thus similar to that in the proof of Lemma 1. **Lemma 4**.: _Algorithm 2 finds a set of \(k\) centers that satisfies PRF._ Proof.: Suppose for contradiction that the algorithm finds a set of \(k\) centers \(W\) that violates PRF.
In that case, there exists a set of agents \(S\subseteq\mathscr{N}\) of size at least \(|S|\geq\ell n/k\) such that there are at least \(\ell^{\prime}\leq\ell\) locations in \(\mathscr{M}\) at distance at most \(y\) from all agents in \(S\), but the number of locations in \(W\) that are within \(y\) of at least some agent in \(S\) is at most \(\ell^{\prime}-1\). In that case, consider the snapshot of the algorithm when the neighborhood distance \(y\) is considered. At this point, the number of locations that are within \(y\) of at least some agent in \(S\) is at most \(\ell^{\prime}-1\). Since agents in \(\mathscr{N}\) have only used their weight towards selecting locations within \(y\) up till this point, it follows that \(\sum_{i\in S}w_{i}\geq\ell\frac{n}{k}-(\ell^{\prime}-1)\frac{n}{k}\geq\frac{n}{k}\). It follows that the agents in \(S\) still have total weight at least \(\frac{n}{k}\) to select one more location that is at distance at most \(y\) from some agent in \(S\). Hence, at this stage, when the neighborhood distance is \(y\), the algorithm would have selected at least one more center within distance \(y\) than in the outcome \(W\). Thus \(W\) is not the correct output of the algorithm, a contradiction. **Remark 1**.: _Algorithm 1 can be viewed as first setting \(\mathscr{M}\) to the multiset of candidate locations corresponding to agents in \(\mathscr{N}\) and then running Algorithm 2._ **Remark 2**.: _Chen et al. [19] present an algorithm called "Greedy Capture." An equivalent algorithm, called \(ALG_{g}\), is also presented by Li et al. [40]. However, it can be shown that these algorithms do not satisfy PRF. In fact, for given \(k\), these algorithms may fail to output \(k\) candidate locations.5_ Footnote 5: To see this, let \(k=3\) and let \(\mathbf{x}=(0,0,1)\). Chen et al.’s Greedy Capture algorithm and Li et al.’s ALG\({}_{g}\) will select candidate locations \(0\) and \(1\), but will not output a 3rd location. Of course, this issue could be rectified by arbitrarily choosing a 3rd candidate location—but then there is no guarantee that the set of locations would satisfy PRF. We consider two natural and stronger notions of PRF in the discrete setting (that also apply to the continuous setting). We will show that the versions we consider do not guarantee the existence of a fair outcome in all settings. The PRF concept can be increasingly strengthened as follows. **Definition 6** (Proportionally Representative Fairness (PRF)-II for Discrete Clustering).: _An outcome \(X\subseteq\mathscr{M}\) with \(|X|=k\) satisfies PRF-II if the following holds. For any set of agents \(S\subseteq\mathscr{N}\) of size at least \(|S|\geq\ell n/k\), if there are \(\ell^{\prime}\leq\ell\) candidates from \(\mathscr{M}\) that are at distance at most \(y\) from all agents in \(S\), the following holds: there exists some \(i\in S\) such that \(|\{c\in X:d(i,c)\leq y\}|\geq\ell^{\prime}\)._ **Definition 7** (Proportionally Representative Fairness (PRF)-III for Discrete Clustering).: _An outcome \(X\subseteq\mathscr{M}\) with \(|X|=k\) satisfies PRF-III if the following holds. For any set of agents \(S\subseteq\mathscr{N}\) of size at least \(|S|\geq\ell n/k\), if there are \(\ell^{\prime}\leq\ell\) candidates from \(\mathscr{M}\) that are at distance at most \(y\) from all agents in \(S\), the following holds: \(|\{c\in X:d(i,c)\leq y\text{ for all }i\in S\}|\geq\ell^{\prime}\)._ The following example shows that an outcome satisfying PRF-II may not exist (and hence an outcome satisfying PRF-III may not exist either). **Example 6**.: _Take \(n=4,k=2\) with the following location profile on the unit interval \([0,1]\): \(\mathbf{x}=(0,0,1,1)\)._
There are 4 candidate locations at \(\mathscr{M}=(0,0.5,0.5,1)\). It is immediate that the 2 groups of agents at \(0\) and \(1\) must have facilities at \(0\) and \(1\). But this violates PRF-II. The set of agents \(N\) (size 4) has 2 candidate locations within a distance of \(0.5\) of all agents; however, no agent in \(N\) has 2 facility locations within a distance of \(0.5\) of them. ## 6 Experiment We now apply Algorithm 1 to four real-world datasets. We consider the discrete domain in which \(\mathscr{M}=\mathscr{N}\), since most clustering datasets only provide the data points. Algorithm 1 is guaranteed to satisfy PRF; therefore, we are interested in analyzing the performance of the algorithm with respect to other objectives. A prominent objective for clustering algorithms is to minimize the _Mean Squared Distance (MSD) to the closest center_: the average squared distance between each datapoint and its closest center. We consider this performance measure as the number of centers ranges from \(k=1\) to \(100\). In addition to MSD to the closest center, we analyze two other related measures: MSD to the closest \(k/2\) centers and MSD to the closest \(k\) centers. Intuitively, MSD to the closest \(k/2\) centers is the average squared distance between each datapoint and its \(k/2\) closest centers (MSD to the closest \(k\) centers is similar). These measures capture--to varying extents--the idea that it may be desirable to have datapoints located close to more than just one center; this idea is at the core of the proportional representation concept. A small helper implementing these measures is sketched after the dataset descriptions below. To benchmark Algorithm 1's performance, we also implement two other algorithms: the well-known \(k\)-means++ algorithm6 of Arthur and Vassilvitskii [6] and \(\text{ALG}_{g}\) of Li et al. [40]. For all of our MSD measures, a smaller value is more desirable. Footnote 6: I.e., Lloyd’s algorithm for the \(k\)-means minimization objective with a particular initialization. Given our focus on the discrete domain, we use a \(k\)-means++ algorithm that only chooses centers from among the data points. _Datasets._ The four datasets that we analyze are from the UCI Machine Learning Repository [26]. We summarize these below.7 Footnote 7: The datasets are licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0); for details, please refer to [https://archive-beta.ics.uci.edu/ml/donation-policy](https://archive-beta.ics.uci.edu/ml/donation-policy).

Figure 4: HCV dataset: MSD to closest 1, \(k/2\), and \(k\) centroids.

**HCV dataset**: contains information about patients with Hepatitis C and blood donors. **Seeds dataset**: describes measurements of geometrical properties of kernels belonging to three different varieties of wheat. The data consists of 7 real-valued variables: area, perimeter, compactness, length and width of kernel, an asymmetry coefficient, and length of kernel groove. **Buddy-move dataset**: consists of user information extracted from user reviews published on www.holidayiq.com about points of interest in South India. The attributes are integer-valued, consisting of the number of reviews for: sports complexes, religious institutions, natural and cultural areas, shopping areas, and parks and picnic spots. **Wholesale dataset**: consists of client information for a wholesale distributor. It includes the annual spending on various product categories, e.g., fresh foods, milk, groceries, frozen foods, detergents and paper products, delicatessen, retail, and location.
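The promised helper is a few lines on top of the pairwise distances; it is a sketch (numpy assumed), and since the text names the measure "Mean Squared Distance", squaring the distances is the reading adopted here. Setting \(j\) to \(1\), \(k/2\), or \(k\) gives the three measures:

```
import numpy as np

def msd_to_j_closest(points, centers, j):
    """Mean squared distance from each data point to its j closest centers."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    nearest_j = np.sort(d, axis=1)[:, :j]   # the j smallest distances per point
    return float((nearest_j ** 2).mean())
```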
_Results._ We summarize our results in Table 1 and illustrate our analysis in the subsequent figures. Across all the datasets, Algorithm 1's performance relative to \(k\)-means improves when moving from MSD to the closest center to MSD to the closest \(k/2\) centers and MSD to the closest \(k\) centers. A similar pattern is observed for \(ALG_{g}\), which may be expected and is reassuring since--like Algorithm 1--\(ALG_{g}\) is motivated by the idea of proportional fairness. For two of the datasets (Wholesale and HCV), Algorithm 1 offers substantially better performance compared to \(k\)-means with respect to MSD to the closest \(k\) centers. In these cases, Algorithm 1 also outperforms \(ALG_{g}\). For MSD to the closest \(k/2\) centers, \(ALG_{g}\) and Algorithm 1 either outperform or perform similarly to \(k\)-means. In contrast, and as expected, \(k\)-means outperforms both algorithms with respect to MSD to the closest center. With this measure, \(ALG_{g}\) typically outperforms Algorithm 1. The experiments illustrate that other well-known algorithms produce different outcomes compared to our Algorithm 1, and this difference can be quantified via an intuitive PRF-related metric, i.e., minimizing the MSD to multiple centers. ## 7 Discussion and conclusion We revisited a classical AI problem with a novel solution concept that is based on ideas from the theory of social choice and fair division. We proposed a natural fairness concept called proportionally representative fairness (PRF) for clustering in both the unconstrained and discrete settings. PRF is guaranteed to exist and has desirable features: it implies unanimous proportionality, does not require the specification of protected groups, and is robust to outliers and multiplicative scaling. A natural next step is to identify and analyze stronger versions of the PRF axiom that still guarantee existence. It will be interesting to consider more refined versions of the PRF algorithm that make more clever choices while selecting candidates with sufficient weighted support. Another direction for future research is to understand how much impact the PRF constraint imposes on standard optimization objectives such as \(k\)-means, \(k\)-median, or \(k\)-center. For specific application domains where clustering is performed, it may be possible to use additional features of the application and a clearer interpretation of the data to formalize more nuanced versions of fairness [25] and perhaps also find natural proportional fairness axioms that are incentive compatible.
2305.15097
Computer Vision for Construction Progress Monitoring: A Real-Time Object Detection Approach
Construction progress monitoring (CPM) is essential for effective project management, ensuring on-time and on-budget delivery. Traditional CPM methods often rely on manual inspection and reporting, which are time-consuming and prone to errors. This paper proposes a novel approach for automated CPM using state-of-the-art object detection algorithms. The proposed method leverages, e.g., YOLOv8's real-time capabilities and high accuracy to identify and track construction elements within site images and videos. A dataset was created, consisting of various building elements and annotated with relevant objects for training and validation. The performance of the proposed approach was evaluated using standard metrics, such as precision, recall, and F1-score, demonstrating significant improvement over existing methods. The integration of Computer Vision into CPM provides stakeholders with reliable, efficient, and cost-effective means to monitor project progress, facilitating timely decision-making and ultimately contributing to the successful completion of construction projects.
Jiesheng Yang, Andreas Wilde, Karsten Menzel, Md Zubair Sheikh, Boris Kuznetsov
2023-05-24T12:27:42Z
http://arxiv.org/abs/2305.15097v1
# Computer Vision for Construction Progress Monitoring: A Real-Time Object Detection Approach ###### Abstract Construction progress monitoring (CPM) is essential for effective project management, ensuring on-time and on-budget delivery. Traditional CPM methods often rely on manual inspection and reporting, which are time-consuming and prone to errors. This paper proposes a novel approach for automated CPM using state-of-the-art object detection algorithms. The proposed method leverages, e.g., YOLOv8's real-time capabilities and high accuracy to identify and track construction elements within site images and videos. A dataset was created, consisting of various building elements and annotated with relevant objects for training and validation. The performance of the proposed approach was evaluated using standard metrics, such as precision, recall, and F1-score, demonstrating significant improvement over existing methods. The integration of Computer Vision into CPM provides stakeholders with reliable, efficient, and cost-effective means to monitor project progress, facilitating timely decision-making and ultimately contributing to the successful completion of construction projects. Keywords: AI and digital transformation, Digital Twins. ## 1 Introduction Construction progress monitoring (CPM) is a critical aspect of effective project management, as it ensures timely delivery and budget adherence. As construction projects become increasingly complex, the importance of accurate and efficient progress monitoring cannot be overstated. Traditional CPM methods largely depend on manual inspection and reporting, which involve site visits, visual observation, and documentation of construction progress. These methods are often time-consuming, labor-intensive, and error-prone, leading to potential delays or reduced overall project performance. Furthermore, manual CPM methods can also be impacted by human subjectivity, resulting in inconsistencies and inaccuracies in the collected data. Comprehensive and consistent CPM is a major contribution to quality management in the AECO sector. The quality of the work executed has a substantial impact on a building's performance and is in some cases a pre-requisite for the introduction of novel, innovative business models, such as energy service contracting [1], [2], predicted building automation and control [3], or performance-based maintenance [4]. Therefore, we selected the monitoring of the window installation process as the example for our research. Instantaneous monitoring and fast analysis of the monitoring results will enable the quality managers appointed by the property owner to contact the main contractor and request the correction of low-quality installations. Numerous sub-contractors are involved in the installation process; thus, our paper explains AI support of a highly collaborative work process on construction sites. Early work in the area of digital CPM focused on the deployment of mobile, wearable devices on construction sites [5] and the development of flexible, dynamic business-process models [6]. In recent years, there has been a growing interest in leveraging technological advancements to automate CPM [7]. Several approaches were explored, such as image-based techniques, laser scanning, Building Information Modeling (BIM) [8, 9], and semantic web technologies [10, 11]. However, these methods have their own drawbacks. Image-based techniques typically require manual processing of the collected images.
Laser scanning and BIM can be expensive and time-consuming to implement, and semantic-web technologies cannot be easily integrated into commercially available solutions. As a result, there is a need for more efficient, accurate, and cost-effective CPM solutions. Deep learning and computer vision offer opportunities for automating CPM through object detection algorithms. These algorithms can identify and track objects within images and videos, providing real-time information on their location, size, and orientation. This paper introduces a novel approach to automated CPM using the state-of-the-art object detection algorithm YOLOv8. Leveraging YOLOv8's real-time capabilities and high accuracy, our method identifies and tracks construction elements within site images. YOLOv8, a single-stage object detection algorithm, outperforms other methods in accuracy and speed, and its architecture enables simultaneous detection of multiple objects, making it suitable for complex construction sites. The primary objective of this study is to explore the potential of YOLOv8 as an effective tool for automating CPM, by evaluating its performance in detecting and tracking construction elements within a custom dataset of construction site images and videos. The dataset was created by collecting images and videos from various construction projects and annotating them with relevant objects, such as construction materials or equipment. The performance of the proposed approach was evaluated using standard metrics, e.g. precision and recall, and compared to existing methods. The remainder of the paper is structured as follows: Section 2 provides a review of the background and related work on CPM and object detection algorithms. Section 3 introduces the YOLOv8 object detection algorithm and its relevance to CPM. Section 4 describes the dataset creation and annotation process, while Section 5 presents the implementation of the proposed CPM method. Section 6 discusses the results and potential implications of the study, and Section 7 concludes the paper with recommendations for future work. ## 2 Background and Related Work In recent years, there has been growing interest in the use of computer vision techniques for construction site monitoring. Early work explored the use of computer vision in the AEC industry [12], focusing on the benefits and challenges of implementing these technologies. Computer vision techniques gained popularity by incorporating object detection, tracking, and activity recognition into real-time site monitoring [13]. Subsequent research investigated the automatic detection of construction site safety hazards using computer vision [14], further emphasizing the potential of these methods in improving safety and efficiency on construction sites. In parallel, a review of computer vision methods for 3D construction site modelling was conducted [15], highlighting the usefulness of techniques such as Structure-from-Motion (SfM) and photogrammetry for generating 3D models from images or video data collected on-site. More recent projects have critically reviewed automated visual recognition of construction site objects [16] and investigated smart construction site monitoring systems based on Unmanned Aerial Vehicles (UAVs) and artificial intelligence [17]. These studies demonstrated the effectiveness of combining AI technologies, including computer vision algorithms, with UAVs for progress monitoring, safety analysis, and resource management.
The latest research includes a comprehensive review of object detection and tracking in the construction industry [18], identifying the challenges that need to be addressed to improve the effectiveness of these techniques in construction site monitoring. ### The Use Case: Progress Monitoring for Window Installations The projects above provide valuable insights and methodologies that are applicable and adaptable to our current project, which aims to develop CPM for the window installation process based on object detection modelling. The installation of windows can be monitored from the outside of buildings, e.g. by using drones, and from the inside, i.e. from the room to which the window belongs.
Figure 1: Windows in rough construction
Figure 2: Windows in façade ("peel away")
## 3 Object Detection Algorithm Object detection is a pivotal computer vision task that involves identifying and localizing instances of objects within images or videos. Applications exist in various fields, such as autonomous vehicles, robotics, and construction progress monitoring [19], [20], [21]. Recently, convolutional neural networks (CNNs) have significantly improved object detection capabilities by automatically learning hierarchical features from raw pixel data, offering higher accuracy and better generalization [22], [23]. The latest version of the YOLO object detection algorithms, renowned for their real-time processing capabilities and competitive accuracy, incorporates several architectural improvements and optimization techniques that enhance both accuracy and speed [24], including: * a modified backbone architecture based on CSPDarknet53 with Cross Stage Partial (CSP) connections, * an enhanced feature pyramid network for better handling of objects with varying sizes and aspect ratios, * optimized anchor boxes tailored to the specific object classes and aspect ratios present in the training dataset, and * a Mosaic data augmentation technique that exposes the model to a diverse set of object scales, orientations, and lighting conditions. Additionally, it employs an improved loss function for more accurate and stable training and boasts an efficient implementation using the CUDA and cuDNN libraries. ### Object-Identification for Window Installation In the context of CPM and quality management, the documentation of window installations plays a crucial role in allowing stakeholders to visually assess ongoing work [25]. Timely and accurate CPM for windows is essential for maintaining construction activities' progress according to schedule and quality requirements [26], including the avoidance of so-called 'hidden' errors and omissions. Improved accuracy in object detection provides advanced capabilities for the identification of missing parts, the determination of the correct window type, and the verification of assembly tolerances. The use of object-detection algorithms can enhance safety by replacing manual monitoring processes with automated visual documentation using either integrated devices or UAVs. The risk of accidents and injuries associated with quality management tasks on complex construction sites is reduced as well. Outdoor object detection for windows uses a rather conservative approach, since polished or glazed surfaces pose several problems for object detection algorithms. Window frames often have only a limited number of features to be detected and classified. Finally, different visual effects of the materials used in the installation process (e.g. the color of sealing tapes) make a fully automatic detection process error-prone.
Therefore, it was decided to mark each window with a unique QR-code sticker. Provided sufficient image quality, QR-codes can be detected and located robustly. For outdoor detection of installed windows, we envisage using an active learning approach. In summary, the application of advanced object detection algorithms for CPM, particularly for the indoor part of CPM for windows, provides several benefits, such as: * automatic detection and localization of windows' positions using real-time processing capabilities, * up-to-date information about the status of all steps of the window installation process (see Fig. 3), * proactive decision-making in case of detected anomalies, * fast identification of the actors responsible for corrections, and * instantaneous information provision for actors through electronic media. Thus, automated, AI-based CPM contributes to the high-quality completion of construction projects by facilitating early detection of deviations and timely decision-making for necessary corrections, and by ensuring compliance with project schedules. ## 4 Dataset Creation and Annotation ### Progress Monitoring and Checkpoints Identification Data collection is a crucial component of the project, as it lays the foundation for subsequent model training and prediction. A comprehensive dataset consisting of high-quality images is essential for achieving accurate and reliable results in object detection.

\begin{table}
\begin{tabular}{l l l l} \hline \hline & **Feature** & **Indoor** & **Outdoor** \\ \hline **General Features** & Parties involved & 4 & 6 \\ & Access to Quality Mgr. & yes & scaffold required \\ \hline **Object-Identification Features** & Parts for identification & handles, hinges, actuators, etc. & nearly none \\ & Obstructions & limited & scaffold, powerlines \\ & Objects in background & depends on site & multiple \\ & Mirroring effect & limited & high, due to reflective surfaces \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of Indoor versus Outdoor Features for CPM of window installation

**In case of indoor object identification**: We compiled a representative sample of images covering the range of window construction scenarios to be monitored in the future. We conducted the data collection in January 2023 at the Beyer-Bau between 11 a.m. and 12 p.m. The dataset comprises 347 images, stored in JPG format with a resolution of 3060x4080 pixels. We considered various factors during image capture, including window types, camera position, lighting conditions, and foreground and background diversity. The project's success will depend on the dataset's quality and relevance. After analysing the project requirements, it is clear that process monitoring involves tracking the completion percentage and the order of operations. Fig. 3 describes the window installation process and divides the installation into six parts with corresponding completion percentages. However, an additional subprocess is needed to address the "flexible" steps, such as the combined work of Part 4 and Part 5. Workers must decide whether to perform these tasks before or after completing Part 3, depending on material availability and construction system coordination. The window installation percentage, as shown in Table 2, can be divided into eight "checkpoints," determined by the completion status of the corresponding steps.
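To make the checkpoint idea concrete, a minimal sketch (our own illustration; the class names are hypothetical placeholders, not the label set used in the paper) of mapping a detected per-window class to a completion percentage from Table 2 could look as follows:

```python
# Hypothetical mapping from a detected YOLO class label to the completion
# percentage of the corresponding checkpoint (percentages from Table 2).
CHECKPOINTS = {
    "frame_secured":        20,   # Part 1
    "sashes_installed":     40,   # Part 1 + 2
    "inner_sashes_first":   60,   # Part 1 + 2 + 4
    "gaps_sealed_pending":  65,   # Part 1 + 2 + 3
    "gaps_sealed":          70,   # Part 1 + 2 + 3
    "sealed_inner_sashes":  85,   # Part 1 + 2 + 3 + 4
    "painted":              95,   # Part 1 + 2 + 3 + 4 + 5
    "finished":            100,   # Part 1 + ... + Part 6
}

def window_progress(detected_class: str) -> int:
    """Translate one window's detected class into a completion percentage."""
    return CHECKPOINTS.get(detected_class, 0)   # 0 = not yet started / unknown
```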
By combining data collection, monitoring plan determination, and data preprocessing, we created an efficient and comprehensive framework to monitor window construction progress using YOLO. This approach simplifies the task and ensures the proper characterization of the window installation process while preserving the necessary details for object detection. **In case of outdoor object identification**: The task is to compare a list of target states of building parts with their as-is states. In our case this is a list of windows together with their planned positions and orientations, which is derived automatically from BIM data. Each window is assigned a unique ID, which is printed on a sticker and attached to the corresponding window. The ID is also given in the list. The positions and orientations are given relative to a coordinate system attached to the building, which is marked on the carcass by additional QR-code signs; the positions of the coordinate-system markers are given in the list as well. The second input data set is a large number of images taken by a drone. The images must show a considerable overlap of the depicted scene that would allow rendering a 3D point cloud.
Figure 3: Window installation process
### Annotation and Label Formatting Using makesense.ai, a web-based platform for creating labelled datasets, we manually annotated the dataset. The platform provides an online tool for annotating images, as well as powerful tools for organizing labelled datasets and collaborating on large-scale projects. We selected makesense.ai for its flexibility, ease of use, and advanced sharing capabilities. The labels were exported to YOLO format with one TXT file corresponding to each image. These files contain information on the class of the bounding box, the x and y coordinates of the centre point, and the width and height of the bounding box in normalized xywh format. Class numbers are zero-indexed, and box coordinates are in normalized format with values ranging from 0 to 1. ### Data Augmentation and Dataset Partitioning We applied various image-level augmentations to enhance the dataset's diversity, including horizontal flips, rotations (clockwise and counterclockwise 90\({}^{\circ}\) and \(\pm\)15\({}^{\circ}\)), shearing (\(\pm\)15\({}^{\circ}\)), brightness adjustment (\(\pm\)25%), and blurring (2px). Consequently, the final dataset contained 768 images.

\begin{table}
\begin{tabular}{l l} \hline \hline Completion Percentage & Description \\ \hline 20\% & Part 1: Secure pre-assembled window frame into the wall opening, use support materials to prevent frame creeping and ensure stability. \\ 40\% & Part 1 + Part 2: Attach hinges to the frame, install sashes, and test tightness for smooth operation and no leaks. \\ 60\% & Part 1 + Part 2 + Part 4: Install inner sashes for double-layered windows or count single-layered windows as 40\% complete. \\ 65\% & Part 1 + Part 2 + Part 3: Apply adhesive to fill gaps from Part 1, complete waterproofing, soundproofing, and heat insulation. Seal gaps with plastic strips. \\ 70\% & Part 1 + Part 2 + Part 3: Apply adhesive to fill gaps from Part 1, complete waterproofing, soundproofing, and heat insulation. Seal gaps with plastic strips. \\ 85\% & Part 1 + Part 2 + Part 3 + Part 4: Complete gap filling, waterproofing, soundproofing, and heat insulation; install inner sashes for double-layered windows. \\ 95\% & Part 1 + Part 2 + Part 3 + Part 4 + Part 5: Complete painting and install inner sashes for double-layered windows. \\ 100\% & Part 1 + Part 2 + Part 3 + Part 4 + Part 5 + Part 6: Remove plastic membrane from glass, complete any final miscellaneous tasks. \\ \hline \hline \end{tabular}
\end{table}
Table 2: Window installation percentages
To create appropriate subsets for training, validation, and testing, we partitioned the dataset into three sets: training (729 images, 88%), validation (52 images, 6%), and testing (52 images, 6%). In summary, this chapter described the dataset creation and annotation process for monitoring window construction progress using YOLO. Through careful image selection, defining monitoring plans, and pre-processing data, we successfully developed a robust dataset for training and testing object detection models. The integration of the "Percentage" and "Process" subsystems allowed for a more accurate representation of the window installation process, while proper annotation and data augmentation ensured the quality and diversity of the dataset. This comprehensive approach should provide a solid foundation for the effective application of the YOLO object detection algorithm in monitoring window construction progress, leading to accurate and reliable results in real time. ## 5 Implementation **Indoor CPM of window installation**: The object detection model was implemented using the YOLO algorithm. To optimize the model's performance, various hyperparameters were carefully selected. The chosen epoch count was 1000, the image size was set to 640, and the batch size was 32. The Adam optimizer was utilized with an initial learning rate of 5e-4 and a final learning rate of 1e-3. Plotting was enabled to visualize the training progress. For the training process, a powerful hardware setup was employed, which included an NVIDIA A100-SXM4-40GB GPU, 11 Intel(R) Xeon(R) CPU cores at 2.20GHz, and 85.3GB of RAM. This setup ensured a smooth and efficient training experience, allowing the model to learn from the provided dataset effectively. The dataset used for training, collected in January 2023 at Beyer-Bau, was preprocessed and annotated to create a comprehensive training set. The dataset comprised 347 high-resolution images, capturing various window construction scenarios, camera positions, lighting conditions, and foreground and background diversity. The training set enabled the model to learn and differentiate between the various construction stages effectively. During the implementation, the YOLO-based object detection model was designed to recognize critical checkpoints of the construction progress, such as the completion percentages and flexible steps in the process. This approach allowed the model to monitor the window installation process effectively and efficiently. The implementation also involved the creation of a monitoring plan, which combined the "Percentage" and "Process" subsystems to effectively characterize the window installation process. By transforming the continuous monitoring of window installation into the detection of windows at discrete stages of the construction process, the entire task was recast in a form well suited to the YOLO algorithm.
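For illustration, a training run with the hyperparameters reported above could look as follows using the ultralytics package (a sketch under stated assumptions: the model variant yolov8n.pt and the dataset config name windows.yaml are ours, and since the API's lrf argument is a final-learning-rate fraction of lr0, the mapping of the paper's "final learning rate of 1e-3" onto it is our interpretation and is omitted here):

```python
from ultralytics import YOLO

# Start from a pretrained YOLOv8 checkpoint (variant chosen here as an example).
model = YOLO("yolov8n.pt")

results = model.train(
    data="windows.yaml",   # dataset config: image paths and the checkpoint classes
    epochs=1000,           # upper bound; early stopping ends training earlier
    imgsz=640,
    batch=32,
    optimizer="Adam",
    lr0=5e-4,              # initial learning rate, as reported in the text
    patience=50,           # stop after 50 epochs without improvement (Section 6)
    plots=True,            # visualize training progress
)
```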
**Outdoor CPM of window installation**: The algorithm for object identification comprises three steps: 1. In the first step, all images are examined with respect to suitability, i.e. sharpness, contrast, and exposure. Then the images are registered, which means that the geographic coordinate system is established. For each image, the camera position as well as the positions of key features are computed. 2. QR-code detection and localization is carried out for each image. Recognized QR-codes are checked against the list of target states of each window. If a matching window-ID is found, the position is checked and the detected window-ID is removed from the list. 3. The result of step 2 is a list of windows which could not be detected by their QR-codes. There are many possible reasons for this: the necessary images might not have been taken or delivered, the technical quality of some images might be poor, the window might be hidden by some other structure, the QR-code sticker might have fallen off, etc. Some of these reasons can be checked automatically. Regardless of the result, further human action must be taken to either verify the state of the windows by alternative means or to request further image data necessary to finalize the outdoor object detection. ## 6 Results and Discussion ### Results After completing the training process, the model achieved its best performance at epoch 82, with the best model saved as best.pt. An early stopping mechanism was employed because no improvement was observed over the last 50 epochs, which allowed for efficient use of computational resources. In total, 132 epochs were completed in 0.227 hours. The evaluation of the trained model was carried out on a separate validation dataset. The model demonstrated promising results (see Fig. 4), with an overall mAP50 (mean Average Precision at 50% IoU) of 0.953 and an mAP50-95 (mean Average Precision at 50% to 95% IoU) of 0.678. These metrics indicate a high degree of accuracy in detecting windows at different stages of the construction process. The model also showed high precision and recall scores (see Fig. 5) across the various completion percentages and flexible steps, suggesting a good understanding of the different stages and their respective checkpoints. ### Discussion The results obtained from the trained YOLO-based object detection model indicate its potential for monitoring window construction progress effectively. The model successfully identified various checkpoints and completion percentages in the window installation process, providing accurate and timely information on the construction progress. The high mAP scores across different IoU thresholds demonstrate the model's robustness and ability to generalize well to new data. The use of the YOLO algorithm for this task proved to be a suitable choice, as it allowed for a streamlined and efficient approach to process monitoring. The combination of the "Percentage" and "Process" subsystems provided a comprehensive characterization of the window installation process, enabling the model to capture the necessary details for accurate object detection. One potential limitation of the current implementation is the reliance on a single dataset for training, which may not cover all possible scenarios and variations in window construction. To improve the model's generalizability further, it would be beneficial to include more diverse data from various construction sites and time periods. In conclusion, the YOLO-based object detection model for monitoring window construction progress has demonstrated promising results, indicating its potential for practical application in the construction industry. Future work could involve testing the model on real-world construction sites to evaluate its performance in real time and exploring the integration of the model with other monitoring systems to provide a comprehensive solution for construction progress tracking.
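For reference, the reported validation metrics could be reproduced from the saved checkpoint along the following lines (a sketch: the checkpoint path is the ultralytics default and an assumption on our side; the attribute names follow that API):

```python
from ultralytics import YOLO

# Load the best checkpoint saved during training (default ultralytics run path).
best = YOLO("runs/detect/train/weights/best.pt")

# Evaluate on the validation split recorded in the training configuration.
metrics = best.val()
print(f"mAP50:    {metrics.box.map50:.3f}")   # reported as 0.953 above
print(f"mAP50-95: {metrics.box.map:.3f}")     # reported as 0.678 above
```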
Figure 4: Training Results
## 7 Conclusion and Future Work ### Conclusion This study aimed to develop an efficient and effective method for CPM of window installations. The project involved data collection, pre-processing, model training, evaluation, and analysis. The results demonstrated that the trained model for indoor object detection could accurately identify various checkpoints and completion percentages in the window installation process. The adoption of the YOLO algorithm for this task proved to be a suitable choice, as it allowed for a streamlined and efficient approach to process monitoring. The combination of the "Percentage" and "Process" subsystems provided a comprehensive characterization of the window installation process (see **Fig. 6**), enabling the model to capture the necessary details for accurate object detection. Despite some limitations in the current implementation, such as the reliance on a single dataset for training, the YOLO-based object detection model for monitoring window construction progress has shown promising results, indicating its potential for practical application in the construction industry.
Figure 5: Precision and recall scores
### Future Work To further enhance the model's performance and applicability, several avenues for future work could be explored: * Expand the training dataset: collecting more diverse data would improve the model's ability to adapt to different scenarios and variations in window construction. * Real-world testing: evaluating the model's performance in real-world scenarios will provide insights into its effectiveness and help identify areas for improvement. * Integration with other monitoring systems: exploring the integration of the YOLO-based model with other monitoring systems, such as site management software or Building Information Modelling, could provide a comprehensive solution for CPM. * Adaptation to other construction tasks: investigating the applicability of the YOLO-based object detection model for monitoring other construction tasks, such as masonry or steel erection, could broaden the model's use in the construction industry. By pursuing these future work directions, the YOLO-based object detection model proposed in this paper could become a valuable tool in the construction industry.
Figure 6: Implementation of trained model
## Acknowledgements We would like to express our appreciation to Mr. Fang Jian for his contribution to the project. His knowledge and dedication have been instrumental in advancing our understanding of the subject matter and achieving our research objectives. The publication is part of the research project entitled "iECO - Intelligence Empowerment of Construction Industry", which receives funding from the Bundesministerium für Wirtschaft und Klimaschutz (BMWK) based on a resolution of the German Bundestag. The authors gratefully acknowledge the support and funding from the BMWK. The content of this publication reflects the authors' view only, and the BMWK is not responsible for any use that may be made of the information it contains.
2308.04166
Security of a Continuous-Variable based Quantum Position Verification Protocol
In this work we study quantum position verification with continuous-variable quantum states. In contrast to existing discrete protocols, we present and analyze a protocol that utilizes coherent states and their properties. Compared to discrete-variable photonic states, coherent states offer practical advantages since they can be efficiently prepared and manipulated with current technology. We prove security of the protocol against any unentangled attackers via entropic uncertainty relations, showing that the adversary has more uncertainty than the honest prover about the correct response as long as the noise in the quantum channel is below a certain threshold. Additionally, we show that attackers who pre-share one continuous-variable EPR pair can break the protocol.
Rene Allerstorfer, Llorenç Escolà-Farràs, Arpan Akash Ray, Boris Škorić, Florian Speelman, Philip Verduyn Lunel
2023-08-08T09:56:38Z
http://arxiv.org/abs/2308.04166v1
# Security of a Continuous-Variable based Quantum Position Verification Protocol ###### Abstract In this work we study quantum position verification with continuous-variable quantum states. In contrast to existing discrete protocols, we present and analyze a protocol that utilizes coherent states and their properties. Compared to discrete-variable photonic states, coherent states offer practical advantages since they can be efficiently prepared and manipulated with current technology. We prove security of the protocol against any unentangled attackers via entropic uncertainty relations, showing that the adversary has more uncertainty than the honest prover about the correct response as long as the noise in the quantum channel is below a certain threshold. Additionally, we show that attackers who pre-share one continuous-variable EPR pair can break the protocol. ## 1 Introduction Position-based cryptography allows for protocols in which the geographical location of a party is used as a cryptographic credential. Consider, for example, the establishment of trust between you and someone at a claimed location. Or sending a confidential message that can only be decrypted at a specific location. Part of position-based cryptography is the task of position verification, where an untrusted prover aims to convince verifiers that he is present at a certain position \(P\). This primitive was first introduced by Chandran, Goyal, Moriarty, and Ostrovsky [1], and it has been shown that no classical position verification protocol can exist, due to a universal attack based on cloning the input information. This attack fails in the quantum setting because of the no-cloning theorem [13]. Quantum position verification (QPV) has been studied1 since the early 2000s by several authors [11, 12, 13, 14], but despite the failure of the classical universal attack, a universal quantum attack has since been found [1, 2]. However, this attack consumes an amount of entanglement exponential in the input size and is therefore not practically feasible. Thus, we may still find secure QPV protocols in the bounded-entanglement model. Footnote 1: under the name of ‘quantum tagging’ The analysis of the entanglement resources needed turns out to be a deep question in its own right [1, 2, 3, 4, 5, 6]. Many protocols have since been proposed [1, 1, 1, 14] and different security models have been studied [12, 13, 14]. Recent work has focused on the practicality of implementing position-verification protocols. Aspects such as channel loss and error tolerance of certain QPV protocols must be taken into account [1, 2]. Almost all previously studied QPV protocols have in common that they contain only finite-dimensional quantum systems. The study of QPV using continuous-variable (CV) quantum information, i.e., using infinite-dimensional quantum states, was first mentioned in [15], in which a general attack was shown in the transmission regime \(t\leq 1/2\), but the security of the protocol was not further analyzed. The best known example of CV quantum information is the quantized harmonic oscillator [14, 15, 16], which is usually described by continuous variables such as position and momentum. Continuous-variable quantum systems are particularly relevant for quantum communication and quantum-limited detection and imaging techniques because they provide a quantum description of the propagating electromagnetic field.
Of particular relevance are the eigenstates of the annihilation operator, the so-called coherent states, and their quadrature squeezed counterparts known as squeezed coherent states. The maiden appearance of CV quantum states in a quantum communication protocol was the CV variant of quantum key distribution (QKD). First proposed with discrete [13, 12, 11] and Gaussian [16] encodings of squeezed states, CV-QKD soon saw a variety of protocols published on Gaussian modulation with coherent states [12, 17, 18, 19]. In this paper, we employ many techniques borrowed from the wealth of research available on CV-QKD. Theoretical reviews with practical considerations of CV-QKD can be found in [10, 1]. We extend the ideas of finite-dimensional QPV protocols, and more formally analyze a QPV protocol very similar to the one mentioned in [13]. We provide a general proof of security against attackers who do not have access to entanglement, taking into account attenuation and excess noise in the quantum channel. By way of illustration, we also analyze a number of specific attacks. We show that the attackers can break the scheme if they pre-share one pair of strongly entangled modes. In the finite-dimensional case, the job of the prover is usually to complete a task _correctly_, and attackers are detected by a suspiciously high error rate. This property of QPV protocols changes in the continuous setting, where even the honest prover's answers are drawn from a probability distribution. Therefore, the verifiers' job is to distinguish an honest sample from an adversarial one. Although the generalization of QPV to CV is interesting in itself, the motivation here is practical. CV systems are much simpler to handle in practice and leverage several decades of experience in coherent optical communication technology. One particular advantage is that no true single-photon preparation or detection is necessary. Clean creation and detection of single photons is still expensive and technically challenging, especially if photon number resolution is desired. In contrast, homodyne and heterodyne measurements are easy to implement and a lot of existing infrastructure is geared towards handling light at low-loss telecom wavelengths (1310nm, 1550nm), whereas an ideal single photon source in these wavelength bands still has to be discovered and frequency up-conversion is challenging and introduces new losses and errors. Furthermore, loss causes a decrease in the signal-to-noise ratio in homodyne measurements rather than giving a "no detection" event. This may open new avenues for protection against the usual lossy attack in discrete variable QPV protocols, in which attackers make use of the "no detection" rounds. ## 2 Preliminaries In this section, we introduce the continuous-variable formalism that one encounters in CV-QKD, and some information-theoretic results. The goal of this section is threefold. First, we present the different types of CV states used in the paper. We then discuss displacement measurements that can be performed on these states and how a noisy channel is modeled. Finally, we close the section with some useful results from classical and quantum information theory. ### Gaussian states The Wigner function fully describes an \(N\)-mode bosonic quantum state \(\rho\) and can be obtained from \(\rho\) by the Wigner formula [14] \[W(\mathbf{x},\mathbf{p})=\frac{1}{\pi^{N}}\int_{\mathbb{R}^{N}}e^{2i\mathbf{p}\cdot\mathbf{y}}\langle\mathbf{x}-\mathbf{y}|\rho|\mathbf{x}+\mathbf{y}\rangle\,\mathrm{d}\mathbf{y}. \tag{1}\]
This is sometimes also called the Wigner transformation of the density matrix. The inverse transformation is achieved via the Weyl transform. Gaussian states are defined by the property that their Wigner function is a Gaussian function in phase space. The Wigner function of Gaussian states reads \[W_{\mathrm{G}}(\mathbf{r})=\frac{1}{\pi^{N}\sqrt{\det\Gamma}}\exp\bigl{\{}-(\mathbf{r}-\mathbf{d})^{T}\Gamma^{-1}(\mathbf{r}-\mathbf{d})\bigr{\}}, \tag{2}\] where \(\mathbf{r}=(x_{1},p_{1},\ldots,x_{N},p_{N})\) are the quadrature variables. The vector \(\mathbf{d}\) is the displacement vector, \[d_{i}=\mathbb{E}\hat{r}_{i}=\mathrm{Tr}[\rho\hat{r}_{i}]. \tag{3}\] And \(\Gamma\) is the covariance matrix, \[\Gamma_{ij}=\mathrm{Tr}\bigl{[}\rho\bigl{(}(\hat{r}_{i}-d_{i})(\hat{r}_{j}-d_{j})+(\hat{r}_{j}-d_{j})(\hat{r}_{i}-d_{i})\bigr{)}\bigr{]}. \tag{4}\] ### Displacement measurements of CV states Here we describe homodyne and heterodyne measurements, the two types of possible displacement measurements. For the physics of the measurement process, refer to Chapter 1 of [1]. #### Homodyne Consider a Wigner function \(W(\mathbf{x},\mathbf{p})\). A homodyne measurement of the quadrature \(x_{i}\) yields the following marginal probability distribution \[f_{X_{i}}(x_{i})=\int_{\mathbb{R}^{2N-1}}W(\mathbf{x},\mathbf{p})\,\mathrm{d}\mathbf{p}\,\mathrm{d}x_{1}\ldots\mathrm{d}x_{i-1}\,\mathrm{d}x_{i+1}\ldots\mathrm{d}x_{N}. \tag{5}\] Given a mode, one can choose any axis \(x_{\theta}\) along which to perform a homodyne measurement. In this case, we rotate our reference frame corresponding to the mode to be measured by an angle \(\theta\). We can then perform an integral similar to the one above to obtain \(f_{X_{\theta}}(x_{\theta})\). #### Heterodyne A heterodyne measurement is essentially a double homodyne measurement. The selected mode from \(W(\mathbf{x},\mathbf{p})\) is mixed with vacuum on a balanced beamsplitter. A homodyne measurement is then performed on the two output modes, each in conjugate directions. The result obtained is captured by the theorem which follows. **Theorem 2.1**.: _The heterodyne measurement of a one-mode Gaussian state with displacement \((x_{0},p_{0})\) produces two Gaussian distributions, centered around \(x_{0}/\sqrt{2}\) and \(-p_{0}/\sqrt{2}\) respectively._ Proof.: A balanced beamsplitter is represented by the following symplectic matrix \[S=\begin{pmatrix}\sqrt{\frac{1}{2}}\mathbb{1}_{2}&\sqrt{\frac{1}{2}}\mathbb{1}_{2}\\ -\sqrt{\frac{1}{2}}\mathbb{1}_{2}&\sqrt{\frac{1}{2}}\mathbb{1}_{2}\end{pmatrix}. \tag{6}\] As the input state is Gaussian, and mixing preserves Gaussian states, the output states are also Gaussian. The new displacements under this transformation are then given by \[(x_{0},p_{0},0,0)S^{T}=(x_{0}/\sqrt{2},p_{0}/\sqrt{2},-x_{0}/\sqrt{2},-p_{0}/\sqrt{2}). \tag{7}\] #### Noisy CV channel Whereas a discrete qubit state passing through a noisy channel suffers from qubit loss, bit errors, and phase errors, a continuous-variable state gets attenuated and acquires excess noise. Consider a coherent state with displacement \((x_{0},p_{0})\). Let \(t\in[0,1]\) be the attenuation parameter, and let \(u\geq 0\) denote the excess noise power.2 The effect of the channel is that the displacement becomes \((x_{0},p_{0})\sqrt{t}\), and the covariance matrix goes from \(\mathbb{1}_{2}\) to \(\mathbb{1}_{2}(1+2u)\). The outcome of a homodyne measurement now has the variance \(\frac{1}{2}+u\) instead of just the \(\frac{1}{2}\) from shot noise.
In terms of signal and noise, the signal has changed by a factor \(t\) and the noise has increased by a factor \(1+2u\). Overall, the signal-to-noise ratio has changed by a factor \(\frac{t}{1+2u}\). Footnote 2: In the CVQKD literature the excess noise power is often written as \(\frac{1}{2}t\xi\), where the proportionality with \(t\) comes from the fact that the adversary mixes in his own quantum state using the same beamsplitter that also taps off part of the sender’s state. In our case we have no such adversarial action. ### Continuous-variable EPR state and teleportation Consider two modes labeled \(A\) and \(B\). The Wigner function of the two-mode squeezed vacuum state (TMSV) with squeezing parameter \(\zeta\geq 0\) is given by \[W_{\text{TMSV}}(x_{a},p_{a},x_{b},p_{b}) =\frac{1}{\pi^{2}}\exp\{-e^{-2\zeta}[(x_{a}+x_{b})^{2}+(p_{a}-p_{b})^{2}]-e^{2\zeta}[(x_{a}-x_{b})^{2}+(p_{a}+p_{b})^{2}]\}\] \[=\frac{1}{\pi^{2}}\exp\{-\big{(}x_{a}\quad p_{a}\quad x_{b}\quad p_{b}\big{)}\Gamma(\zeta)^{-1}\begin{pmatrix}x_{a}\\ p_{a}\\ x_{b}\\ p_{b}\end{pmatrix}\}, \tag{8}\] with covariance matrix \[\Gamma(\zeta)=\begin{pmatrix}\cosh(2\zeta)\mathbb{1}_{2}&\sinh(2\zeta)Z\\ \sinh(2\zeta)Z&\cosh(2\zeta)\mathbb{1}_{2}\end{pmatrix},\qquad\text{where}\qquad Z=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}. \tag{9}\] Throughout this paper \(\mathbb{1}_{n}\) denotes the \(n\)-dimensional identity matrix. In the limit of the squeezing parameter \(\zeta\to\infty\) we have \(W_{\text{TMSV}}(x_{a},p_{a},x_{b},p_{b})\to C\delta(x_{a}-x_{b})\delta(p_{a}+p_{b})\), for a constant \(C\), which corresponds to the continuous-variable maximally entangled EPR state. Consider a heterodyne measurement performed on the \(A\) mode. The state of the \(A\) mode, viewed in isolation, is a thermal state with covariance matrix \(K_{A}=\mathbb{1}_{2}\cosh 2\zeta\). Using a \(50/50\) beamsplitter this state gets mixed with the vacuum, resulting in a two-mode \(A^{\prime}A^{\prime\prime}\) state with covariance matrix \[K_{A^{\prime}A^{\prime\prime}}=\frac{1}{2}\begin{pmatrix}\mathbb{1}_{2}+K_{A}&\mathbb{1}_{2}-K_{A}\\ \mathbb{1}_{2}-K_{A}&\mathbb{1}_{2}+K_{A}\end{pmatrix}=\begin{pmatrix}\mathbb{1}_{2}\cosh^{2}\zeta&-\mathbb{1}_{2}\sinh^{2}\zeta\\ -\mathbb{1}_{2}\sinh^{2}\zeta&\mathbb{1}_{2}\cosh^{2}\zeta\end{pmatrix}. \tag{10}\] In mode \(A^{\prime}\) the \(x\)-quadrature is measured, and in mode \(A^{\prime\prime}\) the \(p\)-quadrature. The Wigner function for \(x_{a^{\prime}}\) and \(p_{a^{\prime\prime}}\) is obtained by integrating out \(p_{a^{\prime}}\) and \(x_{a^{\prime\prime}}\) from the Wigner function of \(A^{\prime}A^{\prime\prime}\), resulting in a product of two Gaussian distributions, \(\mathcal{N}_{0,\frac{1}{2}\cosh^{2}\zeta}(x_{a^{\prime}})\mathcal{N}_{0,\frac{1}{2}\cosh^{2}\zeta}(p_{a^{\prime\prime}})\). If the heterodyne measurement has resulted in \((x_{a^{\prime}},p_{a^{\prime\prime}})\), then the post-measurement state of the \(B\) subsystem is a Gaussian state with displacement \((x_{B},p_{B})=(x_{a^{\prime}},-p_{a^{\prime\prime}})\sqrt{2}\tanh\zeta\) and covariance \(\mathbb{1}_{2}\), i.e. a coherent state (see chapter 2 of [10]). Note that the components \(x_{B}\) and \(p_{B}\) are Gaussian-distributed with variance \(\frac{1}{2}\cosh^{2}\zeta\cdot(\sqrt{2}\tanh\zeta)^{2}=\sinh^{2}\zeta\). In Section 3.2 we tune \(\sinh\zeta=\sigma\) so that \(x_{B},p_{B}\) have Gaussian statistics with variance \(\sigma^{2}\).
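The variance bookkeeping above is easy to check numerically; the following sketch (our own sanity check, not part of the paper) samples heterodyne outcomes with the variance \(\frac{1}{2}\cosh^{2}\zeta\) derived above and confirms that the induced displacement \(x_{B}\) has variance \(\sinh^{2}\zeta=\sigma^{2}\):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 10.0
zeta = np.arcsinh(sigma)          # tune sinh(zeta) = sigma, as in Section 3.2

# Heterodyne outcome on the A arm of the TMSV: Gaussian, variance cosh^2(zeta)/2.
x_aprime = rng.normal(0.0, np.sqrt(0.5) * np.cosh(zeta), size=1_000_000)

# Induced displacement of the B mode: x_B = sqrt(2) * tanh(zeta) * x_a'.
x_B = np.sqrt(2) * np.tanh(zeta) * x_aprime

print(np.var(x_B))                # empirically ~ 100.0
print(np.sinh(zeta) ** 2)         # sinh^2(zeta) = sigma^2 = 100.0
```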
#### Teleportation The teleportation of an unknown continuous-variable quantum state using a CV EPR pair was proposed by Vaidman [14] and is described as follows: 1. Alice and Bob share a CV-EPR pair described by the Wigner function (8). Alice possesses the single-mode quantum state \(|\psi\rangle\) to be teleported. 2. With a balanced beamsplitter, Alice mixes \(|\psi\rangle\) with her mode of the CV-EPR pair and then does a measurement of the \(x\)-quadrature in one mode and the \(p\)-quadrature in the other mode (i.e. she performs a heterodyne measurement). We denote the outcome of the measurement as \((d_{x},d_{p})\). The result is that Bob's half of the EPR pair is transformed to a displaced version of \(|\psi\rangle\), with displacement \((\sqrt{2}d_{x},-\sqrt{2}d_{p})\). Alice sends the classical \((d_{x},d_{p})\) to Bob. 3. Bob applies a displacement \((-\sqrt{2}d_{x},\sqrt{2}d_{p})\) to his state to obtain \(|\psi\rangle\). ### Information theory We now define some basic notions of information theory that will be used in the paper. First, we present some definitions and properties regarding CV entropies. **Definition 2.2**.: _Let \(X\) be a continuous random variable with probability density function \(f(x)\), and let \(\mathcal{X}\) be its support set. The differential Shannon entropy \(h(X)\) is defined as_ \[h(X)=-\int_{\mathcal{X}}f(x)\log f(x)\,\mathrm{d}x, \tag{11}\] _where, if not otherwise mentioned, we use \(\log\) base 2._ **Lemma 2.3**.: _Let \(\alpha>0\) and let \(X\) be a real-valued random variable. It holds that \(h(\alpha X)=h(X)+\log\alpha\)._ **Definition 2.4**.: _The von Neumann entropy of a state \(\rho\) is defined as \(S(\rho)=-\operatorname{Tr}[\rho\log\rho]\)._ The von Neumann entropy of Gaussian states is provided by the following lemma, which will be needed to calculate entropies of the honest prover. **Lemma 2.5**.: _[_10_]_ _Let \(\rho\) be an \(N\)-mode CV Gaussian state with \(2N\times 2N\) covariance matrix \(\Gamma\). Let \(\nu_{1},\ldots,\nu_{N}\) be the symplectic eigenvalues of \(\Gamma\). Let the function \(g\) be given by_ \[g(x)=(x+1)\log(x+1)-x\log x. \tag{12}\] _The von Neumann entropy of \(\rho\) is given by_ \[S(\rho)=\sum_{i=1}^{N}g\bigg{(}\frac{\nu_{i}-1}{2}\bigg{)}. \tag{13}\] **Lemma 2.6**.: _[_10_]_ _The symplectic eigenvalue of a single-mode covariance matrix \(\Gamma\) is given by \(\sqrt{\det\Gamma}\)._ In Section 3.3.3 we consider \(\sigma\gg 1\) and are interested in the behavior of \(g\) in that regime. The following lemma is not too hard to see from (12). **Lemma 2.7**.: _The large-argument behavior of the function \(g\), defined in (12), is given by \(g(x)\sim\log(ex)+\mathcal{O}(1/x)\)._ Another useful quantity to compare two quantum states is the relative entropy. **Definition 2.8**.: _Let \(\rho\) and \(\sigma\) be two density matrices. Their Umegaki quantum relative entropy \(D(\cdot||\cdot)\) is defined as_ \[D(\rho||\sigma)=\operatorname{Tr}[\rho\log\rho-\rho\log\sigma]. \tag{14}\] As introduced in [10], let \(\rho_{AB}\) be a bipartite state on systems \(A\) and \(B\), which correspond to a system to be measured and a system held by an observer. Let \(X\) be a continuous random variable, \(\alpha=2^{-n}\) for some \(n\in\mathbb{N}\), and consider the intervals \(\mathcal{I}_{k;\alpha}:=(k\alpha,(k+1)\alpha]\) for \(k\in\mathbb{Z}\).
Here \(\rho_{B}^{k;\alpha}\) denotes the sub-normalized density matrix in \(B\) when \(x\) is measured in \(\mathcal{I}_{k;\alpha}\), \(\rho_{B}^{x}\) denotes the conditional reduced density matrix in \(B\) so that \(\int_{\mathcal{I}_{k;\alpha}}\rho_{B}^{x}dx=\rho_{B}^{k;\alpha}\), and \(Q_{\alpha}\) denotes the random variable that indicates which interval \(x\) belongs to. These notions are used in the continuous version of the conditional entropy. **Definition 2.9**.: _The quantum conditional von Neumann entropy is defined as_ \[H(Q_{\alpha}|B)_{\rho}:=-\sum_{k\in\mathbb{Z}}D(\rho_{B}^{k;\alpha}||\rho_{B}). \tag{15}\] **Definition 2.10**.: _The differential quantum conditional von Neumann entropy is defined as_ \[h(X|B)_{\rho}:=-\int_{\mathbb{R}}D(\rho_{B}^{x}||\rho_{B})\,\mathrm{d}x. \tag{16}\] The basis of our security proofs is the quantum-mechanical uncertainty principle. We use the following form for the differential entropy in a tripartite setting of a guessing game, as is often useful in the context of quantum cryptography. **Lemma 2.11**.: _[_FBT\({}^{+}\)14_]_ _Let \(\rho_{ABC}\) be a tripartite density matrix on systems \(A\), \(B\) and \(C\). Let \(Q\) and \(P\) denote the random variables of position and momentum respectively, resulting from a homodyne measurement on the \(A\) system and let the following hold: \(h(Q|B)_{\rho},h(P|C)_{\rho}>-\infty\) and \(H(Q_{\alpha}|B)_{\rho},H(P_{\alpha}|C)_{\rho}<\infty\) for any \(\alpha>0\). Then_ \[h(Q|B)_{\rho}+h(P|C)_{\rho}\geq\log(2\pi). \tag{17}\] Furthermore, we will make use of the following estimation inequality. **Theorem 2.12**.: _[_Cov99_]_ _Let \(X\) be a random variable and \(\hat{X}(Y)\) an estimator of \(X\) given side information \(Y\), then_ \[\mathbb{E}\bigg{[}\Big{(}X-\hat{X}(Y)\Big{)}^{2}\bigg{]}\geq\frac{1}{2\pi e}e^{2h_{\mathrm{nats}}(X|Y)}, \tag{18}\] _where \(h_{\mathrm{nats}}(X|Y)\) is the conditional entropy in natural units. Moreover, if \(X\) is Gaussian and \(\hat{X}(Y)\) is its mean, then the equality holds._ ## 3 The Protocol ### Prepare-and-measure Consider two spatially separated verifiers \(V_{1}\) and \(V_{2}\), and a prover \(P\) somewhere in between them. Let \(\mathcal{A}\) be a publicly known set of angles in \([0,2\pi)\) such that \(\alpha\in\mathcal{A}\implies\alpha+\pi/2\in\mathcal{A}\). Let \(\sigma\) be a publicly known parameter, \(\sigma\gg 1\). A single round of the protocol consists of the following steps (for a diagrammatic picture see Fig. 1): 1. The verifiers draw random \(\theta\in\mathcal{A}\) and two random variables \((r,r^{\perp})\) from the Gaussian distribution \(\mathcal{N}_{0,\sigma^{2}}\). Verifier \(V_{1}\) prepares a coherent state \(|\psi\rangle\) with quadratures \((x_{0},p_{0})=(r\cos\theta+r^{\perp}\sin\theta,\ r\sin\theta-r^{\perp}\cos\theta)\). Then \(V_{1}\) sends \(|\psi\rangle\) to the prover, and \(V_{2}\) sends \(\theta\) to the prover. 2. The prover receives \(\theta\) and \(|\psi\rangle\) and performs a homodyne measurement on \(|\psi\rangle\) in the \(\theta\) direction, resulting in a value \(r^{\prime}\in\mathbb{R}\). The prover sends \(r^{\prime}\) to both verifiers. After \(n\) rounds, the verifiers have received a sample of responses, which we denote as \((r^{\prime}_{i})_{i=1}^{n}\).
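Before stating the verifiers' acceptance test, the honest statistics of such a run are easy to simulate; the sketch below (our own illustration) draws \(n\) honest rounds with the noisy-channel outcome model of Section 3.3 (mean \(r\sqrt{t}\), variance \(\frac{1}{2}+u\)) and evaluates the score against the threshold \(\gamma\) defined in (19) just below:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, t, u, eps_hon = 10_000, 10.0, 0.9, 0.05, 1e-6

# Verifiers' hidden displacement along theta in each round (the angle itself
# does not affect the honest outcome statistics, so it is not sampled here).
r = rng.normal(0.0, sigma, size=n)

# Honest prover's homodyne outcome after the noisy channel.
r_prime = rng.normal(r * np.sqrt(t), np.sqrt(0.5 + u), size=n)

# Score from (19) and the acceptance threshold gamma.
score = np.mean((r_prime - r * np.sqrt(t)) ** 2 / (0.5 + u))
gamma = 1 + 2 * np.sqrt(np.log(1 / eps_hon) / n) + 2 * np.log(1 / eps_hon) / n
print(score, gamma, score < gamma)   # honest prover passes with high probability
```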
The verifiers check whether all prover responses arrived at the correct time, and whether the reported values \((r^{\prime}_{i})_{i=1}^{n}\) satisfy \[\frac{1}{n}\sum_{i=1}^{n}\frac{\big{(}r^{\prime}_{i}-r_{i}\sqrt{t}\big{)}^{2} }{\frac{1}{2}+u}<\gamma\qquad\text{with }\gamma\stackrel{{\text{def}}}{{=}}1+\frac{2}{\sqrt{n}}\sqrt{ \ln\frac{1}{\varepsilon_{\mathrm{hon}}}}+\frac{2}{n}\ln\frac{1}{\varepsilon_{ \mathrm{hon}}}. \tag{19}\] Here \(\varepsilon_{\mathrm{hon}}\) is an upper bound on the honest prover's failure probability, see Section 3.3. \(\varepsilon_{\mathrm{hon}}\) is a protocol parameter and can be set to a desired value. The verifiers _reject_ if not all these checks are satisfied. We refer to the sum in (19) as the _score_. ### Entanglement based version of the protocol In security proofs for qubit-based schemes, it is customary to re-formulate a protocol into an EPR based form. The act of one party (\(V\)) preparing and sending a qubit state in a particular basis \(\mathcal{B}\) is equivalent to \(V\) preparing a maximally entangled two-qubit state (EPR pair) and then measuring one of the qubits in the \(\mathcal{B}\) basis while the other qubit is sent. The act of measuring can be postponed. This has the advantage that in the security analysis the basis choice can be delayed, and it is then possible to base security on the properties of entangled states. We will do an analogous reformulation for CV states. In fact, we work with exactly the same states as Gaussian-modulated CV-QKD [18]. We tune the squeezing parameter \(\zeta\) such that \(\sinh\zeta=\sigma\), as explained in Section 2.3. Preparing a coherent state with Gaussian distributed displacements \(x_{0},p_{0}\sim\mathcal{N}_{0,\sigma^{2}}\) is equivalent to preparing a two-mode squeezed state with squeezing parameter \(\zeta\) and then performing a heterodyne \((\hat{x},\hat{p})\) measurement on one mode, with measurement outcome \(\frac{(x_{0},-p_{0})}{\sqrt{2}\tanh\zeta}\). In our particular case, the verifier \(V_{1}\) prepares the two-mode squeezed state \(\rho_{VP}\) and performs the heterodyne measurement with quadratures that are rotated by the angle \(\theta\) on the \(V\) subsystem. The measurement outcomes are \(r/(\sqrt{2}\tanh\zeta)\) and \(-r^{\perp}/(\sqrt{2}\tanh\zeta)\), resulting in displacement \((r,r^{\perp})\) in the state sent to the prover (i.e. subsystem \(P\)). The prover then performs a homodyne measurement under angle \(\theta\) to recover \(r\), similar to the prepare and measure scheme. In the security analysis in Section 5 we will explicitly write \(V_{1}\)'s heterodyne measurement as a double-homodyne measurement. First \(V_{1}\) mixes its own mode with the vacuum using a beamsplitter, resulting in a two-mode state. On one of these modes \(V_{1}\) then performs a homodyne measurement in the \(\theta\)-direction, on the other mode in the \(\theta+\frac{\pi}{2}\) direction. ### Honest prover #### 3.3.1 Success probability We show that the honest prover has a failure probability smaller than \(\varepsilon_{\text{hon}}\). **Lemma 3.1**.: _(Eq.(4.3) in [18]) Let \(X\) be a \(\chi_{n}^{2}\) distributed random variable. It holds that_ \[\mathbb{P}[X-n\geq 2\sqrt{na}+2a]\leq e^{-a}. \tag{20}\] In round \(i\), the honest prover performs a homodyne measurement under an angle \(\theta_{i}\), on a coherent state that has displacement \(r_{i}\) in the \(\theta_{i}\) direction (and displacement \(r_{i}^{\perp}\) in the \(\theta_{i}+\frac{\pi}{2}\) direction). 
The measurement outcome \(R_{i}^{\prime}\) is Gaussian-distributed with mean \(r_{i}\sqrt{t}\) and variance \(\frac{1}{2}+u\) (shot noise plus the excess noise of the channel).
Figure 1: Schematic representation of the protocol described in Section 3.1. Undulated lines represent quantum information, whereas straight lines represent classical information.
The random variable \(Z=\sum_{i=1}^{n}(R^{\prime}_{i}-r_{i}\sqrt{t})^{2}/(\frac{1}{2}+u)\) is chi-square distributed with parameter \(n\), i.e. \(Z\sim\chi_{n}^{2}\). The probability that the honest prover fails to pass verification is given by \[\mathbb{P}[Z\geq n\gamma]=\mathbb{P}\bigg{[}Z\geq n+2\sqrt{n\ln\frac{1}{\varepsilon_{\mathrm{hon}}}}+2\ln\frac{1}{\varepsilon_{\mathrm{hon}}}\bigg{]}. \tag{21}\] By Lemma 3.1 this is upper bounded by \(\varepsilon_{\mathrm{hon}}\). #### 3.3.2 A posteriori distribution and entropy of \(R\) conditioned on measurement We determine how much uncertainty the honest prover has about the displacements \(r_{i}\), given the measurement outcomes \(r^{\prime}_{i}\). For notational brevity we omit the round index \(i\). We write the probability density for \(R\) as \(f_{R}\). Since \(r^{\prime}\) is the result of a measurement under angle \(\theta\), conditioning on \(\theta\) is implicit and will be omitted from the notation. The prover's posterior distribution of \(R\), given \(r^{\prime}\), is \[f_{R|R^{\prime}}(r|r^{\prime})=\frac{f_{RR^{\prime}}(r,r^{\prime})}{f_{R^{\prime}}(r^{\prime})}=\frac{f_{R}(r)f_{R^{\prime}|R}(r^{\prime}|r)}{f_{R^{\prime}}(r^{\prime})}. \tag{22}\] Using \(f_{R}=\mathcal{N}_{0,\sigma^{2}}\), \(f_{R^{\prime}|R}(r^{\prime}|r)=\mathcal{N}_{r\sqrt{t},\frac{1}{2}+u}(r^{\prime})\) and \(f_{R^{\prime}}=\mathcal{N}_{0,t\sigma^{2}+\frac{1}{2}+u}\) we get, after some algebra, \[f_{R|R^{\prime}}(r|r^{\prime})=\mathcal{N}_{M,\Sigma^{2}}(r)\qquad\text{with }\Sigma^{2}\stackrel{{\mathrm{def}}}{{=}}\bigg{(}\frac{1}{\sigma^{2}}+\frac{t}{1/2+u}\bigg{)}^{-1},\quad M\stackrel{{\mathrm{def}}}{{=}}\frac{r^{\prime}}{\sqrt{t}}\cdot\frac{1}{1+\frac{1/2+u}{t\sigma^{2}}}. \tag{23}\] For \(t\sigma^{2}\gg 1\) this tends to a normal distribution centered on \(r^{\prime}/\sqrt{t}\), with variance \((\frac{1}{2}+u)/t\). From the Gaussian probability density function (23) we directly obtain the differential entropy of \(R\) given \(R^{\prime}\), \[h(R|R^{\prime})=\frac{1}{2}\log 2\pi e\Sigma^{2}. \tag{24}\] #### 3.3.3 Entropy of \(R\) conditioned on the prover's quantum state Let \(\rho_{VP}\) be the entangled state that is prepared by \(V_{1}\) as described in Section 3.2. Here \(P\) denotes the prover's quantum system. The heterodyne measurement on the \(V\) system yields \((r,r^{\perp})\). The measurement maps \(\rho_{VP}\) to \(\rho_{RR^{\perp}P}\). We write the post-measurement state as \[\rho_{RR^{\perp}P}=\int_{\mathbb{R}^{2}}f_{RR^{\perp}}(r,r^{\perp})|r\rangle\langle r|_{R}\otimes\big{|}r^{\perp}\rangle\langle r^{\perp}\big{|}_{R^{\perp}}\otimes\rho_{P}^{rr^{\perp}}\mathrm{d}r\mathrm{d}r^{\perp}. \tag{25}\] The (differential) entropy of \(R\), conditioned on the prover's quantum state, can be expanded as \[h(R|P)=h(R)+S(P|R)-S(P). \tag{26}\] From the definition of conditional entropy, \(S(P|R)=\mathbb{E}_{r}S(\mathbb{E}_{r^{\perp}}\rho_{P}^{rr^{\perp}})\) and \(S(P)=S(\mathbb{E}_{r,r^{\perp}}\rho_{P}^{rr^{\perp}})\). As discussed in Section 3.2, \(\rho_{P}^{rr^{\perp}}\) would be a coherent state in an ideal case.
However, in a noisy channel, the state becomes a Gaussian with covariance matrix \((1+2u)\mathbb{1}_{2}\) and displacement \((x_{0}\sqrt{t},p_{0}\sqrt{t})\). The expectations are Gaussian integrals and hence are exactly solvable. Solving these integrals, we end up with the corresponding Gaussian Wigner functions and symplectic eigenvalues
\[\text{for }\mathbb{E}_{r^{\perp}}\rho_{P}^{rr^{\perp}}:\quad W_{r}(x,p)\sim\exp\Biggl{(}-\frac{p^{2}}{2\sigma^{2}t+2u+1}-\frac{\left(x-r\sqrt{t}\right)^{2}}{2u+1}\Biggr{)},\quad\nu=\sqrt{2t\sigma^{2}+2u+1}, \tag{27}\]
\[\text{for }\mathbb{E}_{r,r^{\perp}}\rho_{P}^{rr^{\perp}}:\quad W(x,p)\sim\exp-\frac{p^{2}+x^{2}}{2\sigma^{2}t+2u+1},\quad\nu=2t\sigma^{2}+2u+1. \tag{28}\]
We use the \(g\) function (cf. Lemma 2.5) to calculate the corresponding entropy
\[S(P|R)=g\biggl{(}\frac{1}{2}\sqrt{2t\sigma^{2}+2u+1}-\frac{1}{2}\biggr{)}, \tag{29}\]
\[S(P)=g(t\sigma^{2}+u). \tag{30}\]
Finally, by definition \(R\) is Gaussian and \(h(R)=\frac{1}{2}\log 2\pi e\sigma^{2}\). All together this yields
\[h(R|P)=\frac{1}{2}\log 2\pi e\sigma^{2}+g\bigg{(}\frac{1}{2}\sqrt{2t\sigma^{2}+2u+1}-\frac{1}{2}\bigg{)}-g(t\sigma^{2}+u)=\frac{1}{2}\log\frac{\pi et}{1+2u}+O\bigg{(}\frac{1}{\sigma}\bigg{)}. \tag{31}\]
For large \(\sigma\) this is essentially the same as \(h(R|R^{\prime})\) in (24).

## 4 Security against specific attacks

Before showing security against a general attack, we highlight security against some specific attacks that one might naturally think of. We look into three specific attacks where the adversaries do not have access to entanglement: performing a heterodyne measurement, state splitting, and performing a homodyne measurement under a guessed angle. These examples provide some insight into the security but do not constitute a general security proof. A rigorous security proof for the case of adversaries who do not pre-share entanglement is given in Section 5.

The most general attack on a 1-dimensional QPV protocol consists of placing two attackers Alice \(A\) and Bob \(B\) between \(V_{1}\) and \(P\), and \(V_{2}\) and \(P\), respectively. For attackers that do not pre-share entanglement,3 an attack proceeds as follows. Alice intercepts the quantum state sent to the prover \(P\). Alice applies a local operation to her quantum system and sends some classical and/or quantum information to the second attacker Bob. The most general action Bob can take is to intercept the message \(\theta\) and broadcast it, since any quantum operation can be embedded in Alice's actions. After one round of simultaneous communication, Alice and Bob use their respective quantum and classical information to produce a classical output and respond to their closest verifier such that the answer arrives on time. For the following analysis, we describe the attacks per round of the protocol.

Footnote 3: We restrict our analysis to the case where the attackers do not pre-share entanglement, since we show in Section 6 that there exists a perfect attack if they pre-share an EPR pair.

### Heterodyne attack

In a _heterodyne attack_, Alice performs a heterodyne measurement on the coherent state she intercepts and sends the result \((x^{\prime},p^{\prime})\) to Bob. At the end, \(A\) and \(B\) report the best guess for \(r\) that they can produce based on \(x^{\prime},p^{\prime},\theta\). Let us denote this as the estimator \(\tilde{r}\).
It holds that \(\tilde{r}=\tilde{x}\cos\theta+\tilde{p}\sin\theta\), where \(\tilde{x}\) is an estimator for \(x_{0}\), and similarly \(\tilde{p}\). The posterior distribution of \(x_{0}\) given \(x^{\prime}\) is
\[f_{X_{0}|X^{\prime}}(x_{0}|x^{\prime})=\frac{f_{X_{0}}(x_{0})f_{X^{\prime}|X_{0}}(x^{\prime}|x_{0})}{f_{X^{\prime}}(x^{\prime})}=\frac{\mathcal{N}_{0,\sigma^{2}}(x_{0})\mathcal{N}_{\frac{x_{0}}{\sqrt{2}},\frac{1}{2}}(x^{\prime})}{\mathcal{N}_{0,\frac{\sigma^{2}}{2}+\frac{1}{2}}(x^{\prime})}=\mathcal{N}_{x^{\prime}\sqrt{2}\frac{\sigma^{2}}{1+\sigma^{2}},\frac{\sigma^{2}}{1+\sigma^{2}}}(x_{0}). \tag{32}\]
Hence \(\tilde{x}=x^{\prime}\sqrt{2}\frac{\sigma^{2}}{1+\sigma^{2}}\) and \(\tilde{p}=-p^{\prime}\sqrt{2}\frac{\sigma^{2}}{1+\sigma^{2}}\). Given \(x_{0},p_{0},\theta\), the random variable \(\tilde{R}\) is Gaussian with mean \(\frac{\sigma^{2}}{1+\sigma^{2}}r\) and variance \(\frac{1}{2}(\sqrt{2}\frac{\sigma^{2}}{1+\sigma^{2}}\cos\theta)^{2}+\frac{1}{2}(\sqrt{2}\frac{\sigma^{2}}{1+\sigma^{2}}\sin\theta)^{2}=(\frac{\sigma^{2}}{1+\sigma^{2}})^{2}\). This gives
\[\mathbb{E}(\tilde{R}-r)^{2}=\bigg{(}\frac{\sigma^{2}}{1+\sigma^{2}}\bigg{)}^{2}+r^{2}\bigg{(}\frac{1}{1+\sigma^{2}}\bigg{)}^{2}\approx 1\qquad\text{for }\sigma\gg 1, \tag{33}\]
which is easily distinguishable from the honest prover's value \(\frac{1}{2}\).4 From (32) we obtain the variance of \(R\) from the attackers' point of view as \(\frac{\sigma^{2}}{1+\sigma^{2}}(\cos\theta)^{2}+\frac{\sigma^{2}}{1+\sigma^{2}}(\sin\theta)^{2}=\frac{\sigma^{2}}{1+\sigma^{2}}\). The attackers' ignorance about \(R\) is thus quantified as
\[h(R|X^{\prime}P^{\prime}\Theta)=\frac{1}{2}\log\biggl{(}2\pi e\frac{\sigma^{2}}{1+\sigma^{2}}\biggr{)}, \tag{34}\]
with conditioning on \(\Theta\) being made explicit.

Footnote 4: Note that the unbiased estimator \(x^{\prime}\sqrt{2}\cos\theta-p^{\prime}\sqrt{2}\sin\theta\) would yield \(\mathbb{E}(\tilde{R}-r)^{2}=1\), which is larger than (33).

### Splitting attack

In a _splitting attack_, Alice intercepts the coherent quantum state sent by \(V_{1}\) and, as in the case of the previous attack, uses a beamsplitter to mix it with a state of her own. She then sends one of the outputs from the beamsplitter to Bob. This allows both attackers to perform a homodyne measurement under the correct angle \(\theta\). Unlike the heterodyne attack, this also allows the attackers the freedom to choose the transmittance parameter \(T\) and the quantum state that Alice uses. However, the attackers must be cautious to report a set of numbers that have identical means and variances. To see why, let us assume that Alice reports numbers with mean \(m_{a}\) and Bob's results have mean \(m_{b}\), with equal variances \(v_{a}=v_{b}\). The verifiers can immediately identify an attack if the results have a dissimilar average. To avoid this, Alice (or Bob) must multiply her results by a finite number \(c\) such that \(m_{b}=cm_{a}\) (or \(m_{a}=cm_{b}\)). However, this would leave the verifiers with a final distribution that indeed has the same mean, but different variances. The precision of the protocol can be tuned to detect this variance mismatch. A similar argument can be constructed when the variances are unequal. Thus, a successful attack must satisfy \(m_{a}=m_{b}\) and \(v_{a}=v_{b}\). Now, we propose the following theorem.
**Theorem 4.1**.: _Consider a 2-mode Gaussian Wigner function \(W_{\mathbf{d},\gamma}(x_{1},p_{1},x_{2},p_{2})\) which under a beamsplitter transformation of transmittance \(T\) transforms into \(W^{\prime}_{\mathbf{d}^{\prime},\gamma^{\prime}}(x^{\prime}_{1},p^{\prime}_{1},x^{\prime}_{2},p^{\prime}_{2})\). If \(|\mathbb{E}[r^{\prime}_{1}]|=|\mathbb{E}[r^{\prime}_{2}]|\) and \(\mathrm{var}(r^{\prime}_{1})=\mathrm{var}(r^{\prime}_{2})\), then \(\mathbf{d}_{2}=0\) and \(T=1/2\), for \(r\in\{x,p\}\). Here, \(\mathbf{d}=(\mathbf{d}_{1},\mathbf{d}_{2})\) and \(\mathbf{d}^{\prime}=(\mathbf{d}^{\prime}_{1},\mathbf{d}^{\prime}_{2})\)._

Proof.: We have the following relationship between the covariance matrices of the input and output states
\[\gamma^{\prime}=S\gamma S^{T}, \tag{35}\]
where \(S\) is the symplectic matrix corresponding to a beamsplitter with transmittance \(T\), given by
\[S=\begin{pmatrix}\sqrt{T}\mathbb{1}_{2}&\sqrt{1-T}\mathbb{1}_{2}\\ -\sqrt{1-T}\mathbb{1}_{2}&\sqrt{T}\mathbb{1}_{2}\end{pmatrix}. \tag{36}\]
The input matrix \(\gamma\) is the direct sum of the constituent matrices,
\[\gamma=\gamma_{1}\oplus\gamma_{2}. \tag{37}\]
Assuming some displacements \(\mathbf{d}^{\prime}\), we calculate the exponent in \(W_{\mathbf{d}^{\prime},\gamma^{\prime}}\),
\[(\mathbf{r}_{1}-\mathbf{d}^{\prime}_{1},\mathbf{r}_{2}-\mathbf{d}^{\prime}_{2})\gamma^{\prime-1}(\mathbf{r}_{1}-\mathbf{d}^{\prime}_{1},\mathbf{r}_{2}-\mathbf{d}^{\prime}_{2})^{T}. \tag{38}\]
Substituting, after some matrix multiplications we obtain
\[(\mathbf{r}_{1}-\mathbf{d}^{\prime}_{1},\mathbf{r}_{2}-\mathbf{d}^{\prime}_{2})\gamma^{\prime-1}(\mathbf{r}_{1}-\mathbf{d}^{\prime}_{1},\mathbf{r}_{2}-\mathbf{d}^{\prime}_{2})^{T} \tag{39}\]
\[=(\mathbf{r}_{1}-\mathbf{d}^{\prime}_{1},\mathbf{r}_{2}-\mathbf{d}^{\prime}_{2})\frac{1}{D}\begin{pmatrix}(T\gamma_{2}+(1-T)\gamma_{1})\mathbb{1}_{2}&\sqrt{T(1-T)}(\gamma_{2}-\gamma_{1})\mathbb{1}_{2}\\ \sqrt{T(1-T)}(\gamma_{2}-\gamma_{1})\mathbb{1}_{2}&(T\gamma_{1}+(1-T)\gamma_{2})\mathbb{1}_{2}\end{pmatrix}\begin{pmatrix}\mathbf{r}_{1}-\mathbf{d}^{\prime}_{1}\\ \mathbf{r}_{2}-\mathbf{d}^{\prime}_{2}\end{pmatrix} \tag{40}\]
\[=\frac{1}{D}\big{(}(T\gamma_{2}+(1-T)\gamma_{1})(\mathbf{r}_{1}-\mathbf{d}^{\prime}_{1})^{2}+(T\gamma_{1}+(1-T)\gamma_{2})(\mathbf{r}_{2}-\mathbf{d}^{\prime}_{2})^{2}\big{)}, \tag{41}\]
where \(D\) is the determinant of \(\gamma^{\prime}\). We are given that \(\mathrm{var}(r^{\prime}_{1})=\mathrm{var}(r^{\prime}_{2})\). From the construction of the Wigner function, it is clear that the coefficients in (41) must be identical for this to be true, so
\[T\gamma_{2}+(1-T)\gamma_{1}=T\gamma_{1}+(1-T)\gamma_{2}\Rightarrow T=1/2 \tag{42}\]
(for \(\gamma_{1}\neq\gamma_{2}\)). The displacement transforms as
\[(\mathbf{d}^{\prime}_{1},\mathbf{d}^{\prime}_{2})=(\mathbf{d}_{1},\mathbf{d}_{2})S^{T}=(\sqrt{T}\mathbf{d}_{1}+\sqrt{1-T}\mathbf{d}_{2},-\sqrt{1-T}\mathbf{d}_{1}+\sqrt{T}\mathbf{d}_{2}). \tag{43}\]
As \(|\mathbb{E}[r^{\prime}_{1}]|=|\mathbb{E}[r^{\prime}_{2}]|\) (or \(|\mathbf{d}^{\prime}_{1}|=|\mathbf{d}^{\prime}_{2}|\)),
\[|\sqrt{T}\mathbf{d}_{1}+\sqrt{1-T}\mathbf{d}_{2}|=|-\sqrt{1-T}\mathbf{d}_{1}+\sqrt{T}\mathbf{d}_{2}|. \tag{44}\]
The only meaningful case of this equation yields
\[\mathbf{d}_{2}=\frac{\sqrt{T}-\sqrt{1-T}}{\sqrt{T}+\sqrt{1-T}}\mathbf{d}_{1}. \tag{45}\]
When \(T=1/2\), this leads to \(\mathbf{d}_{2}=\mathbf{0}\).

The above theorem fixes the displacement and the transmittance parameter.
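As a sanity check of Theorem 4.1, the following minimal Monte Carlo sketch (our own illustration, not part of the protocol; the value of \(\sigma\), the sample count, and the choice of a vacuum ancilla are assumptions) simulates the homodyne statistics of the two beamsplitter outputs and confirms that the reported variances agree only for \(T=1/2\):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 5.0, 200_000  # assumed modulation strength and number of samples

def splitting_attack(T):
    """Mix the intercepted coherent state with a vacuum ancilla at
    transmittance T and homodyne both outputs along the correct angle."""
    x0 = rng.normal(0.0, sigma, n)                 # displacement chosen by V1
    # output displacements are sqrt(T)*x0 and -sqrt(1-T)*x0; each homodyne
    # outcome additionally carries shot noise of variance 1/2
    alice = rng.normal(np.sqrt(T) * x0, np.sqrt(0.5))
    bob = -rng.normal(-np.sqrt(1.0 - T) * x0, np.sqrt(0.5))  # Bob flips the sign
    return alice, bob

for T in (0.5, 0.7):
    alice, bob = splitting_attack(T)
    # variances are T*sigma^2 + 1/2 and (1-T)*sigma^2 + 1/2: equal only at T = 1/2
    print(f"T={T}: var_A={alice.var():.2f}, var_B={bob.var():.2f}")
```

For \(T=0.7\) the two empirical variances differ by \((2T-1)\sigma^{2}=0.4\sigma^{2}\), a mismatch the verifiers can detect by comparing the statistics of the two reported sequences.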
However, as we have seen, there is no restriction on the attackers' choice of covariance matrix for their quantum state. Since the strongest attack must have the smallest spread, the natural choice is indeed the minimum uncertainty state, that is, a state with unit covariance. Hence, the strongest attack is carried out by mixing a vacuum state with the target using a balanced beamsplitter. After mixing, \(A\) and \(B\) have a coherent state with displacement \(\frac{(x_{0},p_{0})}{\sqrt{2}}\) and \(-\frac{(x_{0},p_{0})}{\sqrt{2}}\) respectively. Taking into account that \(B\) compensates for the minus sign, a homodyne measurement under the correct angle \(\theta\) yields an outcome \(u\) with distribution \(f_{U|R}(u|r)=\mathcal{N}_{\frac{r}{\sqrt{2}},\frac{1}{2}}(u)\) for both attackers. Their a posteriori distribution for \(R\) is
\[f_{R|U}(r|u)=\frac{f_{R}(r)f_{U|R}(u|r)}{f_{U}(u)}=\frac{\mathcal{N}_{0,\sigma^{2}}(r)\mathcal{N}_{\frac{r}{\sqrt{2}},\frac{1}{2}}(u)}{\mathcal{N}_{0,\frac{\sigma^{2}}{2}+\frac{1}{2}}(u)}, \tag{46}\]
which is the same as for the heterodyne attack. The rest of the analysis is identical to that case.

### Attackers perform a homodyne measurement under a guessed angle

In this attack, Alice picks a random angle \(\varphi\) and does a homodyne measurement under this angle. She forwards the result \(m\) to Bob. The distribution of \(m\) is given by \(f_{M|X_{0}P_{0}\Phi}(m|x_{0}p_{0}\varphi)=\mathcal{N}_{x_{0}\cos\varphi+p_{0}\sin\varphi,\frac{1}{2}}(m)=\mathcal{N}_{r\cos(\varphi-\theta)+r^{\perp}\sin(\varphi-\theta),\frac{1}{2}}(m)\). The attackers' posterior distribution for \(R\) is
\[f_{R|M\Phi\Theta}(r|m\varphi\theta) = \frac{f_{\Theta}(\theta)f_{\Phi}(\varphi)f_{R}(r)f_{M|R\Theta\Phi}(m|r\theta\varphi)}{f_{\Theta}(\theta)f_{\Phi}(\varphi)f_{M|\Theta\Phi}(m|\theta\varphi)} \tag{47}\]
\[\propto f_{R}(r)f_{M|R\Theta\Phi}(m|r\theta\varphi) \tag{48}\]
\[= f_{R}(r)\mathbb{E}_{r^{\perp}}f_{M|RR^{\perp}\Theta\Phi}(m|rr^{\perp}\theta\varphi) \tag{49}\]
\[= f_{R}(r)\mathbb{E}_{r^{\perp}}\mathcal{N}_{r\cos(\varphi-\theta)+r^{\perp}\sin(\varphi-\theta),\frac{1}{2}}(m) \tag{50}\]
\[= \mathcal{N}_{0,\sigma^{2}}(r)\mathcal{N}_{r\cos(\varphi-\theta),\frac{1}{2}+\sigma^{2}\sin^{2}(\varphi-\theta)}(m). \tag{51}\]
After some algebra this can be rewritten as
\[f_{R|M\Phi\Theta}(r|m\varphi\theta)=\mathcal{N}_{\mu,S^{2}}(r)\quad\text{with }\mu=m\cos(\varphi-\theta)\frac{\sigma^{2}}{\frac{1}{2}+\sigma^{2}},\quad S^{2}=\sigma^{2}\frac{\frac{1}{2}+\sigma^{2}\sin^{2}(\varphi-\theta)}{\frac{1}{2}+\sigma^{2}}. \tag{52}\]
The attackers send \(\mu\) to the verifiers. For the expected score we get
\[\mathbb{E}(R-\mu)^{2} = \mathbb{E}R^{2}+\mathbb{E}\mu^{2}-2\mathbb{E}\mu R \tag{53}\]
\[= \sigma^{2}+\left(\frac{\sigma^{2}}{\frac{1}{2}+\sigma^{2}}\right)^{2}\mathbb{E}m^{2}\cos^{2}(\varphi-\theta)-2\frac{\sigma^{2}}{\frac{1}{2}+\sigma^{2}}\mathbb{E}mr\cos(\varphi-\theta). \tag{54}\]
We introduce the notation \(\delta=\varphi-\theta\). We use the distribution of \(m\) conditioned on \(rr^{\perp}\varphi\theta\) to write
\[\mathbb{E}\cos^{2}\delta\;m^{2} = \mathbb{E}\cos^{2}\delta\bigg{[}\frac{1}{2}+(r\cos\delta+r^{\perp}\sin\delta)^{2}\bigg{]} \tag{55}\]
\[= \frac{1}{2}\mathbb{E}\cos^{2}\delta+(\mathbb{E}r^{2})\mathbb{E}\cos^{4}\delta+(\mathbb{E}[r^{\perp}]^{2})\mathbb{E}\cos^{2}\delta\sin^{2}\delta \tag{56}\]
\[= \frac{1}{2}\mathbb{E}\cos^{2}\delta+\sigma^{2}\mathbb{E}\cos^{2}\delta \tag{57}\]
\[= \frac{1}{2}\bigg{(}\frac{1}{2}+\sigma^{2}\bigg{)}. \tag{58}\]
Here we have used that \(\mathbb{E}\cos^{2}\delta=\frac{1}{2}\) because of the uniform \(\varphi\). Furthermore we have
\[\mathbb{E}mr\cos\delta=\mathbb{E}(r^{2}\cos^{2}\delta+rr^{\perp}\sin\delta\cos\delta)=\sigma^{2}/2+0. \tag{59}\]
Substitution of (58,59) into (54) yields
\[\mathbb{E}(R-\mu)^{2}=\frac{\sigma^{2}}{2}\cdot\frac{\sigma^{2}+1}{\sigma^{2}+\frac{1}{2}}. \tag{60}\]
This is much larger than the honest prover's value \(1/2+u\) for sufficiently large \(\sigma\).

## 5 Security against general attacks by unentangled adversaries

In this section we show that security holds not only against the attacks described above: the result generalizes to all attackers that do not pre-share entanglement, because their uncertainty about \(R\) can be lower bounded above the honest prover's. This is captured by the following theorem.

**Theorem 5.1**.: _For at least one attacker \(E\) participating in a general attack, the differential entropy of \(R\) given side information held by \(E\) follows the inequality_
\[h(R|E)\geq\frac{1}{2}\log\frac{4\pi}{1+\sigma^{-2}}, \tag{61}\]
_where \(\sigma\) is the same as defined in Section 3.2. Furthermore, this attacker's response \(r^{\prime}\) satisfies the inequality_
\[\mathbb{E}(R-r^{\prime})^{2}\geq\frac{2}{e}\cdot\frac{1}{1+\sigma^{-2}}. \tag{62}\]

Proof.: In the entanglement-based protocol, the verifiers perform a heterodyne measurement. This is achieved by mixing one half of the TMS state with vacuum (denoted by \(O\)) and then performing a homodyne measurement per mode, in orthogonal directions \(\theta\) and \(\theta+\frac{\pi}{2}\), so
\[\rho_{VP}\stackrel{{\text{Mixing}}}{{\longrightarrow}}\rho_{\bar{V}\bar{O}P}, \tag{63}\]
where the bar represents the modes after mixing. Here \(P\) is the subsystem sent to the prover and \(\bar{V}\) is the subsystem on which the \(\theta\) measurement will be applied. The attackers (Alice and Bob) perform a quantum operation on the mode \(P\) and any ancilla mode. We call the subsystem that Alice holds \(A\), and the one sent to Bob \(B\). The resulting state is \(\rho_{\bar{V}\bar{O}AB}\). We are interested in the tripartite state \(\rho_{\bar{V}AB}\). We write the result of a homodyne measurement on \(\bar{V}\) under angle \(\theta\) as \(U_{\theta}\in\mathbb{R}\), and we write \(\bar{\theta}=\theta+\frac{\pi}{2}\). Lemma 2.11 gives
\[\forall\theta\in\mathcal{A}\qquad h(U_{\theta}|A)+h(U_{\bar{\theta}}|B)\geq\log 2\pi. \tag{64}\]
Averaging over \(\theta\), and using the fact that averaging over \(\bar{\theta}\) is the same as averaging over \(\theta\), gives
\[\mathbb{E}_{\theta\in\mathcal{A}}h(U_{\theta}|A)+\mathbb{E}_{\theta\in\mathcal{A}}h(U_{\bar{\theta}}|B)\geq\log 2\pi \tag{65}\]
\[\implies\mathbb{E}_{\theta\in\mathcal{A}}h(U_{\theta}|A)+\mathbb{E}_{\theta\in\mathcal{A}}h(U_{\theta}|B)\geq\log 2\pi. \tag{66}\]
The last expression can be written as
\[h(U|A\Theta)+h(U|B\Theta)\geq\log 2\pi, \tag{67}\]
where the angle \(\Theta\) is now represented as a random variable. It follows that
\[\max\left\{h(U|A\Theta),\;h(U|B\Theta)\right\}\geq\frac{1}{2}\log 2\pi. \tag{68}\]
Finally, we note that \(R=U\sqrt{2}\tanh\zeta\) (with \(\sinh\zeta=\sigma\)) and use Lemma 2.3 to conclude
\[h(R|E)\geq\frac{1}{2}\log 2\pi+\frac{1}{2}\log\frac{2}{1+\sigma^{-2}}=\frac{1}{2}\log\frac{4\pi}{1+\sigma^{-2}}, \tag{69}\]
where we have set \(\max\left\{h(R|A\Theta),\;h(R|B\Theta)\right\}=h(R|E)\). The result for \(\mathbb{E}(R-r^{\prime})^{2}\) follows directly from the Fano inequality (Theorem 2.12).
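As a quick numerical illustration of this gap (a minimal sketch of our own; natural logarithms and the example channel parameters are assumptions), one can tabulate the honest prover's conditional entropy (24) against the attacker bounds (61) and (62):

```python
import numpy as np

def honest_entropy(sigma, t, u):
    """h(R|R') from Eq. (24), with Sigma^2 taken from Eq. (23); natural log."""
    Sigma2 = 1.0 / (1.0 / sigma**2 + t / (0.5 + u))
    return 0.5 * np.log(2.0 * np.pi * np.e * Sigma2)

def attacker_entropy_bound(sigma):
    """Lower bound on h(R|E) for at least one attacker, Eq. (61)."""
    return 0.5 * np.log(4.0 * np.pi / (1.0 + sigma**-2))

def attacker_error_bound(sigma):
    """Lower bound on the attacker's expected error E(R - r')^2, Eq. (62)."""
    return (2.0 / np.e) / (1.0 + sigma**-2)

sigma, t, u = 10.0, 0.9, 0.02                      # assumed example parameters
print(honest_entropy(sigma, t, u))                 # ~1.14 nats for the honest prover
print(attacker_entropy_bound(sigma))               # ~1.26 nats: strictly larger
print(attacker_error_bound(sigma), (0.5 + u) / t)  # 2/e bound vs honest MSE
```

For these parameters the attacker bound exceeds the honest entropy, which is exactly the separation quantified in the comparison below.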
### Comparison between attacker and honest prover

We will now work in the \(\sigma\gg 1\) limit. The protocol works only if the attackers have more ignorance about the value \(R\) than the honest prover. Note that we assume that the attackers are powerful and have access to an ideal channel (\(t=1,u=0\)). For \(\sigma\to\infty\), the difference between their entropies (61), (24) satisfies
\[h(R|E)-h(R|R^{\prime})\geq\frac{1}{2}\log\!\left(\frac{4}{e}\cdot\frac{t}{1+2u}\right)\!. \tag{70}\]
The argument of the logarithm needs to be larger than \(1\). This is the case when
\[t>\frac{e}{4}\approx 0.680\quad\land\quad u\leq\frac{t\cdot 4/e-1}{2}. \tag{71}\]
Note that Fano's inequality applied to the honest prover's entropy (24) would yield the expression \(\mathbb{E}(\sqrt{t}R-r^{\prime})^{2}\approx\Sigma^{2}\) (as \(\sqrt{t}R|R^{\prime}\) is Gaussian with mean \(r^{\prime}\) in the large-\(\sigma\) limit), with \(\Sigma^{2}\) as defined in (23), evaluating to \(\Sigma^{2}\approx(1/2+u)/t\). On the other hand, the expected error of the attacker is lower bounded by \(2/e\approx 0.74\), which is strictly greater than \((1/2+u)/t\) for certain parameter ranges, as depicted in Figure 2. This proves security of the protocol against a general attack in these parameter ranges.

Figure 2: Security of the proposed CV-QPV protocol. For \(t\leq 1/2\) it is insecure (red), as shown in [1]. For values in the green region, we prove security. Currently, no conclusions can be drawn about the grey region.

As long as equation (70) is positive, i.e.
\[\frac{t}{1+2u}>\frac{e}{4}, \tag{72}\]
there is a finite gap between the attackers' and the honest prover's entropy about \(R\). An attack then fails if the score is greater than \(\gamma\) (cf. Section 3.1). To estimate the number of (independent) rounds \(n\) we have to run for the attack success probability to become vanishingly small, we cannot assume a specific attack distribution, and we have to assume the attackers have access to an ideal channel. We know that
\[\mathbb{E}(R-r^{\prime})^{2}\geq\frac{2}{e}, \tag{73}\]
thus \(\mathbb{E}(\sqrt{t}R-r^{\prime})^{2}\geq 2/e\) for any transmission \(t\). The probability that the attackers' score falls below the threshold \(\gamma\) is at most the probability that the score differs from \(\mathbb{E}(\sqrt{t}R-r^{\prime})^{2}/(1/2+u)\) by more than the difference \(\Delta\stackrel{{\rm def}}{{=}}(2/e)/(1/2+u)-\gamma\). Thus we can use the Chebyshev inequality for the random variable of the score to get
\[\mathbb{P}\Bigg{[}\Bigg{|}\frac{1}{n}\sum_{i=1}^{n}\frac{(\sqrt{t}R_{i}-r_{i}^{\prime})^{2}}{1/2+u}-\frac{\mathbb{E}(\sqrt{t}R-r^{\prime})^{2}}{1/2+u}\Bigg{|}\geq\Delta\Bigg{]}\leq\frac{\tilde{\sigma}^{2}}{n\Delta^{2}}=O\bigg{(}\frac{1}{n\Delta^{2}}\bigg{)}, \tag{74}\]
where \(\tilde{\sigma}^{2}=\mathbb{V}\bigg{[}\frac{(\sqrt{t}R-r^{\prime})^{2}}{1/2+u}\bigg{]}\). If we set \(n\Delta^{2}=\Omega\Big{(}\frac{1}{\varepsilon_{\text{att}}}\Big{)}\) then we get
\[\mathbb{P}\Bigg{[}\frac{1}{n}\sum_{i=1}^{n}\frac{(\sqrt{t}R_{i}-r_{i}^{\prime})^{2}}{1/2+u}\leq\gamma\Bigg{]}\leq O(\varepsilon_{\text{att}}). \tag{75}\]

## 6 Perfect attack with a single EPR pair

It turns out that our protocol can be attacked if Alice and Bob pre-share one CV EPR pair (see Section 2 for formal descriptions of CV entanglement and teleportation). The entanglement attack proceeds as follows:

1. Alice and Bob pre-share an ideal EPR pair.
2. Alice teleports \(|\psi\rangle\) to Bob. She forwards the measured displacement \((d_{x},d_{p})\) to Bob.
3.
Bob intercepts \(\theta\) and immediately performs a homodyne measurement under angle \(\theta\) on his own half of the EPR pair, obtaining outcome \(\mu\in\mathbb{R}\). He forwards \(\theta,\mu\) to Alice.
4. Alice receives \(\theta,\mu\). She computes \(r^{\prime}=\mu-d_{x}\cos\theta-d_{p}\sin\theta\) and sends \(r^{\prime}\) to \(V_{1}\).
5. Bob receives \(d_{x},d_{p}\). He computes \(r^{\prime}=\mu-d_{x}\cos\theta-d_{p}\sin\theta\) and sends \(r^{\prime}\) to \(V_{2}\).

The state \(|\psi\rangle\) is a coherent state with displacement \((x_{0},p_{0})\). The effect of the teleportation is that Bob's half of the EPR pair becomes a coherent state with displacement \((x_{0}+d_{x},p_{0}+d_{p})\). Bob's homodyne measurement commutes with the teleport-induced displacement: the undoing of the displacement can be done _after_ Bob's measurement. The noise in \(r^{\prime}\) with respect to \(r\) is just shot noise, exactly as for the honest prover. Any other noise originating from loss or excess noise can simply be simulated by the attackers. Hence, in the case of an ideal pre-shared EPR pair, the responses from the attackers are statistically indistinguishable from honest prover responses.

## 7 Discussion

The security analysis of CV-QPV differs from the discrete variable case, as the honest prover now responds with a sample from a probability distribution. Thus, to prove security (in the setting without pre-shared entanglement), we needed to show that an attack necessarily produces a different distribution than the honest one and that the verifiers can distinguish these distributions. We have shown that this can be done using an entropic uncertainty relation for the differential entropy together with a continuum version of the Fano inequality. We included attenuation and excess noise in the honest channel and showed security for a small range of parameters. We further showed that the considered CV-QPV protocol is broken if one CV EPR pair is pre-shared between the attackers.

Since continuous-variable systems have some practical advantages over discrete ones (see Section 1), we hope that this work may spur interest in the further study of QPV in the context of continuous variables, and we hope our techniques can be useful there. An immediate next step could be to extend this protocol to the case where the classical information \(\theta\) is computed via a function \(f(x,y)\) taking inputs \(x,y\) from both verifiers, similar to the discrete variable \(\text{QPV}^{f}_{\text{BB84}}\) protocol [1, 1], and to study CV entanglement attacks on that. More generally, one may ask how far results on QPV for discrete variable protocols generalize or naturally carry over to the CV setting. For example, can the recent formulation of CV port-based teleportation [14] be used to immediately re-formalize the general attack on discrete variable QPV [1] in the CV setting? Do the known attacks, which scale with properties of circuit decompositions of the provers' unitary [14, 15], naturally generalize, for example to CV equivalents of \(T\)-count or \(T\)-depth?

#### Acknowledgments

We thank Kfir Dolev for interesting initial discussions on the topic of CV-QPV. RA was supported by the Dutch Research Council (NWO/OCW), as part of the Quantum Software Consortium programme (project number 024.003.037). PVL was supported by the Dutch Research Council (NWO/OCW), as part of the NWO Gravitation Programme Networks (project number 024.002.003).
FS and LEF are supported by the Dutch Ministry of Economic Affairs and Climate Policy (EZK), as part of the Quantum Delta NL programme. BS and AAR acknowledge the support from Groeifonds Quantum Delta NL KAT2.
2303.10261
Topological Phases of Photonic Crystals under Crystalline Symmetries
Photonic crystals (PhCs) have emerged as a popular platform for realizing various topological phases due to their flexibility and potential for device applications. In this article, we present a comprehensive classification of topological bands in one- and two-dimensional photonic crystals, with and without time-reversal symmetry. Our approach exploits the symmetry representations of field eigenmodes at high-symmetry points in momentum space, allowing for the efficient design of a wide range of topological PhCs. In particular, we show that the complete classification provided here is useful for diagnosing photonic crystal analogs of obstructed atomic limits, fragile phases, and stable topological phases that include bands with Dirac points and Chern numbers.
Sachin Vaidya, Ali Ghorashi, Thomas Christensen, Mikael C. Rechtsman, Wladimir A. Benalcazar
2023-03-17T22:03:55Z
http://arxiv.org/abs/2303.10261v1
# Topological Phases of Photonic Crystals under Crystalline Symmetries ###### Abstract Photonic crystals (PhCs) have emerged as a popular platform for realizing various topological phases due to their flexibility and potential for device applications. In this article, we present a comprehensive classification of topological bands in one- and two-dimensional photonic crystals, with and without time-reversal symmetry. Our approach exploits the symmetry representations of field eigenmodes at high-symmetry points in momentum space, allowing for the efficient design of a wide range of topological PhCs. In particular, we show that the complete classification provided here is useful for diagnosing photonic crystal analogs of obstructed atomic limits, fragile phases, and stable topological phases that include bands with Dirac points and Chern numbers.

## I Introduction

Photonic crystals (PhCs) are periodically patterned dielectric media that can be described by a Maxwell eigenvalue problem [1; 2]. The periodicity of the dielectric medium acts analogously to a potential for electromagnetic waves, and the solutions take the form of Bloch functions that are distributed into photonic bands. Similar to electronic states in conventional solids, PhC eigenmodes can be characterized by topological indices that are global properties across momentum space [3; 4; 5]. An important physical manifestation of these topological indices is the existence of states that reside on the boundaries of the system - this is known as the bulk-boundary correspondence.

A wide variety of topological phases have been realized using PhC-based platforms (as distinct from waveguide-array [6; 7], coupled-resonator [8; 9] or microwave-circuit [10] realizations). In one and two dimensions, this includes analogs of the SSH model with quantized polarization [11; 12; 13], Chern insulators [14; 15; 16; 17; 18; 19], quantum spin-Hall-like phases [20; 21; 22; 23; 24], Dirac semi-metals [25; 26; 27; 28; 29; 30; 31], valley-Hall phases [32; 33; 34; 35], bulk-obstructed higher-order topological insulators (HOTIs) [36; 37; 38; 39; 40; 41], including quadrupolar HOTIs [42; 43; 44], and fragile phases [45]. Several of these have also been proposed for photonic device applications such as lasing [46; 47; 48; 13], harmonic generation [49; 50] and light transport [51]. Moreover, the flexibility of the PhC-based platform has made it possible to explore the effects of non-linearity [52; 53] and non-Hermiticity [54; 55] alongside topology - novel physics that is difficult to realize in conventional electronic solids.

Topological systems can be classified in the tenfold way [56; 57; 58] by the presence or absence of the three fundamental symmetries: time-reversal, chiral, and particle-hole symmetries. PhCs generally do not possess chiral and particle-hole symmetries and therefore belong in either class A (TR-broken) or class AI (TR-symmetric) of the tenfold way. However, crystalline symmetries enrich this classification and can help identify finer topological distinctions within these classes. There are three kinds of topological bands: (i) obstructed "atomic limit" (OAL) bands [59] that admit exponentially localized Wannier functions [60] (such bands are referred to as "Wannierizable"), (ii) fragile bands [61; 62] that are non-Wannierizable but become Wannierizable when combined with other atomic limit bands, and (iii) stable topological bands that are not Wannierizable.
In all cases, topology can be generally identified by computing Berry phases (or, more generally, Wilson loops) over the entire Brillouin zone. In the presence of crystalline symmetries, it is possible to identify and distinguish a subset of them by constructing symmetry-indicator invariants [63; 64; 65]. Compared to Berry phases, this symmetry-based approach can be substantially less intensive for computation since it only requires looking at the eigenmodes at high-symmetry points of the Brillouin zone (BZ). In this article, we build on previous works in electronic systems [63; 65] and comprehensively develop a complete classification for topological bands in one- and two-dimensional PhCs under crystalline symmetries both with and without time-reversal symmetry. For each point-group symmetry, we exhaustively calculate the topological indices, defined using symmetry-indicator invariants, for the basis set of atomic limits that span the space of all possible atomic limit bands via the procedure of induction of band representations [66; 67]. This allows us to establish a bulk-boundary correspondence for OAL bands in PhCs where we show that despite the absence of a Fermi level, the notion of a filling anomaly remains meaningful and can be used to infer the topological origin of boundary states directly from the frequency spectrum of the PhCs. This approach also allows us to diagnose topological bands that are not OALs, namely fragile phases and bands with Dirac points and Chern numbers, which is made possible by exploiting the linear structure of the classification. Based on our classification, we propose a strategy to diagnose and design topological PhCs. Finally, for completeness, we discuss the PhC-based implementations of a few other topological systems that lie outside of this framework but where symmetry plays an important role. The rest of the paper is organized as follows: In section II, we review the concepts of Berry phases, symmetry-indicator invari ants, and filling anomaly for 1D PhCs. In section III, we extend these ideas to 2D PhCs by developing the classification due to rotational symmetries, both with and without TR-symmetry. In section IV, we discuss design and characterization strategies for various topological PhCs using our classification, along with appropriate examples. In section V, we discuss PhC-based implementations of the quantum spin-Hall phases, valley-Hall phases and analogs of insulators with quantized multipole moments, all of which lie outside of this framework. ## II 1D Photonic Crystals Maxwell's equations with no sources and for a medium that is linear, isotropic, and lossless are [1; 2] \[\nabla\cdot\mathbf{H}(\mathbf{r},t) =0,\] \[\nabla\times\mathbf{E}(\mathbf{r},t)+\mu_{0}\partial_{t}\mathbf{ H}(\mathbf{r},t) =0,\] \[\nabla\cdot[\epsilon(\mathbf{r})\mathbf{E}(\mathbf{r},t)] =0,\] \[\nabla\times\mathbf{H}(\mathbf{r},t)-\epsilon_{0}\epsilon(\mathbf{ r})\partial_{t}\mathbf{E}(\mathbf{r},t) =0, \tag{1}\] where \(\mathbf{E}\) and \(\mathbf{H}\) are the electric and magnetic fields respectively, \(\epsilon(\mathbf{r})\) is the dielectric function, and \(\epsilon_{0}\) and \(\mu_{0}\) are the vacuum permittivity and permeability respectively. 
Expanding the temporal component of the electric and magnetic fields into harmonics as \(\mathbf{H}(\mathbf{r},t)=\mathbf{H}(\mathbf{r})\mathrm{e}^{-\mathrm{i}\omega t}\), \(\mathbf{E}(\mathbf{r},t)=\mathbf{E}(\mathbf{r})\mathrm{e}^{-\mathrm{i}\omega t}\), these equations reduce to
\[\nabla\times\left(\frac{1}{\epsilon(\mathbf{r})}\nabla\times\mathbf{H}(\mathbf{r})\right)=\left(\frac{\omega}{c}\right)^{2}\mathbf{H}(\mathbf{r}),\]
\[\nabla\times\nabla\times\mathbf{E}(\mathbf{r})=\left(\frac{\omega}{c}\right)^{2}\epsilon(\mathbf{r})\mathbf{E}(\mathbf{r}). \tag{2}\]
Due to the absence of magneto-electric coupling, we can choose to solve only the equation for \(\mathbf{H}(\mathbf{r})\) in Eq. (2), since \(\mathbf{E}(\mathbf{r})\) can be found from \(\mathbf{H}(\mathbf{r})\) using the last equation in Eq. (1).

A 1D PhC, shown schematically in Fig. 1(a), is a 3D material characterized by a refractive index that is periodic along one direction (\(x\)) and is uniform along the other two directions (\(y\) and \(z\)). The magnetic field eigenmode can therefore be written as a plane wave solution in the \(y,z\) plane multiplied by an \(x\)-dependent vector field, \(\mathbf{H}=e^{i\mathbf{k}_{\parallel}\cdot\boldsymbol{\rho}}\mathbf{h}(x)\), where \(\mathbf{k}_{\parallel}\) is the momentum along the uniform directions and \(\boldsymbol{\rho}=y\hat{\mathbf{y}}+z\hat{\mathbf{z}}\). However, we are only concerned with propagation along the periodic direction, which implies that \(\mathbf{k}_{\parallel}=0\). Moreover, since the fields must be perpendicular to the propagation direction, we can define two orthogonal polarizations where the vector fields lie in the \(y,z\) plane. Assuming isotropy, we can take these polarized fields to be \(\mathbf{h}_{z}(x)=h_{z}(x)\hat{\mathbf{z}}\) and \(\mathbf{h}_{y}(x)=h_{y}(x)\hat{\mathbf{y}}\). This leads to the following eigenvalue problem for the scalar fields \(h_{\xi}(x)\) for \(\xi\in\{y,z\}\),
\[\hat{\Theta}_{1}h_{\xi}(x)=\left(\frac{\omega}{c}\right)^{2}h_{\xi}(x),\ \ \hat{\Theta}_{1}\equiv-\partial_{x}\left(\frac{1}{\epsilon(x)}\partial_{x}\right), \tag{3}\]
where \(\hat{\Theta}_{1}\) is the 1D Maxwell operator that plays a role analogous to the Hamiltonian in quantum mechanics. By exploiting the periodicity of the dielectric function, the above equation can be solved using Bloch's theorem. Specifically, the ansatz \(h_{\xi,n,k_{x}}(x)=e^{ik_{x}x}u_{\xi,n,k_{x}}(x)\) can be used to solve Eq. (3), where \(u_{\xi,n,k_{x}}(x)\) is the periodic part of the field defined over a unit cell. With this, Eq. (3) can be written as
\[\hat{\Theta}_{1,k_{x}}u_{\xi,n,k_{x}}(x)=\left(\frac{\omega_{n}}{c}\right)^{2}u_{\xi,n,k_{x}}(x), \tag{4}\]
where
\[\hat{\Theta}_{1,k_{x}}\equiv-(\partial_{x}+ik_{x})\left(\frac{1}{\epsilon(x)}(\partial_{x}+ik_{x})\right). \tag{5}\]
This yields field solutions distributed across discrete frequency bands labeled by the index \(n\), with their momentum, \(k_{x}\), restricted to lie within the first BZ, as shown in Fig. 1(b). It is also useful to define the inner product between two fields over a unit cell (UC) as
\[\langle u_{\xi,n_{1},k_{1}}|u_{\xi,n_{2},k_{2}}\rangle=\int_{\mathrm{UC}}u_{\xi,n_{1},k_{1}}^{*}(x)u_{\xi,n_{2},k_{2}}(x)\ \mathrm{d}x. \tag{6}\]
As with electronic energy bands in conventional solids, the presence of frequency gaps allows for a topological characterization of isolated individual photonic bands or of groups of bands, as discussed in the following section.
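As a concrete illustration of how Eq. (4) can be solved in practice, here is a minimal plane-wave-expansion sketch (our own, not the solver used in this work; the truncation, grid, and material values are assumptions). In the basis \(u(x)=\sum_{G}c_{G}e^{iGx}\), the operator of Eq. (5) becomes the Hermitian matrix \(M_{GG^{\prime}}=(k_{x}+G)\,\eta_{G-G^{\prime}}\,(k_{x}+G^{\prime})\), where \(\eta_{G}\) are the Fourier coefficients of \(1/\epsilon(x)\):

```python
import numpy as np

def bands_1d(eps, ks, n_g=32, n_bands=6):
    """Plane-wave expansion of Eq. (4): diagonalize M[G,G'] = (k+G)*eta[G-G']*(k+G'),
    with eta the Fourier coefficients of 1/eps(x). Lattice constant a = 1; returns
    normalized frequencies omega*a/(2*pi*c)."""
    n_x = eps.size
    eta = np.fft.fft(1.0 / eps) / n_x                    # Fourier coeffs of 1/eps
    m = np.arange(-n_g, n_g + 1)
    G = 2.0 * np.pi * m                                  # reciprocal lattice vectors
    eta_mat = eta[np.subtract.outer(m, m) % n_x]         # eta_{G - G'}
    out = []
    for k in ks:
        kg = k + G
        M = kg[:, None] * eta_mat * kg[None, :]
        w2 = np.linalg.eigvalsh(0.5 * (M + M.conj().T))  # enforce Hermiticity
        out.append(np.sqrt(np.abs(w2[:n_bands])) / (2.0 * np.pi))
    return np.array(out)

# example: a two-layer unit cell like the TiO2/air stack discussed below
# (eps = 6.25 and 1, filling fraction d/a = 0.6)
x = np.linspace(0.0, 1.0, 512, endpoint=False)
eps = np.where(x < 0.6, 6.25, 1.0)
freqs = bands_1d(eps, ks=np.linspace(0.0, np.pi, 41))
print(freqs[-1])  # lowest bands at the X point (k_x = pi/a)
```

The eigenvectors returned alongside these frequencies are the Fourier coefficients of \(u_{\xi,n,k_{x}}(x)\), which is exactly the data needed for the symmetry analysis that follows.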
### Classification due to inversion symmetry

1D PhCs fall into class A or AI of the tenfold way, depending on whether they break or preserve time-reversal symmetry (TRS), respectively. In either case, 1D PhCs are topologically trivial in the absence of other symmetries. However, the presence of inversion symmetry protects two topological phases in both A and AI. The invariant for a single band in these phases is the Berry phase
\[\theta=\int_{\text{BZ}}\mathcal{A}_{n,k_{x}}\;\mathrm{d}k_{x}, \tag{7}\]
where \(\mathcal{A}_{n,k_{x}}=-\mathrm{i}\left\langle u_{\xi,n,k_{x}}\right|\partial_{k_{x}}\left|u_{\xi,n,k_{x}}\right\rangle\) is the Berry connection.

Figure 1: (a) Schematic of a 1D PhC made out of alternating layers of dielectric materials of dielectric constants \(\varepsilon_{h}\) and \(\varepsilon_{l}\) with lattice constant \(a\). (b) Schematic of the dispersion of light in a 1D PhC. (c) Wannier centers (solid circles) are located at the two possible maximal Wyckoff positions in the inversion-symmetric unit cells (squares). (d) Filling anomaly due to inversion symmetry. The finite trivial system (with \([X_{1}]=0\)) has a number of states equal to the number of unit cells and is inversion symmetric. The topological system requires at least one more or one fewer state to maintain inversion symmetry.

Under an inversion-symmetric choice of unit cell, the Berry phase is quantized to \(0\) or \(\pi\). This quantization has an intuitive interpretation: in 1D, all photonic bands admit maximally localized Wannier functions whose centers are gauge-invariant quantities [68; 69; 70; 71; 72; 73; 74]. Due to inversion symmetry, a single Wannier center (per unit cell) can only be located at two distinct positions in the unit cell, as shown in Fig. 1(c). These positions are called maximal Wyckoff positions and are labeled by \(1a\) and \(1b\). The Berry phase in Eq. (7) of a single non-degenerate band indicates the location of the (one) Wannier center within each unit cell, where \(\theta=0\) and \(\pi\) correspond to the Wannier center being located at the position \(1a\) (middle of the unit cell) and \(1b\) (edge of the unit cell), respectively.

The calculation of Eq. (7) involves an integral over the entire BZ, but it can be greatly simplified by looking at the representations of the \(\mathbf{H}\) or \(\mathbf{E}\) fields at the high-symmetry points (HSPs) of the BZ [75], which are \(\mathbf{\Gamma}\) (\(k_{x}=0\)) and \(\mathbf{X}\) (\(k_{x}=\pi/a\)). Under inversion symmetry \(\mathcal{I}:\mathbf{r}\rightarrow-\mathbf{r}\), the 1D Maxwell operator obeys
\[\hat{\mathcal{I}}\hat{\Theta}_{1,k_{x}}\hat{\mathcal{I}}^{-1}=\hat{\Theta}_{1,-k_{x}}, \tag{8}\]
where \(\hat{\mathcal{I}}\) is the inversion operator. Equation (8) implies that \(\hat{\Theta}_{1,k_{x}}\) commutes with \(\hat{\mathcal{I}}\) at \(\mathbf{\Gamma}\) and \(\mathbf{X}\), i.e., \([\hat{\Theta}_{1,\mathbf{\Gamma}},\hat{\mathcal{I}}]=[\hat{\Theta}_{1,\mathbf{X}},\hat{\mathcal{I}}]=0\), since these HSPs map to themselves under a negative sign, modulo a reciprocal lattice vector. Thus, the eigenmodes at these HSPs can be labeled by the eigenvalues of \(\hat{\mathcal{I}}\), which are \(\pm 1\) since \(\hat{\mathcal{I}}^{2}=1\). To aid with generalization to 2D later, we denote these eigenvalues at the HSP \(\mathbf{\Pi}\) as \(\mathbf{\Pi}_{1}=+1\) and \(\mathbf{\Pi}_{2}=-1\).
We can now define the symmetry-indicator invariant for a set of bands as
\[[X_{1}]=\#\mathbf{X}_{1}-\#\mathbf{\Gamma}_{1}\quad\in\mathbb{Z}, \tag{9}\]
where \(\#\mathbf{\Pi}_{1}\) is the number of states at the HSP \(\mathbf{\Pi}\) with \(\mathcal{I}\) eigenvalue \(+1\). The invariant in Eq. (9) then encodes the value of the Berry phase as [75; 76]
\[\frac{\theta}{2\pi}=\frac{1}{2}[X_{1}]\mod 1, \tag{10}\]
which provides a \(\mathbb{Z}_{2}\) classification of dipole moments in inversion-symmetric 1D crystals. We note that the Berry phase and the invariant in Eq. (9) depend on the choice of unit cell.

The bands that originate from localized and symmetric Wannier functions form a representation of the crystal's symmetry group, called a band representation [66]. The values of \([X_{1}]\) for a single isolated band can be enumerated exhaustively by working out the inverse problem, i.e., given a set of Wannier functions, we can calculate the band representation that such a set leads to. This inverse problem of band topology has been used to classify topological phases in insulators [66; 67]. We review this procedure for 1D bands in Appendix C.

In the next section, we explore the physical consequence of a non-trivial invariant: the presence of boundary states. However, as we shall describe shortly, due to the lack of additional symmetries that impose constraints on the frequency spectrum (such as chiral or particle-hole symmetries), these boundary states need not lie within bandgaps, and the issue of bulk-boundary correspondence is somewhat more subtle in PhCs.

### Filling anomaly, counting mismatch, and boundary states

The existence of boundary states can be heuristically understood by considering the effect of a boundary between two distinct topological phases. Since the invariants are quantized and can only change at gap closings, a gap-closing point at the boundary is required, resulting in boundary states. For 1D systems with inversion symmetry, such topological boundary states are associated with a _filling anomaly_ [77; 63; 78], which we describe now.

Consider a finite tiling of \(N\) inversion-symmetric 1D unit cells, which creates two halves or "sectors" in real space, related by inversion symmetry, with two boundaries, as shown in Fig. 1(d). A single isolated band in the bulk gives rise to \(N\) states in this finite system. For a trivial bulk band with \([X_{1}]=0\), the Wannier centers in the finite tiling must be placed at the \(1a\) position of the unit cell, and the number of states that correspond to this bulk band is equal to \(N\). However, for a topological bulk band with \([X_{1}]=\pm 1\), the Wannier centers in the finite tiling must be placed at the \(1b\) position of the unit cell, which leads to a difficulty: \(N\) states cannot maintain inversion symmetry due to the shifted position of the Wannier centers. Instead, either \(N-1\) or \(N+1\) (or, more generally, \(N-\overline{1}_{2}\), where \(\overline{1}_{2}\) is any integer congruent to \(1\) mod \(2\)) Wannier centers are necessary to be consistent with inversion symmetry, as shown in Fig. 1(d). This inability to maintain both the expected number of states and inversion symmetry simultaneously is known as the filling anomaly [63], and it leads to the quantization of fractional charge at boundaries in electronic systems and of fractional electromagnetic energy density in PhCs.
Since each Wannier center corresponds to a single state, the filling anomaly also presents a practical way to diagnose non-trivial topology by counting states in the spectrum of a finite system [79; 42]. The spectral consequence of the filling anomaly is that the states in the finite system within the frequency range of a single topological bulk band must have an odd (\(\overline{1}_{2}\)) number of missing or additional states as compared to the number of unit cells. The missing states are paired up with missing states from a different topological band in a way that preserves the inversion symmetry of the system and these typically reside inside the bandgap as boundary states. However, due to a lack of additional symmetries that pin these boundary states to the middle of the gap, they could be pushed into a bulk band by inversion symmetry preserving perturbations to the boundaries. Since such perturbations act identically on both boundaries of the system, the bulk band would gain an odd number of additional states. Crucially, regardless of the details of the perturbation, the number of expected states and the actual states within the frequency range of a single topological band will differ by \(\overline{1}_{2}\); we refer to this as a "counting mismatch". In contrast, trivial boundary states, such as defect states, originate from a single band and would give rise to a counting mismatch of an even (\(=\overline{0}_{2}\)) number of states for that band in a finite system with two boundaries related by inversion symmetry. Therefore, the counting mismatch is a \(\mathbb{Z}_{2}\) invariant that can be determined from the frequency spectrum of the PhC and thus can directly reveal the topological nature of bulk bands. We provide a more detailed discussion of the origin of this counting mismatch in Appendix A. To summarize this argument, in the absence of chiral or particle-hole symmetry, the bulk-boundary correspondence of topological 1D PhCs with inversion symmetry is subtle in that the boundary states _may_ or _may not_ appear within a bandgap. However, regardless of their location in the frequency spectrum, the states within the frequency range of a topological band in a finite system _must_ exhibit an odd-integer counting mismatch. We now consider an explicit example of a 1D PhC consisting of alternating layers of TiO\({}_{2}\) (\(\varepsilon=6.25\)) and air (\(\varepsilon=1\)). The TiO\({}_{2}\) layer occupies a filling fraction \(d/a=0.6\) in the unit cell with lattice constant \(a\). The first six bands of this 1D PhC are shown in Fig. 2(a). Two distinct types of inversion-symmetric unit cells are possible for this PhC, as shown in the inset of Fig. 2(a). The two types of unit cells are re-definitions of each other, related by a shift of \(a/2\) along the \(x\) direction. The eigenvalues of \(\mathcal{I}\) at the HSPs \(\mathbf{\Gamma}\) and \(\mathbf{X}\) for both types of unit cells, as well as the Berry phase calculated using Eq. (7), are shown in the same plot. They show that while the band structure is identical for the two types of unit cells, the Berry phases and, correspondingly, the symmetry-indicator invariants are different. This is consistent with the fact that the re-definition of the unit cell shifts the Wannier center from the \(1a\) position to the \(1b\) position and, therefore, also the Berry phase from \(0\) to \(\pi\). This implies that if a band in one of the unit cell types is trivial, the corresponding band in the other type is topological. 
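Both diagnostics used above, the Berry phase of Eq. (7) and the indicator of Eq. (9), reduce to a few lines of numerics. The sketch below (our own illustration; the sampling conventions are assumptions, and the inputs would come from a Bloch-mode solver such as the plane-wave expansion sketched earlier):

```python
import numpy as np

def zak_phase(u, x, a=1.0):
    """Discretized Berry phase of Eq. (7). u has shape (n_k, n_x): periodic Bloch
    functions u_{n,k}(x) sampled on k in [0, 2*pi/a) (endpoint excluded) and
    x in [0, a). The loop is closed with the periodic-gauge relation
    u_{k + 2*pi/a}(x) = exp(-2j*pi*x/a) * u_k(x)."""
    w = 1.0 + 0.0j
    for i in range(len(u) - 1):
        w *= np.vdot(u[i], u[i + 1])       # overlap <u_k|u_{k+dk}>, cf. Eq. (6)
    w *= np.vdot(u[-1], np.exp(-2j * np.pi * x / a) * u[0])
    return np.angle(w)                     # quantized to 0 or pi here

def x1_invariant(parities_gamma, parities_X):
    """[X_1] of Eq. (9) from lists of inversion eigenvalues (+1/-1) of the
    chosen bands at Gamma and X."""
    return parities_X.count(+1) - parities_gamma.count(+1)

# example: a single band that is even at Gamma and odd at X
print(x1_invariant([+1], [-1]))  # [X_1] = -1
```

By Eq. (10), \([X_{1}]=-1\) corresponds to \(\theta=\pi\), i.e. a Wannier center at the \(1b\) position.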
Figure 2: (a) The photonic band structure of a 1D PhC with \(\varepsilon_{h}=6.25\), \(\varepsilon_{l}=1\) and \(d=0.6a\). The two possible types of inversion-symmetric unit cells are shown in the inset. Eigenvalues of \(\mathcal{I}\) at the HSPs for both types of unit cells are labeled with \(+/-\) signs. The Berry phases for both types of unit cells are shown in blue boxes. (b) The dielectric profile of a finite system of size 61 unit cells with interfaces between the two types of unit cells. The inset highlights the switch between the unit cell types across the boundary. (c) The frequency spectrum for the finite system shown in (b). An odd-integer counting mismatch per band leads to the presence of topological boundary states in the first and third bandgaps. The photonic DoS is also shown in the same figure, labeled with the number of states. (d) The \(E_{z}\) mode profiles of one of the two topological boundary states in the first and third gaps.

Next, we simulate a large inversion-symmetric supercell with interfaces between the two types of unit cells in a strip geometry, as shown in Fig. 2(b). This supercell has two inversion-symmetry-related sectors with two boundaries and consists of a total of 61 unit cells. Therefore, we expect to find 61 states per band in the spectrum of this supercell, which is shown in Fig. 2(c). However, due to the distinct topology of the bands in the two unit cell types, each band in the finite structure exhibits a counting mismatch of \(\overline{1}_{2}\) states. For bands 1 to 4, we find the counting mismatch to be one missing state each and that these mismatched states reside in the bandgaps as boundary states whose field profiles are shown in Fig. 2(d). For band 5, we find a counting mismatch of three missing states, two of which reside in the fourth gap and are trivial states since they originate from the same band. The remaining missing state is paired with another state from band 6. However, we can see that this pair of mismatched states does not lie deep inside the fifth bandgap but is instead very close to the band-edge of band 6. Including these states as part of band 6, we find a counting mismatch of one additional state for band 6. The in-gap topological boundary states discussed above have been directly observed in experiments in 1D PhCs and 1D periodic-dielectric waveguides [11; 12; 13]. Having introduced the notion of topological bands in the presence of crystalline symmetries in 1D, we now extend the topological classification and characterization of photonic bands to 2D.

## III 2D Photonic Crystals

Two-dimensional PhCs consist of a periodic patterning of the dielectric along two directions (e.g., \(x\) and \(y\)) and a uniform dielectric profile along the third direction (e.g., \(z\)), with wave propagation restricted to lie in the \(x,y\) plane. In this setting, the equations in (2) can be simplified by exploiting the mirror symmetry through the \(x,y\) plane that sends \(z\to-z\). This separates the states into two orthogonal polarizations: transverse electric (TE) with \(\mathbf{E}(\mathbf{r})=\mathcal{E}_{x}(x,y)\hat{\mathbf{x}}+\mathcal{E}_{y}(x,y)\hat{\mathbf{y}}\), \(\mathbf{H}(\mathbf{r})=\mathcal{H}_{z}(x,y)\hat{\mathbf{z}}\), which is even under the mirror symmetry, and transverse magnetic (TM) with \(\mathbf{E}(\mathbf{r})=\mathcal{E}_{z}(x,y)\hat{\mathbf{z}}\), \(\mathbf{H}(\mathbf{r})=\mathcal{H}_{x}(x,y)\hat{\mathbf{x}}+\mathcal{H}_{y}(x,y)\hat{\mathbf{y}}\), which is odd under the mirror symmetry.
For these generally non-degenerate TE and TM polarizations, the eigenvalue problem is most easily solved for the scalar fields \(\mathcal{H}_{z}(x,y)\) and \(\mathcal{E}_{z}(x,y)\) respectively, via [2]
\[-\left[\partial_{x}\frac{1}{\varepsilon(x,y)}\partial_{x}+\partial_{y}\frac{1}{\varepsilon(x,y)}\partial_{y}\right]\mathcal{H}_{z}(x,y)=\frac{\omega^{2}}{c^{2}}\mathcal{H}_{z}(x,y),\]
\[-\frac{1}{\varepsilon(x,y)}\left(\partial_{x}^{2}+\partial_{y}^{2}\right)\mathcal{E}_{z}(x,y)=\frac{\omega^{2}}{c^{2}}\mathcal{E}_{z}(x,y). \tag{11}\]
As in the 1D case, these eigenvalue problems can be solved using Bloch's theorem, and the solutions are distributed into frequency bands with their momenta restricted to the 2D BZ. Since TE and TM polarizations are orthogonal, we restrict the discussion to a single polarization of choice.

We now characterize the topological phases of 2D PhCs by first constructing the topological invariants that classify them under different point group symmetries and then deriving bulk-boundary correspondences and their associated index theorems. The classification of PhCs can be divided into whether they obey time-reversal symmetry (TRS) (class AI) or not (class A). In 2D, without additional symmetries, class AI does not host topological phases. In contrast, class A hosts topological phases characterized by the Chern number (\(C\in\mathbb{Z}\)) that encodes the number of chiral edge states at the boundaries of a finite system. The Chern number also presents an obstruction to the construction of exponentially localized Wannier functions, and hence such bands are referred to as non-Wannierizable [80; 81]. When the Chern number vanishes, and in the presence of crystalline symmetries, photonic bands may be associated with Wannier centers fixed at maximal Wyckoff positions of the 2D unit cells (Fig. 3). As mentioned previously, such bands are collectively called atomic limits; in particular, we use the term 'obstructed atomic limits (OAL)' to refer to cases where the Wannier centers are displaced away from the center of the unit cell. Under some circumstances, a symmetry-preserving Wannier representation of bands may not be possible despite their vanishing Chern number. Such bands are termed _fragile_ and have the property of admitting a Wannier representation when considered as a set that includes additional specific atomic limit bands [61; 45; 62].

Similar to 1D, the topology of bands in 2D PhCs can be characterized using Berry phases. However, when bands are degenerate, they must be treated collectively, which requires the use of Wilson loops [82; 83]. The Wilson loop is defined as
\[\mathcal{W}(\mathcal{C})=\mathcal{P}\exp\left(\mathrm{i}\int_{\mathcal{C}}\mathcal{A}(\mathbf{k})\cdot\mathrm{d}\mathbf{k}\right), \tag{12}\]
where \(\mathcal{C}\) is a closed contour in \(\mathbf{k}\)-space, \(\mathcal{P}\) denotes a path ordering of the exponential and \(\mathcal{A}(\mathbf{k})\) is the multi-band Berry connection given by
\[\mathcal{A}(\mathbf{k})=\mathcal{A}_{m,n}(\mathbf{k})=-\mathrm{i}\left\langle u_{\mathbf{k},m}\right|\nabla_{\mathbf{k}}\left|u_{\mathbf{k},n}\right\rangle. \tag{13}\]
Here, \(m,n\) label the bands in a group of connected bands. When \(\mathcal{C}\) is taken to be a non-contractible loop in the Brillouin zone, the Wilson loop eigenvalues are proportional to the expectation values of the position operator of the hybrid Wannier functions in the same direction. Therefore, these eigenvalues can indicate the Wannierizable nature of bands in atomic limit phases or indicate the non-Wannierizable nature of fragile bands or Chern bands by their non-trivial winding numbers [80; 60]. Similar to the Berry phase in 1D, the calculations of these Wilson loops can also be simplified by looking at the representations of the eigenmodes at the HSPs of the BZ.

Figure 3: Maximal Wyckoff positions for (a) \(C_{2}\), (b) \(C_{4}\), (c) \(C_{6}\), and (d) \(C_{3}\) symmetric unit cells. (e) BZ of a square lattice with possible HSPs. (f) BZ of a triangular lattice with possible HSPs.

### Classification due to rotational symmetries

Consider a projector onto the bands of interest given by \(P_{k}=\sum_{j}|u_{j,k}\rangle\langle u_{j,k}|\). The eigenvalues of the rotation operator, \(\hat{r}_{n}\), projected into the bands of interest at the HSP \(\mathbf{\Pi}\), \(P_{\mathbf{\Pi}}\hat{r}_{n}P_{\mathbf{\Pi}}\), are
\[\mathbf{\Pi}_{p}^{(n)}=e^{2\pi i(p-1)/n},\quad\text{for }p=1,2,\ldots n. \tag{14}\]
Following previous studies on the characterization of the topology of energy bands in condensed matter systems [63], we define the integer invariants
\[[\Pi_{p}^{(n)}]\equiv\#\mathbf{\Pi}_{p}^{(n)}-\#\mathbf{\Gamma}_{p}^{(n)}\quad\in\mathbb{Z}, \tag{15}\]
where \(\#\mathbf{\Pi}_{p}^{(n)}\) is the number of states in the frequency band(s) in question with rotation operator eigenvalue \(\mathbf{\Pi}_{p}^{(n)}\). These invariants can be constructed for 2D lattices with \(C_{n}\) symmetry at all high-symmetry points shown in Fig. 3(e) and (f). However, some of the invariants in Eq. (15) are redundant, for three reasons: (i) rotation symmetry forces representations at certain HSPs to be the same; in particular, \(C_{4}\) symmetry forces equal representations at \(\mathbf{X}\) and \(\mathbf{Y}\), while \(C_{6}\) symmetry forces equal representations at \(\mathbf{M}\), \(\mathbf{M}^{\prime}\), and \(\mathbf{M}^{\prime\prime}\), as well as at \(\mathbf{K}\) and \(\mathbf{K}^{\prime}\); (ii) the number of bands in consideration is constant across the BZ, from which it follows that \(\sum_{p}\#\mathbf{\Pi}_{p}^{(n)}=\sum_{p}\#\mathbf{\Gamma}_{p}^{(n)}\), or \(\sum_{p}[\Pi_{p}^{(n)}]=0\); and (iii) the existence of TRS, which implies that the Chern number vanishes and that rotation eigenvalues at \(\mathbf{\Pi}\) and \(-\mathbf{\Pi}\) are related by complex conjugation. This leads to \([M_{2}^{(4)}]=[M_{4}^{(4)}]\) (for \(C_{4}\)), \([K_{2}^{(3)}]=[K_{3}^{\prime(3)}]\) (for \(C_{3}\)), \([K_{3}^{(3)}]=[K_{2}^{\prime(3)}]\) (for \(C_{3}\)), \([K_{1}^{(3)}]=[K_{1}^{\prime(3)}]\) (for \(C_{3}\)) and \([K_{2}^{(3)}]=[K_{3}^{(3)}]\) (for \(C_{6}\)). Therefore, in the presence of TRS (class AI), the classification for \(N\) bands is given by the indices [63]
\[\chi_{\mathcal{T}}^{(2)} =\left([X_{1}^{(2)}],[Y_{1}^{(2)}],[M_{1}^{(2)}];N\right),\]
\[\chi_{\mathcal{T}}^{(3)} =\left([K_{1}^{(3)}],[K_{2}^{(3)}];N\right),\]
\[\chi_{\mathcal{T}}^{(4)} =\left([X_{1}^{(2)}],[M_{1}^{(4)}],[M_{2}^{(4)}];N\right),\]
\[\chi_{\mathcal{T}}^{(6)} =\left([M_{1}^{(2)}],[K_{1}^{(3)}];N\right). \tag{16}\]
On breaking TRS, the classification of 2D \(C_{n}\)-symmetric PhCs must include the Chern number, since it can now admit non-zero values. Furthermore, breaking TRS reduces the number of constraints on the invariants (i.e., condition (iii) above is relaxed) and therefore increases the number of invariants required to uniquely identify distinct topological phases.
Taking into account these considerations, the most general classification (class A) of 2D \(C_{n}\)-symmetric PhCs is given by the indices
\[\chi^{(2)} =\left(C\,\middle|\,\rho^{(2)}\right)=\left(C\,\middle|\,[X_{1}^{(2)}],[Y_{1}^{(2)}],[M_{1}^{(2)}];N\right),\]
\[\chi^{(3)} =\left(C\,\middle|\,\rho^{(3)}\right)=\left(C\,\middle|\,[K_{1}^{(3)}],[K_{2}^{(3)}],[K_{1}^{\prime(3)}],[K_{2}^{\prime(3)}];N\right),\]
\[\chi^{(4)} =\left(C\,\middle|\,\rho^{(4)}\right)=\left(C\,\middle|\,[X_{1}^{(2)}],[M_{1}^{(4)}],[M_{2}^{(4)}],[M_{4}^{(4)}];N\right),\]
\[\chi^{(6)} =\left(C\,\middle|\,\rho^{(6)}\right)=\left(C\,\middle|\,[M_{1}^{(2)}],[K_{1}^{(3)}],[K_{2}^{(3)}];N\right), \tag{17}\]
where \(C\) is the Chern number given by
\[C=\frac{1}{2\pi}\int_{\text{BZ}}\text{Tr}[\nabla_{\mathbf{k}}\times\mathcal{A}(\mathbf{k})]\ \text{d}^{2}\mathbf{k}. \tag{18}\]
Similar to the 1D case, we can exhaustively calculate the values of \(\chi^{(n)}\) (in the case when \(C=0\)) or \(\chi_{\mathcal{T}}^{(n)}\) by induction of band representations. To perform this, we require knowledge of the Wannier functions' internal symmetry representation, known as the "site-symmetry representation", \(\rho(C_{n})\), as well as the location of their gauge-invariant centers, the Wannier centers. We provide a detailed derivation of the symmetry-indicator invariants at HSPs and the corresponding indices for all 2D atomic limits, with and without TRS, in Appendix D and show the final results in Tables 1-4. Each case in these tables uniquely identifies an atomic limit protected by the corresponding rotational symmetry.

### Relation between symmetry-indicator invariants and Chern number

The Chern number mod \(n\) can be inferred from the rotation eigenvalues at HSPs of systems with \(C_{n}\) symmetry [84]. Using this, relations between the Chern number, Eq. (18), and the symmetry-indicator invariants can be derived, as done in Appendix B. These relations take the form of equivalence relations modulo the rotation order of the considered group:
\[C^{(2)} =-[X_{1}^{(2)}]-[Y_{1}^{(2)}]-[M_{1}^{(2)}]\quad(\text{mod }2),\]
\[C^{(3)} =-[K_{1}^{(3)}]-2[K_{2}^{(3)}]+2[K_{1}^{\prime(3)}]+[K_{2}^{\prime(3)}]\quad(\text{mod }3),\]
\[C^{(4)} =2[M_{1}^{(4)}]+[M_{2}^{(4)}]-[M_{4}^{(4)}]-2[X_{1}^{(2)}]\quad(\text{mod }4),\]
\[C^{(6)} =-8[K_{1}^{(3)}]-4[K_{2}^{(3)}]+3[M_{1}^{(2)}]\quad(\text{mod }6). \tag{19}\]
Compared to the direct evaluation of Eq. (18), these relations provide a fast and simple way to calculate the Chern number mod \(n\) for \(C_{n}\)-symmetric PhCs with broken TRS.

### Index theorems

\(C_{n}\)-symmetric PhCs with different \(\chi^{(n)}\) or \(\chi_{\mathcal{T}}^{(n)}\) belong to different topological phases, as they cannot be deformed into one another without closing the bulk energy gap or breaking the symmetry [85; 86; 65]. Furthermore, for Wannierizable bands, the Wannier center configuration directly determines the existence of a filling anomaly and consequently the possible existence of in-gap edge and corner states. Therefore, finding the symmetry-indicator invariants is useful in establishing a bulk-boundary correspondence for such bands. The presence of edge states is directly related to the dipole moment of the Wannier centers. In 1D, this takes the form of Eq. (10), whereas in 2D, Ref.
[63] showed that the bands have dipole moments indicated by \[\mathbf{P}^{(2)} =\frac{1}{2}\left([Y_{1}^{(2)}]+[M_{1}^{(2)}]\right)\mathbf{a}_{1}+\frac{1}{2}\left([X_{1}^{(2)}]+[M_{1}^{(2)}]\right)\mathbf{a}_{2},\] \[\mathbf{P}^{(4)} =\frac{1}{2}[X_{1}^{(2)}](\mathbf{a}_{1}+\mathbf{a}_{2}),\] \[\mathbf{P}^{(6)} =\mathbf{0}, \tag{20}\] where the superscript \(n\) in \(\mathbf{P}^{(n)}\) labels the \(C_{n}\) symmetry. The dipole moments in Eq. (20) are defined modulo 1 and are valid for both TR-symmetric and TR-broken PhCs, as long as the Chern number vanishes in the latter case. \(\mathbf{P}^{(2)}\) is a \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) index and \(\mathbf{P}^{(4)}\) is a \(\mathbb{Z}_{2}\) index. In the case of \(C_{3}\) symmetry, the dipole moment is given by \[\mathbf{P}^{(3)} =\frac{2}{3}\left([K_{1}^{(3)}]+2[K_{2}^{(3)}]\right)(\mathbf{a}_{1}+\mathbf{a}_{2})\quad(\text{under TRS}),\] \[\mathbf{P}^{(3)} =\left([K_{1}^{(3)}]+[K_{2}^{(3)}]-\frac{2}{3}[K_{1}^{\prime(3)}]-\frac{1}{3}[K_{2}^{\prime(3)}]\right)(\mathbf{a}_{1}+\mathbf{a}_{2})\quad(\text{under broken TRS}), \tag{21}\] where \(\mathbf{P}^{(3)}\) is a \(\mathbb{Z}_{3}\) index for \(C_{3}\) symmetry. In all cases, a non-trivial \(\mathbf{P}\) is associated with an edge-induced filling anomaly. For 2D spinless systems, such as the PhCs considered here, \(\mathcal{I}\) and \(C_{2}\) have identical transformation properties and are isomorphic operations that send \(x,y\rightarrow-x,-y\). Therefore, for \(C_{2}\), \(C_{4}\), and \(C_{6}\) symmetries, a non-trivial \(\mathbf{P}\) is associated with a counting mismatch of \(\overline{1}_{2}\) in the edge spectrum, since inversion symmetry (\(\mathcal{I}\)) is a subgroup of these rotations, and an edge supercell (with one periodic direction) can always be chosen such that \(\mathcal{I}\) is maintained. In the case of \(C_{2}\) symmetry, the counting mismatch is a \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) invariant, as edge supercells in both directions must be independently considered (i.e., finite-in-\(x\), periodic-in-\(y\) or finite-in-\(y\), periodic-in-\(x\)). In the case of \(C_{4}\) symmetry, the edge spectrum is identical in both directions, and therefore the counting mismatch is a \(\mathbb{Z}_{2}\) invariant. In the case of \(C_{6}\) symmetry, both \(\mathbf{P}^{(6)}\) and the counting mismatch in the edge spectrum are always trivial. Since \(\mathcal{I}\) is not a subgroup of \(C_{3}\) symmetry, an edge supercell can never be chosen such that \(\mathcal{I}\) is maintained. Therefore, the counting mismatch cannot distinguish between different values of \(\mathbf{P}^{(3)}\). Instead, in this case, the fractionalization of energy density at the edges must be directly calculated using the eigenmodes of a \(C_{3}\)-symmetric finite system. \begin{table} \begin{tabular}{l c c c} \hline WP & Site symm. & \(\chi_{\mathcal{T}}^{(2)}\) & \(\chi^{(2)}\) \\ \hline \(1a\) & \(\rho(C_{2})=\text{any}\) & \((0,0,0;1)\) & \((0\,|\,0,0,0;1)\) \\ \hline \(1c\) & \(\rho(C_{2})=+1\) & \((-1,0,-1;1)\) & \((0\,|\,-1,0,-1;1)\) \\ \(1c\) & \(\rho(C_{2})=-1\) & \((1,0,1;1)\) & \((0\,|\,1,0,1;1)\) \\ \hline \(1d\) & \(\rho(C_{2})=+1\) & \((0,-1,-1;1)\) & \((0\,|\,0,-1,-1;1)\) \\ \(1d\) & \(\rho(C_{2})=-1\) & \((0,1,1;1)\) & \((0\,|\,0,1,1;1)\) \\ \hline \(1b\) & \(\rho(C_{2})=+1\) & \((-1,-1,0;1)\) & \((0\,|\,-1,-1,0;1)\) \\ \(1b\) & \(\rho(C_{2})=-1\) & \((1,1,0;1)\) & \((0\,|\,1,1,0;1)\) \\ \hline \end{tabular} \end{table} Table 1: \(C_{2}\) symmetry: Indices induced from every maximal Wyckoff position (WP). \begin{table} \begin{tabular}{l c c c} \hline WP & Site symm.
& \(\chi_{\mathcal{T}}^{(3)}\) & \(\chi^{(3)}\) \\ \hline \(1a\) & \(\rho(C_{3})=\text{any}\) & \((0,0;1)\) & \((0\,|\,0,0,0,0;1)\) \\ \hline \(1b\) & \(\rho(C_{3})=+1\) & \((-1,1;1)\) & \((0\,|\,-1,1,-1,0;1)\) \\ \(1b\) & \(\rho(C_{3})=e^{i\frac{2\pi}{3}\sigma_{z}}\) & \((1,-1;2)\) & \((0\,|\,1,-1,1,0;2)\) \\ \(1b\) & \(\rho(C_{3})=e^{i\frac{2\pi}{3}}\) & \(-\) & \((0\,|\,0,-1,1,-1;1)\) \\ \(1b\) & \(\rho(C_{3})=e^{i\frac{4\pi}{3}}\) & \(-\) & \((0\,|\,1,0,0,1;1)\) \\ \hline \(1c\) & \(\rho(C_{3})=+1\) & \((-1,0;1)\) & \((0\,|\,-1,0,-1,1;1)\) \\ \(1c\) & \(\rho(C_{3})=e^{i\frac{2\pi}{3}\sigma_{z}}\) & \((1,0;2)\) & \((0\,|\,1,0,1,-1;2)\) \\ \(1c\) & \(\rho(C_{3})=e^{i\frac{2\pi}{3}}\) & \(-\) & \((0\,|\,1,-1,0,-1;1)\) \\ \(1c\) & \(\rho(C_{3})=e^{i\frac{4\pi}{3}}\) & \(-\) & \((0\,|\,0,1,1,0;1)\) \\ \hline \end{tabular} \end{table} Table 2: \(C_{3}\) symmetry: Indices induced from every maximal Wyckoff position. \begin{table} \begin{tabular}{l c c c} \hline WP & Site symm. & \(\chi_{\mathcal{T}}^{(4)}\) & \(\chi^{(4)}\) \\ \hline \(1a\) & \(\rho(C_{4})=\text{any}\) & \((0,0,0;1)\) & \((0\,|\,0,0,0,0;1)\) \\ \hline \(2c\) & \(\rho(C_{2})=+1\) & \((-1,-1,1;2)\) & \((0\,|\,-1,-1,1,1;2)\) \\ \(2c\) & \(\rho(C_{2})=-1\) & \((1,1,-1;2)\) & \((0\,|\,1,1,-1,-1;2)\) \\ \hline \(1b\) & \(\rho(C_{4})=+1\) & \((-1,-1,0;1)\) & \((0\,|\,-1,-1,0,0;1)\) \\ \(1b\) & \(\rho(C_{4})=-1\) & \((-1,1,0;1)\) & \((0\,|\,-1,1,0,0;1)\) \\ \(1b\) & \(\rho(C_{4})=\mathrm{i}\sigma_{z}\) & \((2,0,0;2)\) & \((0\,|\,2,0,0,0;2)\) \\ \(1b\) & \(\rho(C_{4})=\mathrm{i}\) & \(-\) & \((0\,|\,1,0,-1,1;1)\) \\ \(1b\) & \(\rho(C_{4})=-\mathrm{i}\) & \(-\) & \((0\,|\,1,0,1,-1;1)\) \\ \hline \end{tabular} \end{table} Table 3: \(C_{4}\) symmetry: Indices induced from every maximal Wyckoff position. Additionally, some Wannier center configurations can lead to higher-order topological states. In class AI, these phases are determined by the corner "charges" (in PhCs, the electromagnetic energy density is analogous to the electronic charge density that is fractionally quantized at corners) \[Q^{(2)}_{\text{corner},\mathcal{T}} =\frac{1}{4}\left(-[X^{(2)}_{1}]-[Y^{(2)}_{1}]+[M^{(2)}_{1}]\right),\] \[Q^{(3)}_{\text{corner},\mathcal{T}} =\frac{1}{3}[K^{(3)}_{2}],\] \[Q^{(4)}_{\text{corner},\mathcal{T}} =\frac{1}{4}\left([X^{(2)}_{1}]+2[M^{(4)}_{1}]+3[M^{(4)}_{2}]\right),\] \[Q^{(6)}_{\text{corner},\mathcal{T}} =\frac{1}{4}[M^{(2)}_{1}]+\frac{1}{6}[K^{(3)}_{1}], \tag{22}\] as shown initially in Ref. [63]. We extend this to class A, where they are \[Q^{(2)}_{\text{corner}} =\frac{1}{4}\left(-[X^{(2)}_{1}]-[Y^{(2)}_{1}]+[M^{(2)}_{1}]\right),\] \[Q^{(3)}_{\text{corner}} =\frac{1}{3}\left([K^{(3)}_{1}]+[K^{(3)}_{2}]-[K^{\prime(3)}_{1}]\right),\] \[Q^{(4)}_{\text{corner}} =\frac{1}{4}\left([X^{(2)}_{1}]+2[M^{(4)}_{1}]+\frac{3}{2}[M^{(4)}_{2}]+\frac{3}{2}[M^{(4)}_{4}]\right),\] \[Q^{(6)}_{\text{corner}} =\frac{1}{4}[M^{(2)}_{1}]+\frac{2}{3}[K^{(3)}_{1}]. \tag{23}\] \(Q^{(n)}_{\text{corner},\mathcal{T}}\) (for TR-symmetric) or \(Q^{(n)}_{\text{corner}}\) (for TR-broken) are \(\mathbb{Z}_{n}\) topological quantities and are associated with a corner-induced filling anomaly, i.e., a counting mismatch of states \(\in\{\overline{0}_{n},\ldots,\overline{n-1}_{n}\}\) in a finite system with \(n\) symmetry-related sectors, and possibly the presence of in-gap corner-localized states.
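Once the symmetry-indicator invariants are known, evaluating the corner charges is mechanical. As a concrete illustration, the following sketch evaluates \(Q^{(4)}_{\text{corner},\mathcal{T}}\) of Eq. (22); the same pattern applies to the other symmetry classes, and the function name is ours. The index values in the example anticipate the \(C_{4}\) PhC of Example 1 below.

```python
from fractions import Fraction

def q_corner_c4_trs(X1, M1, M2):
    """Q^(4)_corner under TRS, Eq. (22):
    Q = ( [X_1^(2)] + 2[M_1^(4)] + 3[M_2^(4)] ) / 4   (mod 1)."""
    return Fraction(X1 + 2 * M1 + 3 * M2, 4) % 1

# For the index chi_T^(4) = (-1, -1, 0; 1) (cf. Example 1 below):
# Q = (-1 - 2 + 0)/4 mod 1 = 1/4.
print(q_corner_c4_trs(-1, -1, 0))   # -> 1/4
```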
The derivation of these formulae and other details concerning the finite systems where these formulae are valid are given in Appendix E. For the formulae in Eq. (23), we have assumed that the Chern number vanishes in the TR-broken case and that the bands are OALs with well-defined Wannier centers. However, it is also possible for fractional charges to localize at disclinations in \(C_{n}\)-symmetric systems with non-Wannierizable Chern bands. In such cases, the formulae for disclination charges contain a Chern number contribution and contributions from the symmetry-indicator invariants [87]. We note that the formulae in Eq. (23) are consistent with the disclination charges given in Ref. [87] with a vanishing Chern number contribution as is expected. Finally, we note that in fermionic systems, where insulating states rely on completely filled bands, a quantization of corner charge requires \(\mathbf{P}^{(n)}=\mathbf{0}\). In photonic systems, however, we are only concerned with the _existence_ of localized states, and the \(\mathbf{P}^{(n)}=\mathbf{0}\) constraint can be relaxed. Therefore, we also consider cases where \(\mathbf{P}^{(n)}\) and \(Q\) can simultaneously admit non-trivial values, leading to both edge and corner states that may be degenerate with each other and/or with the bulk bands. However, their associated counting mismatch remains robust. ## IV Design and characterization of 2D topological photonic crystals In the previous sections, we exhaustively built the topological classifications in class A and AI. We also identified the indices that correspond to OAL phases via the induction of the band representations from the symmetry representation of the Wannier functions and the Wyckoff positions of their Wannier centers. This classification forms a linear algebraic structure, such that when two sets of bands of a \(C_{n}\)-symmetric system, in phases \(\chi_{1}\) and \(\chi_{2}\) respectively, are combined, they are in phase \(\chi_{1}+\chi_{2}\). This observation forms the basis of a strategy we now propose to diagnose and design topological PhCs. Given a PhC, our starting point is the calculation of the \(C_{n}\) symmetry representations at HSPs for \(N\) bands to determine \(\rho^{(n)}\) (here, \(\rho^{(n)}=\chi_{\mathcal{T}}^{(n)}\) for TR-symmetric systems). \(\rho^{(n)}\) can always be expressed as the following linear combination \[\rho^{(n)}=\sum_{p}\alpha_{p}\,\rho_{p}^{(n)}, \tag{24}\] where \(\rho_{p}^{(n)}\) correspond to the indices of atomic limits in Tables 1-4. Since the \(\rho_{p}^{(n)}\) for different site symmetry representations for the same Wannier center configuration are linearly dependent, the linear combination in Eq. (24) is non-unique, and all possible linear combinations must be examined to obtain the correct topological characterization. The topology of this set of \(N\) bands can then be determined by the following set of rules [88; 61]: (i) If the bands are in an OAL phase, there exists a linear combination such that the coefficients \(\{\alpha_{p}\}\) are all positive integers (the converse is not true). (ii) If a linear combination with positive integer \(\{\alpha_{p}\}\) is impossible and at least one negative integer coefficient is required, the bands are in a fragile topological phase. (iii) If a linear combination with integer \(\{\alpha_{p}\}\) is not possible, the bands are either gapless under TRS, in which case we have a Dirac semi-metal phase, or are gapped and have a non-vanishing Chern number under broken TRS. 
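These decomposition rules can be checked mechanically. The sketch below is a brute-force illustration, not an exhaustive algorithm: it searches for integer coefficients within a finite bound, so a "no solution" outcome is only indicative, and in practice the full set of atomic-limit indices from Tables 1-4 and all possible decompositions must be examined. The function name and the bound are our own illustrative choices.

```python
import itertools
import numpy as np

def classify(rho, atomic_limits, bound=3):
    """Search for integer coefficients alpha_p with |alpha_p| <= bound
    such that rho = sum_p alpha_p * atomic_limits[p], as in Eq. (24)."""
    rho, basis = np.array(rho), np.array(atomic_limits)
    integer_solution = positive_solution = False
    for alphas in itertools.product(range(-bound, bound + 1), repeat=len(basis)):
        if np.array_equal(np.array(alphas) @ basis, rho):
            integer_solution = True
            if min(alphas) >= 0:
                positive_solution = True
                break
    if positive_solution:
        return "consistent with an obstructed atomic limit"
    if integer_solution:
        return "fragile"
    return "stable (gapless under TRS, or Chern bands without TRS)"

# Using three of the C_4 atomic-limit indices ([X_1],[M_1],[M_2];N) from
# Table 3 as a (partial) basis, the fragile bands of Example 4 below
# decompose only with a negative coefficient:
basis = [(1, 1, -1, 2), (-1, -1, 0, 1), (0, 0, 0, 1)]
print(classify((0, 0, -1, 2), basis))   # -> "fragile"
```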
In the following sections, we provide examples that illustrate these cases. ### Example 1: OAL phases with four-fold rotation in class AI We now show an example of an OAL phase and its associated boundary signatures in a 2D PhC. Similar OAL phases have been widely implemented in PhCs [36; 37; 38; 39; 40; 41; 46; 47]. Consider two PhCs with unit cells shown in the inset of Fig. 4(a), which consist of four dielectric square pillars in a \(C_{4v}\)-symmetric configuration with \(\varepsilon=12\). These two unit cell choices, referred to as "expanded" and "contracted", are related by a half-lattice-constant shift along the \(x\) and \(y\) directions. We will consider the first four TM bands for the following analysis. The symmetry-indicator invariants can be computed using the relevant rotation eigenvalues of the electromagnetic eigenmodes at the HSPs \(\mathbf{\Gamma}\), \(\mathbf{X}\) and \(\mathbf{M}\) for both unit cell types; these are shown in Fig. 4(a). For the contracted unit cell, bands 1 and 4 have the index \(\chi_{\mathcal{T}}^{(4)}=(0,0,0;1)\), and the pair of degenerate bands \(2+3\) have the index \(\chi^{(4)}_{\mathcal{T}}=(0,0,0;2)\). Each of these indices corresponds to Wannier centers located at the \(1a\) Wyckoff position in the 2D unit cell shown in Fig. 3(b). For the expanded unit cell, bands \(1\) and \(4\) have the indices \(\chi^{(4)}_{\mathcal{T}}=(-1,-1,0;1)\) and \(\chi^{(4)}_{\mathcal{T}}=(-1,+1,0;1)\) respectively. Bands \(2+3\) have the index \(\chi^{(4)}_{\mathcal{T}}=(2,0,0;2)\). Each of these indices corresponds to the Wannier centers at the \(1b\) Wyckoff position. These indices lead to \(\mathbf{P}^{(4)}=(1/2,1/2)\) and \(Q^{(4)}_{\text{corner},\mathcal{T}}=1/4\) for bands \(1\) and \(4\) and \(\mathbf{P}^{(4)}=\mathbf{0}\) and \(Q^{(4)}_{\text{corner},\mathcal{T}}=1/2\) for bands \(2+3\). The Wannierizable nature of these atomic limit bands can also be established by examining the Wilson loops as shown in Fig. 4(b). Here, the Wilson loop eigenvalues for each band are calculated by integrating the Berry connection along one momentum direction and plotting it as a function of the other momentum. This indicates the locations of the hybrid Wannier centers that are exponentially localized in one spatial direction but delocalized in another. The observed shifts in the Wilson loop eigenvalues between the contracted and expanded unit cells are consistent with the real space shifts that relate the two unit cell types where the Wannier centers reside at the \(1a\) and \(1b\) positions, respectively. To illustrate that the dipole moments of the bands lead to edge states, we simulate a finite system consisting of the expanded and contracted unit cells in a strip geometry. The strip geometry is a large supercell along one direction, consisting of an inner domain with the expanded unit cell and an outer domain with the contracted unit cell with periodic boundaries along both directions, as shown in Fig. 4(c). We consider a supercell of size \(25\times 1\) unit cells, and therefore expect the spectrum Figure 4: (a) TM-polarized band structure of a \(C_{4v}\) symmetric PhC with \(\varepsilon=12\). The two possible types of \(C_{4}\)-symmetric unit cells are shown in the insets along with the 2D BZ. \(C_{4}\) eigenvalues at \(\mathbf{\Gamma}\) and \(\mathbf{M}\), \(C_{2}\) eigenvalues at \(\mathbf{X}\) are shown for the first four bands.
(b) Wilson loop eigenvalues \(\mathcal{W}_{y}(\mathcal{W}_{x})\) for bands \(1\), \(2+3\) and \(4\) along \(k_{x}(k_{y})\) for both types of unit cells. (c) Edge spectrum consisting of a total of \(25\) unit cells of the two types in a strip configuration (shown on the right). An odd-integer counting mismatch per band leads to the presence of edge states in the first and second TM bandgaps. (d) The dielectric and \(E_{z}\) mode profile of one of the four corner modes in a finite system of size \(15\times 15\) unit cells consisting of the two types of unit cells in a core-cladding configuration. (e) A tiling of the unit cells with Wannier centers (solid circles) for one band of the inner core of size \(7\times 7\). Additional Wannier centers from other bands (hollow circles) are required to maintain \(C_{4}\) symmetry. Counting the Wannier centers along the boundary sites, we see that \(24\) edge and four corner states are expected in this finite configuration. (f) A schematic of the DoS for the structure in (d). A counting mismatch of states for bands \(1\) to \(4\) leads to four degenerate corner states in the first TM bandgap. The counting mismatch for the edge states depends on the system size for such a finite configuration. to contain 25 states per band. However, due to the non-zero dipole moments, bands 1 and 4 have a counting mismatch of one missing state (= \(\overline{1}_{2}\)) each. In contrast, bands 2+3, which have a vanishing dipole moment, exhibit a counting mismatch of two missing states (= \(\overline{0}_{2}\)) as shown in the edge spectrum in Fig. 4(c). The non-zero corner charges for bands 1 to 4 will similarly lead to a counting mismatch due to the presence of corner states in a finite system with corners. To illustrate this, we now examine a finite \(C_{4}\)-symmetric system in a core-cladding configuration as shown in Fig. 4(d). This finite system has four symmetry-related sectors with four corners and has a size of \(15\times 15=225\) unit cells. Therefore, each band contributes 225 states to the spectrum of the finite system. However, the non-zero dipole moment of bands of the inner core region leads to edge states on all edges, as we have discussed previously and shown in Fig. 4(c). In the finite system, these edge states have a size-dependent counting mismatch. If we consider a finite tiling of size \(7\times 7\) unit cells that represent the inner core, each with a Wannier center at \(1b\), we observe that additional Wannier centers from other bands are required to maintain \(C_{4}\) symmetry, as shown in Fig. 4(e). Counting the Wannier centers that live on the entire boundary between the core and cladding, we can predict the appearance of 24 edge states and 4 corner states. In Fig. 4(f), we show a schematic of the calculated DoS of the full \(15\times 15\) finite system, up to the frequency range of the first four TM bands and identify the number of bulk, edge, and corner states from their localization and mode profiles. The state counting in Fig. 4(f) confirms the predicted 24 edge states and 4 corner states. The counting mismatch due to the corners is size-independent and is identified in Fig. 4(f) as equal to one missing state (= \(\overline{1}_{4}\)) each for bands 1 and 4 and two missing states for bands 2+3 (= \(\overline{2}_{4}\)), accounting for the expected number of corner states and consistent with the corner charges of the bands. 
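As an aside, the Wilson loop spectra of Fig. 4(b) can be obtained numerically as the eigenphases of a path-ordered product of overlap matrices between Bloch eigenvectors at neighboring momenta along a closed loop in the BZ. The sketch below uses a plain inner product, as in a tight-binding description; for full-wave PhC eigenmodes, the overlaps must additionally be weighted by the dielectric function, and a periodic gauge is assumed at the loop endpoints.

```python
import numpy as np

def wilson_loop_phases(u_list, bands):
    """Eigenphases of the discretized Wilson loop W = prod_k <u_k|u_{k+1}>
    projected onto the selected `bands`.  `u_list[k]` holds the Bloch
    eigenvectors at the k-th point of a closed momentum-space loop, as
    columns of a matrix.  The phases locate the hybrid Wannier centers
    (in units of the lattice vector divided by 2*pi)."""
    W = np.eye(len(bands), dtype=complex)
    nk = len(u_list)
    for k in range(nk):
        a = u_list[k][:, bands]
        b = u_list[(k + 1) % nk][:, bands]
        W = W @ (a.conj().T @ b)   # overlap matrix <u_k | u_{k+1}>
    return np.sort(np.angle(np.linalg.eigvals(W)))
```

Scanning the transverse momentum and repeating this calculation produces plots of the kind shown in Fig. 4(b).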
We point out that even if a \(C_{4}\)-preserving perturbation to the corners pushes the four corner states into any of the bulk bands, the counting mismatch remains. For example, if the four corner states were pushed into band 1, the counting mismatch for this band would go from one missing state to three additional states, both of which are equal modulo 4 (\(\overline{-3}_{4}=\overline{1}_{4}\)). ### Example 2: Dirac semi-metal in class AI Next, we show the topological characterization of a PhC with Dirac points in class AI. We do this via three distinct perspectives: (1) examining the symmetry-indicator invariants of 1D subsystems, (2) computing the Wilson loops, and (3) constructing the indices of the 2D bands of the system. Consider the \(C_{2}\)-symmetric PhC in the inset of Fig. 5(a), which consists of an elliptical disc (\(\varepsilon=12\)) with its semi-major and semi-minor axes oriented along the diagonals of a square unit cell. This PhC's TM spectrum exhibits two sets of Dirac points along the \(\boldsymbol{\Gamma}-\mathbf{M}\) direction, one between bands 2 and 3, and one between bands 3 and 4, as shown in Fig. 5(a). We first examine the topology of the gapped phases of 1D subsystems that are obtained by fixing one of the momenta, say \(k_{y}\). In this example, bands 2 and 4 have different \(C_{2}\) eigenvalues (and hence \(\mathcal{I}\) eigenvalues in the 1D subsystem) at the \(\boldsymbol{\Gamma}\) and \(\mathbf{X}\) points, corresponding to a 1D topological phase at the \(k_{y}=0\) cut with \([X_{1}]=1\) (or equivalently, \(\theta=\pi\) from Eq. (7)). On the other hand, these bands have the same \(C_{2}\) eigenvalues at the \(\mathbf{Y}\) and \(\mathbf{M}\) points, corresponding to a trivial phase at the \(k_{y}=\pi/a\) cut with \([X_{1}]=0\) (or equivalently, \(\theta=0\)). These Dirac points are thus the required transition points that separate trivial and topological gapped phases of the 1D subsystems. This change in the topology of the one-dimensional subsystem at the Dirac points can also be seen from the Wilson loop spectrum. The Wilson loop eigenvalues plotted in Fig. 5(b) exhibit jump discontinuities from 0 to \(\pi\) at the momenta of the Dirac points, which correspond to a switch in the value of \([X_{1}]\) from 0 to 1. Consequently, edge states only appear in the portion of the 1D edge Brillouin zone that is topologically non-trivial. Fig. 5(c) shows the edge spectrum for the PhC with open boundaries along \(x\) and periodic boundaries along \(y\). The Wilson loop can also help diagnose generic Dirac points that may be present in the interior of the Brillouin zone. In the current example, there are two additional pairs of jump discontinuities in the Wilson loop spectrum for band 4 which Figure 5: (a) TM-polarized band structure of a \(C_{2}\)-symmetric PhC whose unit cell is shown in the inset. \(C_{2}\) eigenvalues at \(\boldsymbol{\Gamma}\), \(\mathbf{X}\), \(\mathbf{Y}\) and \(\mathbf{M}\) are shown for the first four bands. (b) Wilson loop eigenvalues \(\mathcal{W}_{y}\) for the bands 2, 3, and 4 plotted as a function of \(k_{x}\). The discontinuities indicate the presence of Dirac points. (c) Edge spectrum of this PhC showing edge states (marked with arrows) whose dispersion terminates at Dirac points (marked with circles) on the left (red) and right (light red) edges. (d) Dirac points are gap-closing points that separate 1D topological phases with different Berry phases. They can also be thought of as sources of \(\pi\) Berry phase.
are due to such generic Dirac points between bands 4 and 5. Since the bands 2, 3, and 4 are non-degenerate at all HSPs, we can classify them by constructing the 2D indices under TRS from Table 1, which are respectively \(\chi^{(2)}_{\mathcal{T}}=(-1,-1,-1;1)\), \(\chi^{(2)}_{\mathcal{T}}=(0,0,0;1)\) and \(\chi^{(2)}_{\mathcal{T}}=(1,1,1;1)\). The indices for bands 2 and 4 are not found in Table 1, and expanding these in a linear combination of OALs results in fractional coefficients \(\{\alpha_{p}\}\). Therefore, these are stable topological bands and must contain a gapless point somewhere in the BZ under TRS. In this example, the PhC has Dirac points on high-symmetry lines as seen in Fig. 5(a). Band 3 is an example of a situation where stable topological bands could have the same indices as atomic limit bands. Relevant to PhC design, these invariants can be useful for finding spectrally-isolated Dirac points for applications such as creating cavity states that are algebraically localized to embedded point defects [89, 28, 29, 30, 31] or enabling large-area single-mode lasing [90, 91]. ### Example 3: Chern insulator in class A Consider the PhC introduced in the previous section. We break TRS for this PhC by introducing non-diagonal terms in the permeability tensor, which correspond to the response of a gyromagnetic material under a magnetic field applied in the \(z\)-direction. Specifically, we set the permeability tensor to \[\mu=\begin{bmatrix}\mu&\mathrm{i}\kappa&0\\ -\mathrm{i}\kappa&\mu&0\\ 0&0&\mu_{0}\end{bmatrix}, \tag{25}\] where \(\mu=\mu_{0}\) is the vacuum permeability and \(\kappa=0.25\mu_{0}\). The Dirac points that were previously protected by a combination of inversion and TRS are now gapped, and bands 2, 3 and 4 are non-degenerate and have the invariants \(\chi^{(2)}=(-1\,|\,-1,-1,-1;1)\), \(\chi^{(2)}=(+2\,|\,0,0,0;1)\) and \(\chi^{(2)}=(+1\,|\,1,1,1;1)\), respectively. The first invariant of the listed tuples is the Chern number, obtained as the winding number of the Wilson loop spectrum in Fig. 6(b). The winding numbers agree with the symmetry-imposed constraints in Eq. (19). The Chern number leads to chiral edge states at the boundary of a finite system as shown in Fig. 6(c). These edge states exhibit unidirectional transport and have been observed in gyromagnetic PhCs at microwave frequencies [19, 16]. Proposed applications for these edge states include optical isolators and slow-light devices that could significantly outperform their conventional counterparts [92, 93, 94]. ### Example 4: Fragile phase in class AI Fragile phases have bands that exhibit a symmetry-protected winding in their Wilson loop spectrum, indicating that the bands cannot form a symmetry-preserving Wannier representation. However, when considered as a set along with additional atomic limit bands, the full set becomes Wannierizable, and accordingly, the Wilson loop winding is lost. They are characterized by indices that must be written as a linear combination of the invariants in Tables 1-4 with at least one negative integer coefficient. We now present a novel PhC design with fragile bands in a \(C_{4v}\) symmetry setting whose unit cell is shown in the inset of Fig. 7(a). The PhC is composed of three materials, \(\varepsilon_{1}=1\) (white), \(\varepsilon_{2}=16\) (black) and \(\varepsilon_{3}=4\) (gray). We consider the two isolated and degenerate bands, bands \(8+9\) in the TE-polarized band structure of this PhC shown in Fig. 7(a).
Using the relevant rotation eigenvalues of the electromagnetic eigenmodes at the HSPs, we compute the invariant for these bands to be \(\chi^{(4)}_{\mathcal{T}}=(0,0,-1;2)\). Since this invariant is not found in Table 3, we express it as the following linear combination of OALs from Table 3: \(\chi^{(4)}_{\mathcal{T}}=(0,0,-1;2)=1\times(1,1,-1;2)+1\times(-1,-1,0;1)+(-1)\times(0,0,0;1)\). The requirement of a negative integer coefficient in this expansion indicates that this set of two bands is fragile. The non-Wannierizable nature of these bands is also evident from the Wilson loop spectrum in Fig. 7(b), which shows opposite winding of the two eigenvalues. A different PhC realization of a fragile phase with \(C_{6}\) symmetry was previously reported in [45]. Like OAL phases, fragile PhCs may host corner states resulting from the total corner charge of all Wannierizable components in their decomposition [95, 63]. Figure 6: (a) TM-polarized band structure of a \(C_{2}\)-symmetric gyromagnetic PhC whose unit cell is shown in the inset. The Chern numbers for the first four bands are also shown. (b) Wilson loop eigenvalues \(\mathcal{W}_{y}\) for the bands 2, 3, and 4 plotted as a function of \(k_{x}\). The winding of the eigenvalues indicates the non-Wannierizability of the bands, and the winding number is equal to the Chern number of the band. (c) Edge spectrum showing projected bulk bands (blue) and chiral edge states (red). ## V Other topological phases Finally, we discuss a selection of other topological phases where crystalline symmetries play a crucial role, but whose realization may not be directly inferred from the topological indices presented so far. ### Quantum spin-Hall analogs The electronic quantum spin-Hall effect (QSHE) can be thought of as being deformable to two Chern insulators with opposite Chern numbers stacked on top of each other, one for each spin degree of freedom [96, 97, 98]. This creates spin-polarized "helical" edge states on the boundary of a finite sample which are protected against back-scattering due to the Kramers' degeneracy at time-reversal invariant momenta. Since the bosonic TR operator squares to +1, photons lack the Kramers degeneracy enjoyed by their fermionic counterparts (whose TR operator squares to -1). A photonic counterpart to the QSHE consequently necessitates a replacement for Kramers degeneracy. This can be achieved by incorporating spatial symmetries, particularly \(C_{6v}\) symmetry, to construct a pseudo-TR operator [20]. It can be shown that the bulk topology of such a PhC is identical to that of the QSHE by explicit calculation of the pseudo-spin polarized Wilson loop spectrum [99], where an opposite winding of the two eigenvalues is observed. However, since this winding is enforced by a crystalline symmetry, it is more appropriate to classify these PhCs as fragile phases than as true QSH systems. Nevertheless, such PhCs have states with a well-defined pseudo-spin, analogous to the spin of electrons [20, 21, 22, 23, 24] and exhibit pseudo-spin polarized helical edge states, similar to the QSHE, as shown in Fig. 8(a). The presence of an edge necessarily breaks the \(C_{6v}\) symmetry of the bulk and therefore also the pseudo-TR symmetry, allowing for the hybridization of the edge states. This opens a gap in the edge spectrum as shown in Fig. 8(a) and allows for the back-scattering of the edge states in the vicinity of the gap.
### Valley-Hall phases As shown previously, Dirac points can be gapped by breaking TRS, thereby creating bands with a non-zero Chern number. Breaking inversion symmetry (i.e., two-fold rotation in a 2D system) can also gap Dirac points and introduce local Berry curvature with boundary manifestations. Reducing \(C_{6}\) symmetry to \(C_{3}\) gaps the Dirac points that generically exist at the \(\mathbf{K}\) (\(\mathbf{K}^{\prime}\)) points of the BZ. This causes the Berry curvature to peak at the "valleys" formed at the \(\mathbf{K}\) (\(\mathbf{K}^{\prime}\)) points. Due to TRS, the total Berry curvature and the Chern number are identically zero. However, the non-zero local Berry curvature at the \(\mathbf{K}\) (\(\mathbf{K}^{\prime}\)) valleys can be used to define valley Chern numbers such that \(C_{\mathbf{K}}=-C_{\mathbf{K}^{\prime}}\). In this case, the bulk-boundary correspondence is only well-defined at the boundary between two such systems, one spatially inverted with respect to the other. The edge states that thus emerge have a dispersion as shown in Fig. 8(b) and can generally backscatter, unlike the chiral edge states of a Chern insulator. Certain types of edge geometries and symmetry-preserving perturbations are known to suppress inter-valley scattering, leading to nearly perfect (but incidental) backscatter-free transport in the absence of structural imperfections [33]. However, in the presence of random disorder, typically introduced by fabrication imperfections, it was recently shown that these valley-Hall edge states may not perform better than conventional edge states for practical light transport [35]. Valley-Hall edge states have been observed in PhC designs spanning orders of magnitude in frequency [32, 33, 34, 35]. ### Quadrupole and octupole topological insulators Quadrupole and octupole topological insulators (QTIs and OTIs, respectively) are a final example of crystalline symmetry-protected topological phases which host fractional corner charges, similar to, but ultimately distinct from, OAL insulators [100]. They are \(\mathbb{Z}_{2}\) classified, with fractional corner charges quantized to \(\{0,1/2\}\mod 1\). The prototypical model is \(C_{4v}\) symmetric [101]. Under \(C_{4}\) symmetry, the QTI phase is bulk-obstructed and therefore an atomic limit. However, relaxing \(C_{4v}\) down to only reflection symmetries also protects the quantization of corner charge, although their symmetry-indicator invariants due to reflection symmetry vanish. Thus, the protection due to reflection symmetries is more subtle than for OALs; they exhibit a gapped Wilson loop spectrum, not pinned by symmetries, and the change in topology here is accompanied by a gap closing in the Wilson loop spectrum, which implies a gap closing in the edge spectrum, instead of in the bulk spectrum. QTI and OTI phases require a set of anti-commuting spatial symmetries that can be achieved by threading a \(\pi\)-flux in simple tight-binding models. However, PhCs cannot be accurately described by such models. Instead, a quadrupole phase can be Figure 7: (a) TE-polarized band structure of a \(C_{4v}\) symmetric PhC with lattice constant, \(a\), whose unit cell is shown in the inset. This unit cell consists of dielectric discs of \(\varepsilon_{1}=1\) (white) with \(r_{1}=0.2a\) and \(\varepsilon_{2}=16\) (black) with \(r_{2}=0.225a\) in a background material of \(\varepsilon_{3}=4\) (gray). 
\(C_{4}\) eigenvalues at \(\mathbf{\Gamma}\) and \(\mathbf{M}\), \(C_{2}\) eigenvalues at \(\mathbf{X}\) are shown for bands 8 and 9. (b) Wilson loop eigenvalues \(\mathcal{W}_{y}\) for the bands 8+9, plotted as a function of \(k_{x}\). The opposite winding of the eigenvalues indicates the non-Wannierizability of the bands, particularly that the bands are fragile. achieved by breaking time-reversal symmetry while preserving the product of mirror and time-reversal symmetries [42]. Alternatively, QTI phases can also be realized in PhCs with anti-commuting glide symmetries [43]. Topological indices that diagnose the QTI and OTI topologies have been recently demonstrated [102; 103], and follow the natural extension of the index for dipole moments [104]. Such indices have also recently been used to show that QTIs can also be protected solely by chiral symmetry [105; 106; 107], which has allowed the introduction of a \(\mathbb{Z}\) classification of higher-order topological insulators in 2D and 3D [108]. ## VI Discussion The past decade has seen the uncovering of a wide range of topological phenomena in PhCs [3; 4; 5; 14; 15; 16], validating the notion that topological band theory is a wave phenomenon, transcending the existence of bound orbital states present in electronic systems. Motivated by these recent developments, we have here extended the use of symmetry-indicator invariants to classify one- and two-dimensional PhCs with crystalline symmetries, with and without time-reversal symmetry. Through various examples, we have also demonstrated that the bulk-boundary correspondence of topological band theory carries over to these systems as well. In solids, the atomic ions form potentials that bind electronic orbitals. The electrons in the crystal hop between these orbitals, giving rise to Bloch energy bands that can often be described by simplified tight-binding models, where the hopping terms in the Hamiltonian are given by the overlap integrals between different orbitals. Photonic analogs of solid-state lattices have been achieved in periodic arrays of coupled waveguides, where each waveguide supports a guided mode so that the extended array can be thought of as having inter-orbital hoppings that also lead to tight-binding descriptions [109]. As a result, the electronic theory of non-interacting topological phases carries over directly to this case. In contrast, the PhCs studied in this work are not well-described by simple tight-binding models; instead, they necessarily require a continuum description based on the full-wave solution of Maxwell's equations. A further crucial difference between PhCs and solids is that PhCs host classical, bosonic waves unlike electrons, which are fermionic and described by a quantum wave function. For topological band theory, this has two consequences. First, there is no Kramers' degeneracy for electromagnetic waves, and thus there is no protection of helical edge states as in the QSHE phase of electronic systems. As discussed in Section V, the edge states in the PhC versions of the QSHE phase are not rigorously protected, in contrast to their electronic counterpart. Second, PhCs lack a notion of band filling and consequently require a subtler interpretation of the filling anomaly, involving the fractional quantization of electromagnetic mode density instead of charge [78].
We have demonstrated that this fractionalization comes from a _counting mismatch_ of states and that the boundary-localized states associated with it are the consequence of the conservation of the number of degrees of freedom in the system, and do not require a fixed band-filling (i.e., Fermi level). We have further demonstrated how the topological invariants based on symmetry indicators relate to the presence of counting mismatches and their boundary states. Finally, we have presented a novel \(C_{4v}\)-symmetric PhC design hosting fragile bands. Beyond its appeal as a platform for exploring the new, fundamental physics of topology in a controllable setting, the merger of topology and PhCs holds substantial promise for the development of new technologies and device design strategies. We expect that the algebraic structure of the symmetry-indicator invariants will be useful in this pursuit. ## VII Acknowledgements We acknowledge fruitful discussions with Alexander Cerjan and Marius Jurgensen. M.C.R., S.V., T.C., and A.G. acknowledge the support of the U.S. Office of Naval Research (ONR) Multidisciplinary University Research Initiative (MURI) under Grant No. N00014-20-1-2325. M.C.R. and S.V. also acknowledge support from the Charles E. Kaufman Foundation under Grant No. KA2020-114794. A.G. also acknowledges funding from the National Science Foundation's Graduate Research Fellowship. T.C. also acknowledges the support of a research grant (project no. 42106) from Villum Fonden. W.A.B. acknowledges the support of the Eberly Postdoctoral Fellowship at Penn State University, the Moore Postdoctoral Fellowship at Princeton University, and startup funds from Emory University. Figure 8: (a) Pseudo-spin polarized helical edge states of a quantum spin-Hall analog PhC. (b) Edge states of a valley-Hall PhC. (c) Schematic of a PhC quadrupole insulator with vanishing bulk dipole moment and non-zero bulk quadrupole moment.
2305.14389
Breast Cancer Segmentation using Attention-based Convolutional Network and Explainable AI
Breast cancer (BC) remains a significant health threat, with no long-term cure currently available. Early detection is crucial, yet mammography interpretation is hindered by high false positives and negatives. With BC incidence projected to surpass lung cancer, improving early detection methods is vital. Thermography, using high-resolution infrared cameras, offers promise, especially when combined with artificial intelligence (AI). This work presents an attention-based convolutional neural network for segmentation, providing increased speed and precision in BC detection and classification. The system enhances images and performs cancer segmentation with explainable AI. We propose a transformer-attention-based convolutional architecture (UNet) for fault identification and employ Gradient-weighted Class Activation Mapping (Grad-CAM) to analyze areas of bias and weakness in the UNet architecture with IRT images. The superiority of our proposed framework is confirmed when compared with existing deep learning frameworks.
Jai Vardhan, Taraka Satya Krishna Teja Malisetti
2023-05-22T20:49:20Z
http://arxiv.org/abs/2305.14389v2
# Breast Cancer Segmentation using Attention-based Convolutional Network and Explainable AI ###### Abstract One of the most hazardous diseases for people is cancer, yet no long-term treatment is currently available. One of the most common cancers is breast cancer. Early detection of breast cancer (BC) is the goal of screening mammography, as this allows for more effective therapy; however, despite the existence of screening programmes worldwide, the interpretation of mammograms is impacted by high false positive and false negative rates. With an estimated 2.3 million new cases, or 11.7% of all cancer cases, BC was expected to overtake lung cancer as the world's most prominent cause of cancer incidence in 2020. There have been advances in several technologies that have helped reduce the mortality rate from this illness. Still, early discovery is the most effective way to stop the progress of the disease, amputation of the breast, and death. Thermography, which uses infrared cameras with outstanding resolution and sensitivity, is a promising tool for early detection. It is predicted that using thermal imaging in conjunction with artificial intelligence (AI) would produce outstanding predictability levels for the early detection of breast cancer. The current work utilizes an attention-based convolutional neural network for segmentation. Compared to existing works, the current system is more precise and faster in detecting and classifying BC. Furthermore, this framework comprises image enhancement and cancer segmentation with explainable AI (XAI). A transformer-attention-based [1] convolutional architecture (UNet) is proposed for fault identification. Moreover, to analyze the regions of bias and weakness of the UNet architecture with IRT images, Gradient-weighted Class Activation Mapping (Grad-CAM) is performed. The superiority of the proposed architecture is verified in comparison with existing state-of-the-art deep learning frameworks on the BC dataset. Breast Cancer Detection, Image classification, Vision Transformers, Explainable AI, Convolutional Neural Networks ## 1 Introduction ### History of Breast Cancer (BC) Detection Breast cancer, a disease that has plagued humanity for centuries, has witnessed significant advancements in understanding, diagnosis, and treatment. Our knowledge about breast cancer can be traced back to ancient times, specifically to the Edwin Smith Surgical Papyrus, which dates back to 3,000-2,500 B.C. This historical document, attributed to the esteemed Egyptian physician-architect Imhotep, contains detailed accounts of breast cancer cases. Interestingly, it describes specific characteristics believed to render the disease incurable, such as a breast that felt cold to the touch, appeared bulging, and exhibited widespread growth. The concept of hormonal involvement in the disease emerged in the quest to comprehend the underlying factors contributing to breast cancer. Observations revealed that breast cancer tends to be more severe in younger women, leading to the hypothesis that hormones might play a significant role. Notably, these ideas predated the discovery of estrogen receptors by Jensen in 1967.
In 1906, a pioneering Scottish surgeon named Beatson [2] introduced the concept of endocrine surgery by performing oophorectomy and adrenalectomy, essentially achieving castration, as a treatment for breast cancer. However, as medical understanding progressed, more refined approaches emerged. Extreme measures gave way to the use of estrogen receptor modulators, luteinizing hormone-releasing agonists, and aromatase inhibitors, which proved more effective and less invasive. The development of diagnostic techniques has been crucial in the fight against breast cancer. In 1913, radiographs, or X-rays, were first utilized to examine breast cancer patients, offering valuable insights into the disease. Building upon this progress, a German surgeon named Salmon conducted a study involving 3,000 patients, contributing to the growing knowledge base on breast cancer. Another significant milestone came in 1951, when ultrasound research in breast cancer detection commenced. The objective was to develop a technique to identify breast tumours and determine their malignancy. In 1952, drawing on 21 cases from other research, breast cancer was successfully detected using ultrasound, leading to further evaluation of hospital ultrasonography [3] equipment. By 1954, ultrasound had firmly established itself as a promising method for identifying breast cancer. Advancements in ultrasound technology continued throughout the 1960s: the internal architecture of ultrasound systems was improved and detection techniques were refined, enhancing the accuracy and reliability of breast cancer diagnosis. Innovations such as immersing breasts in controlled-temperature water during early pregnancy to aid tumour identification were explored. In 1982, thermal imaging, utilizing infrared technology, received approval from the U.S. Food and Drug Administration (FDA) as a diagnostic tool for breast cancer. This non-invasive technique involves capturing images that depict variations in temperature within the breast, potentially highlighting areas of concern. A landmark study conducted in 1996 compared thermal imaging with traditional X-rays for breast cancer diagnosis. The study successfully demonstrated that thermal imaging could effectively identify breast cancer in a patient even when X-ray methods failed to do so. Recently, the authors in [4] used deep learning techniques to detect diseased leaves from drone-captured images, an approach that could be adapted to our setting. These advancements in the understanding and diagnosis of breast cancer have significantly contributed to improving patient outcomes and shaping the field of oncology. From ancient medical texts to modern technological innovations, the ongoing pursuit of knowledge continues to propel the fight against breast cancer, offering hope for early detection, personalized treatment, and improved survival rates. ### _Motivation for automated Diagnostic systems_ Computer vision and machine learning based mechanisms are widely used for classification and detection tasks in several industries, including security [2], robotics [6], and the Internet of Things [7]. In the medical domain, computer vision is used for disease identification [8] and patient aiding [9]. Automated diagnostics improve evidence-based medicine, enhance treatment quality, encourage wellness, enable early illness identification, and lower total health care costs.
Automation and technological advancements have improved tests' usability and accuracy, resulting in more reliable and timely reports. The need for automated diagnostic methods for breast cancer detection [10] has increased because manual evaluation and identification of breast cancer is prone to human error. ### _Research Challenges_ #### 1.3.1 Dataset Breast cancer dataset images are challenging to analyze due to low image quality, noise, and varying viewpoints caused by the handheld nature of the sensor. * Single private dataset * Limited number of images * Existence of class imbalance #### 1.3.2 Extraction of Region Of Interest We could not employ object detection algorithms to extract the necessary features, so segmentation-based methodologies are preferred. ## 2 Related Works Segmentation is an essential image processing procedure, but it can be challenging. Segmentation separates a picture into numerous distinguishable regions with different properties; in essence, it is performed to separate a target from the background. The most common image segmentation procedures (thresholding, clustering, region-based segmentation, energy-function-based techniques, and edge-based segmentation) are summarized with their advantages and disadvantages in Table I. ### _Edge-based Segmentation_ Generally, the edge-based technique establishes the borders between areas to segregate them by exploiting the discontinuity in pixel intensity values between them. These methods are fast enough for real-time applications. Edge detection operates via colour variation, producing output that can be perceived as a black background with white lines. The Log, Roberts, Sobel, Prewitt, and Zero-cross filters can be applied to images for this task. Nevertheless, the output quality is often inadequate for high-level computation. ### _Region-based Segmentation_ The intensity levels of nearby pixels influence region-based segmentation. The colour, grey level, shape and texture of the image dictate the homogeneity of the segmentation zones. To do so, the picture is broken into smaller sub-regions. The machine must learn to differentiate between desired and undesired regions using information from pixel intensity, specified areas, and density. The median range of intensity levels between 1 (white) and 0 (black) can be used to separate a picture from its surrounding environment [11] and fragment it. A density slice represents the intensity distribution for the Region of Interest (ROI). After a picture is segmented, every pixel is reallocated based on whether the picture's intensity count is 1 or 0. ### _Thresholding Methods_ Thresholding is the most basic picture segmentation approach. Thresholds are classified into two types: bi-level thresholding partitions images into only two categories, while multilevel thresholding (MT) splits images into more than two categories. Metaheuristic-based procedures such as ABC, BFO, and PSO, on the other hand, have been utilized to estimate optimal MT values. ### _Energy function based Segmentation_ The general Snakes procedure was presented by Kass et al. [12]; in this procedure, a curve evolves under an energy functional until it stops at the object boundary. The curve moves so as to minimize its energy. This procedure uses a framework in which the local minima of the energy function form the collection of candidate solutions.
By adding suitable energy terms to the minimization, the user can drive the model out of local minima towards the preferred solution. The consequence is an active model that falls [12] into the desired solution when placed near it. Kass's snake model is dynamic: it always minimizes its energy functional and therefore exhibits active behaviour. This method belongs to the class of parametric active contour models. Energy-based active contour methods can be classified as parametric and geometric; the parametric active contour procedure uses parameterized curves to represent the shapes. ## 3 Methodology Two subsections are included in this section. The pre-processing subsection describes the different processes applied to the image data, such as frame conversion and image scaling. The U-Net architecture is then used for further processing of the pre-processed images. The study produced considerable results on challenging benchmark datasets that are frequently used for breast cancer segmentation and classification. ### _Dataset_ The dataset consists of ultrasound scans used to diagnose breast cancer. Images from the Breast Ultrasound Dataset are divided into normal, benign, and malignant classes. Combined with machine learning, breast ultrasound images can yield excellent results for classifying, detecting, and segmenting breast cancer. The baseline data contain breast ultrasound images of women between the ages of 25 and 75, collected in 2018 from a total of 600 female patients. The collection comprises 780 images with an average size of 500x500 pixels, stored in PNG format. Ground truth images are provided alongside the original images. Three classes (normal, benign, and malignant) are defined for the images. ### _Histogram Equalization:_ Industrial infrared images often contain industrial noise, such as dust, smoke and misty background objects. This industrial noise in an IR image is responsible for low performance in the predictive models. Apart from the noise, extracting the region of interest (ROI) is necessary to achieve better performance in predictive algorithms, reducing the redundant information and the computational effort in pattern recognition. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Method** & **Advantages** & **Disadvantages** \\ \hline Edge-Based & \({}^{\star}\) Quick \({}^{\star}\) Performs skillfully with high-contrast photos. & \({}^{\star}\) Complex in the presence of extensive or poorly determined edges \\ \hline Region Based & \({}^{\star}\) For uniformity, Region Growing is preferred. \({}^{\star}\) Split and Merge segmentation \({}^{\star}\) Watershed delivers closed boundaries. & \({}^{\star}\) Laborious \({}^{\star}\) Excessive segmentation \({}^{\star}\) Demands the choice of a seed point \\ \hline Clustering & \({}^{\star}\) Performs satisfactorily for overlapped image data. & \({}^{\star}\) Costly
 \({}^{\star}\) Prone to outliers and initial groupings \\ \hline Thresholding & \({}^{\star}\) Uncomplicated \({}^{\star}\) Rapid computation & \({}^{\star}\) Does not operate satisfactorily with noisy and low-contrast images \({}^{\star}\) Improper results in larger segmentations \\ \hline Energy function Based [5] & \({}^{\star}\) Adjustable \({}^{\star}\) Needs minor computation & \({}^{\star}\) Susceptible to snake initialization \\ \hline \end{tabular} \end{table} TABLE I: Comprehensive Review of BC Segmentation Techniques Fig. 1: Conceptual Diagram of proposed architecture Fig. 2: Three classes of the dataset (a) Benign (b) Malignant (c) Normal Most existing architectures do not consider these essential factors; instead, they fit the unprocessed images into the predictive framework. Considering these factors, the proposed methodology enhances the image quality and extracts the ROI with Contrast Limited Adaptive Histogram Equalization (CLAHE) [13]. Unlike the existing architectures, this method reduces the computational effort and improves the overall performance of the predictive models. CLAHE is an adaptation of adaptive histogram equalisation (AHE) that solves the problem of contrast over-amplification. Instead of processing the complete image, CLAHE works with discrete sections of it called tiles. To eliminate arbitrary borders, adjacent tiles are blended using bilinear interpolation. In contrast to other colour spaces, the acquired values of the quantitative metric characteristics, Entropy and RMS Contrast, were significantly high. To highlight the region of interest and eliminate unnecessary background objects, CLAHE works efficiently with little computational time (a short code sketch of this step is given below). ### _Image Segmentation_ The process of grouping together portions of an image that belong to the same object class is known as image segmentation. This method is also known as "pixel-level classification." In other words, it entails dividing up images (or video frames) into a number of segments or objects. Image segmentation can be defined either as categorizing pixels with semantic labels (semantic segmentation) or as delineating individual objects (instance segmentation). Semantic segmentation [14] labels every image pixel using a set of object categories; this is typically a more challenging task than image classification, which predicts a single label for the whole picture or frame. By locating and identifying all the items of interest in a picture, instance segmentation broadens the scope of semantic segmentation. Deep learning has introduced a new category of image segmentation models with notable performance improvements; such models frequently attain the best accuracy on well-known benchmarks, leading to a paradigm shift in the field. Image segmentation effectively separates homogeneous areas at a pixel-accurate scale, which is valuable for medical images such as CT scans and MRI. Medical image processing has adapted a huge variety of segmentation techniques to identify the pixels of human organs. #### 3.3.1 UNet for Image Segmentation Thanks to deep learning architectures like U-Net and CANet, we are able to tackle challenging computer vision tasks with high-quality results.
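Before turning to the network itself, the CLAHE pre-processing step of Section 3.2 can be written in a few lines. The sketch below uses OpenCV; the clip limit, tile grid, and target size shown are common defaults given for illustration, not the exact settings tuned in this work.

```python
import cv2

def preprocess(path, clip_limit=2.0, tile_grid=(8, 8), size=(256, 256)):
    """CLAHE enhancement of a grayscale ultrasound image, followed by the
    resize to the network input resolution used later in this work."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # CLAHE equalizes each tile separately and clips the histogram to
    # avoid the contrast over-amplification of plain AHE.
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return cv2.resize(clahe.apply(img), size)
```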
Although the topic of computer vision is vast and has a wide range of interesting applications and challenges to solve, our focus in what follows will be on two architectures, U-Net and CANet, that are intended to address the problem of image segmentation. The U-Net has two paths: an expansive path and a contracting path. The contracting path follows the standard architecture of a convolutional network. Although U-Net represents a big breakthrough in deep learning, it is just as important to understand the earlier approaches that were used to handle challenges of this nature. The sliding window technique, which convincingly won the EM segmentation challenge at ISBI 2012, was one of the prime examples. A quick glance at the diagram shows why the design is referred to as the U-Net architecture: the name derives from the shape of the resulting architecture, which resembles a "U." We can tell that the network is a fully convolutional network just by looking at the structure and the components used in building this architecture; no additional layers, such as dense or flattening layers, are employed. The graphic depiction shows a route that initially contracts before expanding. Fig. 3: U-Net Architecture Diagram ## 4 Experimental Results And Analysis This section presents the experimental analysis and implementation of the proposed methodology. The implementation and analysis of the proposed framework are categorized with a top-down approach into histogram equalization, U-Net-based segmentation, and interpretation with XAI. The implementation results and the analysis are presented and explained together. ### Computational Specifications This subsection stipulates the hardware and software specifications required for the implementation. The entire implementation of the framework is performed on Windows. With the aid of other tools, such as the Kaggle API and CUDA, this OS facilitates building the neural network [15]. The computational backend programs for all these tools are written in the Python programming language. Moreover, these algorithms do not require a high-performance graphics processing unit. The specifications and minimum requirements for training and testing the framework are depicted in Table 2. ### Evaluating trained Proposed architecture Attention U-Net/U-Net is the best-known model for multi-class segmentation, which is why we create an Attention U-Net model. As the image dataset is old, it should not be used for any new medical operations. All the images are 500 x 500 pixels; since RAM would not suffice, we resize them to 256 x 256 pixels. In this experiment, histogram equalisation is followed by training of the ViT model for breast image segmentation. The U-Net architecture consists of an Encoder Block, a Decoder Block, and an Attention Gate. Our system is evaluated using the standard criteria for segmentation model evaluation. Fig. depicts the accuracy and loss learning curves for model training and model validation. These learning plots indicate a well-fitted learning algorithm, because both the validation and training curves reach a stable point with little gap between them. Performance was increased by incorporating three tasks into the training of the effective UNet model [16] at once: computing output, troubleshooting errors, and fine-tuning hyper-parameters.
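To make the architecture concrete, the sketch below shows one way the encoder block, decoder block, and attention gate described above can be assembled in TensorFlow/Keras. The depth, filter counts, and the additive gating design are illustrative assumptions rather than the exact configuration trained in this work.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    """Two 3x3 convolutions, the basic unit of both encoder and decoder."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def attention_gate(skip, gate, inter_channels):
    """Additive attention: the decoder signal `gate` produces a mask that
    re-weights the encoder skip connection `skip` before concatenation."""
    theta = layers.Conv2D(inter_channels, 1)(skip)
    phi = layers.Conv2D(inter_channels, 1)(gate)
    attn = layers.Activation("relu")(layers.add([theta, phi]))
    attn = layers.Conv2D(1, 1, activation="sigmoid")(attn)
    return layers.multiply([skip, attn])  # mask broadcasts over channels

def attention_unet(input_shape=(256, 256, 1)):
    inp = tf.keras.Input(input_shape)
    e1 = conv_block(inp, 64)
    e2 = conv_block(layers.MaxPool2D()(e1), 128)
    b = conv_block(layers.MaxPool2D()(e2), 256)   # bottleneck
    u2 = layers.Conv2DTranspose(128, 2, strides=2, padding="same")(b)
    d2 = conv_block(layers.concatenate([attention_gate(e2, u2, 64), u2]), 128)
    u1 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(d2)
    d1 = conv_block(layers.concatenate([attention_gate(e1, u1, 32), u1]), 64)
    out = layers.Conv2D(1, 1, activation="sigmoid")(d1)  # binary tumour mask
    return tf.keras.Model(inp, out)

model = attention_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```

The attention gate re-weights each encoder skip connection with a learned mask before concatenation, which allows the decoder to suppress irrelevant background regions.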
The maximum training and validation accuracies are 95% and 99%, respectively, after numerous iterations of hyper-parameter tuning. For further analysis of the proposed architecture, the appropriate experimental metrics have been taken into consideration, such as IoU, loss, accuracy, precision, recall, and F1-score.

\begin{table} \begin{tabular}{|c|c|} \hline **Specifications** & **System’s Configuration** \\ \hline Operating system & Windows \\ \hline CPU & Intel® i7 10th gen \\ \hline RAM & 32 GB Usable \\ \hline GPU & Nvidia GeForce 3070Ti \\ \hline Frameworks & Tensorflow \\ \hline \end{tabular} \end{table} Table 2: Computational specifications of the system

Figure 4: Results of predicted, processed and GradCAM images

Figure 5: The Experimental Results of Histogram Equalization

IoU score \((A,B)=\dfrac{\text{Area of Intersection }(A,B)}{\text{Area of Union }(A,B)}=\dfrac{A\cap B}{A\cup B}\) (1)

#### 4.2.1 Observations.

* After 12 epochs, the segmentation results of the model are outstanding.
* The model was quickly able to detect black round spots but failed when the shape was irregular (this is not the case with the current model, because it is trained with a high number of Steps Per Epoch (SPE)).
* It also gets confused between the dark areas, which makes sense.
* Training in chunks of 20 epochs delivers reasonable control over the model, and the model also performs well.
* Surprisingly, the IoU results on the validation data are considerably better than on the training data. This may indicate that the model can perform better than it does at the current point. The loss is not perfect: it increases toward the end, although the model's predictions still look good.

## 5 Conclusion

In this article, we investigated the use of microscopic hyperspectral imaging technology on breast tumour tissue microarrays and achieved automated detection of breast tumour tissues. We anticipate that it will help pathologists make a pathological diagnosis of breast cancer. The tissue microarray is convenient for creating a breast cancer data set in the experiment because of its modest size, its considerable information richness, and the fact that one slice contains the pathological condition of several patients. Additionally, the technology reduces the time required for sample preparation, data collection, and image processing, increasing the effectiveness of diagnostic procedures. The proposed framework combines the benefits of hyperspectral imaging technology and deep learning (feature learning and classification capabilities across multiple images) in terms of data processing. Conventional machine learning techniques and our approach are compared on a collection of expertly labelled images from a hyperspectral data set. The breast cancer nest tissue has been successfully captured using the automatic feature extraction of U-Net, significantly enhancing classification accuracy. In order to explore more suitable methods for analysing microscopic hyperspectral pathological images, we are currently gathering more pathological samples and using the suggested method as a first step to quantitatively analyse the morphological characteristics of the breast cancer microenvironment.

## Acknowledgments

The International Institute of Information Technology, Naya Raipur, provided support and technical assistance for this study.
2310.10359
An Anytime Algorithm for Good Arm Identification
In good arm identification (GAI), the goal is to identify one arm whose average performance exceeds a given threshold, referred to as good arm, if it exists. Few works have studied GAI in the fixed-budget setting, when the sampling budget is fixed beforehand, or the anytime setting, when a recommendation can be asked at any time. We propose APGAI, an anytime and parameter-free sampling rule for GAI in stochastic bandits. APGAI can be straightforwardly used in fixed-confidence and fixed-budget settings. First, we derive upper bounds on its probability of error at any time. They show that adaptive strategies are more efficient in detecting the absence of good arms than uniform sampling. Second, when APGAI is combined with a stopping rule, we prove upper bounds on the expected sampling complexity, holding at any confidence level. Finally, we show good empirical performance of APGAI on synthetic and real-world data. Our work offers an extensive overview of the GAI problem in all settings.
Marc Jourdan, Clémence Réda
2023-10-16T12:51:26Z
http://arxiv.org/abs/2310.10359v1
# An Anytime Algorithm for Good Arm Identification

###### Abstract

In good arm identification (GAI), the goal is to identify one arm whose average performance exceeds a given threshold, referred to as good arm, if it exists. Few works have studied GAI in the fixed-budget setting, when the sampling budget is fixed beforehand, or the anytime setting, when a recommendation can be asked at any time. We propose APGAI, an anytime and parameter-free sampling rule for GAI in stochastic bandits. APGAI can be straightforwardly used in fixed-confidence and fixed-budget settings. First, we derive upper bounds on its probability of error at any time. They show that adaptive strategies are more efficient in detecting the absence of good arms than uniform sampling. Second, when APGAI is combined with a stopping rule, we prove upper bounds on the expected sampling complexity, holding at any confidence level. Finally, we show good empirical performance of APGAI on synthetic and real-world data. Our work offers an extensive overview of the GAI problem in all settings.

## 1 Introduction

Multi-armed bandit algorithms are a family of approaches which have demonstrated versatility in solving online allocation problems, where constraints are set on the possible allocations: _e.g._ randomized clinical trials (Thompson, 1933; Berry, 2006), hyperparameter optimization (Li et al., 2017; Shang et al., 2018), or active learning (Carpentier et al., 2011). The agents face a black-box environment, upon which they can sequentially act through actions, called _arms_. After sampling an arm \(a\in\mathcal{A}\), they receive output from the environment through a scalar observation, which is a realization from the unknown probability distribution \(\nu_{a}\) of arm \(a\), whose mean will be denoted by \(\mu_{a}\). Depending on their objectives, agents should have different sampling strategies. In _pure exploration_ problems, the goal is to answer a question about the set of arms. Pure exploration has been studied in two major theoretical frameworks (Audibert et al., 2010; Gabillon et al., 2012; Jamieson and Nowak, 2014; Garivier and Kaufmann, 2016): the _fixed-confidence_ and _fixed-budget_ settings. In the fixed-confidence setting, the agent aims at minimizing the number of samples used to identify a correct answer with confidence \(1-\delta\). In the fixed-budget setting, the objective is to minimize the probability of misidentifying a correct answer with a fixed number of samples \(T\). While the constraint on \(\delta\) or \(T\) is supposed to be given, properly choosing it is challenging for the practitioner since a "good" choice typically depends on unknown quantities. Moreover, in medical applications (_e.g._ clinical trials or outcome scoring), the maximal budget is limited but might not be fixed beforehand. When the collected data shows sufficient evidence in favor of one answer, an experiment is often stopped before the initial budget is reached, referred to as _early stopping_. When an additional sampling budget has been obtained due to new funding, an experiment can continue after the initial budget has been consumed, referred to as _continuation_. While early stopping and continuation are common practices, both fixed-confidence and fixed-budget settings fail to provide useful guarantees for them. Recently, the _anytime_ setting has received increased scrutiny as it fills this gap between theory and practice.
In the anytime setting, the agent aims at achieving a low probability of error at any deterministic time (Jun and Nowak, 2016; Zhao et al., 2023; Jourdan et al., 2023). When the candidate answer has anytime guarantees, the practitioners can use continuation or early stopping (when combined with a stopping rule). The most studied topic in pure exploration is the _best arm (BAI) / Top-\(m\) identification_ problem, which aims at determining a subset of \(m\) arms with largest means (Karnin et al., 2013; Xu et al., 2018; Tirinzoni and Degenne, 2022). However, in some applications such as investigating treatment protocols, BAI requires too many samples for it to be useful in practice. To avoid wasteful queries, practitioners might be interested in easier tasks that identify one "good enough" option. For instance, in \(\varepsilon\)-BAI (Mannor and Tsitsiklis, 2004; Even-Dar et al., 2006; Garivier and Kaufmann, 2021; Jourdan et al., 2023b), the agent is interested in an arm which is \(\varepsilon\)-close to the best one, _i.e._\(\mu_{a}\geq\max_{k\in\mathcal{A}}\mu_{k}-\varepsilon\). The larger \(\varepsilon\) is, the easier the task. However, choosing a meaningful value of \(\varepsilon\) can be tricky. This is why the focus of this paper is good arm identification (GAI), where the agent aims to obtain a _good arm_, which is defined as an arm whose average performance exceeds a given threshold \(\theta\), _i.e._\(\mu_{a}\geq\theta\). For instance, in our outcome scoring problem (see Section 5), practitioners have enough information about the distributions to define a meaningful threshold beforehand. GAI and variants have been studied in the fixed-confidence setting (Kaufmann et al., 2018; Kano et al., 2019; Tabata et al., 2020), but algorithms for fixed-budget or anytime GAI are missing despite their practical relevance. In this paper, we fill this gap by introducing APGAI, an anytime and parameter-free sampling rule for GAI which is independent of a budget \(T\) or a confidence \(\delta\) and can be used in the fixed-budget and fixed-confidence settings. ### Problem Statement We denote by \(\mathcal{D}\) a set to which the distributions of the arms are known to belong. We suppose that all distributions in \(\mathcal{D}\) are \(\sigma\)-sub-Gaussian. A distribution \(\nu_{0}\) is \(\sigma\)-sub-Gaussian of mean \(\mu_{0}\) if it satisfies \(\mathbb{E}_{X\sim\nu_{0}}[e^{\lambda(X-\mu_{0})}]\leq e^{\sigma^{2}\lambda^{2} /2}\) for all \(\lambda\in\mathbb{R}\). By rescaling, we assume \(\sigma_{a}=1\) for all \(a\in\mathcal{A}\). Let \(\mathcal{A}\) be the set of arms of size \(K\). A bandit instance is defined by unknown distributions \(\nu:=(\nu_{a})_{a\in\mathcal{A}}\in\mathcal{D}^{K}\) with means \(\mu:=(\mu_{a})_{a\in\mathcal{A}}\in\mathbb{R}^{K}\). Given a threshold \(\theta\in\mathbb{R}\), the set of good arms is defined as \(\mathcal{A}_{\theta}(\mu):=\{a\in\mathcal{A}\mid\mu_{a}\geq\theta\}\), which we shorten to \(\mathcal{A}_{\theta}\) when \(\mu\) is unambiguous. In the remainder of the paper, we assume that \(\mu_{a}\neq\theta\) for all \(a\in\mathcal{A}\). Let the gap of arm \(a\) compared to \(\theta\) be \(\Delta_{a}:=|\mu_{a}-\theta|>0\). Let \(\Delta_{\min}=\min_{a\in\mathcal{A}}\Delta_{a}\) be the minimum gap over all arms. Let \[H_{1}(\mu):=\sum_{a\in\mathcal{A}}\Delta_{a}^{-2}\quad\text{and}\quad H_{ \theta}(\mu):=\sum_{a\in\mathcal{A}_{\theta}(\mu)}\Delta_{a}^{-2}. 
\tag{1}\] At time \(t\), the agent chooses an arm \(a_{t}\in\mathcal{A}\) based on past observations and receives a sample \(X_{a_{t},t}\), a random variable with conditional distribution \(\nu_{a_{t}}\) given \(a_{t}\). Let \(\mathcal{F}_{t}:=\sigma(a_{1},X_{a_{1},1},\cdots,a_{t},X_{a_{t},t})\) be the \(\sigma\)-algebra, called _history_, which encompasses all the information available to the agent after \(t\) rounds. Identification Strategy In the anytime setting, an _identification_ strategy is defined by two rules which are \(\mathcal{F}_{t}\)-measurable at time \(t\): a sampling rule \(a_{t+1}\in\mathcal{A}\) and a recommendation rule \(\hat{a}_{t}\in\mathcal{A}\cup\{\emptyset\}\). In GAI, the probability of error \(P_{\nu,\mathfrak{A}}^{\text{err}}(t):=\mathbb{P}_{\nu}(\mathcal{E}_{\mathfrak{A}}^{\text{err}}(t))\) of algorithm \(\mathfrak{A}\) on instance \(\mu\) at time \(t\) is the probability of the error event \(\mathcal{E}_{\mathfrak{A}}^{\text{err}}(t)=\{\hat{a}_{t}\in\{\emptyset\}\cup(\mathcal{A}\setminus\mathcal{A}_{\theta})\}\) when \(\mathcal{A}_{\theta}\neq\emptyset\), otherwise \(\mathcal{E}_{\mathfrak{A}}^{\text{err}}(t)=\{\hat{a}_{t}\neq\emptyset\}\) when \(\mathcal{A}_{\theta}=\emptyset\). Those rules have a different objective depending on the considered setting. In anytime GAI, they are designed to ensure that \(P_{\nu,\mathfrak{A}}^{\text{err}}(t)\) is small at any time \(t\). In fixed-budget GAI, the goal is to have a low \(P_{\nu,\mathfrak{A}}^{\text{err}}(T)\), where \(T\) is fixed beforehand. In fixed-confidence GAI, these two rules are complemented by a stopping rule using a confidence level \(1-\delta\) fixed beforehand such that \(\mathfrak{A}\) stops sampling after \(\tau_{\delta}\) rounds. The stopping time \(\tau_{\delta}\) is also known as the sample complexity of a fixed-confidence algorithm. At stopping time \(\tau_{\delta}\), the algorithm should satisfy \(\delta\)-correctness, which means that \(\mathbb{P}_{\nu}(\{\tau_{\delta}<+\infty\}\cap\mathcal{E}_{\mathfrak{A}}^{\text{err}}(\tau_{\delta}))\leq\delta\) for all instances \(\mu\). That requirement leads to a lower bound on the expected sample complexity on any instance. The following lemma is similar to other bounds derived in various settings linked to GAI (Kaufmann et al., 2018; Tabata et al., 2020). The proof in Appendix E.1 relies on the well-known change of measure inequality (Kaufmann et al., 2016, Lemma 1). **Lemma 1**.: _Let \(\delta\in(0,1)\). For any \(\delta\)-correct strategy and all Gaussian instances \(\nu_{a}=\mathcal{N}(\mu_{a},1)\) with \(\mu_{a}\neq\theta\), we have \(\liminf_{\delta\to 0}\mathbb{E}_{\nu}[\tau_{\delta}]/\log(1/\delta)\geq T^{*}(\mu)\), where_ \[T^{*}(\mu):=\begin{cases}2\min_{a\in\mathcal{A}_{\theta}(\mu)}\Delta_{a}^{-2}&\text{if }\mathcal{A}_{\theta}(\mu)\neq\emptyset\\ 2H_{1}(\mu)&\text{otherwise}\;.\end{cases} \tag{2}\] A fixed-confidence algorithm is said to be _asymptotically optimal_ if it is \(\delta\)-correct, and its expected sample complexity matches the lower bound, _i.e._ \(\limsup_{\delta\to 0}\mathbb{E}_{\nu}[\tau_{\delta}]/\log(1/\delta)\leq T^{*}(\mu)\). ### Contributions We propose APGAI, an anytime and parameter-free sampling rule for GAI in stochastic bandits, which is independent of a budget \(T\) or a confidence \(\delta\). APGAI is the first algorithm which can be employed without modification for fixed-budget GAI (and without prior knowledge of the budget) and fixed-confidence GAI. Furthermore, it enjoys guarantees in both settings.
As such, APGAI allows both continuation and early stopping. First, we show an upper bound on \(P_{\nu,\text{APGAI}}^{\text{err}}(t)\) of the order \(\exp(-\mathcal{O}(t/H_{1}(\mu)))\) which holds for any deterministic time \(t\) (Theorem 1). Adaptive strategies are more efficient in detecting the absence of good arms than uniform sampling (see Section 3). Second, when combined with a GLR stopping rule, we derive an upper bound on \(\mathbb{E}_{\nu}[\tau_{\delta}]\) holding at any confidence level (Theorem 2). In particular, APGAI is asymptotically optimal for GAI with Gaussian distributions when there is no good arm. Finally, APGAI is easy to implement, computationally inexpensive and achieves good empirical performance in both settings on synthetic and real-world data, including an outcome scoring problem for RNA-sequencing data (see Section 5). Our work offers an overview of the GAI problem in all settings. ### Related Work GAI has never been studied in the fixed-budget or anytime setting. In the fixed-confidence setting, several questions have been studied which are closely connected to GAI. Given two thresholds \(\theta_{L}<\theta_{U}\), Tabata et al. (2020) studies the Bad Arm Existence Checking problem, in which the agent should output "negative" if \(\mathcal{A}_{\theta_{L}}(\mu)=\emptyset\) and "positive" if \(\mathcal{A}_{\theta_{U}}(\mu)\neq\emptyset\). They propose an elimination-based meta-algorithm called BAEC, and analyze its expected sample complexity when combined with several index policies to define the sampling rule. Kano et al. (2019) considers identifying the whole set of good arms \(\mathcal{A}_{\theta}(\mu)\) with high probability, and returns the good arms in a sequential way. We refer to that problem as AllGAI. In Kano et al. (2019), they introduce three index-based AllGAI algorithms named APT-G, HDoC and LUCB-G, and show upper bounds on their expected sample complexity. A large number of algorithms from the previously mentioned works bear a passing resemblance to the APT algorithm of Locatelli et al. (2016), which tackles the thresholding bandit problem in the fixed-budget setting. The latter should classify all arms into \(\mathcal{A}_{\theta}\) and \(\mathcal{A}_{\theta}^{\complement}\) at the end of the sampling phase. This resemblance lies in the fact that those algorithms rely on an arm index for sampling. The arm indices in BAEC (Tabata et al., 2020) and in APT-G, HDoC and LUCB-G (Kano et al., 2019) are reported in Algorithm 2 in Appendix D. Degenne and Koolen (2019) addressed the "any low arm" problem, which is a GAI problem for threshold \(-\theta\) on instance \(-\mu\). They introduce Sticky Track-and-Stop, which is asymptotically optimal in the fixed-confidence setting. In Kaufmann et al. (2018), the "bad arm existence" problem aims to answer "no" when \(\mathcal{A}_{-\theta}(-\mu)=\emptyset\), and "yes" otherwise. They propose an adaptation of Thompson Sampling performing some conditioning on the "worst event" (named Murphy Sampling). The empirical pulling proportions are shown to converge towards the allocation realizing \(T^{*}(\mu)\) in Lemma 1. Another related framework is the identification with high probability of \(k\) arms from \(\mathcal{A}_{\theta}(\mu)\) (Katz-Samuels and Jamieson, 2020). They introduce the _unverifiable sample complexity_, which is the minimum number of samples after which the algorithm always outputs a correct answer with high probability. It does not require certifying that the output is correct.
## 2 Anytime Parameter-Free Sampling Rule

We propose the APGAI (**A**nytime **P**arameter-free **GAI**) algorithm, which is independent of a budget \(T\) or a confidence \(\delta\) and is summarized in Algorithm 1. Notation Let \(N_{a}(t)=\sum_{s\leq t}\mathds{1}\left(a_{s}=a\right)\) be the number of times arm \(a\) is sampled at the end of round \(t\), and \(\hat{\mu}_{a}(t)=\frac{1}{N_{a}(t)}\sum_{s\leq t}\mathds{1}\left(a_{s}=a\right)X_{a,s}\) be its empirical mean. For all \(a\in\mathcal{A}\) and all \(t\geq K\), let us define \[W_{a}^{+}(t)=\sqrt{N_{a}(t)}\Delta_{a}(t)_{+},\;W_{a}^{-}(t)=\sqrt{N_{a}(t)}(-\Delta_{a}(t))_{+} \tag{3}\] where \((x)_{+}:=\max(x,0)\) and \(\Delta_{a}(t):=\hat{\mu}_{a}(t)-\theta\). If arm \(a\) had a \(\sigma_{a}\)-sub-Gaussian distribution, the rescaling would boil down to using \(\Delta_{a}(t)/\sigma_{a}\) instead of \(\Delta_{a}(t)\). This empirical transportation cost \(W_{a}^{+}(t)\) (resp. \(W_{a}^{-}(t)\)) represents the amount of information collected so far in favor of the hypothesis that \(\{\mu_{a}>\theta\}\) (resp. \(\{\mu_{a}<\theta\}\)). It is linked with the generalized likelihood ratio (GLR) as detailed in Appendix E.2. As initialization, we pull each arm \(n_{0}\in\mathbb{N}\) times, and we use \(n_{0}=1\). Recommendation Rule At time \(t+1>n_{0}K\), the recommendation rule depends on whether the highest empirical mean lies below the threshold \(\theta\) or not. When \(\max_{a\in\mathcal{A}}\hat{\mu}_{a}(t)\leq\theta\), we recommend the empty set, _i.e._ \(\hat{a}_{t}=\emptyset\). Otherwise, our candidate answer is the arm which is the most likely to be a good arm given the collected evidence, _i.e._ \(\hat{a}_{t}\in\arg\max_{a\in\mathcal{A}}W_{a}^{+}(t)\). Sampling Rule The next arm to pull is based on the APT\({}_{P}\) indices introduced by Tabata et al. (2020) as a modification of the APT indices (Locatelli et al., 2016). At time \(t+1>n_{0}K\), we pull arm \(a_{t+1}\in\arg\max_{a\in\mathcal{A}}\sqrt{N_{a}(t)}(\hat{\mu}_{a}(t)-\theta)\). To emphasize the link with our recommendation rule, this sampling rule can also be written as \(a_{t+1}\in\arg\min_{a\in\mathcal{A}}W_{a}^{-}(t)\) when \(\max_{a\in\mathcal{A}}\hat{\mu}_{a}(t)\leq\theta\), and \(a_{t+1}\in\arg\max_{a\in\mathcal{A}}W_{a}^{+}(t)\) otherwise. Ties are broken arbitrarily at random, up to the constraint that \(\hat{a}_{t}=a_{t+1}\) when \(\max_{a\in\mathcal{A}}\hat{\mu}_{a}(t)>\theta\). This formulation better highlights the dual behavior of APGAI. When \(\max_{a\in\mathcal{A}}\hat{\mu}_{a}(t)\leq\theta\), APGAI collects additional observations to verify that there are no good arms, hence pulling the arm which is the least likely to not be a good arm. Otherwise, APGAI gathers more samples to confirm its current belief that there is at least one good arm, hence pulling the arm which is the most likely to be a good arm.

``` 1:Input: threshold \(\theta\), set of arms \(\mathcal{A}\) 2:Update: empirical means \(\hat{\mu}(t)\) and empirical transportation costs \(W_{a}^{\pm}(t)\) as in (3) 3:if \(\max_{a\in\mathcal{A}}\hat{\mu}_{a}(t)\leq\theta\) then 4:\(\hat{a}_{t}:=\emptyset\) and \(a_{t+1}\in\arg\min_{a\in\mathcal{A}}W_{a}^{-}(t)\) 5:else 6:\(\hat{a}_{t}:=a_{t+1}\in\arg\max_{a\in\mathcal{A}}W_{a}^{+}(t)\) 7:end if 8:return arm to pull \(a_{t+1}\) and recommendation \(\hat{a}_{t}\) ```
**Algorithm 1** APGAI
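To make the recommendation and sampling rules concrete, here is a minimal Python sketch of one APGAI round, transcribed from Algorithm 1; the function and variable names are our own illustrative choices, not taken from the authors' code:

```python
import numpy as np

def apgai_round(counts, means, theta, rng):
    """One round of APGAI (Algorithm 1).

    counts: array of N_a(t); means: empirical means mu_hat_a(t);
    theta: threshold.  Returns (recommendation, next_arm), where a
    recommendation of None stands for the empty-set answer."""
    gaps = means - theta
    w_plus = np.sqrt(counts) * np.maximum(gaps, 0.0)    # W_a^+(t)
    w_minus = np.sqrt(counts) * np.maximum(-gaps, 0.0)  # W_a^-(t)
    if means.max() <= theta:
        # No empirical evidence of a good arm: recommend the empty set
        # and pull the arm least likely to lie below the threshold.
        next_arm = int(rng.choice(np.flatnonzero(w_minus == w_minus.min())))
        return None, next_arm
    # Otherwise recommend (and pull) the most promising arm; resolving
    # the tie once enforces the constraint hat{a}_t = a_{t+1}.
    next_arm = int(rng.choice(np.flatnonzero(w_plus == w_plus.max())))
    return next_arm, next_arm
```

Ties are broken uniformly at random, and breaking the tie a single time in the second branch automatically satisfies the constraint \(\hat{a}_{t}=a_{t+1}\) mentioned above.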
Memory and Computational Cost APGAI needs to maintain in memory the values \(N_{a}(t),\hat{\mu}_{a}(t),W^{\pm}_{a}(t)\) for each arm \(a\in\mathcal{A}\), hence the total memory cost is in \(\mathcal{O}(K)\). The computational cost of APGAI is in \(\mathcal{O}(K)\) per iteration, and its update cost is in \(\mathcal{O}(1)\). Differences to BAEC While both APGAI and BAEC(APT\({}_{P}\)) rely on the APT\({}_{P}\) indices (Tabata et al., 2020), they differ significantly. BAEC is an elimination-based meta-algorithm which samples active arms and discards arms whose upper confidence bounds (UCB) on the empirical means are lower than \(\theta_{U}\). The recommendation rule of BAEC is only defined at the stopping time, and it depends on lower confidence bounds (LCB) and UCB. Since the UCB/LCB indices depend inversely on the gap \(\theta_{U}-\theta_{L}>0\) and on the confidence \(\delta\), BAEC is neither anytime nor parameter-free. More importantly, APGAI can be used without modification for fixed-confidence or fixed-budget GAI. In contrast, BAEC can solely be used in the fixed-confidence setting when \(\theta_{U}>\theta_{L}\), hence not for GAI itself (_i.e._ \(\theta_{U}=\theta_{L}\)).

## 3 Anytime Guarantees on the Probability of Error

To allow continuation or (deterministic) early stopping, the candidate answer of APGAI should be associated with anytime theoretical guarantees. Theorem 1 shows an upper bound of the order \(\exp(-\mathcal{O}(t/H_{1}(\mu)))\) for \(P^{\mathrm{err}}_{\nu,\mathfrak{A}}(t)\) that holds for any deterministic time \(t\). **Theorem 1**.: _Let \(p(x)=x-0.5\log x\). The APGAI algorithm \(\mathfrak{A}\) satisfies that, for all \(\nu\in\mathcal{D}^{K}\) with mean \(\mu\) such that \(\Delta_{\min}>0\), for all \(t>n_{0}K+2|\mathcal{A}_{\theta}|\),_ \[P^{\mathrm{err}}_{\nu,\mathfrak{A}}(t)\leq Ke\sqrt{2}\log(e^{2}t)\exp\left(-p\left(\frac{t-n_{0}K-2|\mathcal{A}_{\theta}|}{2\alpha_{i_{\mu}}H_{1}(\mu)}\right)\right)\] _where \(H_{1}(\mu)\) is as in (1), \((\alpha_{1},\alpha_{\theta})=(9,2)\) and \(i_{\mu}=1+(\theta-1)\mathds{1}\left(\mathcal{A}_{\theta}(\mu)\neq\emptyset\right)\)._ Theorem 1 holds for any deterministic time \(t>n_{0}K+2|\mathcal{A}_{\theta}|\) and any 1-sub-Gaussian instance \(\nu\). In the asymptotic regime where \(t\to+\infty\), Theorem 1 shows that \(\limsup_{t\to+\infty}t\log(1/P^{\mathrm{err}}_{\nu,\mathfrak{A}}(t))^{-1}\leq 2\alpha_{i_{\mu}}H_{1}(\mu)\) for APGAI with \((\alpha_{1},\alpha_{\theta})=(9,2)\). We defer the reader to Appendix H for a detailed proof. Comparison With Uniform Sampling Despite the practical relevance of anytime and fixed-budget guarantees, APGAI is the first algorithm enjoying guarantees on the probability of error in GAI at any time \(t\) (hence at a given budget \(T\)). As a baseline, we consider the uniform round-robin algorithm, named Unif, which returns the best empirical arm at time \(t\) if its empirical mean is higher than \(\theta\), and returns \(\emptyset\) otherwise. At time \(t\) such that \(t/K\in\mathbb{N}\), the recommendation of Unif is equivalent to the one used in APGAI, _i.e._ \(\arg\max_{a\in\mathcal{A}}W^{+}_{a}(t)=\arg\max_{a\in\mathcal{A}}\hat{\mu}_{a}(t)\) since \(N_{a}(t)=t/K\). As the two algorithms only differ by their sampling rule, we can measure the benefits of adaptive sampling. Theorem 4 in Appendix C gives anytime upper bounds on \(P^{\mathrm{err}}_{\nu,\mathrm{Unif}}(t)\).
In the asymptotic regime, Unif achieves a rate in \(2K\Delta_{\min}^{-2}\) when \(\mathcal{A}_{\theta}(\mu)=\emptyset\), and \(4K\min_{a\in\mathcal{A}_{\theta}(\mu)}\Delta_{a}^{-2}\) otherwise. While the latter rate is better than \(2H_{1}(\mu)\) when arms have dissimilar gaps, APGAI has better guarantees than Unif when there is no good arm. Our experiments show that APGAI outperforms Unif on most instances (_e.g._ Figures 1 and 2), and is on par with it otherwise. Worst-case Lower Bound Degenne (2023) recently studied the existence of a complexity in fixed-budget pure exploration. While there is a complexity \(T^{\star}(\mu)\) as in (2) for the fixed-confidence setting, (Degenne, 2023, Theorem 6) shows that a sequence of fixed-budget algorithms \((\mathfrak{A}_{T})_{T}\) (where \(\mathfrak{A}_{T}\) denotes the algorithm using fixed budget \(T\)) cannot have a better asymptotic rate than \(KT^{\star}(\mu)\) on all Gaussian instances \[\exists\mu\in\mathbb{R}^{K},\ \limsup_{T\to+\infty}T\log(1/P^{\mathrm{err}}_{\nu,\mathfrak{A}_{T}}(T))^{-1}\geq KT^{\star}(\mu)\:. \tag{4}\] Unif achieves the rate \(KT^{\star}(\mu)\) when \(\mathcal{A}_{\theta}\neq\emptyset\), but suffers from worse guarantees otherwise. Conversely, APGAI achieves the rate in \(T^{\star}(\mu)\) when \(\mathcal{A}_{\theta}=\emptyset\), but has sub-optimal guarantees otherwise. It does not conflict with (4), _e.g._, considering \(\mu\) with \(\mathcal{A}_{\theta}\neq\emptyset\) and such that there exists an arm \(a\in\mathcal{A}\) with \(\Delta_{a}\leq\max_{a\in\mathcal{A}_{\theta}}\Delta_{a}/\sqrt{K/2-1}\). Experiments in Section 5 suggest that the sub-optimal dependency when \(\mathcal{A}_{\theta}\neq\emptyset\) is not aligned with the good practical performance of APGAI. Formally proving better guarantees when \(\mathcal{A}_{\theta}(\mu)\neq\emptyset\) is a direction for future work. In fixed-budget GAI, a good strategy has highly different sampling modes depending on whether there is a good arm or not. Since wrongfully committing to one of those modes too early will incur higher error, it is challenging to find the perfect trade-off in an adaptive manner. Designing an algorithm whose guarantees are comparable to (4) for all instances is an open problem.

### Benchmark: Other GAI Algorithms

To go beyond the comparison with Unif, we propose and analyze additional GAI algorithms. A summary of the comparison with APGAI is shown in Table 1.

#### 3.1.1 From BAI to GAI Algorithms

Since a BAI algorithm outputs the arm with the highest mean, it can be adapted to GAI by comparing the mean of the returned arm to the known threshold. We study the GAI adaptations of two fixed-budget BAI algorithms: Successive Rejects (SR) (Audibert et al., 2010) and Sequential Halving (SH) (Karnin et al., 2013). SR-G and SH-G return \(\hat{a}_{T}=\emptyset\) when \(\hat{\mu}_{a_{T}}(T)\leq\theta\) and \(\hat{a}_{T}=a_{T}\) otherwise, where \(a_{T}\) is the arm that would be recommended for the BAI problem, _i.e._ the last arm that was not eliminated. Theorems 5 and 6 in Appendix C give upper bounds on \(P^{\text{err}}_{\nu,\text{SH-G}}(T)\) and \(P^{\text{err}}_{\nu,\text{SR-G}}(T)\) at the fixed budget \(T\).
In the asymptotic regime, their rate is in \(4\log(K)\Delta_{\min}^{-2}\) when \(\mathcal{A}_{\theta}(\mu)=\emptyset\), otherwise \[\mathcal{O}(\log(K)\max\{\max_{a\in\mathcal{A}_{\theta}}\Delta_{a}^{-2},\max_{i>I^{\star}}i(\max_{a\in\mathcal{A}}\mu_{a}-\mu_{(i)})^{-2}\})\] with \(I^{\star}=|\arg\max_{a\in\mathcal{A}}\mu_{a}|\) and \(\mu_{(i)}\) the \(i^{\text{th}}\) largest mean in vector \(\mu\). Recently, Zhao et al. (2023) have provided a finer analysis of SH. Using their result yields mildly improved rates. We defer the reader to Appendix C for further details. Those rates are better than \(2H_{1}(\mu)\) when there is one good arm with large mean and the remaining arms have means slightly smaller than \(\theta\). However, APGAI has better guarantees than SR-G and SH-G when there is one good arm with mean slightly smaller than the largest mean. Doubling Trick The doubling trick allows the conversion of any fixed-budget algorithm into an anytime algorithm. It considers a sequence of algorithms that are run with increasing budgets \((T_{k})_{k\geq 1}\), and recommends the answer outputted by the last instance. Zhao et al. (2023) shows that Doubling SH obtains the same guarantees as SH in BAI, hence Theorem 5 also holds for its GAI counterpart DSH-G (resp. Theorem 6 for DSR-G) at the cost of a multiplicative factor 4 in the rate. Empirically, our experiments show that APGAI is always better than DSR-G and DSH-G (Fig. 1 and 2).

#### 3.1.2 Prior Knowledge-based GAI Algorithms

Several fixed-budget BAI algorithms assume that the agent has access to some prior knowledge on unknown quantities to design upper/lower confidence bounds (UCB/LCB), _e.g._ UCB-E (Audibert et al., 2010) and UGapE (Gabillon et al., 2012). While this assumption is often not realistic, it yields better guarantees. We investigate those approaches for fixed-budget GAI. We propose an elimination-based meta-algorithm for fixed-budget GAI called PKGAI (**P**rior **K**nowledge-based GAI), described in Appendix D. As for BAEC, PKGAI(\(\star\)) takes as input an index policy \(\star\) which is used to define the sampling rule. The main difference to BAEC lies in the definition of the UCB/LCB since they depend both on the budget \(T\) and on knowledge of \(H_{1}(\mu)\) and \(H_{\theta}(\mu)\). We provide upper bounds on the probability of error at time \(T\) holding for any choice of indices (Theorem 7 for PKGAI(\(\star\))) and for uniform round-robin sampling (Theorem 8 for PKGAI(Unif)). The obtained upper bounds on \(P^{\text{err}}_{\nu,\text{PKGAI}}(T)\) are marginally lower than the ones obtained for APGAI, while APGAI does not require the knowledge of \(H_{1}(\mu)\) and \(H_{\theta}(\mu)\). ### Unverifiable Sample Complexity The _unverifiable sample complexity_ was defined in Katz-Samuels and Jamieson (2020) as the smallest stopping time \(\tau_{U,\delta}\) after which an algorithm always outputs a correct answer with probability at least \(1-\delta\). In GAI, this means that algorithm \(\mathfrak{A}\) satisfies \(\mathbb{P}_{\nu}(\bigcup_{t\geq\tau_{U,\delta}}\mathcal{E}^{\text{err}}_{\mathfrak{A}}(t))\leq\delta\). Compared to the fixed-confidence setting, it does not require certifying that the candidate answer is correct. Authors in Zhao et al. (2023) notice that anytime bounds on the error can imply an unverifiable sample complexity bound.
Theorem 3 in Appendix B.3 gives a deterministic upper bound on the unverifiable sample complexity \(\tau_{U,\delta}\) of APGAI, _i.e._ \[U_{\delta}(\mu)=_{\delta\to 0}2\alpha_{i_{\mu}}H_{1}(\mu)\log(1/\delta)+\mathcal{O}(\log\log(1/\delta))\] with \(i_{\mu}=1+(\theta-1)\mathds{1}\left(\mathcal{A}_{\theta}(\mu)\neq\emptyset\right)\) and \((\alpha_{1},\alpha_{\theta})=(9,2)\). While such upper bounds are known in BAI (Katz-Samuels and Jamieson, 2020; Zhao et al., 2023; Jourdan et al., 2023), this is the first result for GAI.

## 4 Fixed-Confidence Guarantees

In some applications, the practitioner has a strict constraint on the confidence \(\delta\) associated with the candidate answer. This constraint simultaneously supersedes any limitation on the sampling budget and allows early stopping when enough evidence is collected (at a random time, since it is data-dependent). In the fixed-confidence setting, an identification strategy should define a stopping rule in addition to the sampling and recommendation rules. Stopping Rule We couple APGAI with the GLR stopping rule (Garivier and Kaufmann, 2016) for GAI (see Appendix E.2), which coincides with the Box stopping rule introduced in Kaufmann et al. (2018). At fixed confidence \(\delta\), we stop at \(\tau_{\delta}:=\min(\tau_{>,\delta},\tau_{<,\delta})\) \[\begin{split}\text{where }\tau_{>,\delta}&:=\inf\{t\mid\max_{a\in\mathcal{A}}W_{a}^{+}(t)\geq\sqrt{2c(t,\delta)}\}\,,\\ \tau_{<,\delta}&:=\inf\{t\mid\min_{a\in\mathcal{A}}W_{a}^{-}(t)\geq\sqrt{2c(t,\delta)}\}\,,\end{split} \tag{5}\] and \(c:\mathbb{N}\times(0,1)\to\mathbb{R}_{+}\) is a threshold function. Proven in Appendix G.1, Lemma 2 gives a threshold ensuring that the GLR stopping rule (5) is \(\delta\)-correct for all \(\delta\in(0,1)\), independently of the sampling rule. **Lemma 2**.: _Let \(\overline{W}_{-1}(x)=-W_{-1}(-e^{-x})\) for all \(x\geq 1\), where \(W_{-1}\) is the negative branch of the Lambert \(W\) function. It satisfies \(\overline{W}_{-1}(x)\approx x+\log x\). Let \(\delta\in(0,1)\). Given any sampling rule, using the threshold_ \[2c(t,\delta)=\overline{W}_{-1}(2\log(K/\delta)+4\log\log(e^{4}t)+1/2) \tag{6}\] _in the GLR stopping rule (5) yields a \(\delta\)-correct algorithm for \(1\)-sub-Gaussian distributions._ Non-asymptotic Upper Bound Theorem 2 gives an upper bound on the expected sample complexity of the resulting algorithm holding for any confidence \(\delta\). **Theorem 2**.: _Let \(\delta\in(0,1)\). Combined with GLR stopping (5) using threshold (6), the APGAI algorithm is \(\delta\)-correct and it satisfies that, for all \(\nu\in\mathcal{D}^{K}\) with mean \(\mu\) such that \(\Delta_{\min}>0\),_ \[\mathbb{E}_{\nu}[\tau_{\delta}]\leq C_{\mu}(\delta)+K\pi^{2}/6+1\;,\] _where \(i_{\mu}:=1+(\theta-1)\mathds{1}\left(\mathcal{A}_{\theta}(\mu)\neq\emptyset\right)\) and \(C_{\mu}(\delta):=\)_ \[\sup\{t\mid t\leq 2H_{i_{\mu}}(\mu)(\sqrt{c(t,\delta)}+\sqrt{3\log t})^{2}+D_{i_{\mu}}(\mu)\}\,,\] _with \(H_{1}(\mu)\) and \(H_{\theta}(\mu)\) as in (1).
\(D_{1}(\mu)\) and \(D_{\theta}(\mu)\) are defined in Lemmas 16 and 18 in Appendix F, satisfying_ \[D_{1}(\mu)\approx_{\Delta_{\min}\to+\infty}D_{\theta}(\mu)=\mathcal{O}(H_{1}(\mu)\log H_{1}(\mu))\;.\] _In the asymptotic regime, we obtain_ \[\limsup_{\delta\to 0}\mathbb{E}_{\nu}[\tau_{\delta}]/\log(1/\delta)\leq 2H_{i_{\mu}}(\mu)\;,\] _since \(C_{\mu}(\delta)=_{\delta\to 0}2H_{i_{\mu}}(\mu)\log(1/\delta)+\mathcal{O}(\log\log(1/\delta))\)._ Most importantly, Theorem 2 holds for any confidence \(\delta\in(0,1)\) and any \(1\)-sub-Gaussian instance \(\nu\). In the asymptotic regime where \(\delta\to 0\), Theorem 2 shows that \(\limsup_{\delta\to 0}\mathbb{E}_{\nu}[\tau_{\delta}]/\log(1/\delta)\leq 2H_{i_{\mu}}(\mu)\). This implies that APGAI is asymptotically optimal for Gaussian distributions when \(\mathcal{A}_{\theta}=\emptyset\). When there are good arms, our upper bound scales as \(H_{\theta}(\mu)\log(1/\delta)\), which is better than the scaling in \(H_{1}(\mu)\log(1/\delta)\) obtained for the unverifiable sample complexity. However, when \(\mathcal{A}_{\theta}\neq\emptyset\), our upper bound is sub-optimal compared to \(2\min_{a\in\mathcal{A}_{\theta}}\Delta_{a}^{-2}\) (see Lemma 1). This sub-optimal scaling stems from the greediness of APGAI when \(\mathcal{A}_{\theta}\neq\emptyset\) since there is no mechanism to detect an arm that is easiest to verify, _i.e._ \(\arg\max_{a\in\mathcal{A}_{\theta}}\Delta_{a}\). Empirically, we observe that APGAI can suffer from large outliers when there are good arms with dissimilar gaps, and that adding forced exploration circumvents this issue (Figure 22 and Table 11 in Appendix I.5). Intuitively, a purely asymptotic analysis of APGAI would yield the dependency \(2\max_{a\in\mathcal{A}_{\theta}}\Delta_{a}^{-2}\) which is independent from \(|\mathcal{A}_{\theta}|\). This intuition is supported by empirical evidence (Figure 3), and we defer the reader to Appendix F.2.1 for more details. Compared to asymptotic results, our non-asymptotic guarantees hold for reasonable values of \(\delta\), with a \(\delta\)-independent scaling of the order \(\mathcal{O}(H_{1}(\mu)\log H_{1}(\mu))\). Comparison With Existing Upper Bounds Table 2 summarizes the asymptotic scaling of the upper bound on the expected sample complexity of existing GAI algorithms. While most GAI algorithms have better asymptotic guarantees when \(\mathcal{A}_{\theta}(\mu)\neq\emptyset\), APGAI is the only one of them which has anytime guarantees on the probability of error (Theorem 1). However, we emphasize that APGAI is not the best algorithm to tackle fixed-confidence GAI since it is designed for anytime GAI. Sticky Track-and-Stop (S-TaS) is asymptotically optimal for the "any low arm" problem (Degenne and Koolen, 2019), hence for GAI as well.

\begin{table} \begin{tabular}{l c c} Algorithm \(\mathfrak{A}\) & \(\mathcal{A}_{\theta}(\mu)=\emptyset\) & \(\mathcal{A}_{\theta}(\mu)\neq\emptyset\) \\ \hline APGAI [Th. 2] & \(H_{1}(\mu)\) & \(H_{\theta}(\mu)\) \\ S-TaS § & \(H_{1}(\mu)\) & \(\overline{\Delta}_{\max}^{-2}\) \\ (Degenne and Koolen, 2019) & & \\ HDoC & \(H_{1}(\mu)\) & \(\overline{\Delta}_{\max}^{-2}\) \\ (Kano et al., 2019) & & \\ APT-G, LUCB-G & \(H_{1}(\mu)\) & \(-\) \\ (Kano et al., 2019) & & \\ \end{tabular} \end{table} Table 2: Asymptotic upper bound \(2C(\mu)\) on the expected sample complexity of algorithm \(\mathfrak{A}\) on \(\nu\), _i.e._ \(\limsup_{\delta\to 0}\mathbb{E}_{\nu}[\tau_{\delta}]/\log(1/\delta)\leq 2C(\mu)\).
(§) Requires an ordering on the possible answers \(\mathcal{A}\cup\{\emptyset\}\). \(H_{1}(\mu)\) and \(H_{\theta}(\mu)\) as in (1), \(\overline{\Delta}_{\max}:=\max_{a\in\mathcal{A}_{\theta}}\Delta_{a}\). Even though GAI is one of the few settings where S-TaS admits a computationally tractable implementation, its empirical performance heavily relies on the fixed ordering of the set of possible answers (see Table 5 in Appendix I.2). This partly explains the lack of non-asymptotic guarantees for S-TaS, which is asymptotic by nature, while APGAI has non-asymptotic guarantees. For the "bad arm existence" problem, Kaufmann et al. (2018) proves that the empirical proportion \((N_{a}(t)/t)_{a\in\mathcal{A}}\) of Murphy Sampling converges almost surely towards the optimal allocation realizing the asymptotic lower bound of Lemma 1. While their result implies that \(\lim_{\delta\to 0}\tau_{\delta}/\log(1/\delta)=T^{*}(\mu)\) almost surely, the authors provide no upper bound on the expected sample complexity of Murphy Sampling. Finally, we consider the AllGAI algorithms introduced in Kano et al. (2019) (HDoC, LUCB-G and APT-G) which enjoy theoretical guarantees for some GAI instances as well. When \(\mathcal{A}_{\theta}(\mu)=\emptyset\), all three algorithms have an upper bound of the form \(2H_{1}(\mu)\log(1/\delta)+\mathcal{O}(\log\log(1/\delta))\). When \(\mathcal{A}_{\theta}(\mu)\neq\emptyset\), only HDoC admits an upper bound on the expected time to return one good arm, which is of the form \(2\min_{a\in\mathcal{A}_{\theta}}\Delta_{a}^{-2}\log(1/\delta)+\mathcal{O}(\log\log(1/\delta))\). The indices used for the elimination and recommendation in BAEC (Tabata et al., 2020) have a dependence in \(\mathcal{O}(-\log(\theta_{U}-\theta_{L}))\), hence BAEC is not defined for GAI where \(\theta_{U}=\theta_{L}\). While it is possible to use UCB/LCB which are agnostic to the gap \(\theta_{U}-\theta_{L}>0\), these choices have not been studied in Tabata et al. (2020). Extrapolating the theoretical guarantees of BAEC when \(\theta_{L}\to\theta_{U}\), one would expect an upper bound on its expected sample complexity of the form \(2H_{1}(\mu)\log(1/\delta)+\mathcal{O}((\log(1/\delta))^{2/3})\).

## 5 Experiments

We assess the empirical performance of APGAI in terms of empirical error, as well as empirical stopping time. Overall, APGAI performs favorably compared to other algorithms in both settings. Moreover, its empirical performance exceeds what its theoretical guarantees would suggest. This discrepancy between theory and practice paves the way for interesting future research. We present a fraction of our experiments, and defer the reader to Appendix I for supplementary experiments. Outcome Scoring Application Our real-life motivation is outcome scoring from gene activity (transcriptomic) data. This application is focused on the treatment of encephalopathy of prematurity in infants. The goal is to determine the optimal protocol for the administration of stem cells among \(K=18\) realistic possibilities. Our collaborators tested all treatments, and made RNA-related measurements on treated samples. Computed on 3 technical replicates, the mean value in \([-1,1]\) (see Table 3 in Appendix I.1) corresponds to a cosine score computed between gene activity changes in treated and healthy samples. When the mean is higher than \(\theta=0.5\), the treatment is considered significantly positive. Traditional approaches use grid-search with a uniform allocation.
We model this application as a Bernoulli instance, _i.e._ observations from arm \(a\) are drawn from a Bernoulli distribution with mean \(\max(\mu_{a},0)\) (which is \(1/2\)-sub-Gaussian). Fixed-budget Empirical Error The APGAI algorithm is compared to fixed-budget GAI algorithms: SR-G, SH-G, PKGAI and Unif. For a fair comparison, the threshold functions in PKGAI do not use prior knowledge (see Appendix I.2.2, where theoretical thresholds are also considered). Several index policies are considered for PKGAI: Unif, APT\({}_{P}\), UCB and LCB-G. At time \(t\), the latter selects among the set \(\mathcal{S}_{t}\) of active candidates \(a_{t}\leftarrow\arg\max_{a\in\mathcal{S}_{t}}\sqrt{N_{a}(t)}\text{LCB}(a,t)\), where \(\text{LCB}(a,t)\) is the lower confidence bound on \(\mu_{a}-\theta\) at time \(t\). For a budget \(T\) up to \(200\), our results are averaged over \(1,000\) runs, and confidence intervals are displayed. On our outcome scoring application, Figure 1 first shows that all uniform samplings (SH-G, SR-G, Unif and PKGAI(Unif)) are less efficient at detecting one of the good arms than the adaptive strategies. Moreover, APGAI actually performs as well as the elimination-based algorithms PKGAI(\(\star\)), while allowing early stopping as well. In Appendix I.3, we confirm the good performance of APGAI in terms of fixed-budget empirical error on other instances.

Figure 1: Fixed-budget empirical error on our outcome scoring application (see RealL in Table 3).

Anytime Empirical Error The APGAI algorithm is compared to anytime GAI algorithms: DSR-G, DSH-G (see Section 3.1.1) and Unif. Since DSH-G has poor empirical performance (see Figure 4), we consider the heuristic DSH-G-WR where each SH instance keeps its history instead of discarding it. On two Gaussian instances (\(\mathcal{A}_{\theta}(\mu)\neq\emptyset\) and \(\mathcal{A}_{\theta}(\mu)=\emptyset\)), Figure 2 shows that APGAI has significantly smaller empirical error compared to Unif, which is itself better than DSR-G and DSH-G-WR. Our results are averaged over \(10,000\) runs, and confidence intervals are displayed. In Appendix I.4, we confirm the good performance of APGAI in terms of anytime empirical error on other instances, _e.g._ when \(\mathcal{A}_{\theta}(\mu)\neq\emptyset\) (Figure 18) and when \(|\mathcal{A}_{\theta}(\mu)|\) varies (Figure 16). Overall, APGAI appears to have better empirical performance than suggested by Theorem 1 when \(\mathcal{A}_{\theta}(\mu)\neq\emptyset\). Empirical Stopping Time The APGAI algorithm is compared to fixed-confidence GAI algorithms using the GLR stopping rule (5) with threshold (6) and confidence \(\delta=0.01\): Murphy Sampling (MS (Kaufmann et al., 2018)), HDoC, LUCB-G (Kano et al., 2019), Track-and-Stop for GAI (TaS (Garivier and Kaufmann, 2016)) and Unif (see Appendix I.2.3). In Figure 3, we study the impact of the number of good arms by considering Gaussian instances with two groups of arms. Our results are averaged over \(1,000\) runs, and the standard deviations are displayed. Figure 3 shows that the empirical performance of APGAI is invariant to varying \(|\mathcal{A}_{\theta}|\), and comparable to that of TaS. In comparison, the other algorithms have worse performance, and they suffer from increased \(|\mathcal{A}_{\theta}|\) since they have an exploration bonus for each good arm. In contrast, APGAI is greedy enough to focus its allocation on only one of the good arms.
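For readers who wish to reproduce experiments of this kind qualitatively, below is a self-contained Python sketch running APGAI with the GLR stopping rule (5) and threshold (6) on a Gaussian instance; the instance, confidence level and time horizon are illustrative, and \(\overline{W}_{-1}\) is computed via the \(k=-1\) branch of `scipy.special.lambertw`:

```python
import numpy as np
from scipy.special import lambertw

def w_bar(x):
    # \bar W_{-1}(x) = -W_{-1}(-e^{-x}), defined for x >= 1
    return float(np.real(-lambertw(-np.exp(-x), k=-1)))

def run_apgai_glr(mu, theta, delta, seed=0, t_max=100_000):
    rng = np.random.default_rng(seed)
    K = len(mu)
    counts = np.ones(K)             # n_0 = 1 pull per arm
    sums = rng.normal(mu, 1.0)      # unit-variance Gaussian rewards
    for t in range(K, t_max):
        means = sums / counts
        gaps = means - theta
        w_plus = np.sqrt(counts) * np.maximum(gaps, 0.0)
        w_minus = np.sqrt(counts) * np.maximum(-gaps, 0.0)
        # GLR stopping: 2c(t, delta) as in (6)
        two_c = w_bar(2*np.log(K/delta) + 4*np.log(np.log(np.e**4 * t)) + 0.5)
        if w_plus.max() ** 2 >= two_c:
            return int(np.argmax(w_plus)), t   # answer: a good arm
        if w_minus.min() ** 2 >= two_c:
            return None, t                     # answer: the empty set
        # APGAI sampling rule
        a = int(np.argmin(w_minus)) if means.max() <= theta else int(np.argmax(w_plus))
        counts[a] += 1
        sums[a] += rng.normal(mu[a], 1.0)
    return None, t_max

# Illustrative instance: two good arms above theta = 0.5
print(run_apgai_glr(mu=np.array([0.2, 0.4, 0.7, 0.9]), theta=0.5, delta=0.01))
```

Averaging such runs over many seeds and instances is enough to observe qualitatively the behavior discussed above, such as the insensitivity of the stopping time to \(|\mathcal{A}_{\theta}|\).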
While APGAI achieves the best performance when there is no good arm, it can suffer from large outliers when good arms have dissimilar means (Figure 22 in Appendix I.5). To circumvent this problem, it is enough to add forced exploration to APGAI (Table 11). While APGAI was designed for anytime GAI, it is remarkable that it also has theoretical guarantees in fixed-confidence GAI, and a relatively small empirical stopping time.

## 6 Perspectives

We propose APGAI, the first anytime and parameter-free sampling strategy for GAI in stochastic bandits, which is independent of a budget \(T\) or a confidence \(\delta\). In addition to showing its good empirical performance, we also provided guarantees on its probability of error at any deterministic time \(t\) (Theorem 1) and on its expected sample complexity at any confidence \(\delta\) when combined with the GLR stopping time (5) (Theorem 2). As such, APGAI allows both continuation and early stopping. We reviewed and analyzed a large number of baselines for each GAI setting for comparison. While we considered unstructured multi-armed bandits, many applications have a known structure. Investigating the GAI problem on, _e.g._, linear or infinitely-armed bandits would be an interesting subsequent work. In particular, working in a structured framework when facing a possibly infinite number of arms would bring out more compelling questions about how to explore the arm space in a way that is both tractable and meaningful.

## Acknowledgements

Experiments presented in this paper were carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations (see [https://www.grid5000.fr](https://www.grid5000.fr)). This work has been partially supported by the THIA ANR program "AI_PhD@Lille".
2308.12704
Embedding Ultra slow-roll inflaton dynamics in Warm Inflation
Slow-roll of the inflaton field defines the standard dynamics of the inflationary epoch. However, the inflationary system deviates from slow-roll when it encounters an extremely flat region of the inflaton potential, and enters a phase dubbed Ultra slow roll. In this article, we explore the possibility of realizing an Ultra slow-roll phase in a particularly interesting inflationary scenario, called Warm Inflation. In the Warm inflationary scenario a thermalized, sub-dominant radiation bath coexists with the inflaton energy density as an effect of dissipative dynamics. We show in this article that though the background dynamics indicate Ultra slow-roll when the potential becomes extremely flat, in Warm Inflation models, where the dissipation coefficient is a sole function of the temperature of the radiation bath, the system fails to maintain the thermal equilibrium as soon as it enters the Ultra slow-roll phase. As thermal equilibrium is a key feature of Warm Inflation, and as it is not yet known how to deal with Warm Inflation without thermal equilibrium, we could not analyze such systems any further in this article. However, we demonstrate that brief periods of Ultra slow-roll phase, which smoothly ends into standard slow-roll, can be accommodated in WI models where the dissipation coefficient is not only a function of the temperature of the radiation bath but also depends on the amplitude of the inflaton field. We theoretically determine the criteria of successfully embedding Ultra slow-roll in WI while the system remain in thermal equilibrium, and also demonstrate numerically that such short Ultra slow-roll phases can indeed be embedded in specific Warm Inflation models which comply with the theoretically determined criteria.
Sandip Biswas, Kaushik Bhattacharya, Suratna Das
2023-08-24T10:50:21Z
http://arxiv.org/abs/2308.12704v2
# Embedding Ultra slow-roll in Warm Inflation

###### Abstract

Slow-roll of the inflaton field defines the standard dynamics of the inflationary epoch. However, the inflationary system deviates from slow-roll when it encounters an extremely flat region of the inflaton potential, and enters a phase dubbed Ultra slow roll. In this article, we explore the possibility of realizing an Ultra slow-roll phase in a particularly interesting inflationary scenario, called Warm Inflation. In the Warm inflationary scenario a thermalized, sub-dominant radiation bath coexists with the inflaton energy density as an effect of dissipative dynamics. We show in this article that though the background dynamics indicate Ultra slow-roll when the potential becomes extremely flat, in Warm Inflation models, where the dissipation coefficient is a sole function of the temperature of the radiation bath, the system fails to maintain the thermal equilibrium as soon as it enters the Ultra slow-roll phase. As thermal equilibrium is a key feature of Warm Inflation, and as it is not yet known how to deal with Warm Inflation without thermal equilibrium, we could not analyze such systems any further in this article. However, we demonstrate that brief periods of Ultra slow-roll phase, which smoothly end into standard slow-roll, can be accommodated in WI models where the dissipation coefficient is not only a function of the temperature of the radiation bath but also depends on the amplitude of the inflaton field. We theoretically determine the criteria for successfully embedding Ultra slow-roll in WI while the system remains in thermal equilibrium, and also demonstrate numerically that such short Ultra slow-roll phases can indeed be embedded in specific Warm Inflation models which comply with the theoretically determined criteria.

## I Introduction

Cosmic Inflation [1; 2; 3; 4; 5; 6], a brief period of near-exponential expansion of the early universe, has now become an integral part of the standard big bang cosmology that not only solves the fine-tuning problems of the hot big bang model but also helps generate the seeds for the large-scale structures we see today. In the standard scenario, cosmic inflation is driven by the potential energy of a slow-rolling scalar field, dubbed inflaton. To maintain the slow-rolling of the field during inflation, the potential of the inflaton field needs to be sufficiently flat. However, there are regimes where the slow-rolling conditions cannot remain valid. One such situation occurs when the potential becomes extremely flat, such as around inflection points. The system then deviates from the standard slow-rolling and enters a phase of so-called "Ultra slow-roll" [7]. The term 'Ultra slow-roll' was first coined in [8], and the dynamics of the system during such a phase were studied. However, a similar situation was first investigated in [9]. The attractive behaviour and stability of an Ultra slow-roll phase were investigated in [10]. Such a departure from slow-roll often turns out to be useful, as such a phase allows the cosmological perturbations to grow sufficiently to generate Primordial Black Holes [11], a viable candidate for Dark Matter [12].1 Footnote 1: An issue regarding whether or not the large-scale cosmological perturbations, probed by the Cosmic Microwave Background, receive large non-perturbative corrections from the enhanced perturbations on small scales due to Ultra slow-roll is being debated in the literature [13; 14; 15; 16].
If they do, then such mechanisms of generating Primordial Black Holes will be ruled out. The issue is yet to be resolved. In the standard inflationary scenario, the couplings of the inflaton field with other particles are considered to be negligible. Thus, all other energy densities present before inflation are diluted away exponentially by the time inflation ends, and, therefore, to initiate the standard hot big bang evolution post-inflation, a separate phase of reheating [17] is called for. This standard inflationary scenario will be referred to as 'Cold Inflation' (CI) in this article. However, there is an alternate inflationary scenario, dubbed 'Warm Inflation' (WI) [18] (for recent reviews on WI, see, e.g., [19; 20]), where those couplings of the inflaton field to other particles play a significant role during the inflationary evolution in dissipating the inflaton's energy density into a subdominant, yet non-negligible, radiation bath. As a constant radiation bath is maintained throughout WI, this helps WI transition smoothly into a radiation-dominated universe once WI ends. Thus, WI, unlike CI, does not call for a reheating phase post inflation. Moreover, despite being sub-dominant, the produced radiation energy density, \(\rho_{r}\), satisfies the condition \(\rho_{r}^{1/4}>H\) (where \(H\) is the Hubble parameter during inflation), which upon assuming thermalization of the radiation bath yields the condition \(T>H\) (\(T\) being the temperature of the thermalized radiation bath). This thermalization condition is maintained throughout the evolution of WI, and plays a major role in determining the cosmological perturbations produced during WI [19; 20]. Graceful exit of WI, which is a much more complex process than in CI, has been extensively studied in [21]. WI has certain attractive features compared to its more conventional counterpart, CI. First of all, as mentioned above, WI is not in need of an extra reheating phase -- the physics of which is not yet fully understood. Secondly, WI yields a more enhanced scalar curvature power spectrum compared to CI [22; 23], which lowers the tensor-to-scalar ratio significantly. This allows WI to accommodate potentials, such as quartic self-coupling potentials (\(\lambda\phi^{4}\)), which are otherwise ruled out in CI for generating too large a tensor-to-scalar ratio [24]. It has been shown in [25] that the observed baryon asymmetry in nature can be explained by the dissipative effects of WI alone, an effect which is absent in CI. This also leads to observable baryon isocurvature perturbations which can help in checking the consistency of the WI models [26]. Certain WI models can also generate Primordial Black Holes without invoking any departure from slow-roll [27; 28; 29; 30]. Moreover, it has been shown recently that, while CI fails to comply [31] with the de Sitter Swampland Conjecture in String Theory proposed in [32; 33], one can easily overcome the obstacles in WI due to its very construction and can successfully accommodate the criteria of the conjecture within the framework of WI [34; 35; 36; 37; 38; 39]. Therefore, WI is preferred over CI as an inflationary paradigm in low energy effective field theories which descend from ultraviolet-complete theories of gravity, such as String Theory. In this article, we investigate what happens to the dynamics of WI when it encounters an extremely flat region of a potential, like an inflection point.
We discussed before that, in CI, the system significantly deviates from slow-rolling and enters a phase of Ultra slow-roll in a similar situation. Thus, can we expect a similar behaviour in WI as well? We found that though the background dynamics do show signatures of Ultra slow-roll, the system deviates from its thermal equilibrium exponentially fast in models of WI where the dissipative coefficient is a sole function of the temperature of the radiation bath. As thermal equilibrium of the system is a key feature of WI, it is not clear what happens to the dynamics of WI if the thermal equilibrium is lost during any phase of its evolution. Therefore, we could not analyze such systems any further in this article. However, we showed that WI models with dissipative coefficients depending on the temperature of the thermal bath as well as on the amplitude of the inflaton field can successfully realize brief periods of Ultra slow-roll while maintaining the overall thermalization of the system. We have organized the rest of the article as follows. In Sec. (II), we analyze the Ultra slow-roll dynamics in standard CI and define the criteria which distinguish Ultra slow-roll from slow-roll in terms of Hubble slow-roll parameters. In Sec. (III), we briefly discuss the WI dynamics under slow-roll and show how the thermalization of the system is maintained during the slow-roll phase. In Sec. (IV) we determine the theoretical criteria which can lead to an Ultra slow-roll phase in WI while maintaining thermal equilibrium of the system. This is followed by Sec. (V), where we analyze numerically a specific WI model (which can accommodate an Ultra slow-roll phase according to the theoretical criteria developed in Sec. (IV)) with two potentials with extremely flat regions, and show that brief periods of Ultra slow-roll phases can indeed be realized while maintaining the thermalization of the system. We also show that the system smoothly enters a standard slow-roll phase after these brief periods of Ultra slow-roll. In Sec. (VI), we discuss the main results obtained in this article and then conclude.

## II Ultra slow-roll in cold inflation

To figure out how WI behaves when the potential becomes extremely flat, we need to first understand how the dynamics deviates from slow-roll (and enters an Ultra slow-roll phase) in CI in a similar situation. In this section we will closely follow the arguments given in [7], and, in the following section, we will generalise the arguments presented here for the case of WI. In canonical CI models, a single scalar inflaton field, \(\phi\), evolves according to the Klein-Gordon equation given as \[\ddot{\phi}+3H\dot{\phi}+V_{,\phi}=0, \tag{2.1}\] where the overdot denotes derivative with respect to the cosmic time \(t\) and \(V_{,\phi}=dV/d\phi\). Following [7], we will call the three terms in the above equation the acceleration term, the friction term, and the slope term, respectively. The Friedmann-Lemaitre-Robertson-Walker scale factor, \(a(t)\), evolves according to the Friedmann equations: \[3M_{\rm Pl}^{2}H^{2} = \frac{\dot{\phi}^{2}}{2}+V(\phi), \tag{2.2}\] \[2M_{\rm Pl}^{2}\dot{H} = -\dot{\phi}^{2}, \tag{2.3}\] where \(H\equiv\dot{a}/a\) is the Hubble parameter and \(M_{\rm Pl}\) is the reduced Planck mass. The background evolution is characterized in terms of the Hubble slow-roll parameters defined by [40]: \[\epsilon_{i+1}=\frac{d\ln\epsilon_{i}}{dN}, \tag{2.4}\] where \(N\equiv\ln a\) denotes the number of e-folds.
Starting with \(\epsilon_{0}\propto 1/H\), the first slow-roll parameter \(\epsilon_{1}\) and the second slow-roll parameter \(\epsilon_{2}\) can be expressed as \[\epsilon_{1} \equiv -\frac{\dot{H}}{H^{2}}, \tag{2.5}\] \[\epsilon_{2} \equiv \frac{\dot{\epsilon}_{1}}{\epsilon_{1}H}=2\epsilon_{1}+\frac{ \ddot{H}}{H\dot{H}}. \tag{2.6}\] Inflation requires \(\epsilon_{1}<1\), and the validity of the slow-roll approximation is ensured by the general conditions \(\epsilon_{1},|\epsilon_{2}|\ll 1\). During inflation the potential term dominates over the kinetic term. Thus, from the Friedmann equations given in Eq. (2.2) and Eq. (2.3), we see that \(\epsilon_{1}\sim 3\dot{\phi}^{2}/(2V(\phi))\), ensuring \(\epsilon_{1}\ll 1\) during inflation. And during slow-roll, the acceleration term (\(\ddot{\phi}\)) is negligible compared to the friction (\(H\dot{\phi}\)) and the slope (\(V_{,\phi}\)) terms in the Klein-Gordon equation given in Eq. (2.1), which then can be approximated as \[3H\dot{\phi}+V_{,\phi}\simeq 0. \tag{2.7}\] This suggests that \[\frac{\ddot{H}}{H\dot{H}}=\frac{2\ddot{\phi}}{H\dot{\phi}}\simeq 2\epsilon_{1}- 2\eta_{V}, \tag{2.8}\] where \(\eta_{V}\) is one of the two potential slow-roll parameters: \[\epsilon_{V}\equiv\frac{M_{\rm Pl}^{2}}{2}\left(\frac{V_{,\phi}}{V}\right)^{2 },\qquad\eta_{V}\equiv M_{\rm Pl}^{2}\frac{V_{,\phi\phi}}{V}. \tag{2.9}\] Both these potential slow-roll parameters quantify the flatness of the inflaton potential during inflation. As the potential needs to be nearly flat for the slow-rolling of the field, the potential slow-roll parameters need to be much smaller than unity during slow-roll inflation. Therefore, from Eq. (2.6) we see that \(|\epsilon_{2}|\sim|4\epsilon_{1}-2\eta_{V}|\ll 1\). Thus, both \(\epsilon_{1}\) and \(|\epsilon_{2}|\), being much smaller than unity, ensure the slow-roll dynamics of the inflaton field. However, when the potential becomes extremely flat, the slope term in the Klein-Gordon equation becomes negligible, and it becomes \[\ddot{\phi}+3H\dot{\phi}\simeq 0, \tag{2.10}\] yielding \(\epsilon_{2}\sim-6+2\epsilon_{1}\). We note that \(\epsilon_{1}\) remains much smaller than unity even when the potential becomes extremely flat, as the potential term dominates over the kinetic term. Therefore, inflation does not stop when the potential becomes extremely flat. However, \(\epsilon_{2}\) becomes of the order of unity (\(\epsilon_{2}\sim-6\)). This clearly indicates that the scalar field dynamics deviates from slow-roll when the potential becomes extremely flat, and enters a new phase of evolution, dubbed the Ultra slow-roll.
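To make this transition explicit, the CI background equations can be integrated directly. The following minimal sketch (our own illustration in reduced Planck units, not code from [7]; the initial field velocity is chosen by hand so that the slope term starts out negligible) evolves Eqs. (2.1)-(2.2) on the linear plateau potential used later in Sec. (V) and tracks the Hubble slow-roll parameters; it gives \(\epsilon_{2}\approx-6\) during the flat stretch and \(\epsilon_{2}\) relaxing toward zero once slow-roll resumes.

```python
import numpy as np
from scipy.integrate import solve_ivp

Mpl = 1.0                            # reduced Planck units
V0, M0 = (1e-4)**4, 2.5e-8           # plateau parameters quoted later in Fig. (1)
V = lambda f: V0 + M0**3 * f         # linear potential of Sec. (V)
dV = lambda f: M0**3

def rhs(N, y):
    """CI background, Eqs. (2.1)-(2.2), written in e-folds (dN = H dt)."""
    phi, phidot = y
    H = np.sqrt((0.5 * phidot**2 + V(phi)) / (3.0 * Mpl**2))
    return [phidot / H, -3.0 * phidot - dV(phi) / H]

# initial field velocity is a hand-picked assumption, not a value from the paper
sol = solve_ivp(rhs, (0.0, 8.0), [1e-3, -1e-2 * np.sqrt(V0)],
                dense_output=True, rtol=1e-10, atol=1e-30)

N = np.linspace(0.0, 8.0, 400)
phi, phidot = sol.sol(N)
H = np.sqrt((0.5 * phidot**2 + V(phi)) / (3.0 * Mpl**2))
eps1 = 0.5 * (phidot / (Mpl * H))**2          # Eq. (2.5)
eps2 = np.gradient(np.log(eps1), N)           # Eq. (2.4)
print(eps2[0], eps2[-1])   # ~ -6 at first (Ultra slow-roll), small after the exit
```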
## III Warm Inflation: the Slow-roll Regime During WI, the inflaton field, \(\phi\), dissipates its energy to a radiation bath, maintaining a non-negligible radiation energy density, \(\rho_{r}\), throughout. This feature distinguishes WI from the standard CI scenario. Therefore, the equation of motion of the inflaton field also differs from that in the CI scenario. The equations governing the dynamics of the inflaton field, \(\phi\), and the radiation bath, \(\rho_{r}\), in WI can be written as \[\ddot{\phi}+3H\dot{\phi}+V_{,\phi}=-\Upsilon(\phi,T)\dot{\phi}, \tag{3.1}\] \[\dot{\rho}_{r}+4H\rho_{r}=\Upsilon(\phi,T)\dot{\phi}^{2}. \tag{3.2}\] Here, \(\Upsilon\) is the dissipative term which can depend on the amplitude of the inflaton field, \(\phi\), as well as the temperature of the radiation bath, \(T\). It is assumed that the radiation bath, generated by the dissipation of the inflaton field, is in near thermal equilibrium throughout WI, and thus a temperature \(T\) can be defined. Different WI models make use of different functional forms of this dissipative coefficient; see, e.g., [41] for different dissipative coefficients used in WI models and their microphysical origin. However, we can write a general form of all these dissipative coefficients used in different models as \[\Upsilon(\phi,T)=C_{\Upsilon}T^{p}\phi^{c}M^{1-p-c}, \tag{3.3}\] where \(C_{\Upsilon}\) is a dimensionless constant carrying the signatures of the microscopic model used to derive the dissipative coefficient (such as the different coupling constants), and \(M\) is some appropriate mass-scale, so that the dimensionality of the dissipative coefficient is preserved, \([\Upsilon]=\)[mass] in our system of units where \(\hbar=c=1\). The numerical powers of \(T\) and \(\phi\), which are \(p\) and \(c\) respectively, can take positive or negative values; however, we will restrict ourselves to \(|p|<4\) due to the stability of the WI models [42; 43]. We define the dimensionless parameter \(Q\) as \[Q\equiv\frac{\Upsilon}{3H}, \tag{3.4}\] which is the ratio of the two friction terms appearing in the equation of motion of the inflaton field, one due to dissipation (\(\Upsilon\dot{\phi}\)) and the other due to the Hubble expansion (\(3H\dot{\phi}\)). If the Hubble friction dominates over the friction due to dissipation, i.e. \(Q<1\), we call it a weak dissipative regime of WI. On the other hand, when the friction due to dissipation dominates the equation of motion of the inflaton field (\(Q>1\) in such cases), we call it a strong dissipative regime of WI. Apart from the two potential slow-roll parameters defined in Eq. (2.9), there are two additional slow-roll parameters in WI [44; 22]: \[\beta\equiv M_{P}^{2}\frac{\Upsilon_{,\phi}V_{,\phi}}{\Upsilon V},\qquad\delta \equiv\frac{TV_{,\phi T}}{V_{,\phi}}, \tag{3.5}\] which are required to ensure the slow-roll dynamics. During slow-roll \(\epsilon_{V}\), \(\eta_{V}\), \(\beta\) and \(\delta\) are all smaller than \(1+Q\) in both the weak and the strong dissipative regimes. In the slow-roll regime, i.e. when the acceleration term in Eq. (3.1) is negligible with respect to all the other terms, we can approximate this equation as \[3H(1+Q)\dot{\phi}+V_{,\phi}\approx 0. \tag{3.6}\] Moreover, as the inflaton potential energy density always dominates over the kinetic term and the radiation energy density during WI, the Friedmann equation given in Eq. (2.2) can be written as \[3M_{\rm Pl}^{2}H^{2}\approx V(\phi). \tag{3.7}\] As during WI a constant radiation bath is maintained by the dissipation of energy of the inflaton field into the radiation bath, we can assume \(\dot{\rho}_{r}\approx 0\), and can approximate Eq. (3.2) as \[4H\rho_{r}\approx\Upsilon\dot{\phi}^{2}. \tag{3.8}\] Also, as the radiation bath is near equilibrium, we can define a temperature \(T\) of the radiation bath as \[\rho_{r}=\frac{\pi^{2}}{30}g_{*}T^{4}, \tag{3.9}\] where \(g_{*}\) is the relativistic degrees of freedom of the radiation bath. We can then determine the evolution of \(T\) with respect to the e-foldings \(N\) during slow-roll as \[\frac{d\ln T}{dN}=\frac{\dot{T}}{HT}\simeq\frac{1}{4-p}\left( \frac{\epsilon_{V}}{1+Q}-\frac{\beta}{1+Q}\right)\ll 1. \tag{3.10}\] Here we have used the relation \(dN=Hdt\).
The above equation ensures that during slow-roll, the temperature of the radiation bath evolves very slowly, maintaining the near-equilibrium condition. With all these conditions of slow-roll WI, we find that during a slow-roll phase in WI \[\epsilon_{1} = \frac{\epsilon_{V}}{1+Q}, \tag{3.11}\] \[\epsilon_{2} = -2\frac{\eta_{V}}{1+Q}+\left(\frac{4+3Q}{1+Q}-\frac{p}{4-p}\frac {Q}{1+Q}\right)\frac{\epsilon_{V}}{1+Q}\] (3.12) \[+\left(\frac{4}{4-p}\frac{Q}{1+Q}\right)\frac{\beta}{1+Q}.\] In deriving these relations we have used the general form of the dissipative coefficient given in Eq. (3.3). As \(\epsilon_{V}\), \(\eta_{V}\) and \(\beta\) are all smaller than \(1+Q\) during slow-roll, we see that \(\epsilon_{1}\) and \(|\epsilon_{2}|\) are both smaller than unity during slow-roll. In the next section we will see that this situation will change when the potential becomes extremely flat, as it happens in CI. ## IV Realising ultra slow-roll within warm inflation We noted in Sec. (II) that in CI, the evolution enters an Ultra slow-roll phase when the potential becomes extremely flat, and we quantified it by showing that the magnitude of the second Hubble slow-roll parameter, \(\epsilon_{2}\), becomes of the order of unity, while the first Hubble slow-roll parameter, \(\epsilon_{1}\), remains much smaller than 1. Does a similar situation arise in WI when the potential becomes extremely flat? While answering this question, one also needs to keep in mind that WI maintains a radiation bath in near equilibrium throughout the slow-roll evolution. Can WI enter an Ultra slow-roll phase while maintaining the thermal equilibrium of the radiation bath present during the evolution? To answer these questions, we first determine the evolution of the temperature \(T\) for a phase when the potential becomes extremely flat. During such an evolution, as \(V_{,\phi}\approx 0\), we can approximate Eq. (3.1) as \[\ddot{\phi}+3H(1+Q)\dot{\phi}\approx 0. \tag{4.1}\] We will assume that during such evolution a constant radiation bath is maintained, and that the radiation bath will remain near thermal equilibrium so that a temperature \(T\) can be defined. In such a case, both Eq. (3.8) and Eq. (3.9) would remain valid. From Eq. (4.1), Eq. (3.8) and Eq. (3.9) it is straightforward to show that the temperature \(T\) evolves as \[\frac{d\ln T}{dN}=\frac{1}{4-p}\left[c\frac{\dot{\phi}}{H\phi}+ \epsilon_{1}-6(1+Q)\right]\,, \tag{4.2}\] when the potential becomes extremely flat. We can see here that the first and the last terms on the right hand side of the above equation are not proportional to any slow-roll parameters and can take large values. Thus, comparing this equation with Eq. (3.10), which depicts the evolution of \(T\) during slow-roll in WI, we see that one cannot conclude right away that the temperature will evolve slowly during the evolution through an extremely flat region of the potential. Can we make the temperature evolve slowly in such a case so that the system can evolve being near equilibrium? We will address this question by the end of this section. We will now see what happens to the first Hubble slow-roll parameter, \(\epsilon_{1}\), when the potential becomes extremely flat. We note that, in WI, \(-2M_{\rm Pl}^{2}\dot{H}=\rho_{\phi}+P_{\phi}+\rho_{r}+P_{r}=\dot{\phi}^{2}+(4/3 )\rho_{r}\), where \(\rho_{\phi}\) and \(P_{\phi}\) are the energy density and the pressure of the inflaton field, and similarly, \(\rho_{r}\) and \(P_{r}\) are the radiation energy density and radiation pressure, respectively.
However, using Eq. (3.8), we can write \[-2M_{\rm Pl}^{2}\dot{H}=(1+Q)\dot{\phi}^{2}. \tag{4.3}\] Thus, using the above equation and Eq. (3.7), the first Hubble slow-roll parameter can be expressed as \[\epsilon_{1}\sim\frac{3}{2}\frac{(1+Q)\dot{\phi}^{2}}{V(\phi)}. \tag{4.4}\] Now, as during inflation the potential energy density dominates over the kinetic energy density, we can see that in the weak dissipative regime (\(Q<1\)) \(\epsilon_{1}\) can remain much smaller than unity, ensuring that inflation continues even when the potential becomes extremely flat. However, in the strong dissipative regime, there are models where the \(Q\) values can be as large as \(\mathcal{O}(10^{3})\) [38; 39; 45]. In such models, it might happen that when the potential becomes extremely flat, \(Q\dot{\phi}^{2}\sim V(\phi)\), yielding \(\epsilon_{1}\sim 1\), which is an indication of the end of inflation. But there are WI models realised in strong dissipation [46] where \(Q\) is of the order of 10 or 100. In such models, inflation continues to take place even when the potential becomes extremely flat. Now, to determine the second Hubble slow-roll parameter, \(\epsilon_{2}\), during such an evolution, we use Eq. (4.1), Eq. (4.2) and Eq. (4.3) to show that \[\frac{\ddot{H}}{\dot{H}H} = -6(1+Q)+\frac{4}{4-p}\frac{\Upsilon_{,\phi}\dot{\phi}}{3H^{2}(1+Q)} \tag{4.5}\] \[+\left[\frac{1}{4-p}\frac{\Upsilon_{,T}T}{\Upsilon}+1\right]\frac {Q}{1+Q}\epsilon_{1}-\frac{6Qp}{4-p}\,.\] Using this relation in Eq. (2.6), we see that \[\epsilon_{2}=-6\left(1+\frac{4Q}{4-p}\right)+\frac{4}{4-p}\left[c \frac{\dot{\phi}}{H\phi}+\epsilon_{1}\right]\frac{Q}{1+Q}+2\epsilon_{1}, \tag{4.6}\] where we have again used the general form of the dissipative coefficient given in Eq. (3.3). Let us appraise the situation in two different regimes, strong dissipation and weak dissipation, separately. In the strong dissipative regime (\(Q\gg 1\)), Eq. (4.6) can be approximated as \[\epsilon_{2}\sim\frac{4}{4-p}\left[-6Q+c\frac{\dot{\phi}}{H\phi}+ \epsilon_{1}\right]+2\epsilon_{1}\,. \tag{4.7}\] There are a couple of models where WI can be realised in the strong dissipative regime: one is presented in [46], and the other is presented in [45] (this model is dubbed Minimal Warm Inflation). In the strong dissipative WI model presented in [46], the dissipative coefficient, despite having a complex form, varies inversely with the temperature (\(\Upsilon\propto T^{-1}\)) during the evolution, and does not depend on the inflaton amplitude. Therefore, for this kind of model \(p=-1\) and \(c=0\). This yields \(\epsilon_{2}\sim(-24/5)Q+(14/5)\epsilon_{1}\). We have mentioned earlier that in this model, \(Q\) can be of the order of 10 or 100 (yielding \(\epsilon_{1}\ll 1\)), and thus the model can enter an Ultra slow-roll phase while inflating. However, looking at Eq. (4.2), we note that \(d\ln T/dN\sim-(6/5)Q\), and therefore, the temperature decreases exponentially with the e-foldings during the Ultra slow-roll phase. It indicates that the thermal equilibrium of the radiation bath cannot be maintained during Ultra slow-roll in such models, and the system will deviate from the basic WI picture. In the Minimal Warm Inflation model [45], the dissipative coefficient varies with the cubic power of temperature (\(\Upsilon\propto T^{3}\)) and does not depend on the inflaton amplitude. Therefore in this model \(p=3\) and \(c=0\), yielding \(\epsilon_{2}\sim-24Q+6\epsilon_{1}\). As mentioned earlier, \(Q\) can take very large values (\(\mathcal{O}(10^{3})\)) in such models.
Despite that, if \(Q\dot{\phi}^{2}\ll V(\phi)\), then \(\epsilon_{1}\ll 1\) and we get \(\epsilon_{2}\sim-24Q\), indicating an Ultra slow-roll phase. Even then, as in the previous case, we note that \(d\ln T/dN\sim-6Q\), and the temperature will exponentially fall with e-foldings, taking the system away from thermal equilibrium and deviating from the standard WI picture. In the weak dissipative regime (\(Q\ll 1\)), Eq. (4.6) can be approximated as \[\epsilon_{2}=-6+\frac{4}{4-p}\left[c\frac{\dot{\phi}}{H\phi}+ \epsilon_{1}\right]Q+2\epsilon_{1}\,. \tag{4.8}\] We will deal with two kinds of models where WI is realised in the weak dissipative regime. The first one is dubbed Warm Little Inflaton [47], where the dissipative coefficient varies linearly with the temperature (\(\Upsilon\propto T\)) and does not depend on the inflaton amplitude. Therefore, in this case, \(p=1\) and \(c=0\), yielding \(\epsilon_{2}=-6+(10/3)\epsilon_{1}\). As we have seen before, \(\epsilon_{1}\ll 1\) when the potential becomes extremely flat during weak dissipation, and therefore, we conclude that during such a phase \(\epsilon_{2}\sim-6\). Though it indicates an onset of an Ultra slow-roll phase, we again note that, as in the strong dissipative cases discussed above, Eq. (4.2) yields \(d\ln T/dN\sim-2\), indicating departure from thermal equilibrium. So far, we have discussed WI models where the dissipative coefficient varies solely with \(T\), both in strong and weak dissipative regimes, and saw that the system veers away from the thermal equilibrium state when an Ultra slow-roll like phase sets in. As it is not yet known how to treat WI when the thermal equilibrium of the radiation bath is lost, we will not further analyze the Ultra slow-roll phase in WI models where the dissipative coefficient solely varies with the temperature and has no dependence on the field amplitude. We will now analyze a WI model where the dissipative coefficient is a function of both \(T\) and \(\phi\). The model studied in [44; 48; 41] has a dissipative coefficient of the form \(\Upsilon\propto T^{3}/\phi^{2}\), and WI is realised in the weak dissipative regime in such a model. This model has also been verified with the Planck data in [49; 50]. Being realized in the weak dissipative regime, we can guarantee that \(\epsilon_{1}\) will remain much smaller than unity even when the potential becomes extremely flat. Our aim, now, is to keep \(|d\ln T/dN|\approx 0\) while making \(|\epsilon_{2}|\) larger than unity, so that an Ultra slow-roll phase can be realized while maintaining the thermal equilibrium of the system. Therefore, with \(\epsilon_{1}\ll 1\) and \(Q\ll 1\), Eq. (4.2) can be approximated as \[\frac{d\ln T}{dN}\approx\frac{1}{4-p}\left(c\frac{d\ln\phi}{dN}-6 \right). \tag{4.9}\] We note from this equation that, to keep \(|d\ln T/dN|\approx 0\), \(d\ln\phi/dN\) should be positive if \(c\) is positive and vice versa. In other words, \(c\,(d\ln\phi/dN)\) should always remain positive. Hence, to maintain thermal equilibrium we demand that during an Ultra slow-roll phase \[\left|\frac{d\ln\phi}{dN}\right|\sim\frac{6}{|c|}. \tag{4.10}\] If we impose this condition on \(\epsilon_{2}\) given in Eq. (4.8), along with the other conditions \(\epsilon_{1}\ll 1\) and \(Q\ll 1\), we obtain \[\epsilon_{2}\approx-6+\left(\frac{4}{4-p}\right)6Q. \tag{4.11}\] We can keep \(|\epsilon_{2}|>1\) in two ways: keeping \(\epsilon_{2}>1\) or demanding \(\epsilon_{2}<-1\). In the first case, when \(\epsilon_{2}>1\), we get \[Q>\frac{7}{6}\left(\frac{4-p}{4}\right).
\tag{4.12}\] Therefore, to ensure that WI takes place in the weak dissipative regime we constrain \(Q\) as \((7/6)[(4-p)/4]<Q<1\). For the model discussed above [41; 44; 48], this condition leads to \((7/24)<Q<1\). However, in the second case, when \(\epsilon_{2}<-1\), we get \[Q<\frac{5}{6}\left(\frac{4-p}{4}\right). \tag{4.13}\] This condition ensures that WI will take place in the weak dissipative regime. For the model discussed above [41; 44; 48], this condition leads to \(Q<5/24\). However, we note that though the condition given in Eq. (4.10) allows the onset of Ultra slow-roll in WI while maintaining thermal equilibrium, Eq. (4.10) differs from the equation of motion of the inflaton field during Ultra slow-roll given in Eq. (4.1). Therefore, as Ultra slow-roll proceeds, the dynamics of the inflaton field will take the system away from the condition in Eq. (4.10), and the temperature of the system will start to evolve, indicating a departure from thermal equilibrium. Thus, the system needs to exit from Ultra slow-roll before the temperature evolves too much to disrupt thermal equilibrium. We will show in the next section, by numerically evolving the system, that such a thermally equilibrated Ultra slow-roll phase can be realised in WI in specific cases.
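Before moving to the full numerical runs, the window derived above can be verified with elementary arithmetic. The snippet below simply evaluates Eq. (4.11) and the bounds in Eqs. (4.12)-(4.13) for the \(\Upsilon\propto T^{3}/\phi^{2}\) model (\(p=3\)); it is a back-of-the-envelope check, not part of the original analysis.

```python
from fractions import Fraction

p = 3                                              # Upsilon ∝ T^3/phi^2
eps2 = lambda Q: -6 + Fraction(4, 4 - p) * 6 * Q   # Eq. (4.11), with eps1 ≈ 0

print(Fraction(7, 6) * Fraction(4 - p, 4))   # 7/24: lower bound from Eq. (4.12)
print(Fraction(5, 6) * Fraction(4 - p, 4))   # 5/24: upper bound from Eq. (4.13)
print(eps2(Fraction(1, 10)))                 # Q = 1/10 < 5/24  ->  -18/5 < -1
```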
## V Numerical analysis of viable ultra slow-roll phase in WI In WI, the inflaton field equation of motion and the evolution of the radiation bath (as well as the evolution of the temperature) are coupled, as the inflaton field dissipates its energy to the radiation bath throughout WI. The system thus evolves according to the coupled equations given in Eq. (3.1) and Eq. (3.2). We will numerically evolve these two equations in cases where WI can enter an Ultra slow-roll phase, and then appraise its characteristics. We will consider two different potentials, the linear potential [7] and the cubic potential [10], where it has been shown previously that the system undergoes Ultra slow-roll in CI scenarios. In both these cases, we will make use of the dissipative coefficient \(\Upsilon=C_{\Upsilon}T^{3}/\phi^{2}\) and will ensure that WI takes place in the weak dissipative regime. ### Linear Potential: \(V(\phi)=V_{0}+M_{0}^{3}\phi\) This potential has been considered in [7] to analyze the dynamics of Ultra slow-roll in CI. In this potential, when \(V_{0}\gg M_{0}^{3}\phi\), the potential becomes extremely flat, and the system can enter an Ultra slow-roll phase. We chose the parameters \(V_{0}\) and \(M_{0}\) accordingly, which we have quoted in the caption of Fig. (1). Figure 1: The figure depicts the numerical evolution of the acceleration term (\(\ddot{\phi}\)), the friction term (\(3H(1+Q)\dot{\phi}\)) and the slope term (\(V_{,\phi}\)) present in the equation of motion of the inflaton field through an Ultra slow-roll phase in the case of the linear potential. We have chosen the parameters as follows: \(V_{0}=(10^{-4}\,M_{\rm Pl})^{4}\), \(M_{0}=2.5\times 10^{-8}\,M_{\rm Pl}\), \(C_{\Upsilon}=10\) and \(g_{\star}=106.75\). We see from Fig. (1) that initially the acceleration term (\(\ddot{\phi}\)) and the friction term (\(3H(1+Q)\dot{\phi}\)) dominate over the slope term (\(V_{,\phi}\)) (the graphs of \(\ddot{\phi}\) and \(3H(1+Q)\dot{\phi}\) overlap for the first few e-folds), indicating an Ultra slow-roll phase. Around 3.5 e-folds the acceleration term becomes subdominant, and the dynamics is governed by the friction and the slope terms, indicating an onset of the usual slow-roll phase. In Fig. (2), we note that the thermalization condition during Ultra slow-roll given in Eq. (4.10) is maintained for about the first 1.5 e-folds. However, as the condition for thermalization differs from the equation of the inflaton field during Ultra slow-roll (Eq. (4.1)), the thermalization condition cannot be maintained for a long period and the system will veer off from thermal equilibrium, as has been pointed out in the previous section. However, the overall thermalization condition of WI, \(T>H\), will remain maintained throughout the Ultra slow-roll phase, as has been depicted in Fig. (3). Fig. (4) depicts the evolution of the second Hubble slow-roll parameter, \(\epsilon_{2}\), during this Ultra slow-roll evolution. We note that the Ultra slow-roll condition, \(\epsilon_{2}<-1\), is maintained till about 4 e-foldings and after that \(|\epsilon_{2}|\) becomes smaller than unity, indicating the onset of the slow-roll phase. Also, Fig. (5), where the evolution of \(Q\) is shown during Ultra slow-roll, ensures that WI takes place in the weak dissipative regime during this Ultra slow-roll phase. Hence, we can see that in such a setup an Ultra slow-roll phase can be embedded in a weak dissipative WI model while maintaining thermal equilibrium. Figure 2: Evolution of the thermalization condition given in Eq. (4.10) during Ultra slow-roll in the case of the linear potential. Figure 3: Evolution of the temperature and the Hubble parameter during Ultra slow-roll in the case of the linear potential. Figure 4: Evolution of the second Hubble slow-roll parameter, \(\epsilon_{2}\), during Ultra slow-roll in the case of the linear potential. Figure 5: Evolution of \(Q\) during Ultra slow-roll in the case of the linear potential. ### Cubic Potential: \(V(\phi)=V_{0}\left[1+\left(\frac{\phi}{\phi_{0}}\right)^{3}\right]\) The cubic potential was considered in [10] while discussing the attractor behaviour of Ultra slow-roll in CI. This potential has an inflection point around \(\phi=0\). In WI, if we start near this inflection point region then the system can undergo an Ultra slow-roll phase. Note that we cannot analyze the system at the inflection point (\(\phi=0\)), as the dissipative coefficient (\(\Upsilon\propto T^{3}/\phi^{2}\)) will be ill-defined at this point. We can only analyze the system near the inflection point. We first note in Fig. (6) that in this case the system quickly deviates from the thermalization condition given in Eq. (4.10) as soon as Ultra slow-roll begins. This indicates that the system should not linger in an Ultra slow-roll phase for a long time, as that will lead to disruption of the thermal equilibrium of the system. In Fig. (7), we notice that the coupled equations of the system do not let the Ultra slow-roll phase last for long, and within nearly 1.5 e-folds the system tends to enter a slow-roll phase. We also note in Fig. (8) that the overall thermalization condition of WI (\(T>H\)) is maintained throughout the Ultra slow-roll phase. In Fig. (9) we see that the second Hubble slow-roll parameter, \(\epsilon_{2}\), remains smaller than \(-1\) for about 2.5 e-foldings, after which \(|\epsilon_{2}|<1\), indicating the onset of a usual slow-roll phase. Also, Fig. (10) ensures that the whole dynamics takes place in a weak dissipative regime. Therefore, in this system, too, an Ultra slow-roll phase, though much shorter than the previous scenario, can be realized near an inflection point within a weak dissipative WI model while maintaining the overall thermal equilibrium of the system. Figure 6: Evolution of the thermalization condition given in Eq. (4.10) during Ultra slow-roll in the case of the cubic potential. Figure 7: The figure depicts the numerical evolution of the acceleration term (\(\ddot{\phi}\)), the friction term (\(3H(1+Q)\dot{\phi}\)) and the slope term (\(V_{,\phi}\)) present in the equation of motion of the inflaton field through an Ultra slow-roll phase in the case of the cubic potential. We have chosen the parameters as follows: \(V_{0}=(10^{-4}\,M_{\rm Pl})^{4}\), \(\phi_{0}=2.5\times 10^{-1}\,M_{\rm Pl}\), \(C_{\Upsilon}=10^{4}\) and \(g_{\star}=106.75\). Figure 8: Evolution of the temperature and the Hubble parameter during Ultra slow-roll in the case of the cubic potential. Figure 9: Evolution of the second Hubble slow-roll parameter, \(\epsilon_{2}\), during Ultra slow-roll in the case of the cubic potential. Figure 10: Evolution of \(Q\) during Ultra slow-roll in the case of the cubic potential.
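The structure of these runs can be reproduced qualitatively with a short script. The sketch below is our own minimal reimplementation of the coupled system, Eqs. (3.1)-(3.2), for the linear potential with \(\Upsilon=C_{\Upsilon}T^{3}/\phi^{2}\); the potential parameters, \(C_{\Upsilon}\) and \(g_{\star}\) follow the caption of Fig. (1), while the initial conditions are hand-picked placeholders (the initial velocity is set by the thermalization condition of Eq. (4.10) with \(c=-2\), and the initial bath temperature is an assumption), so reproducing Figs. (1)-(5) exactly would require the initial data used for those figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reduced Planck units (M_Pl = 1). Potential, C_Y and g_* follow Fig. (1);
# the initial data below are illustrative placeholders, not the paper's values.
gstar, CY = 106.75, 10.0
V0, M0 = (1e-4)**4, 2.5e-8
a_r = np.pi**2 * gstar / 30.0                   # so that rho_r = a_r T^4, Eq. (3.9)
V = lambda f: V0 + M0**3 * f
dV = lambda f: M0**3

def rhs(N, y):
    """Coupled WI background, Eqs. (3.1)-(3.2), with Upsilon = C_Y T^3/phi^2."""
    phi, phidot, rho_r = y
    H = np.sqrt((0.5 * phidot**2 + V(phi) + rho_r) / 3.0)
    T = (rho_r / a_r)**0.25
    Ups = CY * T**3 / phi**2                    # Eq. (3.3) with p = 3, c = -2
    Q = Ups / (3.0 * H)                         # Eq. (3.4)
    return [phidot / H,
            -3.0 * (1.0 + Q) * phidot - dV(phi) / H,
            -4.0 * rho_r + Ups * phidot**2 / H]

H0 = np.sqrt(V0 / 3.0)
phi0 = 5e-2                                     # placeholder; keeps phi away from 0
phidot0 = -3.0 * H0 * phi0                      # |dln(phi)/dN| = 3 = 6/|c|, Eq. (4.10)
rho0 = a_r * (5.0 * H0)**4                      # placeholder bath with T = 5H
sol = solve_ivp(rhs, (0.0, 2.5), [phi0, phidot0, rho0],
                dense_output=True, rtol=1e-10, atol=1e-45)

N = np.linspace(0.0, 2.5, 300)
phi, phidot, rho_r = sol.sol(N)
H = np.sqrt((0.5 * phidot**2 + V(phi) + rho_r) / 3.0)
T = (rho_r / a_r)**0.25
eps1 = -np.gradient(np.log(H), N)               # Eq. (2.5)
print("T/H range:", (T / H).min(), (T / H).max())
print("eps2 at start:", np.gradient(np.log(eps1), N)[0])  # < -1 -> Ultra slow-roll
```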
## VI Discussion and Conclusion CI undergoes an Ultra slow-roll phase when the potential becomes extremely flat [7]. In this manuscript we ask what happens to WI, an alternative inflationary scenario, in a similar situation. As has been pointed out in the Introduction, a constant, subdominant, nearly thermally equilibrated radiation bath coexists during WI due to the continuous dissipation of energy of the inflaton field into this radiation bath. The coexistence of this radiation bath along with the inflaton energy density is the signature of WI which distinguishes it from the standard CI scenario. Therefore, one naturally expects the radiation bath, along with the thermal equilibrium of the system, to be maintained in WI even when the system passes through an extremely flat region of the potential. However, we found in this article that WI models with dissipative coefficients solely dependent on the temperature (\(\Upsilon\propto T^{p}\)) fail to maintain thermal equilibrium of the system when the system traverses through a very flat region of the potential. It is not yet known what happens to the WI dynamics when the thermal equilibrium is lost. We, therefore, could not further analyze these systems in such circumstances. On the other hand, we showed that the overall thermal equilibrium of the system can be maintained throughout a phase when the WI system traverses through an extremely flat region of the potential in cases where the dissipative coefficients are functions of both the temperature and the inflaton amplitude. We particularly dealt with the models with dissipative coefficient of the form \(\Upsilon\propto T^{3}/\phi^{2}\). Such models have been shown to tally well with the observations when WI takes place in the weak dissipative regime [49; 50]. Therefore, we treated such models in the weak dissipative regime, and considered two potentials (linear and cubic) with extremely flat regions to demonstrate that an Ultra slow-roll phase can indeed be realised in WI while maintaining the overall thermal equilibrium of the system. Though this seems a positive note on which to conclude the article, we would like to call attention to our present inability to deal with WI systems with dissipative coefficients depending on the temperature alone when such systems encounter an extremely flat region of the potential. We have observed that these systems lose thermal equilibrium, a signature property of WI. We hope that this article will encourage more research in this field to reveal the true nature of the WI dynamics in such circumstances.
We also leave the analysis of the cosmological perturbations during the Ultra slow-roll phase in WI for a future project. ###### Acknowledgements. SD would like to thank Rudnei Ramos for many useful discussions on Warm Inflation from time to time.
2305.12920
A Diachronic Analysis of Paradigm Shifts in NLP Research: When, How, and Why?
Understanding the fundamental concepts and trends in a scientific field is crucial for keeping abreast of its continuous advancement. In this study, we propose a systematic framework for analyzing the evolution of research topics in a scientific field using causal discovery and inference techniques. We define three variables to encompass diverse facets of the evolution of research topics within NLP and utilize a causal discovery algorithm to unveil the causal connections among these variables using observational data. Subsequently, we leverage this structure to measure the intensity of these relationships. By conducting extensive experiments on the ACL Anthology corpus, we demonstrate that our framework effectively uncovers evolutionary trends and the underlying causes for a wide range of NLP research topics. Specifically, we show that tasks and methods are primary drivers of research in NLP, with datasets following, while metrics have minimal impact.
Aniket Pramanick, Yufang Hou, Saif M. Mohammad, Iryna Gurevych
2023-05-22T11:08:00Z
http://arxiv.org/abs/2305.12920v3
# A Diachronic Analysis of the NLP Research Paradigm Shift: When, How, and Why? ###### Abstract Understanding the fundamental concepts and trends in a scientific field is crucial for keeping abreast of its ongoing development. In this study, we propose a systematic framework for analyzing the evolution of research topics in a scientific field using causal discovery and inference techniques. By conducting extensive experiments on the ACL Anthology corpus, we demonstrate that our framework effectively uncovers evolutionary trends and the underlying causes for a wide range of natural language processing (NLP) research topics. ## 1 Introduction History, when viewed as a repository of the evolution of scientific fields, produces an image of the transformation through which science has progressed. This transformation happens through innovation driven by research. Newer research topics often overwhelm the older ones and contribute to shaping the horizon of a research area [13]. The next generation of scientists learns to practice its trade by studying the finished scientific achievements recorded in research papers. However, such historical study is very challenging and requires specialized knowledge as well as extensive analysis of the chronological progression of a research field. Furthermore, the rapid increase in scientific publications in recent years has made it difficult even for domain experts to stay current. Therefore, an automated method for tracking the evolution of research topics over time is necessary to provide an overview of the field and aid researchers in staying abreast of advancements more efficiently. In this work, we propose a systematic framework to examine the evolution of research topics in natural language processing (NLP) using causal discovery and inference techniques. Prior research on historical analysis of NLP has primarily focused on analyzing metadata of research papers [1, 19, 18, 17], such as title, author profile, affiliation, and publication venue. These studies have examined the research trends through unigram or bigram frequency analysis, but they do not provide insight into the underlying causes driving these research topics. In our study, we focus on analyzing the relations between an NLP task that is commonly viewed as a focused research topic (e.g., _Machine Translation_) and the key _variables_ that have a significant influence on the target task, such as "_BLEU_" [2] or "_Transformers_" [20]. We use four types of entities to represent these variables that are innate to NLP: _tasks_ represent the research problems, _methods_ are the solutions or approaches to address the tasks, and _datasets_ and _metrics_ indicate corpora and the evaluation techniques for a specific task. In the above example, "_BLEU_" is an evaluation metric and "_Transformers_" is a method. Specifically, for a given task \(t\), we aim to obtain answers to the following research questions: (1) Which entities (\(E\)) are indicative of the research trends for \(t\)? (2) Are there any causal relations between \(E\) and \(t\)? (3) What is the causal impact of each entity in \(E\) on the research trends of \(t\)? Unlike Uban et al. (2021) and Koch et al. (2021), which heavily rely on manual annotations and have limited coverage, our analysis is based on TDMM (_Task/Dataset/Metric/Method_) entities automatically extracted from 55K papers in the ACL Anthology1.
Our framework not only recognizes the key entities driving the research direction of a research topic but also measures the causal effects of these entities on the target topic in an end-to-end fashion. Figure 1 shows the most influential entities for _Machine Translation_ (MT) in different time periods. For instance, "_statistical models_" used to be the popular method for MT in 1990-2002, and the evaluation metric "_BLEU_" is one of the top causal entities driving the MT research in 2003-2017. In the era of pre-trained large language models (LLMs) starting from 2018, "_transformer_" has become the popular method for MT. Figure 1: Evolution timeline of Machine Translation (MT). The blue line shows the number of papers on MT from 1979 to 2022. Tables summarize the top causal entities and their types for MT in different time periods. In summary, we make _three-fold_ contributions in this study: **First**, we propose a framework to quantify research activities, including (1) the trends and the stability of a research task; and (2) the relation intensity between a TDMM entity and a research task. **Second**, we leverage causal analysis algorithms to discover the causal structures among TDMM entities and measure the causal effects between a task and its related TDMM entities. To the best of our knowledge, this is the first historical study of a scientific research anthology from a causal perspective. **Finally**, through extensive experiments on the ACL Anthology, we provide a broad overview of the NLP research landscape with empirical evidence. ## 2 Related Work ### 2.1 Scientific Trends Analysis Analyzing scientific trends has been of great research interest since the pioneering work by Hall et al. (2008). Under the umbrella of "scientometrics", there is a vast literature focusing on citation patterns and combining topological measures with citation networks to analyze research trends (Small, 2006; Shibata et al., 2008; Boyack and Klavans, 2022). Another line of research focuses on metadata and content analysis. For example, Prabhakaran et al. (2016) used rhetorical framing to study trend patterns. Grudin (2009) and Liu et al. (2015) used prevalence correlation to analyze the interaction between the topics in publications and the research grants. Mohammad (2019) analyzed author profiles and high-impact papers in the ACL Anthology. Koch et al. (2021) studied dataset usage patterns among different research communities. The authors found that the broader trend of increasing concentration on a few datasets is moderated in NLP communities, and new datasets are created at higher rates within NLP. Uban et al. (2021) study the relations (e.g., _friendship_ or _arms race_) between NLP research topics based on their co-occurrence in text and the degree of correlation between their popularity over time. The authors used topic models to extract topics from around 55k papers from the ACL Anthology and manually labelled these topics. In our work, we develop two entity recognition models to extract TDMM entities from 80k papers and focus on analyzing the causal relations between a task entity and other TDMM entities. ### 2.2 Causality in NLP Existing works on NLP applying the causal algorithms mainly focus on two directions. The first line of work discovers causal relations among textual features or expressions of events in texts and uses them in various downstream tasks, such as
question answering (Oh et al., 2016), commonsense reasoning (Bosselut et al., 2019; Sap et al., 2019), and relation extraction (Do et al., 2011; Mirza and Tonelli, 2014; Dunietz et al., 2017). Please refer to the survey paper by Feder et al. (2022) for more details. (The causal analysis result on 1979-1989 in Figure 1 is omitted due to a relatively small number of MT papers in this time period, for which we cannot obtain statistically significant causal entities for MT.) In another avenue of this field, researchers use textual features to represent elements of the causal graph, e.g., the cause (Jin et al., 2021), the effect (Fong and Grimmer, 2016) and the confounders (Veitch et al., 2020; Keith et al., 2020); they then use domain knowledge to define the structure of the causal graph for inference. Our work falls within this line of research, where we employ causal algorithms to analyze the trends in NLP research topics and the underlying causes. To the best of our knowledge, our work is the first effort to use a causal framework in scientific trend analysis. ## 3 Data Collection **ACL Anthology Corpus** The ACL Anthology is a rich source of NLP research papers. For this work, we collect 55,366 NLP papers that belong to the "ACL Events" category2 from the ACL Anthology published between 1979 and 2022. For each paper, we use GROBID (GRO, 2008-2022) and the PDF table parser from Hou et al. (2019) to extract sentences from each of the individual sections as well as from the table and figure captions. In a post-processing step, we remove all the URLs from the extracted sentences. On average, we have 1258 papers per year and 1117 sentences per paper. Footnote 2: This category covers major NLP conferences, workshops, and journals including ACL, NAACL, EMNLP, EACL, AACL, CL, and TACL. Additionally, we also include papers published at COLING from the “Non-ACL events category”. **TDMM Entity Extraction** To identify _tasks_, _datasets_, _metrics_, and _methods_ entities from NLP papers, we developed two entity taggers based on Flair (Akbik et al., 2018). The first tagger is based on the TDMSci annotations (Hou et al., 2019) for recognizing _tasks_, _datasets_, and _metrics_ entities. The second tagger is trained using the SciERC dataset (Luan et al., 2018) to extract _methods_ entities. On the testing datasets of _TDMSci_ and _SciERC_, the two taggers achieve a micro-average F1 of 0.77 and 0.78 for the type partial match (Segura-Bedmar et al., 2013), respectively. In type partial match, a predicted entity is considered correct if it partially overlaps with a gold entity and they have the same type. For example, "_Penn Treebank_" is counted as a correct prediction even if the corresponding gold annotation is "_Penn Treebank dataset_". To further improve the precision of the TDMM taggers, we include only entities that appear in more than five papers in the dataset. For each paper, we collect the most frequent task mentions appearing in the title, abstract, experiment section, and table and figure captions to approximate the tasks that the paper has done research on. **Taxonomy for Periods of Reference** We divide our time frame of reference (1979-2022) into four eras based on the dominant methodologies in NLP research that prevailed during each period. \begin{table} \begin{tabular}{l c} \hline \hline Year Interval & Central Method \\ \hline 1979 - 1989 & Symbolic Approaches \\ 1990 - 2002 & Statistical Models \\ 2003 - 2017 & Neural Models \\ 2018 - 2022 & Pre-trained LLMs \\ \hline \hline \end{tabular} \end{table} Table 1: Eras of NLP research. Note that our analysis framework is designed to be easily adaptable to any time interval split desired in a specific analysis by the end user.
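To make the tagging step concrete, the snippet below sketches how two Flair-based taggers can be applied at inference time. The checkpoint paths and the example sentence are placeholders (the actual taggers are trained on TDMSci and SciERC as described above and are not released under these names).

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Placeholder paths: one tagger for tasks/datasets/metrics (TDMSci),
# one for methods (SciERC); these file names are assumptions.
taggers = [SequenceTagger.load("taggers/tdmsci-best-model.pt"),
           SequenceTagger.load("taggers/scierc-best-model.pt")]

text = "We evaluate a Transformer model on the Penn Treebank and report BLEU."
for tagger in taggers:
    sentence = Sentence(text)          # fresh sentence per tagger keeps labels apart
    tagger.predict(sentence)
    for span in sentence.get_spans():  # spans carry the predicted TDMM entity type
        print(span.text, "->", span.tag)
```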
## 4 Methods We illustrate our framework in Figure 2. After obtaining the TDMM entities from the whole corpus using the pre-processing steps described in the previous section, we feed the data into two components: first, we automatically select a set of entity types (e.g., _dataset_ or _method_) that influence NLP tasks the most using Multiple Linear Regression (Section 4.1). Second, we design three variables and use them to discover causal structures among TDMM entities. We further perform causal inference to measure the causal effects between a target task and its associated TDMM entities. Figure 2: System architecture. ### 4.1 Multiple Linear Regression We use multiple linear regression to interpret the variables that influence the evolution of NLP tasks. More specifically, utilizing multiple regression, we select a set of variables (\(\{X_{i}\}\), TDMM entities) that determine the appearance or disappearance of task entities (\(Y\)) in NLP research. To predict the value of \(Y\) using the values of the set of variables \(\{X_{i}\}\), we perform multiple linear regression of \(Y\) on \(\{X_{i}\}\) and estimate a regression relationship \(Y=r_{0}+\sum_{i}r_{i}X_{i}\) from the data, when the distribution of the variables is unknown. ### 4.2 Causal Analysis Methodology To investigate the NLP research trend from a causal perspective, we first define a set of variables, then leverage causal algorithms to discover relations among them and further use these relations for causal inference. #### 4.2.1 Causal Variables **Task Frequency Shift Value:** Unlike previous works (Tan et al., 2017; Prabhakaran et al., 2016) that use word frequency, we define task frequency as the number of published papers that worked on the task in a given year. Due to the growing number of papers in the NLP research field, we further normalize the task frequency value by the total number of papers published in that year to obtain the normalized task frequency value. For a given task \(y\), the task frequency shift value is the average change in the normalized task frequency value, \(\Delta freq(y)\), between two years \(t_{1}\) and \(t_{2}\): \[\Delta freq(y)=\frac{f(y)_{t_{2}}-f(y)_{t_{1}}}{t_{2}-t_{1}} \tag{1}\] where \(f(y)_{t}\) is the normalized frequency of task \(y\) in year \(t\). We use the task frequency shift value to quantify research trends on a given task. **Task Stability Value:** Wendlandt et al. (2018) defined the semantic stability of a word as the percent overlap between its nearest neighbours in two representation spaces. We adapt this idea in our setting by defining an _entity representation space_ for a given year. Specifically, we represent each paper in our dataset as a sequence of entity mentions (tasks, datasets, metrics and methods) by removing all the non-entity tokens. We call this the entity-representation of a paper. We obtain the entity representation space for a given year by applying "skip-gram with negative sampling" (Mikolov et al., 2013) on the entity-representations of the papers published in that year. More formally, given an entity representation of a paper \(e_{1},e_{2},...,e_{n}\), the objective of "skip-gram" is to maximize the mean log probability \(\frac{1}{n}\sum_{i=1}^{n}\sum_{-c\leq j\leq c,\,j\neq 0}\log p(e_{i+j}|e_{i})\), where \(c\) is called the context window size.
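As an illustration of how the per-year entity representation spaces can be built and compared, the sketch below uses gensim's skip-gram implementation (an assumed choice; we do not claim this is the exact library or hyperparameter setting used). `papers_by_year` is a hypothetical mapping from a year to the entity-representations of that year's papers, and the percent-overlap quantity computed here is formalized as the task stability value in Eq. (2) right below.

```python
from gensim.models import Word2Vec

def entity_space(entity_docs, dim=100, window=5):
    # skip-gram with negative sampling (sg=1, negative>0); hyperparameters
    # here are placeholders, not the values used in the paper
    model = Word2Vec(sentences=entity_docs, vector_size=dim, window=window,
                     sg=1, negative=5, min_count=1, epochs=20)
    return model.wv

def percent_overlap(task, space_t1, space_t2, l=20):
    n1 = {e for e, _ in space_t1.most_similar(task, topn=l)}
    n2 = {e for e, _ in space_t2.most_similar(task, topn=l)}
    return len(n1 & n2) / len(n1 | n2)

# papers_by_year[y] is a list of entity-representations (lists of entity mentions)
wv_a = entity_space(papers_by_year[2016])
wv_b = entity_space(papers_by_year[2017])
print(percent_overlap("machine translation", wv_a, wv_b))
```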
Finally, the stability value \(\Delta stability(y)\) of a task \(y\) between two years \(t_{1}\) and \(t_{2}\) is defined as the percent overlap between the nearest \(l\) neighbouring entities of the given task in the two representation spaces: \[\Delta stability(y)=\frac{|\mathcal{N}_{t_{1}}^{l}(y)\cap\mathcal{N}_{t_{2}}^{l }(y)|}{|\mathcal{N}_{t_{1}}^{l}(y)\cup\mathcal{N}_{t_{2}}^{l}(y)|} \tag{2}\] where \(\mathcal{N}_{t}^{l}(y)\) is the set of \(l\) neighbours of \(y\) in the representation space of year \(t\). The task stability value quantifies the relatedness of a given task to other related TDMM entities. **Entity Change Value:** To study the development of an NLP research area, it is necessary to track emerging as well as disappearing entities associated with it. For this purpose, we define the change value \(\delta_{y}(x)\) of an entity \(x\) of type \(\tau(x)\in\) {task, dataset, metric, method} with respect to a task \(y\) as the absolute difference in the frequencies of \(x\) co-occurring with \(y\) in the same sentence, between years \(t_{1}\) and \(t_{2}\), normalized by the total number of entities of the same type as \(x\) that co-occur with \(y\) in both years: \[\delta_{y}(x)=\frac{|C_{t_{1}}(x,y)-C_{t_{2}}(x,y)|}{\sum_{\forall e:\tau(e)= \tau(x)}\left(C_{t_{1}}(e,y)+C_{t_{2}}(e,y)\right)} \tag{3}\] where the frequency of \(x\) co-occurring with \(y\) in year \(t\) is given by \(C_{t}(x,y)\). #### 4.2.2 Causal Structure Discovery To facilitate the discovery of causal structure from purely observational data, we use DirectLiNGAM (Shimizu et al., 2011), which assumes a non-Gaussian data-generating process. The variables in Section 4.2.1 come from the frequency distribution of the data and hence do not follow a Gaussian distribution. DirectLiNGAM uses an entropy-based measure to successively subtract the effect of each independent variable from the given data in the model. Unlike PC-Stable (Colombo and Maathuis, 2014), it is not based on an iterative search in the parameter space and thus needs no initial guess or similar algorithmic parameters. Please refer to Appendix A for a detailed description of the algorithm. #### 4.2.3 Causal Inference After obtaining the causal structure between a task \(y\) and its associated TDMM entities, we measure the causal effects of the _entity change_ value of an entity \(x\) on the _frequency shift_ and _stability_ values of a given task \(y\). For this purpose, we use probability density functions instead of probability masses, as all our causal variables are continuous in nature. We measure the causal effects in two steps: first we estimate the probability density of the _entity change_ variable using a linear regression model. In the next step, we regress the _frequency shift_ and _stability_ against the _entity change_ value, weighted by the inverse probability densities obtained in the previous step. We model the functional form of this regression using a spline to avoid bias due to mis-specification. Finally, we calculate the causal effect as in Veitch and Zaveri (2020): \[\mu(\Delta freq(y))=\mathbb{E}[\Delta freq(y)|\delta_{y}(x)] \tag{4}\] and similarly, \[\mu(\Delta stability(y))=\mathbb{E}[\Delta stability(y)|\delta_{y}(x)] \tag{5}\]
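The two causal steps of Sections 4.2.2 and 4.2.3 can be sketched end-to-end as follows. The `lingam` package provides an implementation of DirectLiNGAM; `gaussian_kde` and `SplineTransformer` below are stand-ins for the density estimator and spline regression described above (the exact estimators are not spelled out here), and the input data are synthetic placeholders.

```python
import numpy as np
import lingam
from scipy.stats import gaussian_kde
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import SplineTransformer

# Synthetic placeholder data: columns = (entity change, task frequency shift)
rng = np.random.default_rng(0)
delta = rng.exponential(1.0, 500)               # non-Gaussian, as DirectLiNGAM assumes
dfreq = 0.3 * delta + rng.normal(0.0, 0.1, 500)
X = np.column_stack([delta, dfreq])

# Step 1: causal structure discovery with DirectLiNGAM (Shimizu et al., 2011)
model = lingam.DirectLiNGAM()
model.fit(X)
print(model.adjacency_matrix_)                  # weighted DAG over the variables

# Step 2: causal effect, cf. Eq. (4): spline regression of the frequency shift
# on the entity change, weighted by inverse probability densities
w = 1.0 / gaussian_kde(delta)(delta)
basis = SplineTransformer(degree=3, n_knots=8).fit_transform(delta[:, None])
reg = LinearRegression().fit(basis, dfreq, sample_weight=w)
print(reg.predict(basis)[:5])                   # estimated E[freq shift | entity change]
```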
## 5 Results and Analysis ### 5.1 Variables Influencing Research Trends To interpret the factors associated with research trends, we use multiple linear regression (see Section 4.1), commonly used in the economics literature (Barrios and Hochberg, 2020). Statistically, we predict the number of task entities in a given year as a function of the number of all entities (tasks, methods, datasets, and metrics) before that year. Here, the partial regression coefficient indicates the degree of association between two variables when all the other variables are constant. We represent our results aggregated over all the years (1979-2022). **Evaluation:** To evaluate the regression model, we use the \(R^{2}\) measure, otherwise known as the coefficient of determination. \(R^{2}\) shows how well the data fits the regression model. In other words, it determines the proportion of variance in the dependent variable that can be explained by the selected independent variable set. #### 5.1.1 Optimized Number of Variables The first experiment is to compare how well different input variables fit the regression model. We use the coefficient of determination (\(R^{2}\)) to show how well the data fits the regression. We summarize the results in Table 2. The overall results show that the model fits the data well when we use all four variables - the number of tasks, datasets, methods, and metrics - to predict the number of tasks in the following years (\(R^{2}\) value \(0.97\)3). Further, we test whether reducing the number of variables could achieve similar results. \begin{table} \begin{tabular}{l c} \hline \hline Variables & R-Squared (\(\uparrow\)) \\ \hline unique tasks & 0.87 \\ + unique datasets & 0.91 \\ + unique methods & 0.93 \\ + unique metrics & **0.97** \\ \hline \hline \end{tabular} \end{table} Table 2: Variable Selection for Regression. We see that using only one variable (number of tasks) drops the \(R^{2}\) value by \(0.1\) (\(R^{2}\) value \(0.87\)), fitting the data poorly to the model. Increasing the number of variables gives a better fit of the model, denoting that all four variables are significant for analyzing the research trends. #### 5.1.2 Influence of the Variables Second, we measure the association of the target task entities with all other TDMM entities. We measure association using the partial regression coefficients (see Section 4.1). From the results in Table 3, we observe that the gradual appearance of new NLP tasks has driven the evolution of research in this field. Further, to investigate how Computational Linguistics has gradually evolved, we measure the association of the same variables in four different time intervals. We observe that between 1979 and 1989, when NLP started to evolve as an independent research field, new _datasets_ were created to drive research. In 1990-2002, the field progressed as new _methods_ were proposed. From 2003-2017, methods have the highest contribution to the progress of the research. However, tasks and datasets also have significant contributions. As evident from the results, between 2018 and 2022, datasets have the highest association with the task entities, as a lot of newer datasets were created during this time. \begin{table} \begin{tabular}{l l l l l} \hline \hline \multirow{2}{*}{Years} & \multicolumn{4}{c}{Partial Regression Coefficient} \\ \cline{2-5} & Tasks & Datasets & Methods & Metrics \\ \hline 1979 - 1989 & 0.35 & **2.24** & 0.21 & 0.02 \\ 1990 - 2002 & 0.82 & 0.89 & **2.86** & 0.81 \\ 2003 - 2017 & 5.37 & 6.26 & **7.00** & 0.69 \\ 2018 - 2022 & 1.47 & **3.38** & 1.79 & 0.41 \\ \hline 1979 - 2022 & **3.50** & 1.07 & 2.92 & 0.54 \\ \hline \hline \end{tabular} \end{table} Table 3: Variables Influencing NLP task entities.
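For concreteness, the regression behind Tables 2 and 3 can be set up as below; `yearly_counts` is a hypothetical per-year summary (counts of unique TDMM entities seen before a year, plus the number of task entities in that year), since the exact feature construction is not spelled out here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# yearly_counts[y] = (#tasks, #datasets, #methods, #metrics seen before year y,
#                     #task entities appearing in year y)   -- hypothetical input
rows = np.array([yearly_counts[y] for y in range(1980, 2023)], dtype=float)
X, Y = rows[:, :4], rows[:, 4]

reg = LinearRegression().fit(X, Y)
print("R^2:", reg.score(X, Y))               # coefficient of determination (Table 2)
print("partial coefficients:", reg.coef_)    # degree of association (Table 3)
```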
### 5.2 Causal Relation between the Variables To elicit the causal relationship between the task entities and other relevant TDMM entities in our dataset, we apply the DirectLiNGAM (Shimizu et al., 2011) discovery algorithm at a \(5\%\) significance level (see Appendix A for more details). Figure 3 shows the discovered causal graph for the frequency shift of task entities. Overall, we observe that the entity change values of associated tasks, methods, datasets and metrics have a direct causal effect on the frequency shift values of the target tasks. Since the frequency shift value quantifies the trend in NLP research, we infer from the causal graph that the trend of a task is governed primarily by the life cycles of its associated TDMM entities. We see a similar causal relation for the task stability value (see Figure 4, Appendix B). Figure 3: **Causal Graph: The graph shows that the emergence and disappearance of entities (tasks, datasets, metrics and methods) have a direct causal effect on the frequency shift of task entities. The weights on the edges denote their probability of appearance in the presence of unobserved confounders.** **Evaluation:** We perform a sensitivity analysis of the causal graph by adding Gaussian noise with zero mean and unit variance to the entity change values in the data (Cinelli et al., 2019). This gives an estimate of the robustness of the graph in the presence of unobserved confounders. From the edge weights in Figure 3, we observe that the graph is stable to unobserved confounding, giving all edge probabilities greater than 0.5. ### 5.3 Causal Impact of the Variables The organizers of ACL 20184 categorize NLP research into 21 areas and provide a set of popular tasks for each area. Out of those, we curate 16 areas and select one task from each based on its frequency of occurrence in our corpus. We estimate the effect of TDMM entities (entity change value) behind the development of these tasks (frequency shift value) (see Section 4.2.1) and summarize the results in Table 4. Since we do not have confounders (Section 5.2), evaluating the causal effect reduces to estimating the conditional expectation of the frequency shift values given the entity change values. We present detailed results in Appendix B.2. Footnote 4: [https://acl2018.org/call-for-papers/](https://acl2018.org/call-for-papers/) In Table 4, we observe that between 2003-2017 RNNs had the highest influence on **Language Modeling** research; however, the trend shifted at the onset of Transformers. Since then, transformers have had the highest influence on research on this task. **Dialogue Systems** are related to the Language Modeling task, as they require automatic response generation. Hence, research in this area is highly influenced by generative models (Probabilistic Models between 1990-2002 and RNNs between 2003-2017). At the same time, large datasets were created to train the models efficiently at the onset of deep neural models (like RNNs). Between 2018-2022 many datasets were created, among which MultiWoz is the most used for dialogue systems. **Machine Translation** is another task, closely related to Language Modeling, that requires the generation of translated text. Naturally, we observe the influence of similar entities in Machine Translation research. Probabilistic Models had the highest influence between 1990-2002. In recent years (2018-2022), Transformers influence this research area the most. However, we must remember that datasets are the innate part that shapes the direction of research in this area.
This is evident from the results, as we observe that WMT datasets have a great influence on Machine Translation research. **Speech Recognition** is linked to Machine Translation in the sense that researchers have investigated end-to-end systems that translate speech. This is also evident from the results, as Machine Translation influences research in Speech Recognition most between 2003 and 2022. Also, Hidden Markov Models have a significant influence on research in this area. **Named Entity Recognition (NER)** is also influenced by Hidden Markov Models in its early days (1990-2002), as NER is commonly posed as a sequence tagging problem. POS Tagging (2003-2017) and Relation Extraction also influence NER, as these problems are often jointly solved in an end-to-end fashion. **Parts-of-Speech (POS) Tagging** was initially (1990-2002) posed as a text classification problem. Later (2003-2017) various parser algorithms were used to solve this problem. We also observe that the word segmentation task is essential for POS Tagging, hence influencing research in this area (2018-2022). For **Semantic Parsing**, parser algorithms are used to solve the task and significantly influence research in this area. Between 1979 and 1989 Grammar Induction techniques were used to elicit the underlying semantic parse trees. In recent years (2018-2022), dependency parsing and semantic parsing are jointly solved using the same model. Hence dependency parsing drives research in this area. **Morphological Analysis** is also influenced by research on dependency parsing, and the Universal Dependency Treebank is often used as a benchmark dataset for this task. In 1990-2002, researchers employed various statistical models for this task, which is evident from our results. **Semantic Role Labeling** is also an important task in NLP research; methods like Support Vector Machines and Neural Network Models are used to solve this task. **Co-reference Resolution** is another task that is often solved using Neural Network models starting from 2018. Along with neural network models, Integer Linear Programming was also used to solve this problem from 2003 to 2017. Between 1990-2002 researchers created the MUC-VI dataset, which drove research in this area.
\begin{table} \begin{tabular}{l l l l l l} \hline \hline & \multicolumn{5}{c}{**Primary Cause**} \\ \cline{2-6} **Task** & **1979-1989** & **1990-2002** & **2003-2017** & **2018-2022** & **1979-2022** \\ \hline Language Modeling & - & - & Recurrent Neural Networks\({}^{M}\) & Transformers\({}^{M}\) & Transformers\({}^{M}\) \\ Dialogue System & - & Probabilistic Generative Models\({}^{M}\) & Recurrent Neural Networks\({}^{M}\) & MultiWoz\({}^{D}\) & MultiWoz\({}^{D}\) \\ Machine Translation & - & Probabilistic Generative Models\({}^{M}\) & WMT Data\({}^{D}\) & Transformers\({}^{M}\) & Transformers\({}^{M}\) \\ Speech Recognition & Hidden Markov Models\({}^{M}\) & Hidden Markov Models\({}^{M}\) & Machine Translation\({}^{T}\) & Machine Translation\({}^{T}\) & Hidden Markov Models\({}^{M}\) \\ \hline Named Entity Recognition & - & Hidden Markov Models\({}^{M}\) & POS Tagging\({}^{T}\) & Relation Extraction\({}^{T}\) & POS Tagging\({}^{T}\) \\ POS Tagging & - & Text Classification\({}^{T}\) & Parser Algorithms\({}^{M}\) & Word Segmentation\({}^{T}\) & Word Segmentation\({}^{T}\) \\ Semantic Parsing & Grammar Induction\({}^{M}\) & Parser Algorithms\({}^{M}\) & Parser Algorithms\({}^{M}\) & Dependency Parsing\({}^{T}\) & Parser Algorithms\({}^{M}\) \\ Morphological Analysis & - & Statistical Models\({}^{M}\) & Dependency Parsing\({}^{T}\) & UD Treebank\({}^{D}\) & - \\ \hline Semantic Role Labeling & - & - & Support Vector Machines\({}^{M}\) & Neural Network Models\({}^{M}\) & Support Vector Machines\({}^{M}\) \\ Co-reference Resolution & - & MUC-VI Test Collection\({}^{D}\) & Integer Linear Programming\({}^{M}\) & Neural Network Models\({}^{M}\) & Neural Network Models\({}^{M}\) \\ \hline Word Sense Disambiguation & - & WordNet\({}^{D}\) & Maximum Entropy Models\({}^{M}\) & Neural Network Models\({}^{M}\) & WordNet\({}^{D}\) \\ \hline Sentiment Analysis & - & - & Twitter\({}^{D}\) & Text Classification\({}^{T}\) & Text Classification\({}^{T}\) \\ Argument Mining & - & - & Text Classification\({}^{T}\) & Sentiment Analysis\({}^{T}\) & Sentiment Analysis\({}^{T}\) \\ \hline Question Answering & Parsing Algorithms\({}^{M}\) & Information Extraction\({}^{T}\) & Information Extraction\({}^{T}\) & Pre-trained LLMs\({}^{M}\) & Information Extraction\({}^{T}\) \\ Textual Entailment & - & Statistical Models\({}^{M}\) & - & Pre-trained LLMs\({}^{M}\) & Pre-trained LLMs\({}^{M}\) \\ Summarization & - & WordNet\({}^{D}\) & Sentence Compression\({}^{T}\) & Pre-trained LLMs\({}^{M}\) & Pre-trained LLMs\({}^{M}\) \\ \hline \hline \end{tabular} \end{table} Table 4: **The primary cause behind the frequency shift of the tasks.** We analyze the trends in four different periods of reference. Overall, we observe that Methods (M), Tasks (T), and Datasets (D) are the primary causes behind the paradigm shift of these 16 NLP tasks from different research areas. “-” means there are not enough data instances for the causal analysis. **Sentiment Analysis** has drawn significant research interest. This task is often posed as a text classification problem, and the Twitter dataset plays a major role in driving research in this area. **Argument Mining** is closely related to the task of Sentiment Analysis. Researchers investigate the sentiments behind the arguments to understand their dynamics. Hence, the sentiment analysis task influences Argument Mining.
Additionally, the problem of classifying various argument components (like claims and evidence) is often posed as text classification problems, a fact prominent from the results shown in Table 4. The years 2018-2022 are often called the era of pre-trained large language models (LLMs) (Li et al., 2022). Pre-trained LLMs outperform existing models in a multitude of NLP Tasks and **Question Answering** is one of them. Before the arrival of pre-trained LLMs, question answering was commonly formulated as an information extraction problem. Researchers also explored parsing algorithms to parse the questions and align them to the potential answers. From the results, we also observe that **Textual Entailment** and **Summarization** are another two tasks, that have been heavily influenced by the pre-trained LLMs in 2018-2022. ## 6 Discussion Impact on Task Stability Value.Additionally, we measure the causal effect of research entities on Task Stability Value (see Section 4.2.1). From the resulting causal graph ( Figure 4, Appendix B) we observe that the usage of metrics in a given task is often pre-determined and hence predictable. Hence, metric entities are not the primary factor that makes the task unstable. However, in recent years, researchers often investigate models that solve a variety of tasks together, making the method entities the primary factor that makes a given task unstable. Correlations Between Task Frequency Change and Stability.We observe a slightly positive correlation between frequency change and stability of research tasks with a Pearson coefficient of 0.08. This is because, when a new task emerges, initially a few researchers start working on it which gradually increases its frequency of appearance. At the same time, researchers experiment with various methods and datasets to solve these newly emerged tasks (e.g., Math Problem Solving (Zhang et al., 2018)), causing high instability. On the contrary, the opposite is not always true: well-defined tasks are often the most researched and yet researchers always explore new ideas on these tasks which harms the stability. General Observation.Our analysis shows that research in NLP is primarily driven by tasks and methods; influence of datasets follows them, and metrics have minimum impact. Our analysis also illustrates the gradual paradigm shift in NLP research over time. In its early days, NLP was more focused on solving problems of practical interest like Speech Recognition or Machine Translation. Over time various tools and techniques, as well as datasets, were created and researchers indulged themselves to investigate even more complex problems like textual entailment, or argument mining, which not only requires domain knowledge but also extensive reasoning over data. We observe that pre-trained language models have appeared as a panacea to a multitude of tasks alleviating the need for task-specific solution approaches. ## 7 Conclusion In this paper, we study NLP research retrospectively from a causal perspective. We quantify research trends of task entities and propose a systematic framework leveraging causal algorithms to identify the key reasons behind the emergence or disappearance of NLP tasks. We believe that the causal analysis we presented here provides a useful direction to understand the interplay between research entities that drive forward NLP research. 
Adding to the new line of works involving causality and NLP (Feder et al., 2021) we show that a causal framework is also beneficial in analyzing a scientific field and gaining new insights otherwise achieved at the expense of domain expertise. We believe this framework could be easily extended to other research fields and hope that the causal techniques explored in this paper will inspire further development and adoption of such techniques within the field of NLP research. ### Limitations Our framework requires research papers tagged with entities as input. Hence the quality of the tags play a crucial role in the causal inference of our proposed method. The taggers generate noisy outputs, thus might require human intervention to denoise the tags. Moreover, causal algorithms require a large amount of data to produce statistically significant results. Hence, research areas that are less explored or newly emerging may not be always suitable for this framework to be applied on. Additionally we highlight that in this work we do not consider extra-linguistic factors like author affiliations, funding, gender etc. We leave them for future research works. ## Ethics Statement In this work, we only consider publicly available data as provided by ACL Anthology. We do not use any personal data. Further, the domain-experts do not require to share personal data of any kind to take part in our system evaluation survey.
2302.08707
Non-reduced components of the Hilbert scheme of curves using triple covers
In this paper we consider curves on a cone that pass through the vertex and are also triple covers of the base of the cone, which is a general smooth curve of genus $\gamma$ and degree $e$ in $\mathbb{P}^{e-\gamma}$. Using the free resolution of the ideal of such a curve found by Catalisano and Gimigliano, and a technique concerning deformations of curves introduced by Ciliberto, we show that the deformations of such curves remain on cones over a deformation of the base curve. This allows us to prove that for $\gamma \geq 3$ and $e \geq 4\gamma + 5$ there exists a non-reduced component $\mathcal{H}$ of the Hilbert scheme of smooth curves of genus $3e + 3\gamma$ and degree $3e+1$ in $\mathbb{P}^{e-\gamma+1}$. We show that $\dim T_{[X]} \mathcal{H} = \dim \mathcal{H} + 1 = (e - \gamma + 1)^2 + 7e + 5$ for a general point $[X] \in \mathcal{H}$.
Youngook Choi, Hristo Iliev, Seonja Kim
2023-02-17T05:45:51Z
http://arxiv.org/abs/2302.08707v3
# Non-reduced components of the Hilbert scheme of curves using triple covers ###### Abstract. In this paper we consider curves on a cone that pass through the vertex and are also triple covers of the base of the cone, which is a general smooth curve of genus \(\gamma\) and degree \(e\) in \(\mathbb{P}^{e-\gamma}\). Using the free resolution of the ideal of such a curve found by Catalisano and Gimigliano, and a technique concerning deformations of curves introduced by Ciliberto, we show that the deformations of such curves remain on cones over a deformation of the base curve. This allows us to prove that for \(\gamma\geq 3\) and \(e\geq 4\gamma+5\) there exists a non-reduced component \(\mathcal{H}\) of the Hilbert scheme of smooth curves of genus \(3e+3\gamma\) and degree \(3e+1\) in \(\mathbb{P}^{e-\gamma+1}\). We show that \(\dim T_{[X]}\mathcal{H}=\dim\mathcal{H}+1=(e-\gamma+1)^{2}+7e+5\) for a general point \([X]\in\mathcal{H}\). Key words and phrases:Hilbert scheme of curves, ruled surfaces, triple coverings, curves on cones 2020 Mathematics Subject Classification: Primary 14C05; Secondary 14H10 The first author was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education(2019R1I1A3A01055643). The second author was supported by Grant KP-06-N 62/5 of Bulgarian National Science Fund. The third author was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (2022R1A2C1005977). results proved in [10] and to describe the corresponding curves in a more geometric fashion. In [10, Theorem A] we identified a series of generically smooth components of \(\mathcal{I}_{2g-4\gamma+2,g,r}\) for every \(\gamma\geq 10\) and \(\gamma\leq r\leq g-3\gamma+2\), which extended [10, Theorem 4.3]. In our paper [10] we found a series of non-reduced components of \(\mathcal{I}_{2g-4\gamma+1,g,g-3\gamma+1}\) for every \(\gamma\geq 7\) and \(g\geq 6\gamma+5\). We proved that the corresponding non-reduced components parametrize curves that lie on cones, pass through the vertex of the corresponding cone and are double covers of its general hyperplane section, which is a linearly normal nonspecial curve of genus \(\gamma\). We remark that the non-reduced components from [10] are related to those in [10, Theorem 4.4]. In the present work we continue our study of smooth curves on cones that pass through the vertex of a cone and are \(m\)-covers, \(m\geq 3\), of the hyperplane section of the cone. The main result in the paper concerns the case \(m=3\) and says that under suitable numerical assumptions such families of curves give rise to non-reduced components of the Hilbert scheme of curves. It is formulated in the next theorem. **Main Theorem**.: _Assume that \(e\) and \(\gamma\) are integers such that \(e\geq 4\gamma+5\) and \(\gamma\geq 3\). Let_ \[g:=3\gamma+3e\,,\qquad d:=3e+1\quad\text{ and }\quad r:=e-\gamma+1\,.\] _Then the Hilbert scheme \(\mathcal{I}_{d,g,r}\) possesses a non-reduced component \(\mathcal{H}\) such that_ 1. \(\dim\mathcal{H}=r^{2}+7e+4\)_;_ 2. _at a general point_ \([X]\in\mathcal{H}\) _we have_ \(\dim T_{[X]}\mathcal{H}=\dim\mathcal{H}+1\)_;_ 3. _a general point_ \([X]\in\mathcal{H}\) _represents a curve_ \(X\) _lying on a cone_ \(F\) _over a smooth curve_ \(Y\) _of genus_ \(\gamma\) _and degree_ \(e\) _in_ \(\mathbb{P}^{r-1}\) _such that_ 1. \(X\subset\mathbb{P}^{r}\) _is projectively normal and passes through the vertex_ \(P\) _of the cone_ \(F\)_;_ 2. 
_there is a line_ \(l\) _from the ruling of_ \(F\) _that is tangent to_ \(X\) _at_ \(P\) _as the intersection multiplicity is two;_ 3. _the projection from_ \(P\) _to the hyperplane in_ \(\mathbb{P}^{r}\) _containing the curve_ \(Y\) _induces a morphism_ \(\varphi:X\to Y\) _of degree three ;_ 4. _the ramification divisor_ \(R_{\varphi}\) _is linearly equivalent to the divisor cut on_ \(X\) _by a quadric hypersurface together with_ \(Q_{1}+Q_{2}\)_, where_ \(Q_{1}\) _and_ \(Q_{2}\) _are the remaining two points in which the tangent line_ \(l\) _intersects_ \(X\) _besides_ \(P\)_._ Although our main result fits in the context of [10], [10] and [11], it is independent of them. To obtain it, we develop the approach used in [10], use the characterization of smooth curves on a cone that pass through its vertex given in [11], and apply similar arguments to those used in [10] and [10] to deduce that every deformation of a curve from the family of curves constructed in the theorem yields a curve from the same family. We remark that the technique used in the proof of our Main Theorem cannot be by applied in the proof of [10, Theorem B], as we explain the reasons for this in Remark 9. On the other hand, the possibility for curves on cones, which are algebraically equivalent to a high degree hypersurface intersection plus a line, to yield a non-reduced component of the Hilbert scheme of curves has already been suggested in [2, Remark 4.12]. In this sense our work was inspired by [2]. The free resolution of the ideal of a smooth curve on a cone passing through its vertex, obtained by Catalisano and Gimigliano in [1], plays an essential role in the proof of our main result. For this reason we describe their result in section 2 using a setup that fits the framework of the Main Theorem. Further in the same section we prove several results about smooth curves on cones that are \(m:1\) covers of the base of the cone and pass through its vertex. Also, for \(m=3\) we prove a technical result, namely Proposition 6, that plays an important role in the proof of our Main Theorem, which is given in section 3. We work over the field \(\mathbb{C}\). By _curve_ we understand a smooth integral projective algebraic curve. Given a line bundle \(L\) on a smooth projective variety \(X\), or a divisor \(\Delta\) associated to \(L\), we denote by \(|L|\) or \(|\Delta|\) the complete linear series \(\mathbb{P}\left(H^{0}(X,L)\right)\) on \(X\). For a line bundle \(L\) and a divisor \(\Delta\) on a variety \(X\), we abbreviate, occasionally, the notation of the line bundle \(L\otimes\mathcal{O}_{X}(\Delta)\) to simply \(L(\Delta)\). We use \(\sim\) to denote linear equivalence of divisors. Given a finite morphism \(\varphi:X\to Y\) of curves and a divisor \(\Delta=\sum n_{i}P_{i}\) on \(X\), we denote by \(\varphi(\Delta)\) the divisor \(\sum n_{i}\varphi(P_{i})\) on \(Y\). When \(X\) is an object of a family, we denote by \([X]\) the corresponding point of the Hilbert scheme representing the family. For all other definitions and properties of objects not explicitly introduced in the paper the reader can refer to [11] and [1]. ## 2. Preliminary results In our paper [2] we constructed a series of non-reduced components of the Hilbert scheme of curves using curves that lie on cones as each curve passes through the vertex of the corresponding cone. There, we considered only curves that are double covers of the base of the cone. 
On the other hand, curves on cones that are \(m:1\) covers of the base, \(m\geq 2\), and pass through the vertex have been studied by Catalisano and Gimigliano in [1] with a different aim. Motivated by an earlier work of Jaffe about smooth curves on a cone that pass through its vertex, see [10], Catalisano and Gimigliano showed in [1] that such curves are projectively normal, provided that the base curve of cone is, and gave a resolution of the ideal of such a curve in terms of a resolution of the ideal of the base curve. We will formulate below the main result of [1]. For this assume that: * is a smooth integral curve of genus \(\gamma\), * is a divisor of degree \(e\geq 2\gamma+1\) on \(\Gamma\), * is a point on \(\Gamma\), * is the ruled surface \(S=\mathbb{P}(\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{\Gamma}(-E))\), * is the natural projection morphism \(f:S\to\Gamma\), \(\Gamma_{0}\) is the section of minimal self-intersection of \(f:S\to\Gamma\), that is, the one that corresponds to the exact sequence \[0\to\mathcal{O}_{\Gamma}\to\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{\Gamma}(-E) \to\mathcal{O}_{\Gamma}(-E)\to 0\] with \(\Gamma_{0}^{2}=\deg\mathcal{O}_{\Gamma}(-E)=-e\), \(\Psi\) is the morphism determined by the linear series \(|\Gamma_{0}+E\mathfrak{f}|\) on \(S\). We remark that \(\Psi\) is isomorphism away from \(\Gamma_{0}\) and contracts \(\Gamma_{0}\) to a point, see [10] for more details. Thus, \(\Psi\) maps \(S\) into a cone, so we denote by \(F\) the image of \(S\) under \(\Psi\), that is, \(F=\Psi(S)\), and \(P\) the vertex of the cone \(F\), that is, \(P=\Psi(\Gamma_{0})\). Set \(r:=\dim|\Gamma_{0}+E\mathfrak{f}|\). Then the embedding \(F\subset\mathbb{P}^{r}\) is complete and the hyperplane sections of \(F\) are the images, under \(\Psi\), of the divisors from the linear series \(|\Gamma_{0}+E\mathfrak{f}|\) on \(S\). Let \(\sigma_{D}\) be a section of \(f:S\to\Gamma\) for whose image \(\sigma_{D}(\Gamma)=:D\) we have that \(D\) is a smooth curve in the linear series \(|\Gamma_{0}+E\mathfrak{f}|\) on \(S\), and let \(Y\) be the image of \(D\) under \(\Psi\), that is, \(Y=\Psi(D)\). The curves \(\Gamma\), \(D\) and \(Y\) are isomorphic to one another since \(\Psi\) is an isomorphism away from \(\Gamma_{0}\) and \(D\cdot\Gamma_{0}=(\Gamma_{0}+E\mathfrak{f})\cdot\Gamma_{0}=0\). Also, by [1, Proposition 1], \(r=e-\gamma+1\), and \(Y\) is a smooth, linearly normal curve of genus \(\gamma\) and degree \(e\) in \(\mathbb{P}^{r-1}\). In fact, due to \(e\geq 2\gamma+1\), it follows by [13] that \(Y\) is projectively normal. Thus, we can consider \(F\) as a cone in \(\mathbb{P}^{r}\) over the projectively normal curve \(Y\subset\mathbb{P}^{r-1}\). * _We call the above assortment of assumptions about \(\Gamma\), \(E\), \(q\), \(S\), \(f\), \(\Gamma_{0}\), \(\Psi\), \(F\), \(P\), \(r\), \(D\), \(\sigma_{D}\) and \(Y\), and the properties we described, the_ Main Setup_, and we abbreviate it as_ (MS). Catalisano-Gimigliano's result can now be formulated as follows. **Proposition 1**.: _([1, Proposition 2]) Assume the conditions and notations of_ (MS)_. Let \(C_{m}\in|m\Gamma_{0}+(mE+q)\mathfrak{f}|\) be general and \(X_{m}=\Psi(C_{m})\) be the image of \(C_{m}\) on \(F\), where \(m\geq 2\) is an integer. 
Then_ * \(X_{m}\) _is a smooth integral projectively normal curve that passes through the vertex_ \(P\)_;_ * _given a free resolution of the ideal sheaf_ \(\mathcal{I}_{Y}\) _of_ \(Y\)__ * \[0\to\mathcal{F}_{r-2}\to\mathcal{F}_{r-3}\to\cdots\to\mathcal{F}_{1}\to \mathcal{I}_{Y}\to 0\] _with_ \(\mathcal{F}_{i}=\bigoplus_{j=1}^{\beta_{i}}\mathcal{O}_{\mathbb{P}^{r}}(- \beta_{i,j})\)_,_ \(i=1,\ldots,r-2\)_, the ideal sheaf_ \(\mathcal{I}_{X_{m}}\) _of_ \(X_{m}\) _has a free resolution_ * \[0\to\mathcal{P}_{r-1}\to\mathcal{P}_{r-2}\to\cdots\to\mathcal{P}_{1}\to \mathcal{I}_{X_{m}}\to 0\,,\] _where_ \[\mathcal{P}_{1}=\bigoplus_{1}^{r-1}\mathcal{O}_{\mathbb{P}^{r}}(-m-1)\oplus \bigoplus_{j=1}^{\beta_{1}}\mathcal{O}_{\mathbb{P}^{r}}(-\beta_{1,j})\] \[\mathcal{P}_{k}=\bigoplus_{1}^{\binom{r-k}{k-1}}\mathcal{O}_{\mathbb{P}^{r}}(-m -k)\oplus\bigoplus_{j=1}^{\beta_{k}}\mathcal{O}_{\mathbb{P}^{r}}(-\beta_{k,j}) \oplus\bigoplus_{1}^{\beta_{k-1}}\mathcal{O}_{\mathbb{P}^{r}}(-m-\beta_{k-1,j}), \text{for }2\leq k\leq r-2\] \[\mathcal{P}_{r-1}=\mathcal{O}_{\mathbb{P}^{r}}(-m-r+1)\oplus \bigoplus_{1}^{\beta_{r-2}}\mathcal{O}_{\mathbb{P}^{r}}(-m-\beta_{r-2,j}).\] **Remark 2**.: _For any point \(z\in\Gamma\) the morphism \(\Psi\) maps the fiber \(z\mathfrak{f}\) to a line from the ruling of \(F\) passing through the point \(\Psi(\sigma_{D}(z))\) on \(Y\). Let \(l_{q}\subset F\) be the line corresponding to \(q\). As it is pointed out in [10, section 1], the curve \(X_{m}\), together with \((e-1)\) lines \(L_{1},\ldots,L_{e-1}\) from the ruling of \(F\), is cut on \(F\) by a degree \((m+1)\) hypersurface \(G_{m+1}\subset\mathbb{P}^{r}\), where \(L_{1},\ldots,L_{e-1}\) are the residual lines on \(F\) cut by a hyperplane that contains the line \(l_{q}\). We remark also that the smoothness of a general \(C_{m}\in|m\Gamma_{0}+(mE+q)\mathfrak{f}|\) follows by [11] and [10]._ Note that since the curve \(C_{m}\) is in linear equivalence class of \(m\Gamma_{0}+(mE+q)\mathfrak{f}\), the adjunction formula gives about its genus \(g\) \[2g-2 =(-2\Gamma_{0}+(K_{\Gamma}-E)\mathfrak{f}+m\Gamma_{0}+(mE+q) \mathfrak{f})\cdot(m\Gamma_{0}+(mE+q)\mathfrak{f})\] \[=m(m-1)e+2m\gamma-2\,,\] hence \(g=\binom{m}{2}e+m\gamma\). Likewise, \((\Gamma_{0}+E\mathfrak{f})\cdot C_{m}=me+1\), so \(X_{m}\) is a smooth curve of degree \(d=me+1\) and same genus \(g\). We remark also that if \(q_{0}\) is the point in which the fiber \(q\mathfrak{f}\) meets \(\Gamma_{0}\), then it follows by [12, Proposition 36] that the linear series \(|m\Gamma_{0}+(mE+q)\mathfrak{f}|\) has a unique base point at \(q_{0}\). This allows us to make the following observation about \(X_{m}\). **Proposition 3**.: _Assume the conditions and notations of (MS). Let \(l_{q}\) be as in Remark 2 and \(X_{m}\) be as above. The line \(l_{q}\) is tangent to \(X_{m}\) at the point \(P\) as their intersection multiplicity at \(P\) is exactly two._ Proof.: The morphism \(\Psi:S\to F\) is in fact the resolution of singularity of \(F\) at the vertex \(P\). Since \(C_{m}\) is the proper transform of \(X_{m}\) and \(q\mathfrak{f}\) is the proper transform of \(l_{q}\), they wouldn't meet on \(\Gamma_{0}=\Psi^{-1}(P)\) unless the intersection of \(X_{m}\) and \(l_{q}\) at \(P\) is of multiplicity at least two. On the other hand \(C_{m}\in|m\Gamma_{0}+(mE+q)\mathfrak{f}|\) is general, hence \(q\mathfrak{f}\) meets \(C_{m}\) in additional \(m-1\) points, all of which are distinct and away from \(\Gamma_{0}\). 
Since \(\Psi\) is an isomorphism away from \(\Gamma_{0}\), the images of those \(m-1\) points will be distinct points on \(l_{q}\) and away from \(P\). The inner projection with center \(P\) to the hyperplane containing \(Y\) yields an \(m:1\) covering \(X_{m}\to Y\), therefore the intersection multiplicity of \(X_{m}\) and \(l_{q}\) at \(P\) can only be two. It is convenient to have an explicit notation for the morphism mentioned in the proof of the lemma, so denote by \(\varphi:X_{m}\to Y\) the \(m:1\) covering morphism induced by the inner projection with center \(P\) to the hyperplane containing the curve \(Y\). We remark that the image \(\varphi(P)\) of the point \(P\) is by definition the point in which the tangent line \(l_{q}\) to \(X_{m}\) at \(P\) meets the hyperplane, which is the point \(\Psi(\sigma_{D}(q))=:Q\). Consider also the morphism \(\phi:C_{m}\to D\) defined as the composition \(\phi:=\sigma_{D}\circ(f_{|_{C_{m}}})\). Note that the morphism \(\phi\) coincides with the composition \((\Psi^{-1})_{|_{Y}}\circ\varphi\circ(\Psi_{|_{C_{m}}})\). Next we will derive a few facts involving the ramification divisor of \(\varphi\) but before that we summarize, for convenience of the reader, the _additional notations_. We will refer to them as (AN). (AN) \[\begin{array}{ll}C_{m}&\text{is a general curve in the linear series }|m\Gamma_{0}+(mE+q)\mathfrak{f}|,\\ q_{0}&\text{is the unique base point of }|m\Gamma_{0}+(mE+q)\mathfrak{f}|;\text{note that }q_{0}\in\Gamma_{0},\\ X_{m}&\text{is the image }\Psi(C_{m})\subset F\subset\mathbb{P}^{r}\text{ of }C_{m},\text{ which is smooth according to Remark 2},\\ \varphi&\text{is the }m:1\text{ covering morphism }\varphi:X_{m}\to Y\text{ induced by the projection with}\\ &\text{center }P\text{ to the hyperplane in }\mathbb{P}^{r}\text{ containing }Y,\\ \phi&\text{is the }m:1\text{ covering morphism }\phi:C_{m}\to D\text{ defined as }\phi:=\sigma_{D}\circ(f_{|_{C_{m}}}),\\ Q&\text{is the point on }Y\text{ defined as the image }\Psi(\sigma_{D}(q))\text{ of }q\in\Gamma.\end{array}\] **Proposition 4**.: _Assume the conditions and notations_ (MS) _and_ (AN)_. Denote by \(R_{\varphi}\) the ramification divisor of the morphism \(\varphi\). Then_ 1. \(R_{\varphi}\) _is linearly equivalent to the divisor cut on_ \(X_{m}\) _by a hypersurface of degree_ \((m-1)\) _together with the_ \((m-1)\) _points, besides_ \(P\)_, in which the line_ \(l_{q}\) _meets_ \(X_{m}\) _;_ 2. \(\deg R_{\varphi}=(m-1)(me+2)\) _;_ 3. _for the branch divisor_ \(\varphi(R_{\varphi})\) _of_ \(\varphi\) _we have_ \(\mathcal{O}_{Y}(\varphi(R_{\varphi}))\cong\mathcal{O}_{Y}(m(m-1))\otimes \mathcal{O}_{Y}(2(m-1)Q)\)_;_ 4. \(\varphi^{*}\mathcal{O}_{Y}(1)\cong\mathcal{O}_{X_{m}}(1)\otimes\mathcal{O}_{X _{m}}(-P)\) _._ Proof.: Since \(C_{m}\) and \(X_{m}\) are isomorphic, we can transform some of the claims about \(R_{\varphi}\) into claims about the ramification divisor \(R_{\phi}\) of the morphism \(\phi\), which are easier to prove. 1. For \(R_{\phi}\) we have \[\begin{split} R_{\phi}&\sim K_{C_{m}}-\phi^{*}K_{D} \\ &\sim(-2\Gamma_{0}+(K_{\Gamma}-E)\mathfrak{f}+m\Gamma_{0}+(mE+q) \mathfrak{f})_{|_{C_{m}}}-\phi^{*}K_{D}\\ &\sim((m-2)\Gamma_{0}+((m-1)E+q)\mathfrak{f})_{|_{C_{m}}}+K_{ \Gamma}\mathfrak{f}_{|_{C_{m}}}-\phi^{*}K_{D}\,.\end{split}\] The divisor \(K_{D}\) is the restriction of \(K_{S}+D\sim-2\Gamma_{0}+(K_{\Gamma}-E)\mathfrak{f}+\Gamma_{0}+E\mathfrak{f} \sim-\Gamma_{0}+K_{\Gamma}\mathfrak{f}\) on the curve \(D\). 
However, \(D\) doesn't meet \(\Gamma_{0}\), therefore \(\phi^{*}K_{D}\sim\phi^{*}((-\Gamma_{0}+K_{\Gamma}\mathfrak{f})_{|_{D}})=\phi^ {*}(K_{\Gamma}\mathfrak{f}_{|_{D}})\sim K_{\Gamma}\mathfrak{f}_{|_{C_{m}}}\). Therefore \[R_{\phi}\sim((m-2)\Gamma_{0}+((m-1)E+q)\mathfrak{f})_{|_{C_{m}}}\,.\] By the commutativity of the diagram (3) we have that the restriction of \(\Psi\) on \(C_{m}\) takes a divisor that is linearly equivalent to the ramification divisor of \(\phi\) into a divisor that is linearly equivalent to the ramification divisor of \(\varphi\). Consider \[R_{\phi}\sim((m-2)\Gamma_{0}+((m-1)E+q)\mathfrak{f})_{|_{C_{m}}}\sim((m-1)( \Gamma_{0}+E\mathfrak{f})+(q\mathfrak{f}-\Gamma_{0}))_{|_{C_{m}}}\,. \tag{4}\] Since \(\Gamma_{0}\) and \(C_{m}\) meet exactly at the point \(q_{0}\) in which the fiber \(q\mathfrak{f}\) meets \(\Gamma_{0}\), it follows that \((q\mathfrak{f}-\Gamma_{0}))_{|_{C_{m}}}\) is an effective divisor on \(C_{m}\) that consists of \(m-1\) points, say \(q_{1},\ldots,q_{m-1}\) on \(C_{m}\), in which \(q\mathfrak{f}\) intersects \(C_{m}\) besides \(q_{0}\). Hence, \[R_{\phi}\sim((m-1)(\Gamma_{0}+E\mathfrak{f}))_{|_{C_{m}}}+q_{1}+\cdots+q_{m-1}\,.\] The morphism \(\Psi:S\to\mathbb{P}^{r}\) is defined by the linear series \(|\Gamma_{0}+E\mathfrak{f}|\) on \(S\), so \(\Psi\) maps the restriction \(((m-1)(\Gamma_{0}+E\mathfrak{f}))_{|_{C_{m}}}\) to the divisor on \(X_{m}\) cut by a hypersurface of degree \(m-1\). Also, \(\Psi\) maps the fiber \(q\mathfrak{f}\) into the line \(l_{q}\). The images of the points \(q_{1},\ldots,q_{m-1}\) under \(\Psi\) will be the \(m-1\) points in which \(l_{q}\) meets \(X_{m}\) besides \(P\). Therefore, \(R_{\varphi}\) is linearly equivalent to the divisor cut on \(X_{m}\) by a hypersurface of degree \((m-1)\) together with the images of the points \(q_{1},\ldots,q_{m-1}\), which lie on \(l_{q}\) as claimed. 2. Since \(\deg X_{m}=(\Gamma_{0}+E\mathfrak{f})\cdot(m\Gamma_{0}+(mE+q)\mathfrak{f})= me+1\), it follows by (a) that \[\deg R_{\varphi}=(m-1)\deg X_{m}+(m-1)=(m-1)(me+2)\,.\] 3. To verify the last claim we show for the branch divisor \(\phi(R_{\phi})\) of \(\phi:C_{m}\to D\) that \[\phi(R_{\phi})\sim((m(m-1)E+2(m-1)q)\mathfrak{f})_{|_{D}}\,.\] Recall first that the map \(\phi:\operatorname{Div}(C_{m})\to\operatorname{Div}(D)\) is linear in the sense that \(\phi(\sum\limits_{j}n_{P_{j}}P_{j})=\sum\limits_{j}n_{P_{j}}\phi(P_{j})\), where \(P_{j}\in C_{m}\) and \(n_{j}\in\mathbb{Z}\). Note also that according to [10, Ex. IV.2.6, p. 306], the image of any divisor linearly equivalent to \(\sum\limits_{j}n_{P_{j}}P_{j}\) determines the linear equivalence class of \(\phi(\sum\limits_{j}n_{P_{j}}P_{j})\). Thus, as we claim just linear equivalence, the first equivalence in equation (4) implies that it is sufficient to verify that (1) \(\phi\left((m-2)(\Gamma_{0}+E\mathfrak{f})_{|_{C_{m}}}\right)\sim((m(m-2)E+(m- 2)q)\mathfrak{f})_{|_{D}}\), and (2) \(\phi\left((E+q)\mathfrak{f})_{|_{C_{m}}}\right)\sim((m(E+q))\mathfrak{f})_{|_{D}}\). The first claim follows from the fact that \(\Gamma_{0}\) and \(C_{m}\) intersect exactly at \(q_{0}\), \(\phi(q_{0})=q\mathfrak{f}_{|_{D}}\) and that \(\phi:C_{m}\to D\) is an \(m:1\) covering. The second claim follows by similar reasons. 
This implies about the branch divisor on \(D\) that \[\phi(R_{\phi}) \sim\phi((m-2)(\Gamma_{0}+E\mathfrak{f})_{|_{C_{m}}})+\phi\left( ((E+q)\mathfrak{f})_{|_{C_{m}}}\right)\] \[\sim((m(m-2)E+(m-2)q)\mathfrak{f})_{|_{D}}+((m(E+q))\mathfrak{f}) _{|_{D}}\] \[\sim((m(m-1)E+2(m-1)q)\mathfrak{f})_{|_{D}}\,.\] By the commutativity of diagram (3) we have that \[\varphi(R_{\varphi})\sim\Psi_{|_{D}}(\phi(R_{\phi}))\sim\Psi_{|_{D}}(\phi(((m(m -1)E+2(m-1)q)\mathfrak{f})_{|_{D}}))\,.\] Recall that \(\Gamma_{0}\) and \(D\) do not intersect and \(E\mathfrak{f}_{|_{D}}\equiv(\Gamma_{0}+E\mathfrak{f})_{|_{D}}\). Since the divisors from \(|\Gamma_{0}+E\mathfrak{f}|\) are mapped by \(\Psi\) into hyperplane sections of \(F\), it follows that the branch divisor \(\varphi(R_{\varphi})\) is linearly equivalent to a divisor on \(Y\) cut by a hypersurface of degree \(m(m-1)\) together with the with the divisor \(2(m-1)Q\), where \(Q\) is the point in which the line \(l_{q}\) meets \(Y\). Therefore, \(\mathcal{O}_{Y}(\varphi(R_{\varphi}))\cong\mathcal{O}_{Y}(m(m-1))\otimes \mathcal{O}_{Y}(2(m-1)Q)\) as it was claimed. * The claim and its proof are contained in the proof of [2, Proposition 2]. The proposition that follows will be used in the proof of the Main Theorem to identify the curves of given degree and genus that lie a cone in terms of the linear equivalence class of a specific divisor on the desingularization of the cone. **Proposition 5**.: _Suppose that \(F\subset\mathbb{P}^{r}\) is a cone over a smooth integral linearly normal curve \(Y\) of genus \(\gamma\) and degree \(e\geq 2\gamma+1\) in \(\mathbb{P}^{r-1}\). Let \(S\) be the ruled surface defined as the blow-up of \(F\) at its vertex, and let \(f:S\to Y\) be the natural surjective morphism with a section \(Y_{0}\) of minimal self-intersection. If \(X\) is a smooth integral curve of degree \(d=me+1\) and genus \(g=\binom{m}{2}e+m\gamma\) on \(F\), then its proper transform \(C\) on \(S\) is linearly equivalent to \(mY_{0}+(mE+q)\mathfrak{f}\), where \(E\) is an effective divisor of degree \(e\) on \(Y\) such that \(S\cong\mathbb{P}(\mathcal{O}_{Y}\oplus\mathcal{O}_{Y}(-E))\) and \(q\) is a point on \(Y\)._ Proof.: Since \(S\) is the blow-up of \(F\) at its vertex, it must be a decomposable ruled surface over \(Y\). Since \(Y_{0}\) is the section of minimal self-intersection of \(f:S\to Y\), we must have that \(\deg E=-Y_{0}^{2}=\deg Y=e\). The Picard group of \(S\) is generated by \(Y_{0}\) and the pullbacks via \(f^{*}\) of the divisors on \(Y\). Hence, \(C\sim aY_{0}+B\mathfrak{f}\) for a divisor \(B\) on \(Y\). For the degree of \(X\) we have \[me+1=\deg X=(Y_{0}+E\mathfrak{f})\cdot(aY_{0}+B\mathfrak{f})=-ae+ae+\deg B\,,\] so \(\deg B=me+1\). Applying the adjunction theorem for \(C\) we get \[2g-2 =(K_{C}+C)\cdot C\] \[=(-2Y_{0}+(K_{Y}-E)\mathfrak{f}+aY_{0}+B\mathfrak{f})\cdot(aY_{0 }+B\mathfrak{f})\] \[=((a-2)Y_{0}+(K_{Y}-E+B)\mathfrak{f})\cdot(aY_{0}+B\mathfrak{f})\] \[=a(a-2)(-e)+(me+1)(a-2)+a(2\gamma-2-e+me+1)\] \[=-ea^{2}+2ae+(me+1)a-2me-2+(2\gamma-1+(m-1)e)a\,.\] Since \(2g-2=m(m-1)e+2m\gamma-2\), we obtain \[ea^{2}-((2m+1)e+2\gamma)a+m(m+1)e+2m\gamma=0\,. \tag{5}\] Solving (5) for \(a\) we obtain solutions \(a=m\) and \(a=m+1+\frac{2\gamma}{e}\). Since \(e\geq 2\gamma+1\), the second number is not an integer, so \(a=m\) is the only solution. It remains to prove the claim about \(B\), that is, \(B\sim mE+q\) for some point \(q\in Y\). An argument similar to that in [1, Prop. 
V.2.6, p.371] shows that \(j_{*}\mathcal{O}_{Y_{0}}(Y_{0})\cong\mathcal{O}_{Y}(-E)\) where \(j\) is the isomorphism \(j:Y_{0}\to Y\). Namely, consider the exact sequence \[0\to\mathcal{O}_{S}\to\mathcal{O}_{S}(Y_{0})\to\mathcal{O}_{Y_{0}}\otimes \mathcal{O}_{S}(Y_{0})\to 0\,,\] and push it down to \(Y\). By Grauert's theorem we have \[0\to f_{*}\mathcal{O}_{S}\to f_{*}\mathcal{O}_{S}(Y_{0})\to j_{*}(\mathcal{O}_ {Y_{0}}(Y_{0}))\to 0\,.\] Since \(f_{*}\mathcal{O}_{S}\equiv\mathcal{O}_{Y}\) and \(f_{*}\mathcal{O}_{S}(Y_{0})\cong\mathcal{O}_{Y}\oplus\mathcal{O}_{Y}(-E)\), we deduce that \(j_{*}\mathcal{O}_{Y_{0}}(Y_{0})\cong\mathcal{O}_{Y}(-E)\). Further, \(C\) is a smooth curve on \(S\) and \(C\cdot Y_{0}=(mY_{0}+B\mathfrak{f})\cdot Y_{0}=1\), so \(C\) intersects \(Y_{0}\) in a single point, say \(z=C\cap Y_{0}\). Since \(C\sim mY_{0}+B\mathfrak{f}\), the restrictions \(C_{|_{Y_{0}}}\) and \((mY_{0}+B\mathfrak{f})_{|_{Y_{0}}}\) must be linearly equivalent too. Hence, \[z\sim(mY_{0}+B\mathfrak{f})_{|_{Y_{0}}}\,,\] or equivalently, \(j(z)\sim-mE+B\) on \(Y_{0}\). Taking \(q:=j(z)\), we obtain \(B\sim mE+q\). In the proof of the main theorem in section 3 we will need the exact form of \(\varphi_{*}\mathcal{O}_{X_{m}}\) and \(\varphi_{*}(\mathcal{O}_{X_{m}}(P))\) for \(m=3\). The statement giving the explicit expressions of those bundles develops an idea encountered in [10, Proposition 2.2]. Due to obvious reasons, we give a formulation and a proof only in the case \(m=3\), which is sufficient for our purposes. **Proposition 6**.: _Assume the conditions and notations_ (MS) _and_ (AN)_. Fix \(m=3\) and denote \(C_{3}=:C\) and \(X_{3}=:X\). Then_ 1. \(\varphi_{*}(\mathcal{O}_{X}(P))\cong\mathcal{O}_{Y}\oplus\mathcal{O}_{Y}(-1) \oplus(\mathcal{O}_{Y}(-2)\otimes\mathcal{O}_{Y}(-Q))\)_,_ 2. \(\varphi_{*}\mathcal{O}_{X}\cong\mathcal{O}_{Y}\oplus(\mathcal{O}_{Y}(-1) \otimes\mathcal{O}_{Y}(-Q))\oplus(\mathcal{O}_{Y}(-2)\otimes\mathcal{O}_{Y}( -Q))\)_._ Proof.: The equivalent statements about \(\phi:C\to D\) appear as 1. \(\phi_{*}(\mathcal{O}_{C}(\Gamma_{0}))\cong\mathcal{O}_{D}\oplus\mathcal{O}_{ D}(-E\mathfrak{f})\oplus\mathcal{O}_{D}(-(2E+q)\mathfrak{f})\), 2. \(\phi_{*}\mathcal{O}_{C}\cong\mathcal{O}_{D}\oplus\mathcal{O}_{D}(-(E+q) \mathfrak{f})\oplus\mathcal{O}_{D}(-(2E+q)\mathfrak{f})\). If we denote by \(\nu\) the morphism \(f_{|_{C}}:C\to\Gamma\), or equivalently, if \(\iota\) is the embedding \(\iota:C\hookrightarrow S\) and \(\nu\) is the composition \(f\circ\iota\), the two claims translate into 1. \(\nu_{*}(\mathcal{O}_{C}(q_{0}))\cong\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{ \Gamma}(-E)\oplus\mathcal{O}_{\Gamma}(-2E-q)\), 2. \(\nu_{*}\mathcal{O}_{C}\cong\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{\Gamma}(-E- q)\oplus\mathcal{O}_{\Gamma}(-2E-q)\). It is sufficient to prove claims (a\({}^{\prime\prime}\)) and (b\({}^{\prime\prime}\)), which we will do next. We remark that claim (b\({}^{\prime\prime}\)) has been proven by Miranda for varieties of arbitrary dimension, see [11, Proposition 8.1, p.1150]. Here we give a proof of it (for curves) as well, as it is easy to do in our context. 
Since \(C\in|3\Gamma_{0}+(3E+q)\mathfrak{f}|\), there is an exact sequence \[0\to\mathcal{O}_{S}(-2\Gamma_{0}-(3E+q)\mathfrak{f})\to\mathcal{O}_{S}(\Gamma _{0})\to\iota_{*}\mathcal{O}_{C}(\Gamma_{0})\equiv\iota_{*}\mathcal{O}_{C}(q_{ 0})\to 0\,.\] Pushing it down to \(\Gamma\) via \(f_{*}\), we get the exact sequence (6) For every point \(z\in\Gamma\) we have that \(z\mathfrak{f}\cong\mathbb{P}^{1}\) and \(\deg(-2\Gamma_{0}-(3E+q)\mathfrak{f})_{|_{z\mathfrak{f}}})=\deg(-2\Gamma_{0} \cdot\mathfrak{f})=-2\), hence \[h^{i}(z\mathfrak{f},\mathcal{O}_{S}(-2\Gamma_{0}-(3E+q)\mathfrak{f})_{|_{z \mathfrak{f}}})=h^{i}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(-2))=\begin{cases} 0&\text{ if }i=0\\ 1&\text{ if }i=1\,.\end{cases}\] By Grauert's theorem, see [10, Theorem III.12.9], it follows that \(f_{*}\mathcal{O}_{S}(-2\Gamma_{0}-(3E+q)\mathfrak{f})\) vanishes, while \(R^{1}f_{*}\mathcal{O}_{S}(-2\Gamma_{0}-(3E+q)\mathfrak{f})\) must be a locally free sheaf of rank one, that is, a line bundle on \(\Gamma\). From the definition of \(S\) we have \(f_{*}\mathcal{O}_{S}(\Gamma_{0})\cong\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{ \Gamma}(-E)\), and since \(h^{1}(z\mathfrak{f},\mathcal{O}_{S}(\Gamma_{0})_{|_{z\mathfrak{f}}})=h^{1}( \mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(1))=0\), the Grauert's theorem implies that (6) reduces to \[0\to\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{\Gamma}(-E)\to\nu_{*}\mathcal{O}_ {C}(q_{0})\to R^{1}f_{*}\mathcal{O}_{S}(-2\Gamma_{0}-(3E+q)\mathfrak{f})\to 0\,. \tag{7}\] Since \(\nu:C\to\Gamma\) is a triple covering morphism, the push-forward \(\nu_{*}\mathcal{O}_{C}\) must split as \[\nu_{*}\mathcal{O}_{C}=\mathcal{O}_{\Gamma}\oplus\mathcal{E}^{\vee}\] where \(\mathcal{E}\) is a vector bundle of rank two on \(\Gamma\) for which its dual bundle \(\mathcal{E}^{\vee}\) is the Tschirnhausen module of \(\nu\). Denote \(\beta:=\det(\nu_{*}\mathcal{O}_{C})=\det\mathcal{E}^{\vee}\). Using [10, Ex. IV.2.6(d), p.306], we obtain easily that \(\deg\beta=-3e-2\). Since \(\Gamma_{0}\) and \(C\) meet exactly at the point \(q_{0}\), which is mapped by \(\nu\) into \(q\) on \(\Gamma\), it follows by [10, Ex. IV.2.6(a), p.306] that \[\det(\nu_{*}\mathcal{O}_{C}(q_{0}))\cong\det(\nu_{*}\mathcal{O}_{C})\otimes \mathcal{O}_{\Gamma}(q)\cong\beta(q)\,.\] Therefore \[R^{1}f_{*}\mathcal{O}_{S}(-2\Gamma_{0}-(3E+q)\mathfrak{f}) \cong\det(\nu_{*}\mathcal{O}_{C}(q_{0}))\otimes(\det(\mathcal{O}_ {\Gamma}\oplus\mathcal{O}_{\Gamma}(-E))^{-1}\] \[\cong\beta(q)\otimes\mathcal{O}_{\Gamma}(E)\] \[\cong\beta(E+q)\,.\] Since \(\deg(\beta^{\vee}(-2E-q))=e+1>2\gamma-2\), we have \[\operatorname{Ext}^{1}(R^{1}f_{*}\mathcal{O}_{S}(-2\Gamma_{0}-(3E +q)\mathfrak{f}),\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{\Gamma}(-E)) =\operatorname{Ext}^{1}(\beta(E+q),\mathcal{O}_{\Gamma}\oplus \mathcal{O}_{\Gamma}(-E))\] \[=H^{1}(\Gamma,\beta^{\vee}(-E-q)\oplus\beta^{\vee}(-2E-q))\] \[=0\,.\] This implies that the exact sequence (7) splits, so we get \[\nu_{*}\mathcal{O}_{C}(q_{0})\cong\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{ \Gamma}(-E)\oplus\beta(E+q)\,. \tag{8}\] Since the Tschirnhausen module \(\mathcal{E}^{\vee}\) is determined uniquely by the covering morphism \(\nu:C\to\Gamma\) and since \(\iota:C\hookrightarrow S=\mathbb{P}(\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{ \Gamma}(-E))\) is an embedding, it follows by [11, Ex. 
IV.2.6(a), p.306] that \[\det(\nu_{*}\mathcal{O}_{C}(q_{0}))\cong\det(\nu_{*}\mathcal{O}_{C}(q_{0})) \otimes\mathcal{O}_{\Gamma}(q_{0})\cong\beta(q_{0})\,.\] Therefore \[R^{1}f_{*}\mathcal{O}_{S}(-2\Gamma_{0}-(3E+q)\mathfrak{f}) \cong\det(\nu_{*}\mathcal{O}_{C}(q_{0}))\otimes(\det(\mathcal{O}_{ \Gamma}\oplus\mathcal{O}_{\Gamma}(-E))^{-1}\] \[\cong\beta(q)\otimes\mathcal{O}_{\Gamma}(E)\] \[\cong\beta(E+q)\,.\] Since \(\deg(\beta^{\vee}(-2E-q))=e+1>2\gamma-2\), we have \[\operatorname{Ext}^{1}(R^{1}f_{*}\mathcal{O}_{S}(-2\Gamma_{0}-(3E+q) \mathfrak{f}),\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{\Gamma}(-E)) =\operatorname{Ext}^{1}(\beta(E+q),\mathcal{O}_{\Gamma}\oplus \mathcal{O}_{\Gamma}(-E))\] \[=H^{1}(\Gamma,\beta^{\vee}(-E-q)\oplus\beta^{\vee}(-2E-q))\] \[=0\,.\] This implies that the exact sequence (7) splits, so we get \[\nu_{*}\mathcal{O}_{C}(q_{0})\cong\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{ \Gamma}(-E)\oplus\beta(E+q)\,. \tag{8}\] Since the Tschirnhausen module \(\mathcal{E}^{\vee}\) is determined uniquely by the covering morphism \(\nu:C\to\Gamma\) and since \(\iota:C\hookrightarrow S=\mathbb{P}(\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{ \Gamma}(-E))\) is an embedding, it follows by [11, Ex. IV.2.6(a), p.306] that \[\det(\nu_{*}\mathcal{O}_{C}(q_{0}))\cong\det(\nu_{*}\mathcal{O}_{C}(q_{0})) \otimes\mathcal{O}_{\Gamma}(q_{0})\cong\beta(q_{0})\,.\] Therefore \[R^{1}f_{*}\mathcal{O}_{C}(-2\Gamma_{0}-(3E+q)\mathfrak{f}) \cong\det(\nu_{*}\mathcal{O}_{C}(q_{0}))\otimes(\det(\mathcal{O}_{ \Gamma}\oplus\mathcal{O}_{\Gamma}(-E))^{-1}\] \[\cong\beta(q)\otimes\mathcal{O}_{\Gamma}(E)\] \[\cong\beta(E+q)\,.\] Since \(\deg(\beta^{\vee}(-2E-q))=e+1>2\gamma-2\), we have \[\operatorname{Ext}^{1}(R^{1}f_{*}\mathcal{O}_{S}(-2\Gamma_{0}-(3E+q) \mathfrak{f}),\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{\Gamma}(-E)) =\operatorname{Ext}^{1}(\beta(E+q),\mathcal{O}_{\Gamma}\oplus \mathcal{O}_{\Gamma}(-E))\] \[=H^{1}(\Gamma,\beta^{\vee}(-E-q)\oplus\beta^{\vee}(-2E-q))\] \[=0\,.\] This implies that the exact sequence (7) splits, so we get \[\nu_{*}\mathcal{O}_{C}(q_{0})\cong\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{ \Gamma}(-E)\oplus\beta(E+q)\,. \tag{8}\] Since the Tschirnhausen module \(\mathcal{E}^{\vee}\) is determined uniquely by the covering morphism \(\nu:C\to\Gamma\) and since \(\iota:C\hookrightarrow S=\mathbb{P}(\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{ \Gamma}(-E))\) is an embedding, it follows by [11, Ex. IV.2.6(a), p.306] that \[\det(\nu_{*}\mathcal{O}_{C}(q_{0}))\cong\det(\nu_{*}\mathcal{O}_{C}(q_{0})) \otimes\mathcal{O}_{\Gamma}(q_{0})\cong\beta(q_{0})\,.\] Therefore \[R^{1}f_{*}\mathcal{O}_{C}(-2\Gamma_{0}-(3E+q)\mathfrak{f}) \cong\det(\nu_{*}\mathcal{O}_{C}(q_{0}))\otimes(\det(\mathcal{O}_{ \Gamma}\oplus\mathcal{O}_{\Gamma}(-E))^{-1}\] \[\cong\beta(q)\otimes\mathcal{O}_{\Gamma}(E)\] \[\cong\beta(E+q)\,.\] Since \(\deg(\beta^{\vee}(-2E-q))=e+1>2\gamma-2\), we have \[\operatorname{Ext}^{1}(R^{1}f_{*}\mathcal{O}_{C}(-2\Gamma_{0}-(3E+q)\mathfrak{f}), \mathcal{O}_{\Gamma}\oplus\mathcal{O}_{\Gamma}(-E)) =\operatorname{Ext}^{1}(\beta(E+q),\mathcal{O}_{\Gamma}\oplus \mathcal{O}_{\Gamma}(-E))\] \[=H^{1}(\Gamma,\beta^{\vee}(-E-q)\oplus\beta^{\vee}(-2E-q))\] \[=0\ Theorem 1.3, p.439] that \[\nu_{*}\mathcal{O}_{C}\cong\mathcal{O}_{\Gamma}\oplus((\mathcal{O}_{\Gamma} \oplus\mathcal{O}_{\Gamma}(-E))\otimes\mathcal{L})\] for some line bundle \(\mathcal{L}\) on \(\Gamma\). Using \(\deg(\det\nu_{*}\mathcal{O}_{C})=\deg\beta=-3e-2\), we get \(\deg\mathcal{L}=-e-1\). 
From \[0\to\nu_{*}\mathcal{O}_{C}\to\nu_{*}\mathcal{O}_{C}(q_{0})\to\mathcal{O}_{q} \to 0\,,\] we obtain \[0\to\mathcal{O}_{\Gamma}\oplus\mathcal{L}\oplus\mathcal{L}(-E)\to\mathcal{O}_ {\Gamma}\oplus\mathcal{O}_{\Gamma}(-E)\oplus\beta(E+q)\to\mathcal{O}_{q}\to 0\,,\] The map from the summand \(\mathcal{L}(-E)\) to the summand \(\beta(E+q)\) is nonzero and both line bundles are of degree \(-2e-1\). Therefore \(\mathcal{L}\cong\beta(E+q)\). On the other hand \(\beta=\det(\nu_{*}\mathcal{O}_{C})=\mathcal{L}^{2}(-E)\), which gives \(\mathcal{L}=\mathcal{O}_{\Gamma}(-E-q)\). The last implies \(\beta\cong\mathcal{O}_{\Gamma}(-3E-2q)\). This gives both statements (a) and (b) in the proposition. Finally, we recall one more result that will be used in the proof of the Main Theorem in section 3. **Proposition 7**.: ([1, Proposition 2]) _Let \(X\) be a non-degenerate smooth integral curve in \(\mathbb{P}^{r}\), where \(r\geq 3\). Let \(H\) be a hyperplane in \(\mathbb{P}^{r}\) and \(P\) be a point on \(X\). Suppose that the inner projection \(\varphi\,:\,X\to H\cong\mathbb{P}^{r-1}\) with center \(P\) maps \(X\) to a non-degenerate smooth integral curve \(Y\) in \(H\). Denote by \(R_{\varphi}\) the ramification divisor of \(\varphi\). Then_ \[0\to\mathcal{O}_{X}(R_{\varphi})\otimes\mathcal{O}_{X}(1)\otimes\mathcal{O}_ {X}(2P)\to N_{X/\mathbb{P}^{r}}\to\varphi^{*}N_{Y/\mathbb{P}^{r-1}}\otimes \mathcal{O}_{X}(P)\to 0\,,\] _where \(N_{X/\mathbb{P}^{r}}\) is the normal bundle of \(X\) in \(\mathbb{P}^{r}\) and \(N_{Y/\mathbb{P}^{r-1}}\) is the normal bundle of \(Y\) in \(H\cong\mathbb{P}^{r-1}\)._ ## 3. Proof of the main theorem Recall the basic numerical assumptions in the theorem: \(\gamma\geq 3\) and \(e\geq 4\gamma+5\). Throughout this section we also fix \[g:=3e+3\gamma\,,\quad d:=3e+1=g-3\gamma+1\quad\text{ and }\quad r:=e-\gamma+1= \frac{g}{3}-2\gamma+1\,.\] The technique used in the proof is derived from [1, 1] and [13]. The proof itself proceeds in three main steps: 1. We construct a family \(\mathcal{F}\) of curves satisfying the characterization (iii) in the main theorem, then we consider the closure \(\mathcal{H}\) of the subset of \(\mathcal{I}_{d,g,r}\) parametrizing the family \(\mathcal{F}\) and show that \[\dim\mathcal{H}=r^{2}+7e+4\,.\] 2. For a general curve \(X\) from the family \(\mathcal{F}\) we show that \[\dim T_{[X]}\mathcal{H}=h^{0}(X,N_{X/\mathbb{P}^{r}})=r^{2}+7e+5=\dim\mathcal{ H}+1\,.\] **Step III.** We show that \(\mathcal{H}\) forms an irreducible component of \(\mathcal{I}_{d,g,r}\). **Step I.** Construction of the family. Let \(\Gamma\in\mathcal{M}_{\gamma}\) be a general curve of genus \(\gamma\) and \(E\) be a general divisor of degree \(e\geq 4\gamma+5\) on \(\Gamma\). Let \(q\in\Gamma\). Consider the ruled surface \(S:=\mathbb{P}(\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{\Gamma}(-E))\) with natural projection \(f:S\to\Gamma\). Denote by \(\Gamma_{0}\) the section of minimal self-intersection on \(S\), that is, \(\Gamma_{0}^{2}=-e\). As it was mentioned in section 2, \(\operatorname{Pic}(S)\cong\mathbb{Z}[\Gamma_{0}]\oplus f^{*}(\operatorname{ Pic}(\Gamma))\). Just as there, for a divisor \(\Delta\in\operatorname{Div}(\Gamma)\) we denote by \(\Delta\mathfrak{f}\) the divisor \(f^{*}(\Delta)\) on \(S\). Consider the morphism \(\Psi:=\Psi_{|\Gamma_{0}+E\mathfrak{f}|}:S\to\mathbb{P}^{r}\) determined by the linear series \(\mathcal{O}_{S}(\Gamma_{0}+E\mathfrak{f})\) on \(S\). 
Define \(\mathcal{F}\) as the family of curves that are images of the divisors from the linear series \(|3\Gamma_{0}+(3E+q)\mathfrak{f}|\) on \(S\) under the morphism \(\Psi\), by varying \(\Gamma\) in \(\mathcal{M}_{\gamma}\), running \(E\) through the set of general effective divisors of degree \(e\) on \(\Gamma\) and \(q\in\Gamma\). Note that a general \(X\in\mathcal{F}\) satisfies property (iii) in the theorem by propositions 1, 3 and 4. For the dimension of \(\mathcal{F}\) we have \[\dim\mathcal{F}=\] \[+ 3\gamma-3\,:\,\text{number of parameters of curves $\Gamma\in\mathcal{M}_{\gamma}$}\] \[+ \gamma\,:\,\text{number of parameters of line bundles $\mathcal{O}_{\Gamma}(E)\in\operatorname{Pic}(\Gamma)$ of degree $e\geq 4\gamma+5$}\] necessary to fix the geometrically ruled surface \(\mathbb{P}(\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{\Gamma}(-E))\) \[+ (r+1)^{2}-1=\dim(\operatorname{Aut}(\mathbb{P}^{r}))\] \[+ 1\,:\,\text{number of parameters necessary to fix $q\in\Gamma$}\] \[- (e-\gamma+2)=\dim G_{F}\text{, where $G_{F}$ is the subgroup of $\operatorname{Aut}(\mathbb{P}^{r})$ fixing the scroll $F$, see \@@cite[cite]{[\@@bibref{}{Leff}{}{}]}}\] \[\@@bibref{}{Leff}{}{}{}\text{[CCFM09, Lemma 6.4, p. 148]}\] \[+ 6e-3\gamma+6=\dim|3\Gamma_{0}+(3E+q)\mathfrak{f}|\,:\,\text{number of parameters to choose a curve in the linear equivalence class of $3\Gamma_{0}+(3E+q)\mathfrak{f}$ on $S$.}\] Define \(\mathcal{H}\) as the closure in \(\mathcal{I}_{d,g,r}\) of the set parametrizing \(\mathcal{F}\). Accounting the above numbers we get \[\dim\mathcal{H}=\dim\mathcal{F}=r^{2}+7e+4\,,\] whence Step I is completed. **Step II.** Computation of the tangent space to \(\mathcal{H}\). Let \(X\in\mathcal{F}\) be a general curve from the family, that is, \(X\) is the image \(\Psi(C)\) of a general \(C\in|3\Gamma_{0}+(3E+q)\mathfrak{f}|\) on \(S\), the base curve \(\Gamma\in\mathcal{M}_{\gamma}\) is general, and \(E\in\operatorname{Div}^{e}(\Gamma)\) and \(q\in\Gamma\) are also general. Also, \(X\) lies on the cone \(F:=\Psi(S)\) over a curve \(Y\subset\mathbb{P}^{r-1}\) that is the image \(Y:=\Psi(D)\) of a general \(D\in|\Gamma_{0}+E\mathfrak{f}|\). Let \(l_{q}\) be the line from the ruling of \(F\) that is the image of \(q\mathfrak{f}\) and \(Q=l_{q}\cap Y\). Denote by \(\varphi:X\to Y\) the projection with center \(P\) of \(X\) to the hyperplane containing \(Y\). It is a \(3:1\) covering morphism. Recall that by Proposition 4 its ramification divisor \(R_{\varphi}\) is linearly equivalent to the divisor on \(X\) cut by a quadric hypersurface and the two points, say \(Q_{1}\) and \(Q_{2}\), besides \(P\), in which the line meets \(X\). Applying Proposition 7, we obtain the short exact sequence \[0\to\mathcal{O}_{X}(3)\otimes\mathcal{O}_{X}(Q_{1}+Q_{2}+2P)\to N_{X/\mathbb{P}^ {r}}\to\varphi^{*}N_{Y/\mathbb{P}^{r-1}}\otimes\mathcal{O}_{X}(P)\to 0\,, \tag{9}\] in which \(N_{X/\mathbb{P}^{r}}\) is the normal bundle of \(X\) in \(\mathbb{P}^{r}\) and \(N_{Y/\mathbb{P}^{r-1}}\) is the normal bundle of \(Y\) in \(\mathbb{P}^{r-1}\). Due to \(e\geq 2\gamma+1\), the Hilbert scheme \(\mathcal{I}_{e,\gamma,e-\gamma}\) is irreducible and generically smooth of the expected dimension \(\dim\mathcal{I}_{e,\gamma,e-\gamma}=\lambda_{e,\gamma,r-1}=er-(r-4)(\gamma-1)\). 
Since \(\Gamma\in\mathcal{M}_{\gamma}\) is general and \(Y\) is isomorphic to \(\Gamma\) and \(\deg Y=e\), it follows that \[h^{0}(Y,N_{Y/\mathbb{P}^{r-1}})=er-(r-4)(\gamma-1)\,.\] For the degree of the line bundle \(\mathcal{O}_{X}(3)\otimes\mathcal{O}_{X}(Q_{1}+Q_{2}+2P)\) in (9) we have \[\deg\left(\mathcal{O}_{X}(3)\otimes\mathcal{O}_{X}(Q_{1}+Q_{2}+2P)\right)=3 \deg X+4=3(3e+1)+4=9e+7\,.\] According to the assumptions in the theorem, \(g=3e+3\gamma\) and \(e\geq 4\gamma+5\), so for the degree of the line bundle \(\mathcal{O}_{X}(3)\otimes\mathcal{O}_{X}(Q_{1}+Q_{2}+2P)\) we obtain \(9e+7>2g-2\). Thus, \(\mathcal{O}_{X}(3)\otimes\mathcal{O}_{X}(Q_{1}+Q_{2}+2P)\) is nonspecial, hence \[h^{0}(X,N_{X/\mathbb{P}^{r}})=h^{0}(X,\mathcal{O}_{X}(3)\otimes\mathcal{O}_{X }(Q_{1}+Q_{2}+2P))+h^{0}(X,\varphi^{*}N_{Y/\mathbb{P}^{r-1}}\otimes\mathcal{O }_{X}(P))\,. \tag{10}\] By the Riemann-Roch theorem \[h^{0}(X,\mathcal{O}_{X}(3)\otimes\mathcal{O}_{X}(Q_{1}+Q_{2}+2P))=6e-3\gamma+ 8\,.\] To compute \(h^{0}(X,\varphi^{*}N_{Y/\mathbb{P}^{r-1}}\otimes\mathcal{O}_{X}(P))\) we use the projection formula, that is, \[h^{0}(X,\varphi^{*}N_{Y/\mathbb{P}^{r-1}}\otimes\mathcal{O}_{X }(P)) =h^{0}(Y,\varphi_{*}(\,\varphi^{*}N_{Y/\mathbb{P}^{r-1}}\otimes \mathcal{O}_{X}(P)\,))\] \[=h^{0}(Y,N_{Y/\mathbb{P}^{r-1}}\otimes\varphi_{*}\mathcal{O}_{X}( P))\,.\] By Proposition 6 we have \(\varphi_{*}(\mathcal{O}_{X}(P))\cong\mathcal{O}_{Y}\oplus\mathcal{O}_{Y}(-1) \oplus(\mathcal{O}_{Y}(-2)\otimes\mathcal{O}_{Y}(-Q))\), so it follows that \[h^{0}(Y,N_{Y/\mathbb{P}^{r-1}}\otimes\varphi_{*}\mathcal{O}_{X} (P))\\ =h^{0}(Y,N_{Y/\mathbb{P}^{r-1}})+h^{0}(Y,N_{Y/\mathbb{P}^{r-1}}(- 1))+h^{0}(Y,N_{Y/\mathbb{P}^{r-1}}(-2)\otimes\mathcal{O}_{Y}(-Q))\,.\] Since \(Y\cong\Gamma\) is general in \(\mathcal{M}_{\gamma}\), \(\gamma\geq 3\), and \(E\) is a general divisor of degree \(e\geq 4\gamma+5\) on \(\Gamma\), it follows by [13, Proposition 2.1 and Proposition 2.12] that \(h^{0}(Y,N_{Y/\mathbb{P}^{r-1}}(-1))=r\) and \(h^{0}(Y,N_{Y/\mathbb{P}^{r-1}}(-2))=0\). The last implies \(h^{0}(Y,N_{Y/\mathbb{P}^{r-1}}(-2)\otimes\mathcal{O}_{Y}(-Q))=0\). Using that \(r-1=e-\gamma\), we find \[h^{0}(X,\varphi^{*}N_{Y/\mathbb{P}^{r-1}}\otimes\mathcal{O}_{X} (P)) =h^{0}(Y,N_{Y/\mathbb{P}^{r-1}})+h^{0}(Y,N_{Y/\mathbb{P}^{r-1}}(-1))\] \[=er-(r-4)(\gamma-1)+r\] \[=r^{2}+r+4\gamma-4\,.\] The exact sequence (10) then gives \(h^{0}(X,N_{X/\mathbb{P}^{r}})=(6e-3\gamma+8)+(r^{2}+r+4\gamma-4)=r^{2}+r+\gamma+6e+ 4=r^{2}+(e-\gamma+1)+\gamma+6e+4=r^{2}+7e+5\). Therefore, \[\dim T_{[X]}\mathcal{H}=\dim\mathcal{H}+1=r^{2}+7e+5\,. \tag{11}\] This completes Step II. **Step III.** Showing that \(\mathcal{H}\) forms an irreducible component of \(\mathcal{I}_{d,g,r}\). By definition, \(\mathcal{H}\subset\mathcal{I}_{d,g,r}\) is the closure of the set parametrizing smooth integral curves of degree \(d\) and genus \(g\) on cones in \(\mathbb{P}^{r}\) over the curves parametrized by \(\mathcal{I}_{e,\gamma,r-1}\), as a general \([X]\in\mathcal{H}\) is in the linear equivalence class \(3\Gamma_{0}+(3E+q)\mathfrak{f}\) on the desingularization \(S\) of a cone \(F\subset\mathbb{P}^{r}\) over \(Y\subset\mathbb{P}^{r-1}\) for a general \([Y]\in\mathcal{I}_{e,\gamma,r-1}\). The set \(\mathcal{H}\) is clearly irreducible. To show that it is a component, we use that that every flat deformation of a curve from \(\mathcal{F}\) is a again a curve on a cone in \(\mathbb{P}^{r}\) over a curve from \(\mathcal{I}_{e,\gamma,r-1}\). 
**Lemma 8**.: _Let \(p_{\mathcal{X}}:\mathcal{X}\to T\) be a flat family of projective curves in \(\mathbb{P}^{r}\) for which there exists a closed point \(t_{0}\in T\) such that:_ 1. \(\mathcal{X}_{t_{0}}\) _is a smooth integral projectively normal curve of genus_ \(g=3e+3\gamma\) _and degree_ \(3e+1\)_;_ 2. \(\mathcal{X}_{t_{0}}\) _is contained in a cone_ \(F\) _over a curve_ \(Y\) _corresponding to a general point of_ \(\mathcal{I}_{e,\gamma,r-1}\)_._ _Then there is a neighborhood \(U\) of \(t_{0}\) in \(T\) such that, for all closed points \(t\in U\), \(\mathcal{X}_{t}\) is again a curve on a cone over a smooth integral projectively normal curve of genus \(\gamma\) and degree \(e\) in \(\mathbb{P}^{r-1}\)._ Assuming the validity of Lemma 8, the proof of Step III proceeds as follows. Suppose that \(\tilde{X}\) is a flat deformation of \(X\). Lemma 8 implies that \(\tilde{X}\) is contained in a cone \(\tilde{F}\subset\mathbb{P}^{r}\) over a curve \(\tilde{Y}\), where \([\tilde{Y}]\in\mathcal{I}_{e,\gamma,r-1}\) is general. Let \(\tilde{S}\) be the desingularization of \(\tilde{F}\) and \(\tilde{C}\) be the proper transform of \(\tilde{X}\) on \(\tilde{S}\). Proposition 5 implies that \(\tilde{C}\sim 3\tilde{Y}_{0}+(3\tilde{E}+\tilde{q})\tilde{\mathfrak{f}}\), where \(\tilde{\mathfrak{f}}:\tilde{S}\rightarrow\tilde{Y}\) is the corresponding surjective morphism, \(\tilde{Y}_{0}\) is the section of minimal self-intersection, \(\tilde{Y}_{0}^{2}=-e\), \(\tilde{E}\) is a divisor of \(\tilde{Y}\) of degree \(e\) such that \(\tilde{S}\cong\mathbb{P}(\mathcal{O}_{\tilde{Y}}\oplus\mathcal{O}_{\tilde{Y}} (-\tilde{E}))\) and \(\tilde{q}\) is a point on \(\tilde{Y}\). Also, \(\tilde{X}\) is the image of a curve in the linear series \(|3\tilde{Y}_{0}+(3\tilde{E}+\tilde{q})\tilde{\mathfrak{f}}|\) under the morphism associated to \(|\tilde{Y}_{0}+\tilde{E}\tilde{\mathfrak{f}}|\). Because of the definition of \(\mathcal{F}\), the above means that \(\tilde{X}\) is a curve from the same family. Therefore \(\mathcal{H}\) is a component of \(\mathcal{I}_{3e+1,g,e-\gamma+1}\). To complete Step III in the proof of the theorem it remains to prove the lemma. Proof of Lemma 8.: To a large extent our proof repeats the steps of the proofs of similar statements given in [11, Proposition 1.6, p.354-356] and [11, Proposition 4.1, p.176-178]. For this reason, we refer, whenever possible, to the statements formulated and proved there. The statement is local, so we can assume that \(T=\operatorname{Spec}(A)\) for a Noetherian ring \(A\). Thus, we have a flat family \[\mathcal{X}\subset\operatorname{Proj}A[x_{0},x_{1},\ldots,x_{r}]=:\mathbb{P}^{ r}_{A}\,.\] Since projective normality is an "open property" and \(\mathcal{X}_{t_{0}}\) is supposed to be projectively normal, we can assume further that the family \(\mathcal{X}\) consists of projectively normal curves. By [Har77, Ex. III.9.5, p.267] the family \(\mathcal{X}\) must be _very flat_. In particular, the number of generators in any degree \(n\) of the ideal \(I(\mathcal{X}_{t})\) of a curve \(\mathcal{X}_{t}\subset\mathbb{P}^{r}\) from the family is the same for all \(t\in T\). Consider the homogeneous ideal \(I(\mathcal{X})\) of \(\mathcal{X}\) in the ring \[R:=A[x_{0},x_{1},\ldots,x_{r}]\] and let \(I(\mathcal{X})_{2}\) be the vector space of its elements of degree two, that is, \[I(\mathcal{X})_{2}:=H^{0}(\mathbb{P}^{r}_{A},\mathcal{I}_{\mathcal{X}}(2))\,,\] where \(\mathcal{I}_{\mathcal{X}}\) is the ideal sheaf of \(\mathcal{X}\). 
Take \(J\subset R\) to be the ideal \[J:=\langle I(\mathcal{X})_{2}\rangle\] generated by the elements of degree two. Consider the closed subscheme \(\mathcal{W}\subset\mathbb{P}^{r}_{A}\) defined as \(\mathcal{W}:=\operatorname{Proj}(R/J)\subset\mathbb{P}^{r}_{A}\,.\) It is indeed a family \(p_{\mathcal{W}}:\mathcal{W}\to T\) parametrized by \(T=\operatorname{Spec}(A)\) and we have a commutative diagram The goal is to show that \(p_{\mathcal{W}}:\mathcal{W}\to T=\operatorname{Spec}(A)\) is a flat family. By assumption \(\mathcal{X}_{t_{0}}\) is a smooth curve of genus \(g=3e+3\gamma\) and degree \(3e+1\) contained in a cone \(F\) over a smooth integral projectively normal curve \(Y\) of genus \(\gamma\) and degree \(e\) in \(\mathbb{P}^{r-1}\). By Proposition 5 this means that the proper transform of \(\mathcal{X}_{t_{0}}\) on the desingularization \(S\) of \(F\) is in the linearly equivalence class of \(3Y_{0}+(3E+q)\mathfrak{f}\), where, just as before, \(f:S\to Y\) is the surjective morphism for the decomposable ruled surface \(S\), \(Y_{0}\) is the section of minimal self-intersection, \(E\) is a divisor of degree \(e\) on \(Y\) such that \(S\cong\mathbb{P}(\mathcal{O}_{Y}\oplus\mathcal{O}_{Y}(-E))\) and \(q\in Y\) is a point. Since \(Y\) is a general curve of genus \(\gamma\geq 3\) and degree \(e\geq 4\gamma+5\) in \(\mathbb{P}^{r-1}\), it follows by [6] that the first several terms of the minimal free resolution of its ideal sheaf \(\mathcal{I}_{Y}\) appear as \[\cdots\xrightarrow{}\bigoplus_{j=1}^{\beta_{3}}\mathcal{O}_{\mathbb{P}^{r-1 }}(-4)\xrightarrow{}\bigoplus_{j=1}^{\beta_{2}}\mathcal{O}_{\mathbb{P}^{r-1}} (-3)\xrightarrow{}\bigoplus_{j=1}^{\beta_{1}}\mathcal{O}_{\mathbb{P}^{r-1}} (-2)\xrightarrow{}\mathcal{I}_{Y}\xrightarrow{}0\] where \(\beta_{1},\beta_{2},\ldots\) are the _Betti numbers_. By [10, Proposition 2, p. 232] it follows that the first several terms of the minimal free resolution of the ideal sheaf \(\mathcal{I}_{\mathcal{X}_{t_{0}}}\) of \(\mathcal{X}_{t_{0}}\subset\mathbb{P}^{r}\) are (12) where * \(\mathcal{P}_{1}=\bigoplus_{1}^{r-1}\mathcal{O}_{\mathbb{P}^{r}}(-4)\oplus \bigoplus_{j=1}^{\beta_{1}}\mathcal{O}_{\mathbb{P}^{r}}(-2)\) * \(\mathcal{P}_{2}=\bigoplus\limits_{1}^{\binom{r-1}{2}}\mathcal{O}_{\mathbb{P}^{r}}(-5 )\oplus\bigoplus\limits_{j=1}^{\beta_{2}}\mathcal{O}_{\mathbb{P}^{r}}(-3)\oplus \bigoplus\limits_{1}^{\beta_{1}}\mathcal{O}_{\mathbb{P}^{r}}(-5)\) * \(\mathcal{P}_{3}=\bigoplus\limits_{1}^{\binom{r-1}{3}}\mathcal{O}_{\mathbb{P}^{ r}}(-6)\oplus\bigoplus\limits_{j=1}^{\beta_{3}}\mathcal{O}_{\mathbb{P}^{r}}(-4) \oplus\bigoplus\limits_{1}^{\beta_{2}}\mathcal{O}_{\mathbb{P}^{r}}(-6)\) To deduce the flatness of the family \(p_{\mathcal{W}}:\mathcal{W}\to T\) we make use of resolutions of the ideal \(I(\mathcal{X})\subset R\) of \(\mathcal{X}\), the ideal \(I(\mathcal{X}_{t_{0}})\) of \(\mathcal{X}_{t_{0}}\) in the localization \(R_{t_{0}}\) of \(R\) at \(t_{0}\), and also of the ideal \(J\) of \(\mathcal{W}\subset\mathbb{P}_{A}^{r}\). Remark that due to (12), the ideal \(I(\mathcal{X}_{t_{0}})\) has a presentation (13) \[P_{2}\xrightarrow{\ an exact sequence \[\begin{CD}\mathscr{Q}_{2}@>{\Delta}>{}>\mathscr{Q}_{1}@>{}>{}>J@>{}>{}>0\,,\end{CD} \tag{16}\] where \(\Delta\) is homogeneous map, such that tensoring (16) with \(k(t_{0})\) we get the first row of (14). 
This means that the corank of the map \(\Delta\) at each localization at \(k(t)=A/m_{t}\), where \(m_{t}\) is the maximal ideal corresponding to \(t\in T\), is the same for all \(t\), or equivalently, that \(\dim(J_{t})_{d}\) is the same for all \(t\in T\). This implies that the family \(p_{\mathcal{W}}:\mathcal{W}\to T\) is (very) flat. In particular, it is a family of surfaces in \(\mathbb{P}^{r}\), one of whose fibers, namely \(\mathcal{W}_{t_{0}}\), is a cone over a smooth integral projectively normal curve in \(\mathbb{P}^{r-1}\) of genus \(\gamma\geq 3\) and degree \(e\geq 4\gamma+5\). For the remaining part of the proof of the lemma we refer to [13, Proposition 4.1, p.176-178]. It is proven there that if \(p_{\mathcal{W}}:\mathcal{W}\to T\) is a flat family of surfaces in \(\mathbb{P}^{r}\), one of whose fibers is a cone like \(\mathcal{W}_{t_{0}}\) above, then the remaining fibers of the family are also cones over smooth curves of the same genus and degree in \(\mathbb{P}^{r-1}\). We remark that the proof uses a result of Pinkham, namely [16, Theorem 7.5, p.45] about cones over curves of genus \(\gamma\) and degree \(e\), in which it is required that \(e\geq 4\gamma+5\). Thus the lemma is proved. This completes Step III and thus the proof of the Main Theorem. **Remark 9**.: _The technique used in Step III of the proof cannot be applied to prove [13, Theorem B]. In that paper we considered a family of curves on cones such that each curve was a double cover of the base and also passed through the vertex of the cone containing it. Just as here,_ Proposition 1 _could be applied to obtain a resolution of the ideal of a curve from the family; however, the ideal is generated by polynomials of degree two and three, which is insufficient to deduce the existence of a presentation like (16) of the ideal of a similarly defined variety like \(\mathcal{W}\) here. That is, one couldn't conclude that \(\mathscr{M}_{2,1}=0\), like we were able to do here due to \(I(\mathcal{X}_{t})\) being generated by polynomials of degree two and four. In a sense, our present work grew out of our failure to apply the technique introduced by Ciliberto in [13] and used in [13] to the proof of [13, Theorem B, Step III], where we needed to use different arguments._ **Remark 10**.: _For a component \(\mathcal{D}\) of the Hilbert scheme \(\mathcal{I}_{d,g,r}\) the difference \(\sigma(\mathcal{D}):=\dim\mathcal{D}-\lambda_{d,g,r}\) is called the superabundance. It is not difficult to compute for our \(\mathcal{H}\subset\mathcal{I}_{3e+1,3e+3\gamma,e-\gamma+1}\) that \(\sigma(\mathcal{H})=(r-4)e+2(r-5)(e-r)-3\), and using the numerical assumptions in our Main Theorem, \(\sigma(\mathcal{H})\geq 224\)._
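For instance, at the smallest values admitted by the Main Theorem, namely \(\gamma=3\) and \(e=4\gamma+5=17\) (so that \(r=e-\gamma+1=15\)), the formula above evaluates to \[\sigma(\mathcal{H})=(r-4)e+2(r-5)(e-r)-3=11\cdot 17+2\cdot 10\cdot 2-3=224,\] and \(\sigma(\mathcal{H})\) only grows as \(e\) and \(\gamma\) increase within the admissible range, which recovers the stated bound \(\sigma(\mathcal{H})\geq 224\).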
2301.09149
Dependence of the kinetic energy absorption capacity of bistable mechanical metamaterials on impactor mass and velocity
Using an alternative mechanism to dissipation or scattering, bistable structures and mechanical metamaterials have shown promise for mitigating the detrimental effects of impact by reversibly locking energy into strained material. Herein, we extend prior works on impact absorption via bistable metamaterials to computationally explore the dependence of kinetic energy transmission on the velocity and mass of the impactor, with strain rates exceeding $10^2$ s$^{-1}$. We observe a large dependence on both impactor parameters, ranging from significantly better to worse performance than a comparative linear material. We then correlate the variability in performance to solitary wave formation in the system and give analytical estimates of idealized energy absorption capacity under dynamic loading. In addition, we find a significant dependence on damping accompanied by a qualitative difference in solitary wave propagation within the system. The complex dynamics revealed in this study offer potential future guidance for applying bistable metamaterials in applications including human and engineered system shock and impact protection devices.
Ryan Fancher, Ian Frankel, Kyle Chin, Maroun Abi Ghanem, Brianna MacNider, Logan S. Shannahan, James F. Berry, Muge Fermen-Coker, Andrew J. Boydston, Nicholas Boechler
2023-01-22T16:21:55Z
http://arxiv.org/abs/2301.09149v1
Dependence of the kinetic energy absorption capacity of bistable mechanical metamaterials on impactor mass and velocity ###### Abstract Using an alternative mechanism to dissipation or scattering, bistable structures and mechanical metamaterials have shown promise for mitigating the detrimental effects of impact by reversibly locking energy into strained material. Herein, we extend prior works on impact absorption via bistable metamaterials to computationally explore the dependence of kinetic energy transmission on the velocity and mass of the impactor, with strain rates exceeding \(10^{2}\) s\({}^{-1}\). We observe a large dependence on both impactor parameters, ranging from significantly better to worse performance than a comparative linear material. We then correlate the variability in performance to solitary wave formation in the system and give analytical estimates of idealized energy absorption capacity under dynamic loading. In addition, we find a significant dependence on damping accompanied by a qualitative difference in solitary wave propagation within the system. The complex dynamics revealed in this study offer potential future guidance for applying bistable metamaterials in applications including human and engineered system shock and impact protection devices. ## 1 Introduction The local magnification of mechanical forces as a result of a dynamic collision (impact) and their detrimental effects on natural and engineered systems have been studied extensively [1; 2; 3; 4]. A relatively new approach for the mitigation of damage induced by impact is the use of bistable structures, which in contrast to the more ubiquitous mechanisms of dissipation and scattering [5; 6; 7; 8], reduces the effect of impact by reversibly "locking" some of the energy imparted by a shock or impact into the form of strain energy [9; 10]. Both the performance and reversibility of bistable structures for impact mitigation are attractive, as the energy locking mechanism could be used in conjunction with other mechanisms [11], such as dissipation, and the structures can ostensibly be reset in a controllable fashion for reuse. In addition to studies of the impact absorption characteristics of single bistable structures [9; 10], more recently, the energy absorption properties of bistable structures configured into multi-unit-cell mechanical metamaterials have been explored [11; 12; 13; 14; 15; 16; 17; 18]. In such studies, the material response has been examined in relatively low rate regimes [12; 13; 14; 15; 16; 17; 18], where in some cases the response was suggested to be rate independent [12], while in other cases the impact response was not related to the material's wave dynamics [11]. This can be placed in contrast with other studies of nonlinear solitary, or "transition", wave propagation in similar materials, which show significant dependence on the system excitation [19; 20; 21; 22; 23; 24]. A notable work that brings these two features (impact absorption via bistability and nonlinear waves) together is that of Ref. [25], which described the effect that the shape of the bistable potential has on energy trapping and solitary wave emission; however, it did not study the dependence of absorption performance on impact conditions. In this work, we computationally study the dependence of kinetic energy transmission in a bistable mechanical metamaterial (shown in Fig. 
1) on the velocity and mass of the impactor, in wave dominated regimes (wavelengths less than the absorbing material's reference length) with maximum strain rates of approximately \(195\) s\({}^{-1}\). Kinetic energy (KE) transmission is used as a performance metric herein as it has been previously shown to be closely related to damaging effects, for instance, in the case of behind armor blunt trauma [26; 27]. The computational model of our mechanical metamaterial is a discrete element model (DEM) composed of a one-dimensional (1D) chain of masses connected by bistable nonlinear springs, with linear intersite damping, and a contact spring at the boundary that allows release of the impactor under tension during rebound. We find a large dependence of KE transmission on both impactor mass and velocity, ranging from significantly better to worse performance than a comparative linear material. We correlate said performance to solitary wave formation and give analytical estimates of idealized energy absorption capacity under dynamic loading. In addition, a significant effect is found from the inclusion of the intersite damping, accompanied by a qualitative difference in solitary wave propagation within the system. The complex dynamics revealed in this study offer potential future guidance for applications including improved packaging to prevent damage during shipment [12], personal protection equipment [7; 12; 28], and crash mitigation for vehicles [12; 16; 29]. ## 2 Description of metamaterial to be modeled The conceptual setup described herein is the impact of a relatively rigid "impactor" block onto an arbitrarily-sized "absorbing" material block. The dimensions of the absorbing block were chosen as \(2d\) (height) by \(d\) (both width and depth), with \(d=5\) cm. Our designed absorbing material consists of a bistable mechanical metamaterial composed of a periodic array of structured unit cells based upon the geometry from Ref. [12]. The unit cell of our bistable mechanical metamaterial is composed of two elastic beams surrounded by a lower-aspect ratio monolithic "frame". Figure 1(a,b) shows a 3D printed representation (with fewer unit cells and layers than modeled herein) of the bistable mechanical metamaterial, where the beams are composed of a rubber simulant material and the frame of an acrylic simulant. The thickness to length aspect ratio of the beam elements \(r=T/L\) (where \(T\) is the beam thickness and \(L\) is its length) and the angle of the beam \(\theta\) were chosen such that \(r=0.14\) and \(\theta=60\) degrees. Large deformation finite element method (FEM) simulations of beam deformation under compression are shown in Fig. 1(c), assuming a Neo-Hookean material model (Lamé parameters: \(\lambda=47.191\) GPa, \(\mu=25\) GPa), fixed boundaries on the bottom end (red), with a prescribed y-direction displacement on the top end (purple, fixed in the x-direction), and free boundaries otherwise. We model the beam separately from the frame, because we approximate the frame itself to undergo minimal deformation as it is much thicker (structurally stiffer) than the beams. Furthermore, we assume that the two beams within the unit cell deform symmetrically with respect to each other. As such, the mechanical response of a single beam can be extrapolated to determine the elasticity of the metamaterial. The beam aspect ratio was chosen to obtain moderately high energy locking without self-contact [12], but without too small of an energy barrier before unsnapping. 
The size of the frame was chosen to avoid self-contact while maintaining rigidity without excessive density per unit volume of the unit cell. Both the frame and geometry are natural candidates for future optimization studies. ## 3 Measurement of quasi-static bistable material response Figure 1(d) shows the experimentally measured mechanical response of a single 3D printed layer of bistable metamaterial (composed of the same material and geometry as in Fig. 1(a,b), but less depth in the direction going into the page), along with a 3rd order polynomial fit of the measurement.

Figure 1: Concept and modeling overview. (a) Photograph of a 3D printed model of the bistable mechanical metamaterial (with fewer unit cells than modeled herein). (b) Photograph of a 3D printed unit cell. (c) One beam in the unit cell design, showing the displacement of the beam near \(\varepsilon_{2}\) simulated using FEM, where the colorscale denotes the von Mises stress (arb.). (d) Non-dimensionalized effective stress and strain of the mechanical metamaterial (for \(r=0.14\) and \(\theta=60\) degrees), based upon experimental compression measurements (solid black line), and the 3rd order polynomial fit of the experimentally measured curve (solid blue line), which is used in the DEM model. The vertical dashed red and blue lines denote \(\varepsilon_{0}\) and \(\varepsilon_{2}\), respectively, and the horizontal dotted black line denotes zero stress. (e) Illustration of the DEM model.

The response of the layer was measured using a mechanical test frame in displacement control, where the rigid (acrylic simulant) top and bottom parts of the layer were attached to grips, allowing measurement of tension, as well as compression, following the snap-through event. Herein a "layer" is defined as a row of unit cells where the normal is in the direction of the impactor velocity vector and along the long axis of the absorbing block. In Fig. 1(d), \(\sigma\) and \(\varepsilon\) are the effective bistable mechanical metamaterial stress and strain, respectively, and \(E=8.42e5\) N/m\({}^{2}\) is the measured small strain elastic modulus of the rubber simulant beam material. The transition to negative stiffness can be seen to occur in the fit at \(\varepsilon_{0}=-0.13\) and the second stability point at \(\varepsilon_{2}=-0.46\). ## 4 Discrete element model Figure 1(e) shows a visualization of the DEM used to simulate the dynamics of the metamaterial undergoing impact. Each layer of unit cells in the absorbing material was described as a lumped mass layer of mass \(m\) connected by massless springs and dampers, with the impactor of mass \(M\) interacting with the absorber via a contact spring. A DEM was chosen as a reasonable model due to the following key assumptions: i) the model is designed to describe uniaxial loading of the impactor on the absorber block with minimal off axis loading effects; ii) the lattice has a near zero effective Poisson's ratio; and iii) the lattice is composed of stiffer and larger masses lumped within the material, separated by softer, lower mass elements. Regarding this third assumption, more precisely, the vibrational frequencies of the separate mass and spring components must be much higher than the modal frequency of the two elements combined, where the spring deforms as the mass moves like a rigid body, such that the higher frequencies can be reasonably ignored. 
The sample mass was divided into equal "mass layers", equal in number to the number of unit cells along the height of the material (\(N\)). A "contact spring" with nonlinear stiffness, based on the Hertzian contact model [30], was placed between the top unit cell and the impactor mass to describe the impactor hitting the top of the sample, allowing the impactor to freely bounce rather than stick to the top of the lattice after impact. The equation of motion for the impactor particle (_i.e._, particle \(n=N+1\)) is thus: \[M\ddot{y}_{N+1}=-C_{1}([y_{N}-y_{N+1}]_{+})^{3/2}, \tag{1}\] where the \([]_{+}\) denotes the spring's inability to support tension (if the value in brackets is negative it equals zero), \(y_{n}\) is the displacement of the \(n\)th layer, and \(C_{1}\) is a fitting parameter (set to \(-8.51e8\)). The minus sign makes the contact force on the impactor equal and opposite to that on the top layer in Eq. (3) below, so that the pair satisfies Newton's third law. As such, the contact stiffness (shown in Fig. 1(e)) is \(c(u)=-\frac{3}{2}C_{1}([-u]_{+})^{1/2}\), where \(u\) is the spring stretch (positive in tension). In addition to allowing the impactor to rebound, the use of a contact spring allowed for a better estimation of real impact conditions, including roughly describing the impactor coming into contact with the sample at a slight relative angle, or having asperities on the two surfaces. We then define the force-displacement relation of an individual layer: \[F_{L}(u)=\beta_{3}u^{3}+\beta_{2}u^{2}+\beta_{1}u, \tag{2}\] where \(F_{L}\) is positive in tension, and \(\beta_{1}=2.257e4\) N/m, \(\beta_{2}=1.187e7\) N/m\({}^{2}\), and \(\beta_{3}=1.524e9\) N/m\({}^{3}\) are coefficients from the fit of the mechanical response measured for the 3D printed bistable layer (shown in Fig. 1(d)). The stiffness is thus defined as \(K(u)=\partial F_{L}(u)/\partial u\) (shown in Fig. 1(e)). Equation 2 is used in all other elements of the DEM (corresponding to the metamaterial layers), where the equation for the top layer of the absorber (\(n=N\)) is: \[\begin{split} m\ddot{y}_{N}&=C_{1}([y_{N}-y_{N+1}]_{+})^{3/2}-F_{L}(y_{N}-y_{N-1})\\ &-\eta(\dot{y}_{N}-\dot{y}_{N-1}),\end{split} \tag{3}\] where \(\dot{y}_{n}\) is the velocity of the \(n\)th particle and \(\eta\) is the damping coefficient. The equations of motion for layers from \(n=2\) to \(n=N-1\) are given by: \[\begin{split} m\ddot{y}_{n}=& F_{L}(y_{n+1}-y_{n})-F_{L}(y_{n}-y_{n-1})\\ +&\eta(\dot{y}_{n+1}-\dot{y}_{n})-\eta(\dot{y}_{n}- \dot{y}_{n-1}).\end{split} \tag{4}\] The equation of motion for the \(n=1\) mass, next to the fixed boundary, is then: \[\begin{split} m\ddot{y}_{1}&=F_{L}(y_{2}-y_{1})-F_{L}(y_{1})\\ +&\eta(\dot{y}_{2}-\dot{y}_{1})-\eta\dot{y}_{1}. \end{split} \tag{5}\] The equations of motion were numerically integrated using the ODE45 integrator in MATLAB [31], given an initial velocity \(V\) applied to the impactor mass, to solve for particle displacements and velocities as a function of time. ## 5 Analytical estimate of nominal impact conditions The kinetic energy transmitted through the half-way-point of the absorbing block (with respect to the impactor velocity vector), or unit cell \(n=N/2\), was chosen as the performance metric (so as to avoid boundary effects), which was then compared against that of a linearly coupled (with the exception of the contact spring) absorbing material. We then endeavored to estimate the impactor parameters to minimize KE transmission through \(n=N/2\) of our chosen bistable mechanical metamaterial. 
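Before turning to those estimates, Eqs. (1)-(5) and the \(n=N/2\) KE metric can be pieced together in a minimal numerical sketch, given below in Python with SciPy rather than the MATLAB/ODE45 setup used in the paper. The coefficients \(\beta_{1,2,3}\), \(C_{1}\), and \(\eta\) are the values quoted above, while the layer mass, impactor mass, impact velocity, and time window are illustrative assumptions rather than values from this study.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 100                                  # number of mass layers
m = 1.0e-4                               # layer mass (kg), assumed
M = 1.5e-4                               # impactor mass (kg), assumed
V_imp = -5.0                             # initial impactor velocity (m/s), assumed
eta = 0.164                              # intersite damping (N s/m)
C1 = -8.51e8                             # contact-spring fitting parameter
b1, b2, b3 = 2.257e4, 1.187e7, 1.524e9   # layer force-law coefficients, Eq. (2)

def F_L(u):
    """Bistable layer force-displacement law, positive in tension (Eq. 2)."""
    return b3 * u**3 + b2 * u**2 + b1 * u

def rhs(t, z):
    # State layout: y[0..N-1] are layers n = 1..N, y[N] is the impactor (n = N+1).
    y, v = z[:N + 1], z[N + 1:]
    F = F_L(np.diff(y[:N]))              # forces in the N-1 interlayer springs
    D = eta * np.diff(v[:N])             # intersite damping forces
    Fc = C1 * max(y[N - 1] - y[N], 0.0) ** 1.5   # contact spring, Eqs. (1) and (3)
    a = np.empty(N + 1)
    a[0] = (F[0] - F_L(y[0]) + D[0] - eta * v[0]) / m    # Eq. (5)
    a[1:N - 1] = (F[1:] - F[:-1] + D[1:] - D[:-1]) / m   # Eq. (4)
    a[N - 1] = (Fc - F[-1] - D[-1]) / m                  # Eq. (3)
    a[N] = -Fc / M                                       # Eq. (1)
    return np.concatenate([v, a])

z0 = np.zeros(2 * (N + 1))
z0[-1] = V_imp                           # only the impactor moves initially
sol = solve_ivp(rhs, (0.0, 5.0e-3), z0, max_step=1.0e-6)

mid = N // 2 - 1                         # array index of layer n = N/2
ke_mid = 0.5 * m * sol.y[(N + 1) + mid] ** 2
print(f"max KE at n = N/2: {ke_mid.max():.3e} J")
```

Setting \(\eta=0\) recovers the conservative case studied first below, and sweeping assumed \((M,V)\) pairs through such a script yields KE maps analogous to those discussed in the following sections.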
The nominal impact velocity was estimated as: \[V_{0}=2c_{0}\varepsilon_{0}, \tag{6}\] where \(c_{0}\) is the long wavelength linear sound speed of the lattice [2]. For all simulations, a unit cell length of \(a=1\) mm was used. Equation 6 is meant to estimate the minimum velocity threshold such that the snap-through to the second stable state is induced. In an ideal scenario, the entire material between the \(n=N/2\) layer and the impactor would then snap to the second stable state, locking the entire KE from the impactor into stored potential energy (PE). The PE absorbed by the first half of the absorber is then: \[\mathrm{PE}=\frac{N}{2}\int_{0}^{a\varepsilon_{2}}F_{L}(u)du. \tag{7}\] The nominal impactor mass \(M_{0}\) was solved as the only remaining unknown when impactor KE (_i.e._\(MV^{2}/2\)) was set equal to PE. ## 6 Simulated dependence on impact conditions for a conservative material To assess the performance of the bistable mechanical metamaterial, the linear "control" material was simulated using a linear stiffness corresponding to the slope between zero force and the force at \(\varepsilon_{0}\) for the bistable material (equal to \(1.77\beta_{1}\)), with mass, contact spring, and damping parameters identical to the bistable sample. Simulations with varied impactor mass and velocity using an undamped linear material showed negligible difference in KE transmission at the \(n=N/2\) layer (approximately 1% maximum) between linear materials of different stiffnesses, when analyzed from impact time until the time the first wavefront hits the bottom of the sample. We thus define a "KE ratio" as the maximum KE of the \(N/2\) mass of the control lattice divided by the maximum KE of the \(N/2\) mass of the bistable lattice, where a value greater than one indicates the bistable material is outperforming the linear material in terms of minimizing KE transmission. We initially simulated the response of \(N=100\) layer conservative (_i.e._\(\eta=0\)) materials for a duration \(T_{s}=1.2\tau_{l}\), where \(\tau_{l}\) is the time for the linear wave to travel across the material once (based on \(c_{0}\)). Figure 2(a) shows the KE ratio for varied \(M\) and \(V\), wherein the bistable material outperforms the linear material by up to 21x, while performing worse than the linear material particularly for impactor conditions where \(M>M_{0}\) and \(V>V_{0}\). A reduction in performance occurs above the nominal impact conditions of \(M_{0}\) and \(V_{0}\), as they represent the point where the kinetic energy of the impactor could be equally distributed across the first half of the material and stably locked into strain energy. This is consistent with the approximate iso-energy diagonal threshold that can be observed in Fig. 2(a) that separates KE ratios above and below unity. This opens the question, however, as to why the bistable material would perform less well than the linear material, which has no energy locking capability. To address this question, we study the spatiotemporal response of the absorber materials at specific impact conditions. We define a normalized KE \(\kappa=N(\mathrm{KE}_{n}/\mathrm{KE}_{I})\), where \(\mathrm{KE}_{n}\) is the KE of the \(n\)th particle and \(\mathrm{KE}_{I}\) is the KE of the impactor, and strain \(\epsilon=u/a\). The simulation time \(t\) is expressed in terms of \(\tau=t\sqrt{\beta_{1}/m}\). Figure 2(b,c) shows \(\kappa\) and \(\epsilon\) at impactor conditions \(M_{0}\), \(V_{0}\), and Fig. 
2(d,e) shows the linear sample at the same impact conditions. We note that the impactor is not shown in the plots of \(\epsilon\), although it is included as the \(n=N+1\) layer in the \(\kappa\) diagrams. Figure 2(d,e) shows a minimally dispersive pulse that propagates through the linear sample. In contrast, the bistable material (Fig. 2(b,c)) shows nonlinear effects, demonstrating the richness of the bistable system. In Fig. 2(b,c), we see solitary transition wave emission from the initial impact, as shown by the straight, yellow lines that progress \(\sim 25\) particles into the sample from the point of impact. The identification of these as transition waves is noted by strains exceeding \(\varepsilon_{2}\). Looking at each of these emissions, we see that the first wave is the fastest, then the next two are progressively slower, which is consistent with the amplitude dependent wavespeed of many solitary waves [22]. Beyond \(\tau\sim 4\), between the \(80-100\)th particles, we see a complex oscillatory behavior, likely a combination of low amplitude vibrations and dynamic snapping and unsnapping of the bistable layers. Crucially, past \(\sim 25\) particles from the impactor, very little KE is transmitted further into the material. In order to investigate the changing performance shown in Fig. 2(a), we study three different impactor conditions. Figure 3(a) shows a location to the top left of Fig. 2(a), which also corresponds to the worst performance of the sweep. At these impactor conditions (\(M/M_{0}=10^{-1}\) and \(V/V_{0}=10^{0.75}\)), a high amplitude, short wavelength transition wave travels throughout the sample, at speeds faster than the waves in the linear material, resulting in a KE ratio of \(0.0848\)x. Defining the mass ratio \(M_{R}=M/m\), here \(M_{R}=1.5\), and the formation of a single solitary wave in this instance agrees with previous studies, for instance of granular chain impacts [4], in which single solitary wave formation is favored when the impactor and layer masses are closely matched. Within the context of solitary waves, nonlinear self-localization likely contributes to the poor performance seen in such cases. Figure 3(b) shows impactor conditions \(M/M_{0}=10^{-0.625}\) and \(V/V_{0}=10^{0.25}\), representative of good performance (KE ratio \(=4.44\)x), but not as good as the nominal conditions shown in Fig. 2(b). A side by side comparison reveals qualitative differences in the behavior; namely, longer lasting and more initial solitary waves are generated in Fig. 2(b), which transition to an oscillatory phase encompassing more particles than in Fig. 3(b). Additionally, close inspection at the \(n=N/2\) location in Fig. 3(b) shows wavefronts that travel through particle \(n=N/2\) with higher KE density than any of the crossing wavefronts in Fig. 2(b). We observe that these wavefronts occur later in time than the time it would take for the first solitary wave to reach \(n=N/2\) and appear to stem from dynamic unsnapping of the unit cells. Figure 3(c) shows an example for high \(M\) and \(V\) (\(M/M_{0}=10^{0.75}\) and \(V/V_{0}=10^{0.75}\)), where \(M_{R}=87\), and parallels could be drawn with known "shock" cases in granular chains [32]. A "train" of solitary waves is emitted after impact, and waves appear to continue to be emitted along the right side of Fig. 3(c). The KE density magnitude of subsequently emitted waves appears to decrease, and the highest KE density solitary wave, which is also the wave that triggers the maximum \(\kappa\), is the first wave emitted from the time of initial impact. 
We note the fastest transition wavespeeds are shown under these high-energy impact conditions. The KE ratio was \(0.246\)x, which, we note, does not occur at the first passage of the first transition wave but rather after the reflection of the first wave off the fixed bottom (after \(\tau\approx 5\)). Reflections and interference are representative of the complications of longer simulation duration in undamped simulations. ## 7 Effect of damping on system response The effect of the addition of a predetermined amount of damping to the simulations is assessed by implementing an intersite damping value of \(\eta=0.164\) Ns/m, or a normalized value of \(\eta V_{0}/\beta_{1}\varepsilon_{2}a=5.8e-3\), which represents the ratio of viscous to elastic effects. Otherwise, the simulation setup is identical to the prior section. Figure 4 shows the results of the damped simulations in the same format as shown previously in Fig. 2. As before, Fig. 4(b,c) shows the bistable material response at impactor conditions \(M_{0}\), \(V_{0}\), and Fig. 4(d,e) shows the linear material response at the same conditions.

Figure 2: Undamped simulations. (a) KE ratio as a function of impactor conditions. Non-dimensionalized KE, \(\kappa\), for bistable (b) and linear (d) materials at nominal impact conditions (\(M_{0}\), \(V_{0}\)). The colorbar is saturated at KE\({}_{I}/10\). Layer strain, \(\epsilon\), for bistable (c) and linear (e) materials at nominal impact conditions (\(M_{0}\), \(V_{0}\)).

Figure 3: Normalized KE, \(\kappa\), for the simulated undamped bistable material shown in Fig. 2(a) for the (a) worst performance (low mass and high velocity, \(M/M_{0}=10^{-1}\) and \(V/V_{0}=10^{0.75}\)), (b) good performance (close to nominal mass and velocity, \(M/M_{0}=10^{-0.625}\) and \(V/V_{0}=10^{0.25}\)), and (c) poor performance (high mass and high velocity, \(M/M_{0}=10^{0.75}\) and \(V/V_{0}=10^{0.75}\)).

The maximum KE ratio in the sweep in Fig. 4(a) was 34.58x, which occurred at impactor conditions \(M/M_{0}=10^{-0.25}\) and \(V/V_{0}=10^{0.25}\), noting a shift in the optimal impact conditions with the addition of damping. The maximum KE ratio from the sweep in Fig. 4(a) is higher than any values observed from the undamped sweep (Fig. 2(a)). In addition, Fig. 4(a) also shows more regions where the bistable material outperforms the linear material, as indicated by fewer regions of white color in the figure (noting the threshold of white to blue in the plots in Fig. 2(a) and Fig. 4(a) is a KE ratio of one). Figure 4(b) shows four solitary waves that propagate in succession after impact, then transition to the oscillatory region by approximately \(n=80\). In contrast to the undamped case, the wavespeed of each transition wave decreases after the point of impact, as could be expected by the damping-induced reduction of amplitude coupled with the amplitude dependent wavespeed of the solitary waves. Figure 4(c) shows regions with layers that have snapped to their secondary stable state and remained there, ostensibly aided by damping. Figure 4(d,e) shows behavior of the damped linear sample similar to that seen in the undamped linear sample in Fig. 2(d,e), with the exception of a lower maximum \(\kappa\) value at \(n=N/2\) when damping is included. Further comparison of \(\kappa\) for three chosen impactor conditions picked from within Fig. 4(a) is shown in Fig. 5. Figure 5(a) shows impactor conditions corresponding to the top left corner and the worst performance within Fig. 
4(a), representing impact mass \(M/M_{0}=10^{-1}\) and velocity \(V/V_{0}=10^{0.75}\), and a KE ratio of 0.2073x. These are also the same impactor conditions that resulted in the worst performance in the undamped sweep. Figure 5(b) shows the best performance, a KE ratio of 34.58x, at impactor conditions \(M/M_{0}=10^{-0.25}\) and \(V/V_{0}=10^{0.25}\). Qualitatively, some similarities are observed when Fig. 5(b) is compared with the undamped high performance cases (Fig. 2(b) and Fig. 3(b)). We see an initial solitary wavefront that transitions (in time) to an oscillatory region prior to crossing particle \(n=N/2\) (noting again the slowing wavespeed). Crucially, there appears to be an interaction between the solitary wave fronts and damping: when multiple solitary wave emissions occur, the waves may slow as they reduce in amplitude (due to damping) such that they approach, but do not pass, the \(n=N/2\) location. This may further be a synergistic effect in the sense that the damping-induced speed reduction gives the solitary waves even more time to decay in amplitude, and eventually vanish, before reaching the halfway point. Figure 5(c) shows impactor conditions \(M/M_{0}=10^{0.75}\) and \(V/V_{0}=10^{0.75}\), which is a region in the upper right corner of the sweep in Fig. 4(a), showing poor performance. A comparison of Fig. 5(c) to the undamped case (Fig. 3(c)) at the same impactor conditions shows similar behavior with the exception of the slowing transition waves in the damped case. ## 8 Conclusions The kinetic energy transmission performance in response to impact of a simulated bistable mechanical metamaterial was found to be highly dependent on the impactor conditions (mass and velocity). In the undamped simulations, the performance of an \(N=100\) sample ranged from worse (0.08x) to far superior (20.9x) in comparison to the linear control lattice. Similarly, in the damped sweep, the performance again ranged from worse (0.21x) to far superior (34.6x). The presence of damping was seen to have a beneficial, potentially synergistic, effect on the performance of the system, as indicated by both higher maximum KE ratio values and higher minimum KE ratio values for the same set of impactor conditions. One likely reason for this is the higher amount of "permanent" snapping of layers prior to the \(n=N/2\) point. This permanent snapping is aided by damping both through viscous resistance to unsnapping and through the reduction of traveling waves that may cause unsnapping at later times after subsequent reflections and constructive interference. Two significant additional implications of the findings regarding impact conditions and damping are the following. First, the bistable mechanism has the potential to yield significant performance benefits in terms of KE abatement if the material and impact conditions are well paired, but if not well paired, the bistable material can underperform a more traditional material, sometimes by a substantial margin. These high performance conditions were found, via simulation, to match well with predictions made using simple analytical estimates (namely, Eqs. 6 and 7). Second, while only a single viscosity was used in the damped examples, the performance improvement as a result of the addition of damping leads the authors to suspect that subsequent "tuning" of the damping may significantly affect the performance of these systems. 
We further suggest several potential avenues for future study in the context of both the fundamental understanding of energy transmission in bistable systems as well as their application in impact mitigation systems. The simulations considered herein idealized the response of a physically realized bistable mechanical metamaterial by not considering self contact (or full compaction) of the physical unit cells. The future inclusion of self contact will be critical for accurately describing the response of such systems, and can be expected to modify ranges of poor and superior impact performance. In addition, we suggest that computational optimization has a key future role to play, particularly given the strong nonlinearities involved herein. This may take the form of shape optimization of the unit cell aimed at maximizing energy absorption per unit mass density considering the metamaterial's dynamic response. One may also consider optimizing the unit cell to broaden the envelope of impactor conditions (_e.g._ Fig. 2 and Fig. 4) wherein the bistable material gives superior impact absorption response. Finally, we consider there may exist additional rich dynamics and unique capabilities for impact absorption with bistable media in higher dimensions, such as in the case of localized point impacts, instead of the 1D (plate-impact-like) scenarios considered herein. ## 9 Acknowledgments This project was supported by the US Army Research Laboratory and Army Research Office under grant no. W911NF-17-1-0595. I.F. acknowledges support from the Department of Defense (DoD) through the National Defense Science & Engineering Graduate (NDSEG) Fellowship Program. B.M. acknowledges support from the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA) Laboratory Graduate Residency Fellowship (LRGF) under Cooperative Agreement DE-NA0003960. The authors declare they have no competing financial interests.
2308.02271
Combinatorial curvature flows with surgery for inversive distance circle packings on surfaces
Inversive distance circle packings introduced by Bowers-Stephenson are natural generalizations of Thurston's circle packings on surfaces. To find piecewise Euclidean metrics on surfaces with prescribed combinatorial curvatures, we introduce the combinatorial Calabi flow, the fractional combinatorial Calabi flow and the combinatorial $p$-th Calabi flow for the Euclidean inversive distance circle packings. Due to the singularities possibly developed by these combinatorial curvature flows, the longtime existence and convergence of these combinatorial curvature flows have been a difficult problem for a long time. To handle the potential singularities along these combinatorial curvature flows, we do surgery along these flows by edge flipping under the weighted Delaunay condition. Using the discrete conformal theory recently established by Bobenko-Lutz for decorated piecewise Euclidean metrics on surfaces, we prove the longtime existence and global convergence for the solutions of these combinatorial curvature flows with surgery. This provides effective algorithms for finding piecewise Euclidean metrics on surfaces with prescribed combinatorial curvatures.
Xu Xu, Chao Zheng
2023-08-04T11:53:48Z
http://arxiv.org/abs/2308.02271v1
# Combinatorial curvature flows with surgery for inversive distance circle packings on surfaces ###### Abstract. Inversive distance circle packings introduced by Bowers-Stephenson are natural generalizations of Thurston's circle packings on surfaces. To find piecewise Euclidean metrics on surfaces with prescribed combinatorial curvatures, we introduce the combinatorial Calabi flow, the fractional combinatorial Calabi flow and the combinatorial \(p\)-th Calabi flow for the Euclidean inversive distance circle packings. Due to the singularities possibly developed by these combinatorial curvature flows, the longtime existence and convergence of these combinatorial curvature flows have been a difficult problem for a long time. To handle the potential singularities along these combinatorial curvature flows, we do surgery along these flows by edge flipping under the weighted Delaunay condition. Using the discrete conformal theory recently established by Bobenko-Lutz for decorated piecewise Euclidean metrics on surfaces, we prove the longtime existence and global convergence for the solutions of these combinatorial curvature flows with surgery. This provides effective algorithms for finding piecewise Euclidean metrics on surfaces with prescribed combinatorial curvatures. Key words and phrases:Combinatorial curvature flows; Surgery; Inversive distance circle packings; Piecewise Euclidean metrics MSC (2020): 52C25, 52C26 ## 1. Introduction ### PE metrics and PE surfaces Suppose \(S\) is a connected closed surface and \(V\) is a finite subset of \(S\) with \(|V|=N\), we call \((S,V)\) a marked surface. A piecewise Euclidean (PE for short) metric \(dist_{S}\) on the marked surface \((S,V)\) is a flat cone metric with the conic singularities contained in \(V\). The marked surface \((S,V)\) endowed with a PE metric \(dist_{S}\) is called a PE surface. Let \(\mathcal{T}=(V,E,F)\) be a triangulation of \((S,V)\), where \(V,E,F\) are the sets of vertices, edges and faces respectively. We use one index to denote a vertex (such as \(i\)), two indices to denote an edge (such as \(\{ij\}\)) and three indices to denote a face (such as \(\{ijk\}\)) in the triangulation \(\mathcal{T}\). The marked surface \((S,V)\) with a fixed triangulation \(\mathcal{T}\) is called a triangulated surface, denoted by \((S,\mathcal{T})\). The triangulation \(\mathcal{T}\) for a PE surface is a geodesic triangulation if the edges are geodesics in the PE metric. The PE metric \(dist_{S}\) on a geodesic triangulated PE surface \((S,\mathcal{T},dist_{S})\) defines a map \(l:E\to\mathbb{R}_{>0}\) such that \(l_{ij},l_{ik},l_{jk}\) satisfy the triangle inequalities for any triangle \(\{ijk\}\in F\). Conversely, given a function \(l:E\to\mathbb{R}_{>0}\) satisfying the triangle inequality, one can construct a PE metric on a triangulated surface \((S,\mathcal{T})\) by isometrically gluing Euclidean triangles along edges in pairs. Therefore, we use \(l:E\to\mathbb{R}_{>0}\) to denote a PE metric on a triangulated surface \((S,\mathcal{T})\). For a PE metric \(l:E\to\mathbb{R}_{>0}\) on a triangulated surface \((S,\mathcal{T})\), the combinatorial curvature \(K:V\to(-\infty,2\pi)\) is used to describe the conic singularities of PE metrics at the vertices. 
For a vertex \(i\in V\), the combinatorial curvature \(K_{i}\) is defined to be \[K_{i}=2\pi-\sum_{\{ijk\}\in F}\theta^{i}_{jk},\] where the summation is taken over all the triangles with \(i\) as a vertex and \(\theta^{i}_{jk}\) is the inner angle of the triangle \(\{ijk\}\) at the vertex \(i\). Note that the combinatorial curvature \(K\) is independent of the geodesic triangulations of a PE surface. Therefore, the combinatorial curvature \(K\) is an intrinsic geometric invariant of a PE surface. Furthermore, the combinatorial curvature \(K\) satisfies the following discrete Gauss-Bonnet formula ([6], Proposition 3.1) \[\sum_{i=1}^{N}K_{i}=2\pi\chi(S), \tag{1}\] where \(\chi(S)\) is the Euler number of the surface \(S\). Due to the rapid development of computer science, applied mathematics, engineering and medical imaging, PE surfaces with prescribed combinatorial curvatures have become more and more important in theory and applications. Please refer to [25, 42] and others for this. A basic problem in theory and applications is finding PE metrics on surfaces with prescribed combinatorial curvatures. An effective approach to this problem is studying it in discrete conformal geometry and finding the PE metrics with prescribed combinatorial curvatures by combinatorial curvature flows. There are mainly two types of discrete conformal structures on surfaces that have been extensively studied in the literature. One type is the vertex scalings introduced by Luo [29]. Luo's vertex scaling of a PE metric \(l\) on a triangulated surface \((S,\mathcal{T})\) is defined to be the PE metric \(\widetilde{l}\) on \((S,\mathcal{T})\) such that there exists a discrete conformal factor \(u\in\mathbb{R}^{V}\) with \[\widetilde{l}_{ij}=l_{ij}e^{\frac{u_{i}+u_{j}}{2}},\ \ \forall\{ij\}\in E. \tag{2}\] In the last two decades, there have been lots of important works on Luo's vertex scalings. The rigidity of Luo's vertex scalings was proved locally by Luo [29] and globally by Bobenko-Pinkall-Springborn [2]. One can also refer to [41] for an elementary proof of the rigidity of Luo's vertex scalings. The discrete uniformization theorems for Luo's vertex scalings were established by Gu-Luo-Sun-Wu [23], Gu-Guo-Luo-Sun-Wu [22], Springborn [33] and Izmestiev-Prosanov-Wu [27]. The convergence of Luo's vertex scalings was studied by Gu-Luo-Wu [24], Luo-Sun-Wu [31] and Wu-Zhu [37]. To find PE metrics with prescribed combinatorial curvatures on surfaces, the combinatorial curvature flows for Luo's vertex scalings were extensively studied. Luo [29] first introduced the combinatorial Yamabe flow for the vertex scalings on a triangulated surface and proved its local convergence. Ge-Jiang [12] further proved the global convergence of the combinatorial Yamabe flow for Luo's vertex scalings on a triangulated surface by the constant extension introduced by Bobenko-Pinkall-Springborn [2]. The combinatorial Yamabe flows with surgery for Luo's vertex scalings were introduced by Gu-Luo-Sun-Wu [23] and Gu-Guo-Luo-Sun-Wu [22], where the longtime existence and convergence were also proved. The finiteness of surgeries along the combinatorial Yamabe flow with surgery was proved by Wu [35]. Following Luo's work on combinatorial Yamabe flow [29], Ge [9] introduced the combinatorial Calabi flow for Luo's vertex scalings on triangulated surfaces and proved the short time existence. 
The longtime existence and convergence of the combinatorial Calabi flow with surgery for Luo's vertex scalings were proved by Zhu-Xu [44]. Feng-Lin-Zhang [8] introduced the combinatorial \(p\)-th Calabi flow for Luo's vertex scalings on surfaces, and proved the corresponding longtime existence and convergence of the combinatorial \(p\)-th Calabi flow by surgery. Recently, Wu-Xu [36] introduced the fractional combinatorial Calabi flow for Luo's vertex scalings on surfaces, and proved the corresponding longtime existence and convergence of the flow by surgery. This generalizes the results for the combinatorial Yamabe flow and the combinatorial Calabi flow with surgery in [22, 23, 44]. Another important type of discrete conformal structures on surfaces is the circle packings. Among different types of circle packings, Thurston's circle packings have been extensively studied in the literature, including the existence, rigidity, convergence and the combinatorial curvature flows. However, not all PE metrics are Thurston's circle packing metrics. The inversive distance circle packings introduced by Bowers-Stephenson [3] are natural generalizations of Thurston's circle packings. Compared to Luo's vertex scalings, the corresponding theory for the inversive distance circle packings is not well established. ### Inversive distance circle packings and decorated PE metrics The inversive distance circle packing metrics on surfaces are defined as follows. **Definition 1.1** ([3]).: Suppose \((S,\mathcal{T})\) is a triangulated surface with a weight \(I:E\to(-1,+\infty)\). A PE metric \(l:E\to\mathbb{R}_{>0}\) on \((S,\mathcal{T},I)\) is an inversive distance circle packing metric if there exists a function \(r:V\to\mathbb{R}_{>0}\) such that \[l_{ij}=\sqrt{r_{i}^{2}+r_{j}^{2}+2I_{ij}r_{i}r_{j}} \tag{3}\] for any edge \(\{ij\}\in E\). The map \(r:V\to\mathbb{R}_{>0}\) is referred to as an _inversive distance circle packing_ on \((S,\mathcal{T},I)\). The map \(r\) is also called a decoration on \((S,\mathcal{T},l)\) by Bobenko-Lutz [1]. The pair \((l,r)\) on a triangulated surface \((S,\mathcal{T})\) is called a decorated PE metric. The weight \(I_{ij}\) is the inversive distance of the two circles centered at \(i\) and \(j\) with radii \(r_{i}\) and \(r_{j}\) respectively. If \(I_{ij}\in(-1,0)\), then the two circles attached to the vertices \(i\) and \(j\) intersect with an obtuse angle. If \(I_{ij}\in[0,1]\), then the two circles intersect with a non-obtuse angle. Taking \(I_{ij}=\cos\Phi_{ij}\) with \(\Phi_{ij}\in[0,\frac{\pi}{2}]\), the inversive distance circle packings are reduced to Thurston's circle packings in [34]. If \(I_{ij}\in(1,+\infty)\), then the two circles are disjoint. Note that the inversive distance between two circles is invariant under Möbius transformations [7]. Two inversive distance circle packing metrics \(l\) and \(\widetilde{l}\) on \((S,\mathcal{T},I)\) and \((S,\mathcal{T},\widetilde{I})\) respectively are discrete conformally equivalent if \(I=\widetilde{I}\). In this case, we set \[\widetilde{r}_{i}=e^{u_{i}}r_{i} \tag{4}\] for \(i\in V\) and call \(u:V\to\mathbb{R}\) a discrete conformal factor. Then the equation (3) is equivalent to \[\widetilde{l}_{ij}^{2}=(e^{2u_{i}}-e^{u_{i}+u_{j}})r_{i}^{2}+(e^{2u_{j}}-e^{u _{i}+u_{j}})r_{j}^{2}+e^{u_{i}+u_{j}}l_{ij}^{2} \tag{5}\] for any edge \(\{ij\}\in E\). 
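As a small numerical aside (an illustration, not code from the paper), the relations (3)-(5) are easy to check directly: scaling the decoration by \(e^{u}\) as in (4) and recomputing edge lengths via (3) must agree with the conformal-change formula (5). The sketch below does this for arbitrary assumed radii, weight and factors.

```python
import numpy as np

def edge_length(r_i, r_j, I_ij):
    """Edge length induced by an inversive distance circle packing, Eq. (3)."""
    return np.sqrt(r_i**2 + r_j**2 + 2.0 * I_ij * r_i * r_j)

def conformal_length(l_ij, r_i, r_j, u_i, u_j):
    """Edge length after the discrete conformal change, Eq. (5)."""
    eij = np.exp(u_i + u_j)
    return np.sqrt((np.exp(2 * u_i) - eij) * r_i**2
                   + (np.exp(2 * u_j) - eij) * r_j**2
                   + eij * l_ij**2)

# Arbitrary sample data: two circles with inversive distance I_ij > 1.
r_i, r_j, I_ij = 1.0, 1.5, 2.0
u_i, u_j = 0.1, -0.2
l_old = edge_length(r_i, r_j, I_ij)
l_new = conformal_length(l_old, r_i, r_j, u_i, u_j)
# Eq. (5) must agree with Eq. (3) applied to the scaled radii of Eq. (4).
assert np.isclose(l_new, edge_length(np.exp(u_i) * r_i, np.exp(u_j) * r_j, I_ij))
print(f"l_ij: {l_old:.4f} -> {l_new:.4f}")
```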
Conversely, if two decorated PE metrics \((l,r)\) and \((\widetilde{l},\widetilde{r})\) on \((S,\mathcal{T})\) satisfy the equations (4) and (5) for any edge \(\{ij\}\in E\), then they are discrete conformally equivalent. Please refer to Bobenko-Lutz's recent work [1] for more information on this. **Remark 1.2**.: By comparing (2) with (5), one can see that they can be written in a unified form \[\widetilde{l}_{ij}^{2}=\alpha_{i}e^{2f_{i}}+\alpha_{j}e^{2f_{j}}+2\eta_{ij}e^{ f_{i}+f_{j}} \tag{6}\] with \(\alpha_{i}:V\to\mathbb{R}\) and \(\eta_{ij}:E\to\mathbb{R}\). The equation (6) characterizes the discrete conformal structures introduced by Glickenstein [19], Glickenstein-Thomas [21] and Zhang-Guo-Zeng-Luo-Yau-Gu [43]. If we set \(\alpha_{i}=0\) and \(2\eta_{ij}=l_{ij}^{2}\), then (6) is equivalent to (2). If we set \(\alpha_{i}=r_{i}^{2}>0\) and \(2\eta_{ij}=l_{ij}^{2}-r_{i}^{2}-r_{j}^{2}\), then (6) is equivalent to (5). Bowers-Stephenson [3] conjectured that the inversive distance circle packings on surfaces are rigid. This conjecture was proved to be true by Guo [26], Luo [30] and Xu [38, 39] for the inversive distance in \((-1,+\infty)\). For the inversive distance in \((1,+\infty)\), the convergence of inversive distance circle packings was studied by Chen-Luo-Xu-Zhang [5]. Recently, Bobenko-Lutz [1] introduced a new definition of discrete conformality for Euclidean inversive distance circle packings, i.e., decorated PE metrics, and proved the corresponding discrete uniformization theorem. However, there is not so much research on the combinatorial curvature flows for inversive distance circle packings on surfaces. ### Combinatorial curvature flows for inversive distance circle packings and the main results To find effective algorithms searching for polyhedral metrics with prescribed combinatorial curvatures on surfaces, Chow-Luo [6] introduced the combinatorial Ricci flow. Motivated by Chow-Luo's original work [6], Ge [9, 10] and Ge-Xu [16] introduced the combinatorial Calabi flow, Lin-Zhang [28] introduced the combinatorial \(p\)-th Calabi flow and Wu-Xu [36] introduced the fractional combinatorial Calabi flow on surfaces. These combinatorial curvature flows were initially introduced for Thurston's circle packings. The longtime existence and convergence of these combinatorial curvature flows for Thurston's circle packings were proved in [6, 9, 10, 11, 16, 28, 36]. However, not all PE metrics are Thurston's circle packing metrics. As natural generalizations of Thurston's circle packings, the inversive distance circle packings have more flexibility than Thurston's circle packings. On the other hand, due to the possible degenerations of triangles generated by inversive distance circle packings, the longtime existence and convergence of the solutions of the combinatorial curvature flows for inversive distance circle packings have been a difficult problem for a long time. The combinatorial Ricci flow for inversive distance circle packings on triangulated surfaces was first introduced by Zhang-Guo-Zeng-Luo-Yau-Gu [43]. The properties of the combinatorial Ricci flow for inversive distance circle packings on triangulated surfaces were studied in [13, 14, 15, 17] by the constant extension introduced by Luo [30]. Recently, Bobenko-Lutz [1] proved the longtime existence and convergence of the combinatorial Ricci flow with surgery for the Euclidean inversive distance circle packings on surfaces with inversive distance in \((1,+\infty)\). 
The efficiency, efficacy and robustness of this combinatorial Ricci flow with surgery in applications were demonstrated in [4]. In this paper, we introduce the combinatorial Calabi flow, the fractional combinatorial Calabi flow and the combinatorial \(p\)-th Calabi flow for the Euclidean inversive distance circle packings on surfaces with inversive distance in \((1,+\infty)\). Furthermore, we prove the longtime existence and convergence of the solutions of these combinatorial curvature flows by surgery. **Definition 1.3**.: Suppose \((S,\mathcal{T})\) is a triangulated surface with a weight \(I:E\to(1,+\infty)\). Let \(\overline{K}:V\to(-\infty,2\pi)\) be a given function defined on \(V\). The combinatorial Calabi flow for the inversive distance circle packings on \((S,\mathcal{T},I)\) is defined to be \[\begin{cases}\frac{du_{i}}{dt}=\Delta_{\mathcal{T}}(K-\overline{K})_{i},\\ u_{i}(0)=u_{0},\end{cases} \tag{7}\] where \(\Delta_{\mathcal{T}}\) is the discrete Laplace operator defined by \[\Delta_{\mathcal{T}}f_{i}=-\sum_{j=1}^{N}\frac{\partial K_{i}}{\partial u_{j }}f_{j} \tag{8}\] for any function \(f:V\to\mathbb{R}\) defined on the vertices. **Remark 1.4**.: The combinatorial Calabi flow (7) is a negative gradient flow of the combinatorial Calabi energy \[\mathcal{C}(u):=||K-\overline{K}||^{2}=\sum_{i=1}^{N}(K_{i}(u)-\overline{K}_{i})^ {2}.\] This fact was first observed by Ge [9, 10] for Thurston's Euclidean circle packings on surfaces. Set \[L=(L_{ij})_{N\times N}=\frac{\partial(K_{1},...,K_{N})}{\partial(u_{1},...,u_{N})}.\] The equation (8) implies \(\Delta_{\mathcal{T}}=-L\). By Lemma 2.1, the matrix \(L\) is symmetric and positive semi-definite on the admissible space \(\Omega^{\mathcal{T}}\) of discrete conformal factors \(u\) such that the equation (5) defines a PE metric. Then there exists an orthogonal matrix \(P\) such that \[L=P^{\mathrm{T}}\cdot\text{diag}\{\lambda_{1},...,\lambda_{N}\}\cdot P,\] where \(\lambda_{1},...,\lambda_{N}\) are non-negative eigenvalues of the matrix \(L\). For any \(s\in\mathbb{R}\), the \(2s\)-th order fractional discrete Laplace operator \(\Delta_{\mathcal{T}}^{s}\) is defined to be \[\Delta_{\mathcal{T}}^{s}=-L^{s}=-P^{\mathrm{T}}\cdot\text{diag}\{\lambda_{1} ^{s},...,\lambda_{N}^{s}\}\cdot P. \tag{9}\] Therefore, the \(2s\)-th order fractional discrete Laplace operator \(\Delta_{\mathcal{T}}^{s}\) is negative semi-definite on \(\Omega^{\mathcal{T}}\). Furthermore, the \(2s\)-th order fractional discrete Laplace operator \(\Delta_{\mathcal{T}}^{s}\) has the same kernel space as the discrete Laplace operator \(\Delta_{\mathcal{T}}\). In particular, if \(s=0\), then \(\Delta_{\mathcal{T}}^{s}\) is reduced to the minus identity operator; if \(s=1\), then \(\Delta_{\mathcal{T}}^{s}\) is reduced to the discrete Laplace operator \(\Delta_{\mathcal{T}}=-L=-(\frac{\partial K_{i}}{\partial u_{j}})_{N\times N}\). **Definition 1.5**.: Suppose \((S,\mathcal{T})\) is a triangulated surface with a weight \(I:E\rightarrow(1,+\infty)\). Let \(\overline{K}:V\rightarrow(-\infty,2\pi)\) be a given function defined on \(V\). For any \(s\in\mathbb{R}\), the \(2s\)-th order fractional combinatorial Calabi flow for the inversive distance circle packings on \((S,\mathcal{T},I)\) is defined to be \[\begin{cases}\frac{du_{i}}{dt}=\Delta_{\mathcal{T}}^{s}(K-\overline{K})_{i},\\ u_{i}(0)=u_{0},\end{cases} \tag{10}\] where \(\Delta_{\mathcal{T}}^{s}\) is the fractional discrete Laplace operator defined by (9). 
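Numerically, the operator (9) amounts to an eigendecomposition, as in the short sketch below (illustrative only, not code from the paper). The matrix used here is an arbitrary symmetric positive semi-definite stand-in for \(L\) with the constant vector in its kernel, and the convention \(0^{s}:=0\) is adopted so that the kernel is preserved for every \(s\).

```python
import numpy as np

def fractional_laplacian(L, s, tol=1e-12):
    """Delta_T^s = -L^s via eigendecomposition, Eq. (9).

    L is assumed symmetric and positive semi-definite, as in Lemma 2.1;
    eigenvalues below tol are treated as exactly zero, so ker(L) is
    preserved for every s (an assumed convention).
    """
    lam, P = np.linalg.eigh(L)           # L = P diag(lam) P^T
    lam_s = np.zeros_like(lam)
    pos = lam > tol
    lam_s[pos] = lam[pos] ** s
    return -(P * lam_s) @ P.T

# Arbitrary stand-in for L: a graph Laplacian, symmetric PSD with kernel 1.
L = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])
for s in (0.5, 1.0, 2.0):
    D = fractional_laplacian(L, s)
    assert np.allclose(D @ np.ones(3), 0.0)   # constant vector stays in the kernel
print(fractional_laplacian(L, 0.5))
```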
**Remark 1.6**.: If \(s=0\), the \(2s\)-th order fractional combinatorial Calabi flow (10) is reduced to the combinatorial Ricci flow introduced by Zhang-Guo-Zeng-Luo-Yau-Gu [43] for the inversive distance circle packings. If \(s=1\), the \(2s\)-th order fractional combinatorial Calabi flow (10) is reduced to the combinatorial Calabi flow (7). By Lemma 2.1, we have \(\sum_{j=1}^{N}\frac{\partial K_{i}}{\partial u_{j}}=0\). As a result, the equation (8) defining the discrete Laplace operator \(\Delta_{\mathcal{T}}\) can be written as \[\Delta_{\mathcal{T}}f_{i}=-\sum_{j=1}^{N}\frac{\partial K_{i}}{\partial u_{j}} f_{j}=\sum_{j\sim i}(-\frac{\partial K_{i}}{\partial u_{j}})(f_{j}-f_{i}).\] For any \(p>1\), we define the discrete \(p\)-th Laplace operator \(\Delta_{p,\mathcal{T}}\) for the inversive distance circle packings on \((S,\mathcal{T},I)\) by the following formula \[\Delta_{p,\mathcal{T}}f_{i}=\sum_{j\sim i}(-\frac{\partial K_{i}}{\partial u _{j}})|f_{j}-f_{i}|^{p-2}(f_{j}-f_{i}), \tag{11}\] where \(f:V\to\mathbb{R}\) is a function defined on the vertices. **Definition 1.7**.: Suppose \((S,\mathcal{T})\) is a triangulated surface with a weight \(I:E\to(1,+\infty)\). Let \(\overline{K}:V\to(-\infty,2\pi)\) be a given function defined on \(V\). For any \(p>1\), the combinatorial \(p\)-th Calabi flow for the inversive distance circle packings on \((S,\mathcal{T},I)\) is defined to be \[\begin{cases}\frac{du_{i}}{dt}=\Delta_{p,\mathcal{T}}(K-\overline{K})_{i},\\ u_{i}(0)=u_{0},\end{cases} \tag{12}\] where \(\Delta_{p,\mathcal{T}}\) is the discrete \(p\)-th Laplace operator defined by (11). **Remark 1.8**.: If \(p=2\), the discrete \(p\)-th Laplace operator \(\Delta_{p,\mathcal{T}}\) is reduced to the discrete Laplace operator \(\Delta_{\mathcal{T}}\) and the combinatorial \(p\)-th Calabi flow (12) is reduced to the combinatorial Calabi flow (7). As the combinatorial Calabi flow (7), the \(2s\)-th order fractional combinatorial Calabi flow (10) and the combinatorial \(p\)-th Calabi flow (12) are ODE systems with smooth coefficients, the solutions of these combinatorial curvature flows always exist locally around the initial time \(t=0\). Furthermore, we have the following results on the longtime existence and convergence for the solutions of the combinatorial Calabi flow (7) and the \(2s\)-th order fractional combinatorial Calabi flow (10). **Theorem 1.9**.: Suppose \((S,\mathcal{T})\) is a triangulated surface with a weight \(I:E\to(1,+\infty)\). Let \(\overline{K}:V\to(-\infty,2\pi)\) be a given function defined on \(V\) satisfying the discrete Gauss-Bonnet formula (1) and \(s\in\mathbb{R}\) be a constant. **(i):**: If the solution \(u(t)\) of the combinatorial Calabi flow (7) or the \(2s\)-th order fractional combinatorial Calabi flow (10) converges, then there exists a discrete conformal factor on \((S,\mathcal{T},I)\) with the combinatorial curvature \(\overline{K}\). **(ii):**: If there exists a discrete conformal factor \(\overline{u}\) with the combinatorial curvature \(\overline{K}\) on \((S,\mathcal{T},I)\), there exists a constant \(\delta>0\) such that if the initial value \(u(0)\) satisfies \(||u(0)-\overline{u}||<\delta\) and \(\sum_{i=1}^{N}u_{i}(0)=\sum_{i=1}^{N}\overline{u}_{i}\), then the solutions of the combinatorial Calabi flow (7) and the \(2s\)-th order fractional combinatorial Calabi flow (10) exist for all time and converge exponentially fast to \(\overline{u}\). 
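To see Definition 1.3 in action before discussing singularities, here is a self-contained toy run of the flow (7) (an illustrative sketch, not the authors' implementation) on the boundary of a tetrahedron: edge lengths come from scaling the decoration as in (4), curvatures from angle sums via the law of cosines, and \(L=\partial K/\partial u\) is approximated by central finite differences. The weights, radii, initial factor, step size and step count are all assumptions.

```python
import itertools
import numpy as np

V = 4
faces = list(itertools.combinations(range(V), 3))  # boundary of a tetrahedron
I = 2.0 * np.ones((V, V))                          # weights I_ij > 1, assumed
r = np.ones(V)                                     # decoration r_i, assumed

def lengths(u):
    # Scale the decoration as in (4) and apply (3), equivalent to (5).
    s = np.exp(u) * r
    return np.sqrt(np.add.outer(s**2, s**2) + 2.0 * I * np.outer(s, s))

def curvature(u):
    # K_i = 2*pi minus the sum of inner angles at i, via the law of cosines.
    l = lengths(u)
    K = 2.0 * np.pi * np.ones(V)
    for i, j, k in faces:
        for a, b, c in ((i, j, k), (j, i, k), (k, i, j)):
            cos_a = (l[a, b]**2 + l[a, c]**2 - l[b, c]**2) / (2 * l[a, b] * l[a, c])
            K[a] -= np.arccos(np.clip(cos_a, -1.0, 1.0))
    return K

def jacobian(u, h=1e-6):
    # Central finite-difference approximation of L = (dK_i/du_j), cf. Lemma 2.1.
    cols = []
    for j in range(V):
        e = np.zeros(V)
        e[j] = h
        cols.append((curvature(u + e) - curvature(u - e)) / (2 * h))
    return np.column_stack(cols)

Kbar = np.pi * np.ones(V)                  # target curvature: sum = 4*pi = 2*pi*chi
u = np.array([0.05, -0.02, 0.01, -0.04])   # initial factor with sum(u) = 0

dt = 0.05                                  # explicit Euler steps of the flow (7)
for _ in range(300):
    u = u - dt * jacobian(u) @ (curvature(u) - Kbar)
print("max |K - Kbar| after flow:", np.abs(curvature(u) - Kbar).max())
```

With these symmetric data the target \(\overline{K}=\pi\) satisfies (1) for \(\chi(S^{2})=2\), and the iterates keep \(\sum_{i}u_{i}\) fixed, in line with Lemma 2.2 below.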
For general initial inversive distance circle packing metrics, the combinatorial Calabi flow (7), the \(2s\)-th order fractional combinatorial Calabi flow (10) and the combinatorial \(p\)-th Calabi flow (12) may develop singularities, including triangles degenerating and discrete conformal factors tending to infinity along these combinatorial curvature flows. In particular, it is proved ([39], Remark 2.6) that \(\frac{\partial\theta_{jk}^{i}}{\partial u_{j}}\) tends to infinity if the nondegenerate triangle generated by inversive distance circle packings tends to be a degenerate triangle (the triangle inequality fails). To handle the potential singularities along these combinatorial curvature flows, we do surgery on the flows by edge flipping under the weighted Delaunay condition, the idea of which comes from the recent work of Bobenko-Lutz [1]. Given a decorated triangle \(\{ijk\}\in F\), there is a unique circle \(C_{ijk}\) simultaneously orthogonal to all the three circles attached to the vertices \(i,j,k\). The circle \(C_{ijk}\) is called the face-circle of the decorated triangle \(\{ijk\}\). Denote by \(\alpha_{ij}^{k}\) the interior intersection angle of the face-circle \(C_{ijk}\) and the edge \(\{ij\}\). Please refer to Figure 1. A _weighted Delaunay triangulation_ \(\mathcal{T}\) for a decorated PE metric \((l,r)\) on \((S,V)\) is a geodesic triangulation such that \(\alpha_{ij}^{k}+\alpha_{ij}^{l}\leq\pi\) for any adjacent triangles \(\{ijk\}\) and \(\{ijl\}\) sharing a common edge \(\{ij\}\in E\). This definition of weighted Delaunay triangulations comes from Bobenko-Lutz [1]. One can also refer to [18, 20, 40] for other equivalent definitions of weighted Delaunay triangulations. Along these combinatorial curvature flows (7), (10) and (12) on \((S,\mathcal{T})\), if \(\mathcal{T}\) is weighted Delaunay in \((l(t),r(t))\) for \(t\in[0,T]\) and not weighted Delaunay in \((l(t),r(t))\) for \(t\in(T,T+\epsilon),\ \epsilon>0\), there exists an edge \(\{ij\}\in E\) such that \(\alpha_{ij}^{k}+\alpha_{ij}^{l}\leq\pi\) for \(t\in[0,T]\) and \(\alpha_{ij}^{k}+\alpha_{ij}^{l}>\pi\) for \(t\in(T,T+\epsilon)\). Then we replace the triangulation \(\mathcal{T}\) by a new triangulation \(\mathcal{T}^{\prime}\) at time \(t=T\) via replacing two triangles \(\{ijk\}\) and \(\{ijl\}\) adjacent to \(\{ij\}\) by two new triangles \(\{ikl\}\) and \(\{jkl\}\). This procedure is called a **surgery by flipping** on the triangulation \(\mathcal{T}\), which is also an isometry of \((S,V)\) with the decorated PE metric \((l(T),r(T))\). After the surgery by flipping at the time \(t=T\), we run these combinatorial curvature flows on \((S,\mathcal{T}^{\prime})\) with the initial metric \((l(T),r(T))\). Whenever the weighted Delaunay condition is not satisfied along these combinatorial curvature flows, we do surgery on these combinatorial curvature flows by flipping. We have the following result on the longtime existence and convergence for the solutions of the combinatorial Calabi flow with surgery, the \(2s\)-th order fractional combinatorial Calabi flow with surgery and the combinatorial \(p\)-th Calabi flow with surgery for decorated PE metrics on \((S,V)\). **Theorem 1.10**.: Suppose \((S,V)\) is a marked surface with a decorated PE metric \((dist_{g},r)\). Let \(\overline{K}:V\to(-\infty,2\pi)\) be a given function defined on \(V\) satisfying the discrete Gauss-Bonnet formula (1). 
We have the following result on the longtime existence and convergence for the solutions of the combinatorial Calabi flow with surgery, the \(2s\)-th order fractional combinatorial Calabi flow with surgery and the combinatorial \(p\)-th Calabi flow with surgery for decorated PE metrics on \((S,V)\).

**Theorem 1.10**.: Suppose \((S,V)\) is a marked surface with a decorated PE metric \((dist_{g},r)\). Let \(\overline{K}:V\to(-\infty,2\pi)\) be a given function defined on \(V\) satisfying the discrete Gauss-Bonnet formula (1). **(i):**: The solution of the combinatorial Calabi flow with surgery exists for all time and converges exponentially fast to \(\overline{u}\) with the prescribed combinatorial curvature \(\overline{K}\) for any initial \(u(0)\in\mathbb{R}^{N}\) with \(\sum_{i=1}^{N}u_{i}(0)=\sum_{i=1}^{N}\overline{u}_{i}\); **(ii):**: For any \(s\in\mathbb{R}\), the solution of the \(2s\)-th order fractional combinatorial Calabi flow with surgery exists for all time and converges exponentially fast to \(\overline{u}\) with the prescribed combinatorial curvature \(\overline{K}\) for any initial \(u(0)\in\mathbb{R}^{N}\) with \(\sum_{i=1}^{N}u_{i}(0)=\sum_{i=1}^{N}\overline{u}_{i}\); **(iii):**: For any \(p>1\), the solution of the combinatorial \(p\)-th Calabi flow with surgery exists for all time and converges to \(\overline{u}\) with the prescribed combinatorial curvature \(\overline{K}\) for any initial \(u(0)\in\mathbb{R}^{N}\) with \(\sum_{i=1}^{N}u_{i}(0)=\sum_{i=1}^{N}\overline{u}_{i}\).

**Remark 1.11**.: If \(s=0\), the convergence for the \(2s\)-th order fractional combinatorial Calabi flow with surgery in Theorem 1.10 is reduced to the convergence of the combinatorial Ricci flow with surgery for decorated PE metrics obtained by Bobenko-Lutz [1]. Different from the combinatorial Calabi flow with surgery (\(p=2\)), we cannot get the exponential convergence for the solution of the combinatorial \(p\)-th Calabi flow with surgery for \(p\neq 2\). In applications, it is important that only finitely many surgeries are needed along these combinatorial curvature flows. This is closely related to the stability of computer algorithms. In the case of Luo's vertex scalings, Gu-Luo-Sun-Wu [23] proved that \(\mathbb{R}^{N}=\bigcup_{i=1}^{m}\mathcal{D}_{i}\) is an analytical decomposition of cells. Hence, this problem is reduced to whether the solutions of the combinatorial curvature flows with surgery cross the boundaries of these cells only finitely many times. Wu [35] solved this problem for the combinatorial Yamabe flow of Luo's vertex scalings and proved the following theorem.

**Theorem 1.12** ([35]).: Suppose \(\mathbb{R}^{N}=\bigcup_{i=1}^{m}\mathcal{D}_{i}\) is an analytic cell decomposition. Let \(f(x)\in C^{1}(\mathbb{R}^{N})\) be analytic on each cell \(\mathcal{D}_{i}\) and have a unique minimum point where \(f\) has positive Hessian. Then its gradient flow \(\gamma(t)\) satisfying \(\gamma^{\prime}(t)=-\nabla f(\gamma(t))\) intersects the cell faces \(\mathcal{D}_{i}\) finitely many times.

For decorated PE metrics on surfaces, Bobenko-Lutz [1] recently proved a similar analytic cell decomposition of \(\mathbb{R}^{N}\). We are therefore convinced that only finitely many surgeries are also needed along the combinatorial Calabi flow, the \(2s\)-th order fractional combinatorial Calabi flow and the combinatorial \(p\)-th Calabi flow for decorated PE metrics on \((S,V)\).

### Organization of the paper The paper is organized as follows. In Section 2, we study the combinatorial Calabi flow (7), the \(2s\)-th order fractional combinatorial Calabi flow (10) and the combinatorial \(p\)-th Calabi flow (12) on triangulated surfaces and prove Theorem 1.9. In Section 3, we allow the triangulation on a marked surface to be changed by edge flipping and prove Theorem 1.10.
### Acknowledgements The first author thanks Professor Feng Luo for his invitation to the workshop "Discrete and Computational Geometry, Shape Analysis, and Applications" taking place at Rutgers University, New Brunswick from May 19th to May 21st, 2023. The first author also thanks Carl O. R. Lutz for helpful communications during the workshop.

## 2. Combinatorial curvature flows for fixed triangulations

Suppose \((S,\mathcal{T},I)\) is a weighted triangulated surface with a decorated PE metric \((l,r)\). The admissible space \(\Omega^{\mathcal{T}}_{ijk}\) of the discrete conformal factors for a triangle \(\{ijk\}\in F\) in \((S,\mathcal{T})\) is defined to be the set of \((u_{i},u_{j},u_{k})\in\mathbb{R}^{3}\) such that the triangle with edge lengths \(\widetilde{l}_{ij},\widetilde{l}_{ik},\widetilde{l}_{jk}\) defined by (5) exists in the 2-dimensional Euclidean space \(\mathbb{E}^{2}\), i.e., \[\Omega^{\mathcal{T}}_{ijk}=\{(u_{i},u_{j},u_{k})\in\mathbb{R}^{3}|\widetilde{l}_{rs}+\widetilde{l}_{rt}>\widetilde{l}_{ts},\{r,s,t\}=\{i,j,k\}\}.\] The admissible space of discrete conformal factors on \((S,\mathcal{T},I)\), denoted by \(\Omega^{\mathcal{T}}\), is defined to be the set of vectors \(u\in\mathbb{R}^{N}\) such that \((u_{i},u_{j},u_{k})\in\Omega^{\mathcal{T}}_{ijk}\) for every triangle \(\{ijk\}\in F\).

**Lemma 2.1** ([26, 38, 39]).: Suppose \((S,\mathcal{T})\) is a triangulated surface with a weight \(I:E\to(1,+\infty)\). **(i):**: The admissible space \(\Omega^{\mathcal{T}}_{ijk}\) is a non-empty simply connected open set whose boundary is analytic. **(ii):**: The matrix \(\frac{\partial(\theta^{i}_{jk},\theta^{j}_{ik},\theta^{k}_{ij})}{\partial(u_{i},u_{j},u_{k})}\) is symmetric and negative semi-definite with rank 2 and kernel \(\{c(1,1,1)^{\mathrm{T}}|c\in\mathbb{R}\}\) on \(\Omega^{\mathcal{T}}_{ijk}\). As a result, the matrix \(L=\frac{\partial(K_{1},\ldots,K_{N})}{\partial(u_{1},\ldots,u_{N})}\) is symmetric and positive semi-definite with rank \(N-1\) and kernel \(\{c\mathbf{1}^{\mathrm{T}}\in\mathbb{R}^{N}|c\in\mathbb{R}\}\) on \(\Omega^{\mathcal{T}}\).

By Lemma 2.1, we have the following result.

**Lemma 2.2**.: Suppose \((S,\mathcal{T})\) is a triangulated surface with a weight \(I:E\to(1,+\infty)\). Let \(\overline{K}:V\to(-\infty,2\pi)\) be a given function defined on \(V\). If \(\overline{K}\) satisfies the discrete Gauss-Bonnet formula (1), then \(\sum_{i=1}^{N}u_{i}(t)\) is invariant along the combinatorial Calabi flow (7), the \(2s\)-th order fractional combinatorial Calabi flow (10) and the combinatorial \(p\)-th Calabi flow (12).

Proof.: By Lemma 2.1 and direct calculations, we have \[\frac{d(\sum_{i=1}^{N}u_{i})}{dt}=\sum_{i=1}^{N}\Delta_{\mathcal{T}}(K-\overline{K})_{i}=-\sum_{i=1}^{N}\sum_{j=1}^{N}(\frac{\partial K_{i}}{\partial u_{j}})(K-\overline{K})_{j}=0\] along the combinatorial Calabi flow (7). This implies \(\sum_{i=1}^{N}u_{i}\) is invariant along the combinatorial Calabi flow (7). Similarly, by Lemma 2.1, we have \[\frac{d(\sum_{i=1}^{N}u_{i})}{dt}=\sum_{i=1}^{N}\Delta_{\mathcal{T}}^{s}(K-\overline{K})_{i}=-\mathbf{1}^{\mathrm{T}}(\frac{\partial K}{\partial u})^{s}(K-\overline{K})=0\] along the \(2s\)-th order fractional combinatorial Calabi flow (10). This implies \(\sum_{i=1}^{N}u_{i}\) is invariant along the \(2s\)-th order fractional combinatorial Calabi flow (10). Similarly, by direct calculations, we have \[\frac{d(\sum_{i=1}^{N}u_{i})}{dt}=\sum_{i=1}^{N}\Delta_{p,\mathcal{T}}(K-\overline{K})_{i}\] along the combinatorial \(p\)-th Calabi flow (12). By the following formula obtained by Lin-Zhang ([28], Lemma 5.3) \[\sum_{i=1}^{N}\Delta_{p,\mathcal{T}}f_{i}=0\] for any \(f:V\to\mathbb{R}\), we have \(\frac{d(\sum_{i=1}^{N}u_{i})}{dt}=0\) along the combinatorial \(p\)-th Calabi flow (12). This implies \(\sum_{i=1}^{N}u_{i}\) is invariant along the combinatorial \(p\)-th Calabi flow (12).

Lemma 2.2 implies that the solutions of the combinatorial Calabi flow (7), the \(2s\)-th order fractional combinatorial Calabi flow (10) and the combinatorial \(p\)-th Calabi flow (12) stay in the hyperplane \(\Sigma_{0}:=\{u\in\mathbb{R}^{N}|\sum_{i=1}^{N}u_{i}=\sum_{i=1}^{N}u_{i}(0)\}\).
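The invariance in Lemma 2.2 is easy to verify numerically: any symmetric positive semi-definite matrix with kernel \(\mathrm{span}\{\mathbf{1}\}\) (the structure guaranteed by Lemma 2.1) gives a flow whose right-hand side sums to zero. A small illustrative check (ours, with a randomly generated stand-in for \(L=\partial K/\partial u\)):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
P = np.eye(N) - np.ones((N, N)) / N   # projector onto the complement of span{1}
A = rng.normal(size=(N, N))
L = P @ (A @ A.T) @ P                 # symmetric PSD with L @ 1 = 0, as in Lemma 2.1

K_err = rng.normal(size=N)            # stand-in for K - K_bar
u_dot = -L @ K_err                    # Calabi flow (7): du/dt = Delta_T(K - K_bar)
assert abs(u_dot.sum()) < 1e-10       # sum_i u_i is conserved (Lemma 2.2)
```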
Proof of Theorem 1.9.: Suppose the solution \(u(t)\) of the combinatorial Calabi flow (7) converges to \(\overline{u}\) as \(t\to+\infty\), then \(K(\overline{u})=\lim_{t\to+\infty}K(u(t))\) by the \(C^{1}\)-smoothness of \(K\). Furthermore, there exists a sequence \(t_{n}\in(n,n+1)\) such that for any \(i\in V\), \[u_{i}(n+1)-u_{i}(n)=u^{\prime}_{i}(t_{n})=\Delta_{\mathcal{T}}(K(u(t_{n}))-\overline{K})_{i}\to 0,\text{ as }n\to+\infty.\] This implies that \(K(\overline{u})-\overline{K}=\lim_{n\to+\infty}(K(u(t_{n}))-\overline{K})\) is in the kernel of the discrete Laplace operator \(\Delta_{\mathcal{T}}\). Therefore, by Lemma 2.1, we have \(K(\overline{u})-\overline{K}=c\mathbf{1}^{\mathrm{T}}\) for some \(c\in\mathbb{R}\). Note that \(\sum_{i=1}^{N}(K_{i}(\overline{u})-\overline{K}_{i})=2\pi\chi(S)-2\pi\chi(S)=0\). This implies \(K(\overline{u})-\overline{K}=0\) and \(\overline{u}\) is a discrete conformal factor with the combinatorial curvature \(\overline{K}\). Similarly, if the solution of the \(2s\)-th order fractional combinatorial Calabi flow (10) converges, then \[u_{i}(n+1)-u_{i}(n)=u_{i}^{\prime}(t_{n})=\Delta_{\mathcal{T}}^{s}(K(u(t_{n}))-\overline{K})_{i}\to 0,\text{ as }n\to+\infty.\] By the definition of the \(2s\)-th order fractional discrete Laplace operator in (9), \(\Delta_{\mathcal{T}}^{s}\) is symmetric and negative semi-definite with rank \(N-1\) and kernel \(\{c\mathbf{1}^{\mathrm{T}}\in\mathbb{R}^{N}|c\in\mathbb{R}\}\) on \(\Omega^{\mathcal{T}}\). This implies that \(K(\overline{u})-\overline{K}=\lim_{n\to+\infty}(K(u(t_{n}))-\overline{K})=c\mathbf{1}^{\mathrm{T}}\) for some \(c\in\mathbb{R}\). The rest of the proof parallels the case of the combinatorial Calabi flow, so we omit it.

Suppose there exists a discrete conformal factor \(\overline{u}\) with the combinatorial curvature \(\overline{K}\). For the combinatorial Calabi flow (7), set \(\Gamma(u)=\Delta_{\mathcal{T}}(K-\overline{K})\). Then \(D\Gamma|_{u=\overline{u}}=-L^{2}\) is negative semi-definite with kernel \(\{c\mathbf{1}^{\mathrm{T}}\in\mathbb{R}^{N}|c\in\mathbb{R}\}\) by Lemma 2.1. Note that the kernel is perpendicular to the hyperplane \(\Sigma_{0}\). By Lemma 2.2, this implies that \(\overline{u}\) is a local attractor of (7). Then the conclusion follows from the Lyapunov Stability Theorem ([32], Chapter 5). Similarly, for the \(2s\)-th order fractional combinatorial Calabi flow (10), set \(\Gamma(u)=\Delta_{\mathcal{T}}^{s}(K-\overline{K})\). Then \(D\Gamma|_{u=\overline{u}}=-L^{s+1}\) restricted to the hyperplane \(\Sigma_{0}\) is negative definite, which implies that \(\overline{u}\) is a local attractor of (10). Then the conclusion follows from the Lyapunov Stability Theorem ([32], Chapter 5).
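The fractional operator \(\Delta_{\mathcal{T}}^{s}=-L^{s}\) appearing in the flow (10) can be realized through the spectral decomposition of \(L\); the sketch below is an illustration of ours, with the kernel eigenvalue mapped to \(0\) so that \(\Delta_{\mathcal{T}}^{s}\) keeps the kernel \(\mathrm{span}\{\mathbf{1}\}\) for every real \(s\):

```python
import numpy as np

def fractional_laplacian(L, s, tol=1e-12):
    """Return Delta_T^s = -L^s for a symmetric PSD matrix L (cf. Lemma 2.1)."""
    lam, Q = np.linalg.eigh(L)
    lam_s = np.zeros_like(lam)
    pos = lam > tol
    lam_s[pos] = lam[pos] ** s        # 0^s := 0 on the kernel span{1}
    return -(Q * lam_s) @ Q.T

# One explicit Euler step of the 2s-th order flow (10), schematically:
#   u_next = u + dt * fractional_laplacian(L_of(u), s) @ (K_of(u) - K_bar)
```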
It is known that \(\mathcal{C}_{\mathcal{T}}(dist_{S},r)\) is homeomorphic to a polyhedral cone (with its apex removed) and its interior is homeomorphic to \(\mathbb{R}^{N}\).

**Remark 2.3**.: Under the conditions in Theorem 1.9, if the triangulation \(\mathcal{T}\) is weighted Delaunay along the combinatorial \(p\)-th Calabi flow (12), and the solution \(u(t)\) of the combinatorial \(p\)-th Calabi flow (12) converges to \(\overline{u}\), then there exists a discrete conformal factor on \((S,\mathcal{T},I)\) with the combinatorial curvature \(\overline{K}\). Indeed, there exists a sequence \(t_{n}\in(n,n+1)\) such that for any \(i\in V\), \[u_{i}(n+1)-u_{i}(n)=u_{i}^{\prime}(t_{n})=\Delta_{p,\mathcal{T}}(K(u(t_{n}))-\overline{K})_{i}\to 0,\text{ as }n\to+\infty.\] Set \(\widetilde{K}=\lim_{n\to+\infty}(K(u(t_{n}))-\overline{K})=K(\overline{u})-\overline{K}\), then \(\Delta_{p,\mathcal{T}}\widetilde{K}=0\). By the following formula obtained by Lin-Zhang ([28], Lemma 5.5) \[\sum_{i=1}^{N}f_{i}\Delta_{p,\mathcal{T}}f_{i}=\frac{1}{2}\sum_{i=1}^{N}\sum_{j\sim i}\frac{\partial K_{i}}{\partial u_{j}}|f_{j}-f_{i}|^{p} \tag{14}\] for any \(f:V\to\mathbb{R}\), we have \[0=\widetilde{K}^{\mathrm{T}}\Delta_{p,\mathcal{T}}\widetilde{K}=\sum_{i}\widetilde{K}_{i}\Delta_{p,\mathcal{T}}\widetilde{K}_{i}=\frac{1}{2}\sum_{i=1}^{N}\sum_{j\sim i}(\frac{\partial K_{i}}{\partial u_{j}})|\widetilde{K}_{i}-\widetilde{K}_{j}|^{p}.\] Since \(\frac{\partial K_{i}}{\partial u_{j}}\leq 0\) under the weighted Delaunay condition, we have \(\widetilde{K}_{i}=\widetilde{K}_{j}\) for the edges \(\{ij\}\in E\) with \(\frac{\partial K_{i}}{\partial u_{j}}<0\). Note that the edges with \(\frac{\partial K_{i}}{\partial u_{j}}<0\) correspond to the edges of the canonical weighted Delaunay tessellation of the PE surface determined by \(\overline{u}\). They form a connected graph connecting all the vertices in \(V\). Therefore, we still have \(\widetilde{K}\equiv c\) for some constant \(c\in\mathbb{R}\), which implies \(K(\overline{u})-\overline{K}=c\mathbf{1}^{\mathrm{T}}\). The rest of the proof is similar to that of Theorem 1.9, so we omit it.

## 3. Combinatorial curvature flows for variable triangulations

### Bobenko-Lutz's work on the discrete uniformization theorem for decorated PE metrics To analyze the longtime behavior of the combinatorial Calabi flow with surgery, the \(2s\)-th order fractional combinatorial Calabi flow with surgery and the combinatorial \(p\)-th Calabi flow with surgery, we need the discrete conformal theory for decorated PE metrics recently established by Bobenko-Lutz [1]. This theory also applies to Luo's vertex scalings, and hence generalizes the discrete conformal theory established by Gu-Luo-Sun-Wu [23]. In this subsection, we briefly recall Bobenko-Lutz's discrete conformal theory related to the inversive distance circle packings. Please refer to [1] for more details.
**Definition 3.1** ([1], Definition 4.11).: Two decorated PE metrics \((dist_{S},r)\) and \((\widetilde{dist}_{S},\widetilde{r})\) on the marked surface \((S,V)\) are discrete conformally equivalent if and only if there is a sequence of triangulated decorated PE surfaces \((\mathcal{T}^{0},l^{0},r^{0}),...,(\mathcal{T}^{N},l^{N},r^{N})\) such that **(i):**: the decorated PE metric of \((\mathcal{T}^{0},l^{0},r^{0})\) is \((dist_{S},r)\) and the decorated PE metric of \((\mathcal{T}^{N},l^{N},r^{N})\) is \((\widetilde{dist}_{S},\widetilde{r})\), **(ii):**: each \(\mathcal{T}^{n}\) is a weighted Delaunay triangulation of the decorated PE surface \((\mathcal{T}^{n},l^{n},r^{n})\), **(iii):**: if \(\mathcal{T}^{n}=\mathcal{T}^{n+1}\), then there is a discrete conformal factor \(u\in\mathbb{R}^{N}\) such that \((\mathcal{T}^{n},l^{n},r^{n})\) and \((\mathcal{T}^{n+1},l^{n+1},r^{n+1})\) are related by (4) and (5), **(iv):**: if \(\mathcal{T}^{n}\neq\mathcal{T}^{n+1}\), then \(\mathcal{T}^{n}\) and \(\mathcal{T}^{n+1}\) are two different weighted Delaunay triangulations of the same decorated PE surface.

Using the new definition of discrete conformality, Bobenko-Lutz [1] proved the following theorem for decorated PE metrics.

**Theorem 3.2** ([1], Theorem A).: Let \((dist_{S},r)\) be a decorated PE metric on the marked surface \((S,V)\). Then **(i):**: there exists a decorated PE metric discrete conformally equivalent to \((dist_{S},r)\) realizing \(\overline{K}:V\to(-\infty,2\pi)\) if and only if \(\overline{K}\) satisfies the discrete Gauss-Bonnet formula (1). **(ii):**: for each \(\overline{K}:V\to(-\infty,2\pi)\), there exists at most one decorated PE metric discrete conformally equivalent to \((dist_{S},r)\) realizing \(\overline{K}\), up to scaling.

For any decorated PE surface, there exists a unique complete hyperbolic surface \(\Sigma_{g}\), i.e., the surface induced by any triangular refinement of its unique weighted Delaunay tessellation. It is homeomorphic to \(S\backslash V\) and called the fundamental discrete conformal invariant of the PE metric \((dist_{S},r)\). The decoration of \(\Sigma_{g}\) is denoted by \(\omega:=e^{h}\), where the height \(h\) is related to \(u\) by \[dh_{i}=-du_{i}. \tag{15}\] The canonical weighted Delaunay tessellation of \(\Sigma_{g}\) is denoted by \(\mathcal{T}^{\omega}_{\Sigma_{g}}\). Bobenko-Lutz [1] defined the set \[\mathcal{D}_{\mathcal{T}}(\Sigma_{g})=\{\omega\in\mathbb{R}^{N}_{>0}\ |\ \mathcal{T}\text{ refines }\mathcal{T}^{\omega}_{\Sigma_{g}}\}\] and proved the following proposition.

**Proposition 3.3** ([1], Proposition 4.3).: Given a complete hyperbolic surface with ends \(\Sigma_{g}\), there is only a finite number of geodesic tessellations \(\mathcal{T}_{1},...,\mathcal{T}_{m}\) of \(\Sigma_{g}\) such that \(\mathcal{D}_{\mathcal{T}_{n}}(\Sigma_{g})\) \((n=1,...,m)\) is non-empty. In particular, \(\mathbb{R}^{N}_{>0}=\bigcup_{n=1}^{m}\mathcal{D}_{\mathcal{T}_{n}}(\Sigma_{g})\).

The set of all heights \(h\) of convex polyhedral cusps over the triangulated hyperbolic surface \((\Sigma_{g},\mathcal{T})\) is denoted by \(\mathcal{P}_{\mathcal{T}}(\Sigma_{g})\subseteq\mathbb{R}^{N}\).

**Proposition 3.4** ([1], Proposition 4.9).: Given a decorated PE metric \((dist_{S},r)\) on the marked surface \((S,V)\), the sets \(\mathcal{C}_{\mathcal{T}}(dist_{S},r)\), \(\mathcal{P}_{\mathcal{T}}(\Sigma_{g})\) and \(\mathcal{D}_{\mathcal{T}}(\Sigma_{g})\) are homeomorphic.
Therefore, \(\mathbb{R}^{N}=\bigcup_{n=1}^{m}\mathcal{C}_{\mathcal{T}_{n}}(dist_{S},r)\). A geodesic triangulation \(\mathcal{T}\), which refines the unique canonical weighted Delaunay tessellation \(\mathcal{T}^{\omega}_{\Sigma_{g}}\), is called a canonical weighted Delaunay triangulation of \(\Sigma_{g}\) with respect to the weights \(\omega\). One can extend the combinatorial curvature \(K\) to the map \[\mathbf{K}:\mathbb{R}^{N}\to\mathbb{R}^{N},\quad u\mapsto K(u), \tag{16}\] which is independent of the choice of the canonical weighted Delaunay triangulations. Let \(\Sigma_{g}\) be a hyperbolic surface and \(\overline{K}:V\to(-\infty,2\pi)\) satisfy the discrete Gauss-Bonnet formula (1). Bobenko-Lutz [1] defined the following discrete Hilbert-Einstein functional (dHE-functional) \[\mathcal{H}_{\Sigma_{g},\overline{K}}(h)=\mathcal{H}_{\Sigma_{g},\overline{K},\mathcal{T}}(h)=-2\mathrm{Vol}(P_{h})+\sum_{i\in V}(\Theta_{i}-\theta_{i})h_{i}+\sum_{\{ij\}\in E(\mathcal{T})}(\pi-\alpha_{ij})\lambda_{ij} \tag{17}\] on \(\mathbb{R}^{N}\), where \(P_{h}\) is the convex polyhedral cusp defined by the heights \(h\in\mathbb{R}^{N}\), \(\mathrm{Vol}(P_{h})\) is the volume of \(P_{h}\), \(\mathcal{T}\) is a canonical weighted Delaunay triangulation corresponding to the weights \(e^{h_{i}}\) and \(\alpha_{ij}=\alpha_{ij}^{k}+\alpha_{ij}^{l}\). Note that \(\Theta_{i}=2\pi-\overline{K}_{i}\) and \(\theta_{i}=2\pi-\mathbf{K}_{i}\). The dHE-functional \(\mathcal{H}_{\Sigma_{g},\overline{K}}\) has the following properties.

**Proposition 3.5** ([1], Proposition 4.13).: Let \(\Sigma_{g}\) be a hyperbolic surface and \(\overline{K}:V\to(-\infty,2\pi)\) satisfy the discrete Gauss-Bonnet formula (1). **(i):**: The dHE-functional \(\mathcal{H}_{\Sigma_{g},\overline{K}}\) is concave, twice continuously differentiable over \(\mathbb{R}^{N}\) and analytic in each \(\mathcal{P}_{\mathcal{T}}(\Sigma_{g})\). **(ii):**: The first derivative of the dHE-functional \(\mathcal{H}_{\Sigma_{g},\overline{K}}\) is given by \[d\mathcal{H}_{\Sigma_{g},\overline{K}}=\sum_{i=1}^{N}(\Theta_{i}-\theta_{i})dh_{i}=\sum_{i=1}^{N}(\mathbf{K}_{i}-\overline{K}_{i})dh_{i}. \tag{18}\] **(iii):**: The dHE-functional \(\mathcal{H}_{\Sigma_{g},\overline{K}}\) is shift-invariant, i.e., for any \(t\in\mathbb{R}\), \[\mathcal{H}_{\Sigma_{g},\overline{K}}(h+t\mathbf{1})=\mathcal{H}_{\Sigma_{g},\overline{K}}(h).\] Furthermore, the restriction of \(\mathcal{H}_{\Sigma_{g},\overline{K}}\) to \(\{h\in\mathbb{R}^{N}|\sum_{i=1}^{N}h_{i}=0\}\) is strictly concave and coercive, i.e., \[\lim_{||h||\to+\infty}\mathcal{H}_{\Sigma_{g},\overline{K}}(h)=-\infty.\]

By (15) and (18), the dHE-functional \(\mathcal{H}_{\Sigma_{g},\overline{K}}\) is equivalent, up to a constant, to the following function \[F(u)=\int^{u}\sum_{i=1}^{N}(\overline{K}_{i}-\mathbf{K}_{i})du_{i}, \tag{19}\] which is well-defined by Proposition 3.4. Proposition 3.5 implies that \(\mathbf{K}\) defined on \(\mathbb{R}^{N}\) is a \(C^{1}\)-extension of the combinatorial curvature \(K\) defined on the space of discrete conformal factors \(\mathcal{C}_{\mathcal{T}}(dist_{S},r)\). In general, for a marked surface with a decorated PE metric, the discrete Laplace operator \(\Delta_{\mathcal{T}}\) depends on the geometric triangulation \(\mathcal{T}\) of the PE surface.
However, different weighted Delaunay triangulations of the same decorated PE metric on a marked surface \((S,V)\) correspond to the same canonical weighted Delaunay tessellation [1]. Hence, if the triangulation \(\mathcal{T}\) is weighted Delaunay, the discrete Laplace operator \(\Delta_{\mathcal{T}}\) does not depend on the choice of weighted Delaunay triangulation of \((l,r)\) on \((S,V)\). In this sense, the discrete Laplace operator \(\Delta_{\mathcal{T}}\) is intrinsic. The discrete Laplace operator \(\Delta_{\mathcal{T}}\) can then be extended to the following operator \(\Delta\) defined on \(\mathbb{R}^{N}\), which is continuous and piecewise smooth on \(\mathbb{R}^{N}\) as a matrix-valued function of \(u\).

**Definition 3.6**.: Suppose \((S,V)\) is a marked surface with a decorated PE metric \((dist_{g},r)\). The discrete Laplace operator \(\Delta\) is defined to be the map \[\Delta:\mathbb{R}^{N}\to\mathbb{R}^{N},\quad f\mapsto\Delta f,\] with \[\Delta f_{i}=-\sum_{j\in V}\frac{\partial\mathbf{K}_{i}}{\partial u_{j}}f_{j}=-(\widetilde{L}f)_{i}\] for any \(f:V\to\mathbb{R}\), where \(\widetilde{L}_{ij}=\frac{\partial\mathbf{K}_{i}}{\partial u_{j}}\) is an extension of \(L_{ij}=\frac{\partial K_{i}}{\partial u_{j}}\). Similarly, for any \(s\in\mathbb{R}\), the extended \(2s\)-th order fractional discrete Laplace operator \(\Delta^{s}\) is defined to be \[\Delta^{s}=-\widetilde{L}^{s}.\] For any \(p>1\), the extended discrete \(p\)-th Laplace operator \(\Delta_{p}\) is defined by \[\Delta_{p}f_{i}=\sum_{j\sim i}(-\frac{\partial\mathbf{K}_{i}}{\partial u_{j}})|f_{j}-f_{i}|^{p-2}(f_{j}-f_{i})\] for \(f:V\to\mathbb{R}\).

### Combinatorial curvature flows with surgery

**Definition 3.7**.: Suppose \((S,V)\) is a marked surface with a decorated PE metric \((dist_{g},r)\). Let \(\overline{K}:V\to(-\infty,2\pi)\) be a given function defined on \(V\). The combinatorial Calabi flow with surgery is defined to be \[\begin{cases}\frac{du_{i}}{dt}=\Delta(\mathbf{K}-\overline{K})_{i},\\ u_{i}(0)=u_{0}.\end{cases} \tag{20}\] For any \(s\in\mathbb{R}\), the \(2s\)-th order fractional combinatorial Calabi flow with surgery is defined to be \[\begin{cases}\frac{du_{i}}{dt}=\Delta^{s}(\mathbf{K}-\overline{K})_{i},\\ u_{i}(0)=u_{0}.\end{cases} \tag{21}\] For any \(p>1\), the combinatorial \(p\)-th Calabi flow with surgery is defined to be \[\begin{cases}\frac{du_{i}}{dt}=\Delta_{p}(\mathbf{K}-\overline{K})_{i},\\ u_{i}(0)=u_{0}.\end{cases} \tag{22}\]

Similar to Lemma 2.2, we have the following result; we omit the proof.

**Lemma 3.8**.: Suppose \((S,V)\) is a marked surface with a decorated PE metric \((dist_{g},r)\). Let \(\overline{K}:V\to(-\infty,2\pi)\) be a given function defined on \(V\) satisfying the discrete Gauss-Bonnet formula (1). Then \(\sum_{i=1}^{N}u_{i}(t)\) is invariant along the combinatorial Calabi flow with surgery (20), the \(2s\)-th order fractional combinatorial Calabi flow with surgery (21) and the combinatorial \(p\)-th Calabi flow with surgery (22).

Lemma 3.8 implies that the solutions of the combinatorial Calabi flow with surgery (20), the \(2s\)-th order fractional combinatorial Calabi flow with surgery (21) and the combinatorial \(p\)-th Calabi flow with surgery (22) stay in the hyperplane \(\Sigma_{0}=\{u\in\mathbb{R}^{N}|\sum_{i=1}^{N}u_{i}=\sum_{i=1}^{N}u_{i}(0)\}\).
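Combining Definitions 3.6 and 3.7, a time step of the flows with surgery only has to interleave the Delaunay check with the update. The sketch below uses an explicit Euler discretization (our choice, not prescribed by the flows) and hypothetical callbacks `K`, `jacobian`, `is_weighted_delaunay` and `flip` standing for the geometric computations on the decorated PE surface:

```python
import numpy as np

def frac_power_psd(L, s, tol=1e-12):
    """L^s via the spectral decomposition, with the kernel mapped to 0."""
    lam, Q = np.linalg.eigh(L)
    lam_s = np.zeros_like(lam)
    lam_s[lam > tol] = lam[lam > tol] ** s
    return (Q * lam_s) @ Q.T

def flow_step(u, K, K_bar, jacobian, is_weighted_delaunay, flip, s=1.0, dt=1e-2):
    """One explicit Euler step of the fractional flow with surgery (21)."""
    if not is_weighted_delaunay(u):
        flip(u)                      # surgery by flipping: the metric is unchanged
    L = jacobian(u)                  # entries dK_i/du_j of Definition 3.6
    return u - dt * frac_power_psd(L, s) @ (K(u) - K_bar)
```

For \(s=1\) this is a step of the combinatorial Calabi flow with surgery (20), and replacing the update by the extended \(p\)-th Laplacian gives a step of (22).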
Proof of Theorem 1.10.: As the function \(\overline{K}:V\to(-\infty,2\pi)\) satisfies the discrete Gauss-Bonnet formula (1), by Theorem 3.2, there exists a unique decorated PE metric with the combinatorial curvature \(\overline{K}\) and a unique \(\overline{u}\in\Sigma_{0}\) such that \(\mathbf{K}(\overline{u})=\overline{K}\). Set \[W(u)=-F(u)=\int_{\overline{u}}^{u}\sum_{i=1}^{N}(\mathbf{K}_{i}-\overline{K}_{i})du_{i}. \tag{23}\] By Proposition 3.5, \(W\) is a \(C^{2}\)-smooth convex function defined on \(\mathbb{R}^{N}\). Furthermore, \(W(\overline{u})=0,\ \nabla W(\overline{u})=0\), \({\rm Hess}W\geq 0\) and the kernel of \({\rm Hess}W\) is orthogonal to \(\Sigma_{0}\). This implies \(\lim_{\|u\|\to+\infty}W(u)|_{\Sigma_{0}}=+\infty\). Hence, \(W(u)|_{\Sigma_{0}}\) is proper and \(0=W(\overline{u})\leq W(u)\).

**(i):** Along the combinatorial Calabi flow with surgery (20), by Lemma 2.1, we have \[\frac{dW(u(t))}{dt}=\sum_{i=1}^{N}\frac{\partial W}{\partial u_{i}}\frac{du_{i}}{dt}=\sum_{i=1}^{N}({\bf K}-\overline{K})_{i}\Delta({\bf K}-\overline{K})_{i}=-({\bf K}-\overline{K})^{\rm T}\cdot\widetilde{L}\cdot({\bf K}-\overline{K})\leq 0.\] This implies \(0\leq W(u(t))\leq W(u(0))\). Combining Lemma 3.8 and the properness of \(W(u)|_{\Sigma_{0}}\), the solution \(\{u(t)\}\) of the combinatorial Calabi flow with surgery (20) lies in a compact subset of \(\Sigma_{0}\), which implies the solution of the combinatorial Calabi flow with surgery (20) exists for all time and \(W(u(t))|_{\Sigma_{0}}\) converges. Moreover, there exists a sequence \(t_{n}\in(n,n+1)\) such that as \(n\to+\infty\), \[W(u(n+1))-W(u(n))=(W(u(t)))^{\prime}|_{t_{n}}=\nabla W\cdot\frac{du}{dt}\Big|_{t_{n}}=\sum_{i=1}^{N}({\bf K}-\overline{K})_{i}\Delta({\bf K}-\overline{K})_{i}|_{t_{n}}=-({\bf K}-\overline{K})^{\rm T}\cdot\widetilde{L}\cdot({\bf K}-\overline{K})|_{t_{n}}\to 0.\] This implies that \({\bf K}(\overline{u})-\overline{K}=\lim_{n\to+\infty}({\bf K}(u(t_{n}))-\overline{K})\) is in the kernel of the discrete Laplace operator \(\Delta\). Therefore, by Lemma 2.1, we have \({\bf K}(\overline{u})-\overline{K}=c{\bf 1}^{\rm T}\) for some \(c\in\mathbb{R}\). Note that \(\sum_{i=1}^{N}({\bf K}_{i}(\overline{u})-\overline{K}_{i})=2\pi\chi(S)-2\pi\chi(S)=0\). This implies \({\bf K}(\overline{u})-\overline{K}=0\). By \(\{u(t)\}\subset\subset\Sigma_{0}\), there exist \(u^{*}\in\mathbb{R}^{N}\) and a convergent subsequence of \(\{u(t_{n})\}\), still denoted by \(\{u(t_{n})\}\) for simplicity, such that \(\lim_{n\to\infty}u(t_{n})=u^{*}\). This implies \({\bf K}(u^{*})=\lim_{n\to+\infty}{\bf K}(u(t_{n}))={\bf K}(\overline{u})\). Then \(u^{*}=\overline{u}\) by Theorem 3.2. Therefore, \(\lim_{n\to\infty}u(t_{n})=\overline{u}\). Set \(\Gamma(u)=\Delta({\bf K}-\overline{K})\); then \(D\Gamma|_{u=\overline{u}}\) restricted to the hyperplane \(\Sigma_{0}\) is negative definite, which implies that \(\overline{u}\) is a local attractor of (20). Then the conclusion follows from the Lyapunov Stability Theorem ([32], Chapter 5).

**(ii):** Along the \(2s\)-th order fractional combinatorial Calabi flow with surgery (21), by Lemma 2.1, we have \[\frac{dW(u(t))}{dt}=\sum_{i=1}^{N}\frac{\partial W}{\partial u_{i}}\frac{du_{i}}{dt}=\sum_{i=1}^{N}({\bf K}-\overline{K})_{i}\Delta^{s}({\bf K}-\overline{K})_{i}=-({\bf K}-\overline{K})^{\rm T}\cdot\widetilde{L}^{s}\cdot({\bf K}-\overline{K})\leq 0.\] This implies \(0\leq W(u(t))\leq W(u(0))\).
By the properness of \(W(u)|_{\Sigma_{0}}\), the solution \(\{u(t)\}\) of the \(2s\)-th order fractional combinatorial Calabi flow with surgery (21) lies in a compact subset of \(\Sigma_{0}\). This implies the solution of the fractional combinatorial Calabi flow with surgery (21) exists for all time. By Lemma 2.1, the matrix \(\widetilde{L}^{s+1}\) is strictly positive definite on \(\Sigma_{0}\). By the continuity of the eigenvalues of \(\widetilde{L}^{s+1}\), there exists \(\lambda_{0}>0\) such that the non-zero eigenvalues \(\lambda\) of \(\widetilde{L}^{s+1}\) satisfy \(\lambda>\lambda_{0}\) along the \(2s\)-th order fractional combinatorial Calabi flow with surgery (21). Therefore, for the combinatorial Calabi energy \({\cal C}(t)=||{\bf K}-\overline{K}||^{2}\), we have \[\frac{d{\cal C}(u(t))}{dt}=\sum_{i=1}^{N}\frac{\partial{\cal C}}{\partial u_{i}}\frac{du_{i}}{dt}=-2({\bf K}-\overline{K})^{\rm T}\cdot\widetilde{L}^{s+1}\cdot({\bf K}-\overline{K})\leq-2\lambda_{0}{\cal C}(u(t)),\] which implies \({\cal C}(u(t))\leq e^{-2\lambda_{0}t}{\cal C}(0)\). As \({\bf K}|_{\Sigma_{0}}\) is a \(C^{1}\)-diffeomorphism from \(\Sigma_{0}\) to \({\bf K}(\Sigma_{0})\) by (16) and Lemma 2.1, we have \[\|u(t)-\overline{u}\|^{2}\leq C_{1}||{\bf K}(u(t))-\overline{K}||^{2}\leq C_{1}e^{-2\lambda_{0}t}||{\bf K}(u(0))-\overline{K}||^{2}\leq C_{2}e^{-2\lambda_{0}t}\] for some positive constants \(C_{1},C_{2}\).

**(iii):** The equation (13) implies \(\frac{\partial K_{i}}{\partial u_{j}}=-\frac{r_{ij}}{l_{ij}}w_{ij}\leq 0\) under the weighted Delaunay condition. Then \(\widetilde{L}_{ij}=\frac{\partial\mathbf{K}_{i}}{\partial u_{j}}\leq 0\) by Definition 3.6. Along the combinatorial \(p\)-th Calabi flow with surgery (22), we have \[\frac{dW(u(t))}{dt}= \sum_{i=1}^{N}\frac{\partial W}{\partial u_{i}}\frac{du_{i}}{dt}\] \[= \sum_{i=1}^{N}(\mathbf{K}-\overline{K})_{i}\Delta_{p}(\mathbf{K}-\overline{K})_{i}\] \[= \frac{1}{2}\sum_{i=1}^{N}\sum_{j\sim i}(\frac{\partial\mathbf{K}_{i}}{\partial u_{j}})|(\mathbf{K}-\overline{K})_{i}-(\mathbf{K}-\overline{K})_{j}|^{p}\] \[\leq 0,\] where (14) is used in the third line. This implies \(0\leq W(u(t))\leq W(u(0))\). By the properness of \(W(u)|_{\Sigma_{0}}\), the solution \(\{u(t)\}\) of the combinatorial \(p\)-th Calabi flow with surgery (22) lies in a compact subset of \(\Sigma_{0}\). This implies the solution of the combinatorial \(p\)-th Calabi flow with surgery (22) exists for all time and \(W(u(t))\) converges. Moreover, there exists a sequence \(t_{n}\in(n,n+1)\) such that as \(n\to+\infty\), \[W(u(n+1))-W(u(n))=\nabla W\cdot\frac{du}{dt}\Big|_{t_{n}}=\sum_{i=1}^{N}(\mathbf{K}-\overline{K})_{i}\Delta_{p}(\mathbf{K}-\overline{K})_{i}|_{t_{n}}= \frac{1}{2}\sum_{i=1}^{N}\sum_{j\sim i}(\frac{\partial\mathbf{K}_{i}}{\partial u_{j}})|(\mathbf{K}-\overline{K})_{i}-(\mathbf{K}-\overline{K})_{j}|^{p}|_{t_{n}}\to 0.\] By arguments similar to those in Remark 2.3, we have \(\mathbf{K}(u(t_{n}))-\overline{K}=c\mathbf{1}^{\mathrm{T}}\) for some constant \(c\in\mathbb{R}\). Since \(\overline{K}\) satisfies the discrete Gauss-Bonnet formula (1), we have \(\lim_{n\to+\infty}\mathbf{K}_{i}(u(t_{n}))=\overline{K}_{i}=\mathbf{K}_{i}(\overline{u})\) for all \(i\in V\). By \(\{u(t)\}\subset\subset\Sigma_{0}\), there exist \(u^{*}\in\mathbb{R}^{N}\) and a convergent subsequence \(\{u(t_{n_{k}})\}\) of \(\{u(t_{n})\}\) such that \(\lim_{k\to\infty}u(t_{n_{k}})=u^{*}\). This implies \(\mathbf{K}(u^{*})=\lim_{k\to+\infty}\mathbf{K}(u(t_{n_{k}}))=\mathbf{K}(\overline{u})\).
Then \(u^{*}=\overline{u}\) by Theorem 3.2. Therefore, \(\lim_{k\to\infty}u(t_{n_{k}})=\overline{u}\). We use Lin-Zhang's trick in [28] to prove \(\lim_{t\to\infty}u(t)=\overline{u}\). Suppose otherwise; then there exist \(\delta>0\) and \(\xi_{n}\to+\infty\) such that \(|u(\xi_{n})-u^{*}|>\delta\). This implies \(\{u(\xi_{n})\}\subseteq\Sigma_{0}\backslash B(u^{*},\delta)\), where \(B(u^{*},\delta)\) is the ball centered at \(u^{*}\) with radius \(\delta\). Since \(W|_{\Sigma_{0}}\) is proper and positive away from its unique minimum point \(u^{*}\), there exists a constant \(C>0\) such that \(W(u)\geq C\) for any \(u\in\Sigma_{0}\backslash B(u^{*},\delta)\). Then \(W(u(\xi_{n}))\geq C>0\). Since \(W(u(t))\) converges and \(\lim_{k\to\infty}u(t_{n_{k}})=\overline{u}\), we get \(W(+\infty)=\lim_{k\to\infty}W(u(t_{n_{k}}))=W(\overline{u})=0\). Hence, \(\lim_{n\to\infty}W(u(\xi_{n}))=W(+\infty)=0\). This is a contradiction. Q.E.D.
2310.18063
"Honey, Tell Me What's Wrong", Global Explanation of Textual Discriminative Models through Cooperative Generation
The ubiquity of complex machine learning has raised the importance of model-agnostic explanation algorithms. These methods create artificial instances by slightly perturbing real instances, capturing shifts in model decisions. However, such methods rely on initial data and only provide explanations of the decision for these. To tackle these problems, we propose Therapy, the first global and model-agnostic explanation method adapted to text which requires no input dataset. Therapy generates texts following the distribution learned by a classifier through cooperative generation. Because it does not rely on initial samples, it allows to generate explanations even when data is absent (e.g., for confidentiality reasons). Moreover, conversely to existing methods that combine multiple local explanations into a global one, Therapy offers a global overview of the model behavior on the input space. Our experiments show that although using no input data to generate samples, Therapy provides insightful information about features used by the classifier that is competitive with the ones from methods relying on input samples and outperforms them when input samples are not specific to the studied model.
Antoine Chaffin, Julien Delaunay
2023-10-27T11:26:27Z
http://arxiv.org/abs/2310.18063v1
# "Honey, Tell Me What's Wrong", Global Explanation of Textual Discriminative Models through Cooperative Generation ###### Abstract The ubiquity of complex machine learning has raised the importance of model-agnostic explanation algorithms. These methods create artificial instances by slightly perturbing real instances, capturing shifts in model decisions. However, such methods rely on initial data and only provide explanations of the decision for these. To tackle these problems, we propose Therapy, the first global and model-agnostic explanation method adapted to text which requires no input dataset. Therapy generates texts following the distribution learned by a classifier through cooperative generation. Because it does not rely on initial samples, it allows to generate explanations even when data is absent (e.g., for confidentiality reasons). Moreover, conversely to existing methods that combine multiple local explanations into a global one, Therapy offers a global overview of the model behavior on the input space. Our experiments show that although using no input data to generate samples, Therapy provides insightful information about features used by the classifier that is competitive with the ones from methods relying on input samples and outperforms them when input samples are not specific to the studied model. ## 1 Introduction The emergence of machine learning models has led to their adoption in domains spanning from mere recommendations to critical areas such as healthcare (Buch et al., 2018; Karatza et al., 2021) and law (Araszkiewicz et al., 2022). These already complex models keep becoming larger, reinforcing their black-box reputation. This lack of transparency, however, slows their adoption in various areas, as we witness a notable rise of deployed models suffering from bias. For example, some chatbots biased against religious (Abid et al., 2021) and gender (Lucy and Bamman, 2021) minorities have been released, and explaining their inner mechanisms is still an ongoing problem. Among the methods proposed to tackle these problems, model-agnostic approaches are favored since they are applicable to any machine learning model. Among these, local explanations have obtained strong success by maintaining a good trade-off between accuracy and transparency. These explanations are generated in the proximity of a target instance by tampering with this input to create neighbors and studying how the model reacts to these changes. This allows them to highlight which features are important for the model and to provide explanations of the decision for this input (e.g., the most important words for each class). According to a recent study (Jacovi, 2023), LIME (Ribeiro et al., 2016), while being the first model-agnostic local explanation method, is still the most widely used. However, local explanations have three main flaws when trying to explain a model. First, they obviously require inputs to explain, which might not be available due to confidentiality or privacy reasons (Amin-Nejad et al., 2020). Second, selecting inputs that are representative of the model or the downstream data distribution is difficult. Finally, they explain the decision **for this input** and for this input only. This only provides very local information on the model behavior, which represents only a very small piece of the input domain of the model. Therefore, LIME and other local explanation methods have proposed to aggregate the information from multiple samples to provide global explanations.
However, these explanations are strongly tied to the input samples and only provide cues about the samples' neighborhood. These methods thus require samples that cover as much of the space as possible. To relax this sample dependency and generate global explanations of the model, we propose **Therapy**, a method that leverages cooperative generation Holtzman et al. (2018); Scialom et al. (2020); Bakhtin et al. (2021); Chaffin et al. (2022) to generate texts following the distribution of a classifier. The distribution of the resulting samples can then be used to study which features are important for the model, providing global information on its behavior. In this paper, we first introduce the related work in Section 2 and cooperative text generation in Section 3. We then present Therapy in Section 4 and the experiments conducted to compare its performance to standard explanation methods in Section 5. ## 2 Related work Generating explanations for textual data is challenging since it requires considering both the text semantics and task domains. Moreover, it is frequent that models are already deployed and further evaluations are required (e.g., fairness, bias detection) but the training data is not accessible. This may be caused by data privacy, security, or simply because the dataset is too large to be analyzed. Thus, to address this setting, researchers have focused on post-hoc explanations Jacovi (2023). Following the categorization of Bodria et al. (2021), we distinguish between example-based and feature-attribution explanations. ### Example-Based Explanations Taking root in social science Miller (2019), example-based explanations either indicate the minimum change required to modify the prediction -counterfactuals- or illustrate a class by showing representative instances -prototypes-. Counterfactual methods answer "what if" questions and have gained interest since they are close to human reasoning, perturbing a document until the model prediction differs Wachter et al. (2017). Conversely, prototype methods select or generate representative instances for the target class. Among the example-based methods, some leverage control codes to perturb the input text while others generate realistic sentences based on perturbations in a latent space. Polyjuice Wu et al. (2021) and GYC Madaan et al. (2021) belong to the former and propose control codes varying from changing the sentiment and tense of the sentence to adding or replacing words. On the other hand, xSPELLS S. Punla et al. (2022) and CounterfactualGAN Robeer et al. (2021) are methods that train respectively a Variational Autoencoder and a Generative Adversarial Network to convert input text to a latent space and return realistic sentences from this latent space. These methods hence convert the input document into a latent space and slightly perturb it until the closest counterfactual is found. ### Feature-Attribution Explanations Feature-attribution methods assign weights to input words, indicating their positive or negative impact on the final prediction. Methods such as SHAP Lundberg and Lee (2017), LIME Ribeiro et al. (2016), and their variants Gaudel et al. (2022); Zafar and Khan (2019); Visani et al. (2020); ElShawi et al. (2019); Bramhall et al. (2020) are the most commonly used Jacovi (2023). They are local since they perturb an input instance by slightly modifying it and studying the complex model in a given locality.
For textual data, LIME randomly masks the words of the input document and trains a linear model on the collection of perturbed documents to predict the decisions of the complex model. The most important coefficients of the linear model associated with the input words are then returned as the explanation. While most explainability surveys Arrieta et al. (2020); Bodria et al. (2021) differentiate between local and global explanations, LIME also introduced LIME-SP (for submodular pick), a global method that generates \(n\) local explanations for a set of individual instances. These \(n\) instances are selected to cover as much of the input domain as possible and avoid redundancy. ## 3 Text generation ### Cooperative Generation Language Models (LM) such as the GPT family Radford et al. (2018, 2019); Brown et al. (2020) learn the probability distribution of sequences of symbols \(x_{1},x_{2},\cdots,x_{T}\) (most often _tokens_) taken from a vocabulary \(\mathcal{V}\), with variable lengths \(T\). The probability of one sample \(x\) (also called _likelihood_) is defined as the joint probability over each of its tokens, which can be factorized using the chain rule: \(p(x_{1:T})=\prod_{t=1}^{T}p(x_{t}\mid x_{1:t-1})\). The LM is trained to output a probability distribution over the dictionary for the next token given the input ones, i.e., \(p(x_{t}\mid x_{1:t-1})\) at a given time step \(t\). This results in an auto-regressive LM that can generate sequences by iteratively using those distributions to emit a token \(x_{t}\) and append it to the context \(x_{1:t-1}\) for the next iteration. The generation process -or _decoding_- is often started using a small initial sequence: the _prompt_. Large LMs learn an excellent approximation of the true distribution of their training data, so generating samples that maximize the model likelihood \(p(x)\) produces plausible texts. However, this approach offers very little control over the text being generated besides the initial prompt. Cooperative generation approaches Holtzman et al. (2018); Scialom et al. (2020); Bakhtin et al. (2021), where discriminative models are used to guide the LM during the generation, offer more control. They use the information from the external model to guide the LM to generate texts that have a property it recognizes. In situations where the model is a classifier which learns to output the probability \(D(c\mid x)\) of a sequence \(x\) to belong to a class \(c\), the goal is to generate text that maximizes the probability of belonging to the target class. Evaluating \(D(c\mid x)\) for every possible sequence is intractable due to the size of the space (\(|\mathcal{V}|^{n}\) for a sequence of length \(n\)). Thus, these methods leverage the distribution of the LM to restrict the exploration to plausible sequences. This results in a sequence that is both well written and belongs to the target class since the produced sequence maximizes \(p(x)*D(c\mid x)\propto p(x\mid c)\).
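To make the chain-rule factorization concrete, here is a short sketch computing \(\log p(x)\) with an off-the-shelf autoregressive LM (GPT-2 via the HuggingFace transformers library is an arbitrary illustration choice; the paper does not prescribe a specific model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def log_likelihood(text: str) -> float:
    """log p(x_1:T) = sum_t log p(x_t | x_1:t-1), the first token being unconditioned."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits                      # shape (1, T, |V|)
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    return log_probs.gather(1, ids[0, 1:, None]).sum().item()
```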
### Monte Carlo Tree Search Guided Decoding Among cooperative approaches, the ones that leverage the Monte Carlo Tree Search (MCTS) to guide the decoding of the LM exhibited very strong results Scialom et al. (2021); Chaffin et al. (2022); Leblond et al. (2021); Lamprier et al. (2022). MCTS is an iterative algorithm that seeks solutions in a tree space too large to be exhaustively searched. It is applicable to text generation because the search space created during decoding corresponds to a tree: the prompt is the root and the children of a node are its parent's sequence with one additional token. The MCTS loop is composed of four steps: selection, expansion, simulation and back-propagation. 1. **Selection.** An exploration from the root of the tree to an unexplored leaf. The path to the leaf is defined by selecting, at each node, the children that maximize the Polynomial Upper Confidence Trees (PUCT) score Rosin (2011); Silver et al. (2017), which is, for a node \(i\): \[PUCT(i)=\frac{s_{i}}{n_{i}}+c_{puct}\ p(x_{i}\mid x_{1:t-1})\frac{\sqrt{N_{i}}}{1+n_{i}}\] with \(n_{i}\) the number of simulations played after the node \(i\), \(s_{i}\) its aggregated score, \(N_{i}\) the number of simulations played after its parent, and \(c_{puct}\) a constant defining the compromise between exploitation (focusing on nodes with already good scores) and exploration (exploring promising nodes). 2. **Expansion.** The creation of the selected node's children if it is not terminal (i.e., corresponding to the end-of-sequence token). 3. **Simulation (roll-out).** The sampling of additional tokens (using the LM distribution) until a terminal node. 4. **Back-propagation.** The evaluation of the sequence \(x\) associated with the terminal node and the aggregation of its score to each parent until the root. In order to guide the generation towards texts that belong to a given class according to a classifier, the score of the sequence \(x\) associated with a given leaf can be defined as \(D(c\mid x)\) given by the classifier. Different aggregation strategies can be used, such as computing the average of the actual score of the node and the terminal node one as in Chaffin et al. (2022) or taking the maximum of the two as in Scialom et al. (2021); Lamprier et al. (2022). This loop is repeated a given number of times (defining the compute budget) and the tree produced is then used to select the token to add for the current decoding step. It can be selected as the most played node among the root's children nodes, or the one with the highest aggregated score. Since we are interested in generating sequences that are as stereotypical of the classes of the discriminative model as possible, we choose the node with the highest score. The selected node then becomes the new root and the process is repeated until the final sequence is produced. Contrary to traditional left-to-right decoding strategies that can miss sequences that get better after some steps or be trapped in sub-optimal sequences, MCTS breaks myopic decoding by defining the score of a token based on possible continuations of the sequence. In addition to being plug-and-play, i.e., any type of (auto-regressive) language model can be guided during decoding by any type of classifier using MCTS, this approach exhibited state-of-the-art results in the task of constrained generation, that is, generating texts that maximize \(D(c\mid x)\) while maintaining a high quality of writing. We thus experiment with MCTS decoding for Therapy, but the proposed method is compatible with any cooperative generation approach.
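A minimal sketch of the selection and back-propagation steps described above (the `Node` fields and the running-sum aggregation of \(D(c\mid x)\), i.e., the "average" strategy, are our illustration choices, as is the default \(c_{puct}\) value):

```python
import math
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    prior: float                     # p(x_i | x_1:t-1) given by the LM
    parent: Optional["Node"] = None
    children: list = field(default_factory=list)
    visits: int = 0                  # n_i
    score_sum: float = 0.0           # s_i

def puct(n: Node, c_puct: float) -> float:
    exploit = n.score_sum / n.visits if n.visits else 0.0
    explore = c_puct * n.prior * math.sqrt(n.parent.visits) / (1 + n.visits)
    return exploit + explore

def select(root: Node, c_puct: float = 3.0) -> Node:
    """Selection: walk from the root to a leaf, maximizing PUCT at each level."""
    node = root
    while node.children:
        node = max(node.children, key=lambda c: puct(c, c_puct))
    return node

def backpropagate(leaf: Node, score: float) -> None:
    """Back-propagation: fold the classifier score D(c|x) into every ancestor."""
    node = leaf
    while node is not None:
        node.visits += 1
        node.score_sum += score
        node = node.parent
```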
## 4 Method In this paper, we introduce **Therapy**, a global and model-agnostic explanation method that does not require input data. In place of these input data, Therapy employs an LM guided by the model to explain. This cooperation generates texts that are representative of the classes learned by the studied discriminative model. To do so, Therapy extracts the most important words for the classifier by employing it to steer an LM through cooperative generation. Texts generated using cooperative generation follow the distribution \(p(x)*D(c\mid x)\). Their distribution can thus be used to study the classifier \(D\): words with high frequencies are likely to be important for the classifier. A logistic regression is then learned on tf-idf representations of generated samples and the weights associated with each term are returned as the explanation. An illustration of the method is proposed in Figure 1.

Figure 1: Illustration of the Therapy method. Texts from different classes are cooperatively generated using the guidance of the studied model. A logistic regression is then trained to predict the label of the generated texts. The weights of the model associated with each word are then returned as importance weights.

Because \(p(x)\) is the same for every class, by using tf-idf on the whole corpus (i.e., samples from every class), words that are frequent because of \(p(x)\) or in multiple classes will be filtered out. Hence, the logistic regression model learned on the tf-idf score of each feature allows Therapy to study their relative importance and to extract the most important ones for each class. The method thus offers the level of explainability of n-gram-based logistic regression models to any classifier. Indeed, since any type of (auto-regressive) LM can be guided during decoding by any classifier using MCTS, the proposed approach is totally model-agnostic. We call this approach Therapy because its functioning is similar to that of a therapist. This therapist (the LM) queries its patient (the classifier) to understand its behavior and eventually discover pathological behaviors (some biases). In essence, the method is similar to using LIME jointly with a masked LM to generate neighbors when the number of replaced tokens grows large, but with two benefits. First, the method does not rely on input examples but creates samples out of nothing using the LM. This is useful for cases where the data cannot be shared because it contains confidential information Amin-Nejad et al. (2020). Moreover, rather than exploring the neighborhood of these examples (and so conditioning the explanations on these examples' context), the domain of the exploration is defined by the domain of the LM, which is significantly broader. Besides, either a general LM can be used to study the model behavior on generic data, or an LM specific to the downstream domain can be used to make sure the model works well on this specific type of data. Second, the method does not generate **before** classifying the text but employs the classifier **during** the generation. Hence, instead of "randomly" generating texts and hoping for important features to appear, we explicitly query the model for stereotypical features by maximizing \(D(c\mid x)\). This makes the method more efficient and reduces the probability of generating rare features that are not important for the model, while reducing the odds of generating "in the middle" texts containing features from various classes that are misleading. Besides, our method directly relies on the distribution learned by the studied model to guide the generation, unlike methods like Polyjuice and GYC which, in addition to requiring input data, count on a distribution learned by the LM to bias the generation towards the desired property (using control codes).
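The surrogate step of Figure 1 can be sketched in a few lines, assuming cooperative generation has already produced `texts` together with the class `labels` they were generated for (the hyperparameters here are illustrative, not the paper's exact configuration):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def global_explanation(texts, labels, top_k=20):
    """Fit tf-idf + logistic regression on generated texts; return top words per class."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(texts)
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    vocab = np.array(vec.get_feature_names_out())
    # In the binary case sklearn stores a single weight vector (for classes_[1]).
    W = clf.coef_ if len(clf.classes_) > 2 else np.vstack([-clf.coef_[0], clf.coef_[0]])
    return {c: vocab[np.argsort(w)[::-1][:top_k]].tolist()
            for c, w in zip(clf.classes_, W)}
```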
Finally, Therapy is distinct from methods analyzing the frequency of input terms in the training data, such as sensitivity analysis, since it does not require access to (training) data and directly exploits the distribution effectively learned by the model, whereas nothing guarantees that a model actually uses the terms extracted from the training data to make a prediction. Furthermore, our method differs from existing example-based and feature-attribution methods since, to the best of our knowledge, there exists no global and model-agnostic explanation method that does not require any input data. ## 5 Experiments In this section, we first give technical details on the experiments conducted to evaluate Therapy (Section 5.1). We then evaluate Therapy through three experiments. The first one (Section 5.2) measures the Spearman correlation between the explanations and the weights of a glass box and studies the influence of the number of generated texts on the quality of the explanation returned by the linear model. We then compare the capacity of the method to correctly identify the most important words of the glass box to the one of LIME and SHAP using precision/recall curves in Section 5.3. Finally, we test whether the terms returned by the different approaches are sufficient to modify the prediction of the classifier in Section 5.4. The code of Therapy and our experiments will be made available upon acceptance. ### Experimental setup **Glass-box explanation.** Since there are no ground truth explanations available to be used as a goal for evaluated methods, we use a glass-box model, that is, a model explainable by design but used as a black box (i.e., without being able to use its inner workings to generate explanations). Following prior work Guidotti (2021), we train a logistic regression using sklearn Pedregosa et al. (2011) and use its weights as token importance scores. **Baseline.** To measure the contribution of the classifier guidance, we also consider a baseline that follows the same protocol but, instead of being guided by the logistic regression, generates texts without constraining the LM and uses the glass box **after** the generation is done to get the target labels. ### Spearman correlation A good explanation of the glass box is a list of features that contains both its important features (i.e., has good coverage) and links them to a similar relative weight. Hence, we compute the Spearman correlation between the top words of the glass box (having a weight \(>1\)) and their scores attributed by the explainer. We selected Spearman correlation over Pearson because the scores returned by LIME and SHAP can be very different from logistic regression weights, and so rank correlation results in a fairer comparison. #### 5.2.1 Influence of the number of generated texts One critical parameter of the proposed method is the number of texts to generate since more tokens allow a larger coverage but require more computation. We report the Spearman correlation against the number of generated texts per class in Figure 2. We observe that the correlation quickly rises until plateauing, meaning that a small number of texts already offers a good overview of the model behavior and that the method does not require a lot of compute to perform. We thus fixed the number of generated texts for Therapy to 3000 for each class for the rest of our experiments.
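Schematically, the correlation evaluation used throughout Section 5.2 boils down to the following (assigning a zero score to words the explainer never returns is our assumption, which penalizes poor coverage):

```python
from scipy.stats import spearmanr

def correlation_with_glassbox(glassbox_weights, explainer_scores, threshold=1.0):
    """Spearman correlation on the glass box's top words (weight > threshold).

    Both arguments are {word: score} dictionaries.
    """
    top = [w for w, v in glassbox_weights.items() if v > threshold]
    gold = [glassbox_weights[w] for w in top]
    pred = [explainer_scores.get(w, 0.0) for w in top]  # missing word -> 0
    return spearmanr(gold, pred)  # (correlation, p-value)
```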
#### 5.2.2 Importance of the classifier guidance Cooperative generation allows Therapy to guide the LM during the decoding process and to move away from its distribution toward that of the model studied. To study the importance of this guidance, we report, in addition to the baseline, the results obtained when selecting the most played token during MCTS generation. As mentioned in Section 3.2, the token added to the current context can be selected as the most played node or the one obtaining the highest score. Selecting the highest-scored node generates texts that are the most stereotypical of the studied model, while the most played node is closer to the LM a priori. Results reported in Table 1 show that both the baseline and using the most played node exhibit competitive results on amazon_polarity but struggle more on ag_news. This can be explained by the fact that the LM tends not to generate positive and negative terms at the same time, so the classes are clearly defined even in unconstrained samples. On ag_news, however, there is more overlap between classes, and so using cooperative generation helps to generate texts that are more distinctive of a given class. These results both highlight the contribution of the cooperative generation and motivate the token selection method.

\begin{table} \begin{tabular}{l c c c c c c} Dataset & \multicolumn{2}{c}{amazon\_polarity} & \multicolumn{4}{c}{ag\_news} \\ \cline{2-7} Class & Positive & Negative & World & Sports & Business & Sci/Tech \\ \hline Baseline & 0.49 (6.24e-08) & 0.31 (9.25e-05) & 0.25 (1.67e-06) & 0.32 (6.58e-09) & 0.35 (1.88e-11) & 0.12 (2.33e-02) \\ \hline Therapy - most played & 0.52 (5.79e-09) & 0.32 (7.83e-05) & 0.22 (1.57e-05) & 0.27 (7.66e-07) & 0.32 (2.04e-09) & 0.22 (1.93e-05) \\ \hline Therapy - highest score & 0.49 (3.3e-08) & 0.31 (1.0e-04) & 0.27 (1.6e-07) & 0.37 (4.0e-12) & 0.38 (5.6e-13) & 0.3 (8.9e-09) \\ \end{tabular} \end{table} Table 1: Spearman correlation (p-value) between the top words of a logistic regression glass-box and explanation methods learning a logistic regression over generated texts. Baseline uses unconstrained samples while Therapy generates samples using the MCTS, either selecting the most played or highest scored node. Results are shown per class and dataset.

Figure 2: Spearman correlation w.r.t. the number of generated texts per class for amazon_polarity and ag_news.

#### 5.2.3 Comparison with other methods The Spearman correlations of all the evaluated approaches can be found in Table 2. Results yielded by Therapy are better than those of LIME on ag_news but worse on amazon_polarity, whereas SHAP yields better results than both methods on both datasets. Counterintuitively, these are positive results for Therapy because the other methods have access to the test set of the studied dataset, ensuring that the target features are found in the input examples. To test the performance when this assumption no longer holds, we resort to two variants of LIME and SHAP, denoted by _-other_. The key distinction between these methods lies in the dataset employed as input data. We use amazon_polarity texts as input to find features in ag_news and vice-versa. The findings from these experiments reveal that existing methods fail to find important features, leading to a significant drop in correlations, substantially lower than those of Therapy. ### Precision Recall Besides assigning correct scores to important features of the model, we also want to make sure that Therapy gives an informative output in practice. That is, making sure that most features returned by the explainer (i.e., its highest-scored features) are indeed important features of the original model and that most of its important features are found.
Thus, we report precision/recall curves averaged over every class in Figure 3. Precision is obtained by computing, for different numbers of words returned, the proportion that is in the most important features of the original model. Conversely, recall is the proportion of the original model's top words retrieved. The number of words returned ranges from 10 to 1500. Figure 3: Precision/recall curves of the glass-box top words for the different explanation methods. ### Modifying the classifier's prediction Important words returned by an explainer should have a direct effect on the confidence of the model. Thus, we compute an insertion/deletion metric that measures the proportion of texts whose glass-box decision changes when a word listed as important for the original class is removed and replaced by an important word from another class. Figure 4 shows the results on both datasets for Therapy, the baseline method, LIME, SHAP, and their versions using the other dataset as input (-other) on 1000 texts. Replacements are done by iterating over the list of the top 250 words returned by each method for the original class until the decision of the model changes. Replacement can only occur if the word is present within the text, and multiple replacements of the same word in a given text are counted as multiple replacements. This explains why each method has a different maximum number of words replaced. Figure 4: Proportion of texts whose glass-box prediction changes w.r.t. the number of important words from the original class replaced by important words from other classes. Methods that leverage generative models seem to achieve more replacements. We hypothesize that this is because they are designed to globally explain the model on the input domain, unlike local methods that can return words that are specific to a given input and do not generalize well. We observe that Therapy achieves very similar results to those of LIME and SHAP on amazon_polarity but significantly worse than both on ag_news. However, when compared to the -other versions, Therapy achieves very convincing results, showing once again that these methods require very specific data while Therapy is able to find important words without accessing any data nor using any a priori on the model. In this experiment as well, Therapy outperforms the baseline on both datasets, although the difference is more noticeable on ag_news.
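The word-replacement procedure can be sketched as follows (a simplified illustration: the one-to-one pairing of removed and inserted words is an assumption, and `glass_box` stands for any classifier exposing a `predict` method over raw texts):

```python
def replacement_flip_rate(texts, glass_box, top_words_orig, top_words_other, max_words=250):
    """Fraction of texts whose predicted label flips when important words of
    the original class are replaced by important words of another class."""
    flipped = 0
    for text in texts:
        original_label = glass_box.predict([text])[0]
        tokens = text.split()
        for w_out, w_in in zip(top_words_orig[:max_words], top_words_other[:max_words]):
            # only words actually present in the text can be replaced
            tokens = [w_in if t == w_out else t for t in tokens]
            if glass_box.predict([" ".join(tokens)])[0] != original_label:
                flipped += 1
                break
    return flipped / len(texts)
```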
## 6 Conclusion Usual explainability methods heavily rely on input data, which is not necessarily available and might not contain model biases or important features. We propose Therapy, a method that leverages cooperative textual generation to create synthetic data that follow the studied model distribution. The search is thus driven by a pre-trained LM rather than by input samples. The pre-trained LM allows a broader exploration than being restricted to the neighborhood of input data, relaxing most of the constraints and a priori induced by example-driven methods. In the extreme case where data highly representative of the model's important features (such as the test set of a given dataset) is available, Therapy lags a bit behind state-of-the-art SHAP while remaining competitive. However, when considering more realistic cases where we do not explicitly give the important features to the explainer or do not have any available data, its performance is very good, whereas the other methods collapse when they are even applicable. Comparisons with a generate-then-classify baseline highlight the benefits of the cooperative generation when the LM does not generate texts that are representative of a single specific class by itself. Therefore, Therapy is a useful tool to explore the model behavior on a large domain when collecting data that exactly matches the downstream distribution is not feasible. Finally, we compared the proposed approach to LIME and SHAP to highlight the interest of generating representative texts using cooperative generation when input data is lacking. However, an interesting avenue of research would be to use these established explainability methods on cooperatively generated texts, replacing the proposed logistic regression on the tf-idf representations. This potential combination might allow leveraging their performance while alleviating the dependency on input data.
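The combination suggested above could look like the following sketch (illustrative only; it assumes the `lime` package, a `predict_proba`-style black box, and `generated_texts` produced by the cooperative decoder):

```python
from collections import defaultdict
from lime.lime_text import LimeTextExplainer

def global_lime_on_generated(generated_texts, predict_proba, class_names, num_features=10):
    """Aggregate local LIME explanations computed on cooperatively
    generated texts into a single global word ranking."""
    explainer = LimeTextExplainer(class_names=class_names)
    totals = defaultdict(float)
    for text in generated_texts:
        exp = explainer.explain_instance(text, predict_proba, num_features=num_features)
        for word, weight in exp.as_list():
            totals[word] += weight
    return sorted(totals.items(), key=lambda kv: -abs(kv[1]))
```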
2303.17268
Classical-to-quantum non-signalling boxes
Here we introduce the concept of classical input - quantum output (C-Q) non-signalling boxes, a generalisation of the classical input - classical output (C-C) non-signalling boxes. We argue that studying such objects leads to a better understanding of the relation between quantum nonlocality and non-locality beyond quantum mechanics. The main issue discussed in the paper is whether there exist 'genuine' C-Q boxes or all C-Q boxes can be built from objects already known, namely C-C boxes acting on pre-shared entangled quantum particles. We show that large classes of C-Q boxes are non-genuine. In particular, we show that all bi-partite C-Q boxes with outputs that are pure states are non-genuine. We also present various strategies for addressing the general problem, i.e. for multi-partite C-Q boxes which output mixed states, whose answer is still open. Finally, we show that even some very simple non-genuine C-Q boxes require large amounts of C-C nonlocal correlations in order to simulate them.
Carolina Moreira Ferrera, Robin Simmons, James Purcell, Daniel Collins, Sandu Popescu
2023-03-30T10:14:51Z
http://arxiv.org/abs/2303.17268v6
# Classical-to-quantum non-signalling boxes ###### Abstract Here we introduce the concept of classical input - quantum output (C-Q) non-signalling boxes, a generalisation of the classical input - classical output (C-C) non-signalling boxes. We argue that studying such objects leads to a better understanding of the relation between quantum nonlocality and non-locality beyond quantum mechanics. The main issue discussed in the paper is whether there exist "genuine" C-Q boxes or all C-Q boxes can be built from objects already known, namely C-C boxes acting on pre-shared entangled quantum particles. We show that large classes of C-Q boxes are non-genuine, and present various strategies for addressing the general problem, whose answer is still open. Results concerning tri-partite quantum entanglement that follow from this approach are also presented. ## I Introduction During recent years, the existence of long-distance nonlocal correlations, first discovered by J. Bell [2] and experimentally confirmed by S.J. Freedman and J.F. Clauser [4] and A. Aspect, P. Grangier and G. Roger [5], came to be understood as one of the main aspects of nature. Very intensive research has taken place on the subject, from understanding the various aspects of entanglement and Bell inequalities, to making use of non-locality in virtually all quantum information tasks, and even to leading to new insights in quantum gravity. To further compound the surprise of the very existence of nonlocality, it was later realised that nonlocal correlations even stronger than those allowed by quantum mechanics could in principle exist without entering into conflict with relativity [7]. This has raised fundamental questions about Nature. Perhaps such correlations exist in Nature, only we have not discovered them yet. If discovered, it would mean that quantum mechanics is not a valid description of Nature and needs to be replaced by another theory. On the other hand, if such correlations do not exist, why don't they exist? As they are not in contradiction with relativity, what other fundamental principles of nature rule them out? One fruitful approach to the above question has been to consider "non-signalling boxes", hypothetical boxes that accept classical inputs and yield classical outputs that are non-locally correlated with each other [8], and then to look for tasks for which such boxes would be useful when the correlations are stronger than those allowed by quantum mechanics. It has been discovered that some tasks, mostly of an information-processing nature, with no relation whatsoever to quantum mechanics, have qualitatively different behaviour when allowed access to such boxes [12], [13]. Tantalizingly, some tasks underwent a qualitative change precisely at the boundary between quantum and beyond-quantum correlation strengths [9; 10; 11]. This already shows that quantum mechanics is a very special theory from a fundamental point of view, unrelated to "physical" properties such as the structure of atoms, etc. Yet not the entire boundary of the set of quantum correlations has been singled out in this way. It is therefore quite important, in order to make progress along this line, to find new tasks and/or different ways to characterise nonlocality. Here we introduce a (potentially) new type of non-signalling, non-local "boxes". The boxes have classical inputs and they output quantum particles in given correlated states, depending on the inputs (Fig. 1).
In order for the box to be non-signalling, the reduced density matrix of each party must be independent of the input of the other party: \[\begin{array}{c}\rho_{A}^{x,y}=\rho_{A}^{x}\\ \rho_{B}^{x,y}=\rho_{B}^{y}\end{array} \tag{1}\] where \(\rho_{A}^{x,y}=\mbox{Tr}_{B}\rho_{A,B}^{x,y}\) and \(\rho_{B}^{x,y}=\mbox{Tr}_{A}\rho_{A,B}^{x,y}\). In the case of multiple parties we generalise these conditions by requiring that the density matrices of all subgroups of parties cannot depend on the inputs of the other parties. Figure 1: A C-Q box has two parts, taking inputs \(x\) and \(y\). It outputs a quantum state \(\left|\Psi^{x,y}\right\rangle_{AB}\), joint across A and B. Coming now to the crucial question of this paper, we said above that the C-Q boxes are "potentially new" because at the moment we do not know whether there exist (theoretically) such _genuine_ classical-quantum (C-Q) non-signalling boxes, or whether all of them can be decomposed into already known objects. Addressing this question is the core of the present paper. Regardless of the answer, there are several reasons for considering such boxes. Quantum states have properties that are not captured by the formalism of the standard classical input-classical output (C-C) non-signalling boxes. While C-C non-signalling boxes can present correlations stronger than those allowed by classical mechanics, their dynamics is far more limited than that of quantum states [15]. In particular, while nonlocality swapping (algebraically described by entanglement swapping) is possible in quantum mechanics, it is not possible for C-C boxes [14]. Obviously then, if genuine C-Q non-signalling boxes exist, they will extend the range of non-local phenomena that we thought to be possible in a non-deterministic world, while being consistent with relativity. On the other hand, if genuine C-Q boxes do not exist, another set of interesting questions follows. First, what are the resources needed to implement them via C-C boxes and pre-shared entanglement? As we will show, in some cases it seems that these resources need to be extremely large. Second, and more important: why do genuine C-Q boxes not exist? Why is it that the most nonlocal non-signalling device with quantum output is an ordinary pairing of a C-C non-signalling device and pre-shared entanglement? As for the answer, the question of the existence of genuine C-Q boxes is still open. Here we present partial results. They set up the general strategy and expose basic structures of C-Q boxes. We also show that large classes of C-Q boxes are non-genuine. In particular we shall show that all bi-partite C-Q boxes whose outputs are pure states are non-genuine. For mixed states we shall present various examples of boxes which are non-genuine, but we do not know in general whether all boxes are non-genuine. We also present some surprising results for tri-partite situations. ## II Non-genuine C-Q boxes Obviously, some C-Q boxes could be obtained by a combination of pre-shared entangled quantum states and a classical-classical non-signalling box, which are encapsulated in a bigger box so that from the outside one doesn't see this combination, as illustrated in Fig. 2. Specifically, the inputs \(x\) and \(y\) are plugged into the C-C box, which gives outputs \(a\) and \(b\). Then Alice applies some unitary transformation \(\hat{U}_{A}^{x,a}\) to her quantum particle depending on her input \(x\) and the C-C box output \(a\) and finally outputs her quantum particle out of the bigger box. Bob follows a similar procedure.
Internal ancillas could also be added. From the outside this combination looks like a C-Q box. We call these "non-genuine" classical-quantum boxes. ## III Examples of boxes with pure states outputs We present a few examples of boxes, of increasing complexity, which turn out to be non-genuine. This will illustrate the main ideas of how to create the desired boxes and show that they are non-genuine. ### QM correlations are not enough As a first example consider inputs \(x,y=0,1\) and the C-Q box defined by \[\begin{split} x\cdot y=0&\rightarrow\frac{1}{\sqrt{2}}(\ket{0}\ket{0}+\ket{1}\ket{1})\\ x\cdot y=1&\rightarrow\frac{1}{\sqrt{2}}(\ket{0}\ket{1}+\ket{1}\ket{0})\end{split} \tag{2}\] where \(\ket{0}\ket{1}\) is short for \(\ket{0}_{A}\ket{1}_{B}\). Note that this box _cannot_ be created with only shared entanglement and no other non-local resources such as C-C boxes, as one can make the PR box \[(a-b)\bmod 2=x\cdot y \tag{3}\] from this by measuring in the 0/1 basis, and the PR box (which is one example of a C-C box) is known to give stronger-than-quantum correlations. This box however is a non-genuine C-Q one, since it can be simulated by the PR box above plus the pre-shared maximally entangled state \(\ket{\Phi}=\frac{1}{\sqrt{2}}(\ket{0}\ket{0}+\ket{1}\ket{1})\). Alice simply needs to apply the bit flip operator \(U_{A}\) (which flips \(\ket{0}\) to \(\ket{1}\) and vice versa) to \(A\) whenever the PR box gives \(a=1\). Similarly Bob applies the same bit flip operator \(U_{B}\) to \(B\) whenever \(b=1\). When \(x\cdot y=0\) the PR box gives \(a=b\), so Alice and Bob will either both flip or neither flip their qubits, and both operations leave \(\ket{\Phi}\) unchanged. When \(x\cdot y=1\) only one of Alice or Bob will flip their bits and we will have \(\frac{1}{\sqrt{2}}(\ket{0}\ket{1}+\ket{1}\ket{0})\) as desired. ### Sign flip Next consider inputs \(x,y=0,1\) and the box which outputs states \(\ket{\Psi^{x,y}}\) according to \[\ket{\Psi^{x,y}}=\alpha\ket{0}\ket{0}+\beta e^{i\pi x\cdot y}\ket{1}\ket{1}. \tag{4}\] This can be simulated by a PR box \((a-b)\bmod 2=x\cdot y\). Alice and Bob apply \(\hat{U}_{A}^{a}\) and \(\hat{V}_{B}^{b}\) respectively to the initial state \(\ket{\Psi}=\alpha\ket{0}\ket{0}+\beta\ket{1}\ket{1}\), where \[\hat{U}_{A}^{a}\ket{0}_{A} =\ket{0}_{A} \hat{V}_{B}^{b}\ket{0}_{B} =\ket{0}_{B} \tag{5}\] \[\hat{U}_{A}^{a}\ket{1}_{A} =e^{i\pi a}\ket{1}_{A} \hat{V}_{B}^{b}\ket{1}_{B} =e^{-i\pi b}\ket{1}_{B}\] which leads to \[\ket{\Psi}\rightarrow\hat{U}_{A}^{a}\hat{V}_{B}^{b}\ket{\Psi} =\alpha\ket{0}\ket{0}+\beta e^{i\pi(a-b)}\ket{1}\ket{1} \tag{6}\] \[=\alpha\ket{0}\ket{0}+\beta e^{i\pi((a-b)\bmod 2)}\ket{1}\ket{1}\] \[=\alpha\ket{0}\ket{0}+\beta e^{i\pi x\cdot y}\ket{1}\ket{1}\] which is the desired state. ### Phase change The sign change in the previous example can be generalized to an arbitrary rational phase parameterized by \(\theta\): \[\ket{\Psi^{x,y}}=\alpha\ket{0}\ket{0}+\beta e^{i2\pi\theta x\cdot y}\ket{1}\ket{1}. \tag{7}\] In the previous case \(\theta=1/2\). Suppose now \(\theta=1/4\). It is simple to see that using a standard PR box one cannot implement this C-Q box. Does this mean that this box is genuine C-Q? No. To decide that a C-Q box is genuine we need to show that there is _no_ way to construct it by using a standard classical-classical box and using its outputs to implement appropriate local unitary operations on a pre-shared entangled quantum state. In our case it turns out that this is possible.
The desired C-Q box can be constructed by the use of a C-C box which takes inputs \(x,y=0,1\) and gives outputs \(a,b=0,1,2,3\) according to \((a-b)\bmod 4=x\cdot y\), with all pairs of outcomes that respect this constraint being given with equal probability (\(1/4\) in this case). Alice then performs the rotation \(\hat{U}_{A}^{a}\ket{1}_{A}=e^{i2\pi a/4}\ket{1}_{A}\) and Bob \(\hat{V}_{B}^{b}\ket{1}_{B}=e^{i2\pi(-b)/4}\ket{1}_{B}\). This gives \[\ket{\Psi} \rightarrow\alpha\ket{0}\ket{0}+\beta e^{i2\pi(a-b)/4}\ket{1}\ket{1} \tag{8}\] \[=\alpha\ket{0}\ket{0}+\beta e^{i2\pi x\cdot y/4}\ket{1}\ket{1}\] as desired. For other rational values of \(\theta\), i.e. \(\theta=\frac{m}{n}\) where \(m\) and \(n\) are integers, we can use a similar box with \(n\)-valued outputs giving \((a-b)\bmod n=x\cdot y\) and rotation \(\hat{U}_{A}^{a}\ket{1}_{A}=e^{i2\pi am/n}\ket{1}_{A}\). ### Phase change boxes and use of resources As we have seen before, any phase change box with \(\theta\) equal to a rational number, \(\theta=\frac{m}{n}\), can be realised by a C-C box and unitary transformations, provided that we use a C-C box with \(n\) outcomes. When \(n\) is large, this C-C box represents a large amount of non-local resources. One might ask whether there is a more efficient way to implement this C-Q box. However we shall show here that this is the most efficient possible implementation, both in terms of the C-C box and of entanglement. Proving that to implement a phase change box with \(\theta=\frac{1}{n}\) we require a C-C box with \(a\) and \(b\) each having \(n\) outcomes, defined by \((a-b)\bmod n=x\cdot y\), proceeds as follows. First, assume that the procedure is implemented by starting with \(\ket{\Psi}=\alpha\ket{0}\ket{0}+\beta\ket{1}\ket{1}\), and applying unitary operations \(\hat{U}_{A}^{a}\) and \(\hat{V}_{B}^{b}\) when the C-C box outputs \(a\) and \(b\). For simplicity we consider the case when \(\alpha\neq\beta\), so the only operations allowed are phase shifts. Furthermore, we can take the \(\hat{U}_{A}^{a}\) to be different for different \(a\), since otherwise we can use a simplified C-C box by merging them together, and similarly for Bob. For the case \((x,y)=(0,0)\), when \(a=0\), which occurs with some probability \(p_{0}\), Alice applies \(\hat{U}_{A}^{0}\). To generate \(\ket{\Psi}\) Bob must apply \((\hat{U}_{B}^{0})^{*}\), i.e. the complex conjugate. We can label his C-C box outcome \(b=0\) in this case. The next step is to realise that for the inputs \((x,y)=(0,0)\) the pair of outputs \((a,b)=(0,1)\) is redundant, as Bob would need to apply the same unitary \((\hat{U}_{B}^{0})^{*}\) as for \((a,b)=(0,0)\). Hence, for the inputs \((x,y)=(0,0)\) we will only consider \(a=0\) to be paired with \(b=0\). Similarly for \(a=i\), when Alice applies \(\hat{U}_{A}^{i}\), Bob must apply \((\hat{U}_{B}^{i})^{*}\) and we can label his outcome \(b=i\). So we can describe the C-C box in this \((x,y)=(0,0)\) case as outputting \(a=b=i\) with probability \(p_{i}\). Due to no signalling of the C-C box from Bob to Alice, comparing \((x,y)=(0,1)\) to \((0,0)\) we see that in the case \((0,1)\) Alice must receive outcome \(a=i\) with the same probability \(p_{i}\) as for the \((0,0)\) inputs, upon which she will apply \(\hat{U}_{A}^{i}\). To keep \(\ket{\Psi}\) unchanged the C-C box must output \(b=a\), so that Bob applies \((\hat{U}_{B}^{i})^{*}\).
Similarly, due to no signalling from Alice to Bob, comparing \((x,y)=(1,0)\) to \((0,0)\) we see that in the case \((1,0)\) the C-C box must output \(b=i\) with probability \(p_{i}\), and to generate \(\ket{\Psi}\) give \(b=a\). The important difference comes when considering the inputs \((x,y)=(1,1)\). Comparing \((1,0)\) to \((1,1)\) it is still the case that for \((x,y)=(1,1)\) Alice must receive outcome \(a=i\) with probability \(p_{i}\) and apply \(\hat{U}_{A}^{i}\). And comparing \((0,1)\) to \((1,1)\) we see that for \((1,1)\) Bob must receive outcome \(i\) with probability \(p_{i}\) and apply \((\hat{U}_{B}^{i})^{*}\). However to generate the state \(\ket{\Psi^{1,1}}=(\sigma_{z}^{2/n})_{A}\ket{\Psi}\), where \(\sigma_{z}\) is the Pauli matrix which does \[\ket{0} \rightarrow\ket{0} \tag{9}\] \[\ket{1} \rightarrow e^{i\pi}\ket{1},\] we need the C-C box to pair up \(a\) and \(b\) so that \[(\hat{U}_{B}^{b})^{*}\hat{U}_{A}^{a}\ket{\Psi} =\hat{U}_{A}^{a}(\hat{U}_{A}^{b})^{\dagger}\ket{\Psi} \tag{10}\] \[=(\sigma_{z}^{2/n})_{A}\ket{\Psi},\] \[\implies\hat{U}_{A}^{a} =(\sigma_{z}^{2/n})_{A}\hat{U}_{A}^{b}.\] Thus for any \(i\), if \(\hat{U}_{A}^{i}\) is one of the unitaries used in implementing the C-Q box, then \((\sigma_{z}^{2/n})_{A}\hat{U}_{A}^{i}\) is another one. The smallest set of \(\hat{U}_{A}^{i}\) which has this property is the one where \(i=0,\ldots,n-1\) and \[\hat{U}_{A}^{i}=(\sigma_{z}^{2i/n})_{A}. \tag{11}\] Given this set, we can create the desired state by pairing together \(a\) and \(b\) as \[(a-b)\bmod n=x\cdot y \tag{12}\] which is a generalisation of the standard PR box, which can also be written as \((a-b)\bmod 2=x\cdot y\). This proves that if we implement the C-Q box in Eq. (7) with \(\theta=\frac{1}{n}\), using an initial \(\ket{\Psi}\), local unitaries and a C-C box, then the C-C box needs to have dimension \(n\) and be of the form \((a-b)\bmod n=x\cdot y\). One could imagine starting with a different initial shared entangled state, but that will not help us to make a simpler box, as e.g. the unitaries we apply for \(a=b=0\) when \(x\cdot y=0\) must create \(\ket{\Psi}\) regardless, and the rest of the argument then proceeds as before. Thus any implementation of this C-Q box requires a C-C box with \(n\) outputs. ### Approximate C-C + entanglement implementations of C-Q boxes In the previous subsections we have shown that various C-Q boxes are not genuine, since we were able to implement them via a C-C box and pre-shared entanglement. Here we introduce a new idea, the **approximate** C-C + entanglement implementation of a C-Q box. This is an important idea because it is likely that there are many C-Q boxes that cannot be implemented _exactly_ by C-C + entanglement but can be implemented arbitrarily closely. An example is that of the phase change with irrational \(\theta\). It is clear that we cannot implement this C-Q box via a C-C box with a finite number of different outputs. However, we can approximate any irrational \(\theta\) arbitrarily closely by a rational number, and use the procedure described in the previous section to exactly implement this rational phase box. An alternative method which implements the phase change for irrational \(\theta\) exactly is to use a C-C box with \(a\) and \(b\) real numbers in \([0,1)\) satisfying \((a-b)\bmod 1=(x\cdot y)\theta\). This is similar to taking the limit of using a rational approximation and letting \(n\rightarrow\infty\). ### C-Q box with maximally entangled outputs for each input
Consider a C-Q box with maximally entangled outputs for each input. Any arbitrary maximally entangled state can be written by acting on, say, Alice's side with a unitary on a standard maximally entangled state. Using this representation we can define an arbitrary C-Q box with maximally entangled outputs as: \[\ket{\Psi^{0,0}} =\hat{\alpha}_{A}\ket{\Psi^{-}} \tag{13}\] \[\ket{\Psi^{0,1}} =\hat{\beta}_{A}\ket{\Psi^{-}}\] \[\ket{\Psi^{1,0}} =\hat{\gamma}_{A}\ket{\Psi^{-}}\] \[\ket{\Psi^{1,1}} =\hat{\delta}_{A}\ket{\Psi^{-}},\] where \(\ket{\Psi^{-}}=\frac{1}{\sqrt{2}}(\ket{0}\ket{1}-\ket{1}\ket{0})\). To show that this box is non-genuine C-Q we will show how to implement it using two steps, of which only the first uses a C-C box. First we will show how to implement a simplified version of the above box where \(\hat{\alpha}_{A}=\hat{\beta}_{A}=\hat{\gamma}_{A}=\mathbf{1}\) and \(\hat{\delta}_{A}=\hat{U}_{A}\) for an arbitrary \(\hat{U}\). This can be done by noting that any unitary on a 2-dimensional system can be viewed in the Bloch sphere as a rotation by \(\theta\) around a given axis, and the phase change implemented above is exactly such a rotation around the 0/1 axis. As \(\ket{\Psi^{-}}\) is rotationally symmetric we can always write it as \(\frac{1}{\sqrt{2}}\left(\ket{\vec{n}}\ket{-\vec{n}}-\ket{-\vec{n}}\ket{\vec{n}}\right)\) where \(\vec{n}\) is the axis of rotation, and then use the phase change box from the previous section to apply the rotation around that axis. To implement the more general box in eq. (13) we shall next apply some local unitary operations. Alice applies \(\hat{\alpha}_{A}\) when \(x=0\) and \(\hat{\gamma}_{A}\) when \(x=1\), and Bob does nothing when \(y=0\) and \(\hat{\beta}_{B}^{\dagger}\hat{\alpha}_{B}\) when \(y=1\). This gives us \[\begin{split}\left|\Psi^{0,0}\right\rangle&=\hat{\alpha}_{A}\left|\Psi^{-}\right\rangle\\ \left|\Psi^{0,1}\right\rangle&=\hat{\alpha}_{A}\hat{\beta}_{B}^{\dagger}\hat{\alpha}_{B}\left|\Psi^{-}\right\rangle\\ \left|\Psi^{1,0}\right\rangle&=\hat{\gamma}_{A}\left|\Psi^{-}\right\rangle\\ \left|\Psi^{1,1}\right\rangle&=\hat{\gamma}_{A}\hat{\beta}_{B}^{\dagger}\hat{\alpha}_{B}\hat{U}_{A}\left|\Psi^{-}\right\rangle.\end{split} \tag{14}\] Now note that due to the rotational symmetry of \(\left|\Psi^{-}\right\rangle\), if Alice and Bob act on their particles with the same unitary operator \(\hat{U}\), the state remains unchanged.
So, in particular \[\begin{split}\hat{U}_{A}\hat{U}_{B}\left|\Psi^{-}\right\rangle&=\left|\Psi^{-}\right\rangle\\ \hat{U}_{B}\left|\Psi^{-}\right\rangle&=\hat{U}_{A}^{\dagger}\left|\Psi^{-}\right\rangle.\end{split} \tag{15}\] Thus \[\begin{split}\left|\Psi^{0,1}\right\rangle&=\hat{\alpha}_{A}(\hat{\beta}_{A}^{\dagger}\hat{\alpha}_{A})^{\dagger}\left|\Psi^{-}\right\rangle\\ &=\hat{\alpha}_{A}\hat{\alpha}_{A}^{\dagger}\hat{\beta}_{A}\left|\Psi^{-}\right\rangle\\ &=\hat{\beta}_{A}\left|\Psi^{-}\right\rangle,\end{split} \tag{16}\] and \[\begin{split}\left|\Psi^{1,1}\right\rangle&=\hat{\gamma}_{A}\hat{U}_{A}\hat{\beta}_{B}^{\dagger}\hat{\alpha}_{B}\left|\Psi^{-}\right\rangle\\ &=\hat{\gamma}_{A}\hat{U}_{A}(\hat{\beta}_{A}^{\dagger}\hat{\alpha}_{A})^{\dagger}\left|\Psi^{-}\right\rangle\\ &=\hat{\gamma}_{A}\hat{U}_{A}\hat{\alpha}_{A}^{\dagger}\hat{\beta}_{A}\left|\Psi^{-}\right\rangle.\end{split} \tag{17}\] We can thus achieve \(\left|\Psi^{1,1}\right\rangle=\hat{\delta}_{A}\left|\Psi^{-}\right\rangle\) by setting \(\hat{U}_{A}=\hat{\gamma}_{A}^{\dagger}\hat{\delta}_{A}\hat{\beta}_{A}^{\dagger}\hat{\alpha}_{A}\). Thus we have shown how to implement our C-Q box with maximally entangled outputs in terms of C-C boxes and local unitary operations. Finally, we note that the protocol involves creating a phase change box to implement \(U\). The resources used are therefore the ones necessary to implement the phase change, which depend on what the phase is, as described in Section III.4. They could be very large, and if the phase is irrational we can only implement \(U\) approximately using a finite dimensional C-C box. ### Higher Dimensions We will now consider C-Q boxes with more inputs, say \(x=0,1\) and \(y=0,1,2\). This gives hope of finding a genuine C-Q box. The idea is that non-signalling already imposes many constraints on implementing a C-Q box via C-C + entanglement on the subset of inputs \(x,y=0,1\), and that the input \(y=2\), when paired with \(x=0,1\), will add supplementary constraints that can no longer be fulfilled. Consider the box \[\begin{split}\left|\Psi^{00}\right\rangle=\left|\Psi^{01}\right\rangle=\left|\Psi^{02}\right\rangle=\left|\Psi^{10}\right\rangle&=\frac{1}{\sqrt{2}}(\left|0\right\rangle\left|0\right\rangle+\left|1\right\rangle\left|1\right\rangle)\\ \left|\Psi^{11}\right\rangle&=\hat{\sigma}_{z}^{1/2}\left|\Psi^{00}\right\rangle\\ \left|\Psi^{12}\right\rangle&=\hat{\sigma}_{x}\left|\Psi^{00}\right\rangle,\end{split} \tag{18}\] where \(\hat{\sigma}_{z}\) is the usual Pauli operator \[\begin{split}\hat{\sigma}_{z}\left|0\right\rangle&=\left|0\right\rangle\\ \hat{\sigma}_{z}\left|1\right\rangle&=-\left|1\right\rangle,\end{split} \tag{19}\] and \(\hat{\sigma}_{x}\) flips the bits \[\begin{split}\hat{\sigma}_{x}\left|0\right\rangle&=\left|1\right\rangle\\ \hat{\sigma}_{x}\left|1\right\rangle&=\left|0\right\rangle.\end{split} \tag{20}\] This is the same as the phase changes we handled in Section III.3 for \((x,y)\in\{(0,0),(0,1),(1,0),(1,1)\}\). Thus we could construct that part from a pre-shared \(\left|\Psi^{00}\right\rangle\) using the C-C box \((a-b)\bmod 4=x\cdot y\), Alice applying the unitary operator \[\{\mathbb{1},\hat{\sigma}_{z}^{1/2},\hat{\sigma}_{z}^{2/2},\hat{\sigma}_{z}^{3/2}\} \tag{21}\] to \(A\) when \(a=0,1,2,3\) respectively, and Bob applying the inverse operator \[\{\mathbb{1},\hat{\sigma}_{z}^{-1/2},\hat{\sigma}_{z}^{-2/2},\hat{\sigma}_{z}^{-3/2}\} \tag{22}\] to \(B\) when he sees \(b=0,1,2,3\) respectively.
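Before moving on, here is a quick numerical sanity check of this partial construction (an illustration, not part of the paper; it takes \(\hat{\sigma}_{z}^{\theta}=\mathrm{diag}(1,e^{i\pi\theta})\) and covers the five input pairs handled so far):

```python
import numpy as np

def sz(theta):
    # sigma_z^theta: |0> -> |0>, |1> -> e^{i*pi*theta}|1>
    return np.diag([1.0, np.exp(1j * np.pi * theta)])

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)            # |Psi^{00}>

for x, y in [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1)]:
    target = np.kron(sz(x * y / 2), np.eye(2)) @ phi
    for a in range(4):
        b = (a - x * y) % 4                           # C-C box: (a-b) mod 4 = x*y
        out = np.kron(sz(a / 2), sz(-b / 2)) @ phi
        assert np.allclose(out, target)               # every output pair works
```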
However, in order to create \(\left|\Psi^{12}\right\rangle\) when \((x,y)=(1,2)\) we need something which allows us to flip the bit, e.g. \(\hat{\sigma}_{x}\). The no-signalling condition means that any C-C box we use must have the same set of outputs \(a\) for the \((x,y)\) cases \((1,1)\) and \((1,2)\). So it seems we have to add \(\hat{\sigma}_{x}\) to the unitaries in eq (21). However, since \(\hat{\sigma}_{x}\) doesn't commute with \(\hat{\sigma}_{z}\), it looks likely to break the state we carefully constructed for \((x,y)=(1,1)\). The solution is to use a new C-C box, described below, which has 8 outputs for \(a\) and \(b\) instead of 4. To make the desired states Alice will perform \(\hat{U}_{a}\) when the C-C box outputs \(a\), and Bob will perform \(\hat{U}_{b}^{*}\) (the complex conjugate) when the C-C box outputs \(b\), where \(\hat{U}_{i}\) is defined as \[\begin{split}\hat{U}_{0}=\mathbb{1},&\hat{U}_{1}=\hat{\sigma}_{z}^{1/2},&\hat{U}_{2}=\hat{\sigma}_{z}^{2/2},&\hat{U}_{3}=\hat{\sigma}_{z}^{3/2},\\ \hat{U}_{4}=\hat{\sigma}_{x},&\hat{U}_{5}=\hat{\sigma}_{z}^{1/2}\hat{\sigma}_{x},&\hat{U}_{6}=\hat{\sigma}_{z}^{2/2}\hat{\sigma}_{x},&\hat{U}_{7}=\hat{\sigma}_{z}^{3/2}\hat{\sigma}_{x}.\end{split} \tag{23}\] When \((x,y)\in\{(0,0),(0,1),(0,2),(1,0)\}\) the C-C box outputs \(a=b\), which gives \[\hat{U}_{a}^{A}(\hat{U}_{a}^{B})^{*}\left|\Psi^{00}\right\rangle=\left|\Psi^{00}\right\rangle \tag{24}\] (see appendix A for a more detailed proof). To make \(\left|\Psi^{11}\right\rangle\), the C-C box outputs the pairs \[\{(1,0),(2,1),(3,2),(0,3),(5,4),(6,5),(7,6),(4,7)\}. \tag{25}\] In other words, we pair together the cases where both \(a\) and \(b\) are less than 4, and then the cases where both \(a\) and \(b\) are at least 4. This works as the state is invariant under \(\hat{\sigma}_{x}\) performed by Alice and Bob simultaneously. To make \(\left|\Psi^{12}\right\rangle\), we instead pair together \(a\) and \(b\) as \[\{(4,0),(7,1),(6,2),(5,3),(0,4),(1,7),(2,6),(3,5)\}. \tag{26}\] It's straightforward to check this works as desired. The box is non-signalling since the outputs \(a\) and \(b\) occur with the same probability independent of \(x\) and \(y\). ## IV General pure state theorem Here we shall show that any C-Q non-signalling box which outputs a set of bi-partite pure states is non-genuine. We build the proof by first showing how to deal with the case of outputs that are maximally entangled states, then the case of non-maximally entangled states - the two cases being different in the constraints on the unitaries used when trying to implement them via C-C boxes - and finally the general C-Q boxes. ### The main idea First we show the main idea applied to a simple case. Suppose we want our box to output: \[\left|\Psi^{x,y}\right\rangle=\hat{\alpha}_{A}^{x\cdot y}\frac{1}{\sqrt{2}}(\left|0\right\rangle\left|0\right\rangle+\left|1\right\rangle\left|1\right\rangle), \tag{27}\] where \(\hat{\alpha}_{A}\) is an arbitrary unitary on A and \(x,y=0,1\), i.e. we want to apply the unitary only when \(x=y=1\). We could achieve this using the method described in Section III.3, however here we present a more powerful method which allows us to handle many more cases. Again we shall start with an entangled state \(\left|\Phi^{+}\right\rangle=\frac{1}{\sqrt{2}}(\left|0\right\rangle\left|0\right\rangle+\left|1\right\rangle\left|1\right\rangle)\). The group of all unitary operations Alice could apply to her qubit is SU(2).
We take a C-C box which takes inputs \(x,y=0,1\) and outputs \(a\) and \(b\) which each label a unitary in SU(2) (in order to parameterize SU(2), \(a\) and \(b\) are now both 3-dimensional and real rather than integers). For any fixed \((x,y)\), the non-local box outputs \(a\) and \(b\) are each independently distributed according to the Haar measure on SU(2), which essentially picks a unitary uniformly at random. The fact that the distribution of \(a\) is independent of \(y\), and the distribution of \(b\) is independent of \(x\), ensures there is no signalling. Since all unitaries on Alice and Bob's sides are possible, the final C-Q box depends upon how \(a\) and \(b\) are correlated. For \(x\cdot y=0\), the non-local box correlates \(a\) and \(b\) so that Alice applies the complex conjugate of Bob's unitary, i.e. when Bob does \(\hat{U}_{B}^{b}\), Alice does \(\hat{U}_{A}^{a}=(\hat{U}_{A}^{b})^{*}\). This leaves \(\left|\Phi^{+}\right\rangle\) unchanged as \((\hat{U}_{A}^{b})^{*}\hat{U}_{B}^{b}\left|\Phi^{+}\right\rangle=\left|\Phi^{+}\right\rangle\) (see Appendix A). For \(x\cdot y=1\), the non-local box correlates \(a\) and \(b\) so that when Bob does \(\hat{U}_{B}^{b}\), Alice does \(\hat{U}_{A}^{a}=\hat{\alpha}_{A}(\hat{U}_{A}^{b})^{*}\). This gives \(\left|\Phi^{+}\right\rangle\rightarrow\hat{\alpha}_{A}\left|\Phi^{+}\right\rangle\), and so we have implemented the box in Eq. (27). It's worth noting that the C-C box is able to correlate \(a\) and \(b\) in this way because they are both distributed "uniformly" (according to the Haar measure) across the group, so that for any \(\hat{U}_{b}\) the required \(\hat{U}_{a}=\hat{\alpha}(\hat{U}_{b})^{*}\) is again Haar-distributed. In terms of resources this is very expensive: we have 3 real parameters describing the C-C box we use to specify \(a\) and \(b\), but we expect that for any particular \(\hat{\alpha}_{A}\) there will be a simpler C-C box which allows us to implement this C-Q box. ### Maximally entangled pure states Now we generalize the main idea to output states of arbitrary dimension \(n\), and the inputs \(x\) and \(y\) to arbitrary dimension. Recall that in Section III.7 we showed an example of a particular C-Q box with higher input dimensions, which had additional constraints which made it more difficult to implement. Nevertheless we found a method to implement it using a C-C box and shared entanglement. Here we shall use the Haar measure idea from the previous section to generalize this to all bi-partite C-Q boxes outputting pure states. We show that all such boxes are non-genuine. Consider the box \[\begin{split}\left|\Psi^{x,y}\right\rangle&=\hat{U}_{A}^{x,y}\left|\Phi^{n+}\right\rangle,\\ \text{where}\ \left|\Phi^{n+}\right\rangle&=\frac{1}{\sqrt{n}}\sum_{i=0}^{n-1}\left|i\right\rangle_{A}\left|i\right\rangle_{B},\end{split} \tag{28}\] and \(n\), \(x\) and \(y\) are non-negative integers. Note that all maximally entangled pure states of dimension \(n\) can be obtained from any one of them by local rotations \(\hat{U}_{A}^{x,y}\) on A, so this covers a large class of non-signalling pure states. To implement this box we follow the same idea: start with the pre-shared state \(\left|\Phi^{n+}\right\rangle\), and use a C-C box which distributes \(a\) and \(b\) according to the Haar measure over SU(n), and which correlates them so that when Bob does \(\hat{U}_{B}^{b}\), Alice does \(\hat{U}_{A}^{a}=\hat{U}_{A}^{x,y}(\hat{U}_{A}^{b})^{*}\). This works as desired.
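The two identities used here are easy to check numerically (an illustrative sketch, not part of the paper; it uses scipy's Haar-distributed `unitary_group`):

```python
import numpy as np
from scipy.stats import unitary_group

n = 4
phi = np.eye(n).reshape(-1) / np.sqrt(n)     # |Phi^{n+}> = sum_i |i>|i> / sqrt(n)

for _ in range(100):
    Ub = unitary_group.rvs(n)                # Haar-random unitary for Bob
    # Alice applying the complex conjugate leaves |Phi^{n+}> invariant:
    assert np.allclose(np.kron(Ub.conj(), Ub) @ phi, phi)
    # Correlating a with b as U_a = V U_b^* implements V on Alice's side:
    V = unitary_group.rvs(n)
    out = np.kron(V @ Ub.conj(), Ub) @ phi
    assert np.allclose(out, np.kron(V, np.eye(n)) @ phi)
```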
Finally, if we do not require exact implementation of a C-Q box, for any finite set of inputs \(x,y\) we conjecture that it is possible to approximate the desired outputs arbitrarily well using a C-C box with finite dimensional outputs \(a,b\), essentially by taking a representative sample of all rotations distributed according to the Haar measure and pairing them up in a way which is reasonably close to the exact continuous solution. We believe it would be quite useful to have such a protocol in general. ### Non-maximally entangled pure states Next we show how to implement a C-Q box for arbitrary non-maximally entangled pure states, \[\ket{\Psi^{x,y}} =\hat{U}_{A}^{x}\hat{V}_{B}^{y}\hat{W}_{A}^{x,y}\ket{\Phi^{n}} \tag{29}\] \[\text{where}\ \ket{\Phi^{n}} =\sum_{i=0}^{n-1}\sqrt{p_{i}}\ket{i}_{A}\ket{i}_{B},\] \[\hat{W}_{A}^{x,y}\ket{i}_{A} =e^{i2\pi\alpha_{i}^{x,y}}\ket{i}_{A},\] \[\sum_{i=0}^{n-1}p_{i} =1,\] and \(\hat{U}_{A}^{x}\) and \(\hat{V}_{B}^{y}\) are local unitaries, i.e. we can relate the various output states of the box by \(\hat{W}_{A}^{x,y}\), which applies phases parameterized by \(\alpha_{i}^{x,y}\) in the computational basis \(\ket{i}\), and local unitaries \(\hat{U}_{A}^{x}\) and \(\hat{V}_{B}^{y}\). This in fact covers all boxes where the \(p_{i}\) are different for all \(i\), because, due to no signalling from Bob to Alice, once the state \(\ket{\Psi^{00}}\) is fixed the states \(\ket{\Psi^{0,y}}\) are of the form \(\hat{V}_{B}^{y}\hat{R}_{A}^{0,y}\ket{\Psi^{00}}\) (as Alice's density matrix cannot depend upon \(y\), the only \(y\)-dependent unitary that can act on \(A\) is one which applies phases to \(\ket{i}\), i.e. \(\hat{R}_{A}^{0,y}\)). Fixing \(y\), we can use no signalling from Alice to Bob to determine the form of the states for different \(x\) to be \(\ket{\Psi^{x,y}}=\hat{U}_{A}^{x}\hat{V}_{B}^{y}\hat{W}_{A}^{x,y}\ket{\Phi^{n}}\) as above. We can implement \(\hat{W}_{A}^{x,y}\) easily, using the fact that the unitaries applying the different phases commute, by following the final protocol in Section III.3. After that we just apply the local unitaries. This is in fact easier to implement than the maximally entangled case, as the no-signalling constraint forces \(\hat{W}_{A}^{x,y}\) to only apply phases, whereas in the maximally entangled case it can be an arbitrary unitary. ### General pure states Finally, to cover all possible sets of pure states, we consider what happens when some of the \(p_{i}\) are equal. In that case we can view any subset of \(\ket{i}\) where the \(p_{i}\) are equal as forming a maximally entangled subspace, noting that there may be several of these sub-spaces. Then we can create the desired set of states for each of those sub-spaces using the method in Section IV.2, then implement the necessary operations for those sub-spaces vs the others using the methods in this section. Thus we have shown how to implement any non-signalling set of pure bipartite states. ## V Mixed states We now look at C-Q boxes which produce quantum states which may be mixed or pure. One might think that this is a simple extension of the pure state case, as every mixed state can be written as a probabilistic mixture of pure states. However that is not the case, as we do not know of any guarantee that a set of non-signalling mixed states can be written as a probabilistic mixture of non-signalling sets of pure states. Thus mixed state C-Q boxes have a lot more freedom, and in general some of them may be genuine.
Below, however, we show that a reasonably large class of such boxes are in fact non-genuine. The general case remains open. ### Maximally Disordered Qubits We start by showing how to create any C-Q box, \(\rho_{AB}^{x,y}\), outputting states of 2 qubits whose local density matrices are maximally disordered, i.e. \[\begin{split} Tr_{A}(\rho_{AB}^{x,y})&=\mathbf{1}_{B}/2\\ Tr_{B}(\rho_{AB}^{x,y})&=\mathbf{1}_{A}/2.\end{split} \tag{30}\] Examples of such states are the maximally entangled Bell states, product states of 2 completely uncertain qubits, and a classical mixture of \(\ket{0}\ket{0}\) and \(\ket{1}\ket{1}\). It is shown in [16] that all 2-qubit density matrices with both single-qubit reduced density matrices maximally disordered can be written as a probabilistic mixture of Bell states with local unitary operators applied to \(A\) and \(B\), i.e. \[\rho=\hat{U}_{A}\hat{V}_{B}\left(\sum_{i=0}^{3}p_{i}\ket{\Phi_{i}}\bra{\Phi_{i}}\right)\hat{U}_{A}^{\dagger}\hat{V}_{B}^{\dagger}, \tag{31}\] where \[\ket{\Phi_{0}} =\frac{1}{\sqrt{2}}(\ket{0}\ket{0}+\ket{1}\ket{1}), \tag{32}\] \[\ket{\Phi_{1}} =\frac{1}{\sqrt{2}}(\ket{0}\ket{1}+\ket{1}\ket{0}),\] \[\ket{\Phi_{2}} =\frac{1}{\sqrt{2}}(\ket{0}\ket{0}-\ket{1}\ket{1}),\] \[\ket{\Phi_{3}} =\frac{1}{\sqrt{2}}(\ket{0}\ket{1}-\ket{1}\ket{0}),\] \[\sum_{i}p_{i} =1\text{ and }p_{i}\geq 0.\] Therefore our C-Q box can be written as creating states \[\begin{split}\rho^{x,y}&=\hat{U}_{A}^{x,y}\hat{V}_{B}^{x,y}\left(\sum_{i=0}^{3}p_{i}^{x,y}\ket{\Phi_{i}}\bra{\Phi_{i}}\right)(\hat{U}_{A}^{x,y})^{\dagger}(\hat{V}_{B}^{x,y})^{\dagger}\\ &=\sum_{i=0}^{3}p_{i}^{x,y}\ket{\chi_{i}^{x,y}}\bra{\chi_{i}^{x,y}},\ \text{where}\\ \ket{\chi_{i}^{x,y}}&=\hat{U}_{A}^{x,y}\hat{V}_{B}^{x,y}\ket{\Phi_{i}}\ \text{and}\\ \sum_{i}p_{i}^{x,y}&=1.\end{split} \tag{33}\] To implement this in terms of C-C boxes and pre-shared entanglement, first we note that we can produce any C-Q box that for each \((x,y)\) has a single pure state \(\ket{\chi_{i}^{x,y}}\) as output. This is because the \(\ket{\chi_{i}^{x,y}}\) are all related to \(\ket{\Phi_{0}}\) by a unitary applied on \(A\), and we showed how to implement such boxes in Section IV.1. Next we need to probabilistically mix these boxes. There are many ways to do this. One way, which can be described graphically, is shown in Fig. 3 for \(x,y=0,1\). This displays the set of mixed states as a mixture of sets of pure states, with mixture probabilities \(p_{i}\). With probability \(p_{0}\) we create a C-Q box which outputs the states in the first column in the figure, \(\ket{\chi_{0}^{0,0}}\) for \((x,y)=(0,0)\), \(\ket{\chi_{0}^{0,1}}\) for \((x,y)=(0,1)\), \(\ket{\chi_{0}^{1,0}}\) for \((x,y)=(1,0)\), and \(\ket{\chi_{0}^{1,1}}\) for \((x,y)=(1,1)\). With probability \(p_{1}\) we create a C-Q box which outputs the states from the second column in the figure, which only differ from the first column in the case \((x,y)=(0,1)\). Similarly we read off the states for the other values of \(p_{i}\). Thus we can create any C-Q box outputting mixed states of 2 maximally disordered qubits. ## VI Multi-partite states C-Q boxes with multi-partite states are more difficult to classify than bi-partite states, due to more parties being involved and the complexity of multi-partite entanglement. On the other hand, the no-signalling condition leads to many constraints on the sets of states which are allowed.
In particular, we must not only have that there is no signalling from \(A\) to \(B\), but also that there is no signalling from \(A\) to the joint system \(BC\), and in general from any party to any other set of parties. Therefore, even before asking whether a multi-partite C-Q box is genuine or not, merely constructing a box while making sure it is non-signalling is an issue in itself. To see how restrictive this can be, we consider constructing a C-Q box whose outputs are \(\ket{W}\)-type states which differ by arbitrary phases in the computational basis, i.e. \[\ket{\Psi^{x,y,z}}_{ABC}=\\ \frac{1}{\sqrt{3}}\left(e^{i\alpha(x,y,z)}\ket{100}+e^{i\beta(x,y,z)}\ket{010}+e^{i\gamma(x,y,z)}\ket{001}\right). \tag{34}\] This tri-partite "W-phase" C-Q box seems similar to the bi-partite phase boxes considered earlier. Yet, while the bi-partite C-Q box allows for correlated phases, we shall show that the only such non-signalling W-phase C-Q boxes are those whose phases are equivalent to local phases: \[\frac{1}{\sqrt{3}}\left(e^{i\alpha(x)}\ket{100}+e^{i\beta(y)}\ket{010}+e^{i\gamma(z)}\ket{001}\right). \tag{35}\] While both the bi-partite phase box and the tri-partite W-phase box are non-genuine, they differ significantly in their non-local properties: in general a bi-partite phase box needs a non-local C-C box to be constructed in a non-genuine way, while the tri-partite W-phase box doesn't require any and, in this sense, is purely local. On the other hand, we note that "GHZ-phase" C-Q boxes have similar properties to bi-partite phase C-Q boxes. They both are non-genuine, but are non-local in the sense that their non-genuine implementation requires non-local C-C boxes. ### Phases on a \(\ket{W}\) state We now show that the no-signalling conditions reduce the arbitrary phases in eq. (34) to the local ones in eq. (35). We will construct the most general no-signalling box based on \(\ket{W}\) with phases by applying a no-signalling argument line by line in the following table: \[\begin{array}{ll}(x,y,z)&\ket{\Psi^{x,y,z}}\\ (0,0,0)&\frac{1}{\sqrt{3}}(\ket{100}+\ket{010}+\ket{001})\\ (0,0,1)&\frac{1}{\sqrt{3}}(\ket{100}+\ket{010}+e^{i\gamma}\ket{001})\\ (0,1,0)&\frac{1}{\sqrt{3}}(\ket{100}+e^{i\beta}\ket{010}+\ket{001})\\ (0,1,1)&\frac{1}{\sqrt{3}}(\ket{100}+e^{i\beta}\ket{010}+e^{i\gamma}\ket{001})\\ (1,0,0)&\frac{1}{\sqrt{3}}(e^{i\alpha}\ket{100}+\ket{010}+\ket{001})\\ (1,0,1)&\frac{1}{\sqrt{3}}(e^{i\alpha}\ket{100}+\ket{010}+e^{i\gamma}\ket{001})\\ (1,1,0)&\frac{1}{\sqrt{3}}(e^{i\alpha}\ket{100}+e^{i\beta}\ket{010}+\ket{001})\\ (1,1,1)&\frac{1}{\sqrt{3}}(e^{i\alpha}\ket{100}+e^{i\beta}\ket{010}+e^{i\gamma}\ket{001})\end{array}\] In the first line, \(\left|\Psi^{0,0,0}\right\rangle_{ABC}\) can be taken without loss of generality to be equal to the \(\left|W\right\rangle\) state. The second line, \(\left|\Psi^{0,0,1}\right\rangle_{ABC}\), is the output corresponding to a change of the input of C only. Hence, when we group A and B together, the reduced density matrix of AB must be the same for \(\left|\Psi^{0,0,0}\right\rangle_{ABC}\) and \(\left|\Psi^{0,0,1}\right\rangle_{ABC}\). The state \(\left|\Psi^{0,0,0}\right\rangle_{ABC}\) can be written as \[\left|\Psi^{0,0,0}\right\rangle_{ABC}=\frac{1}{\sqrt{3}}(\left|10\right\rangle_{AB}+\left|01\right\rangle_{AB})\left|0\right\rangle_{C}+\frac{1}{\sqrt{3}}\left|00\right\rangle_{AB}\left|1\right\rangle_{C}. \tag{36}\] If we change the relative phase between \(\left|100\right\rangle\) and \(\left|010\right\rangle\) it will change the density matrix \(\rho_{AB}\) and hence be observable. So by no-signalling the only phase change we can make (up to an overall phase) between \(\left|\Psi^{0,0,0}\right\rangle_{ABC}\) and \(\left|\Psi^{0,0,1}\right\rangle_{ABC}\) is on \(\left|001\right\rangle\). We call this phase factor \(e^{i\gamma}\). Continuing in a similar way, we see that the only non-signalling W-phase box is (up to overall phases) the one in the above table. Now the phases in the table are all local (\(\alpha\) is present exactly when \(x=1\), \(\beta\) when \(y=1\), etc). So not only are the W-phase C-Q boxes non-genuine, they need no non-local C-C box if we want to create them in a non-genuine way.
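As a sanity check of the key no-signalling step above, one can verify numerically that a phase on \(\ket{001}\) is invisible in \(\rho_{AB}\) while a relative phase between \(\ket{100}\) and \(\ket{010}\) is not (an illustrative sketch, not part of the paper):

```python
import numpy as np

def rho_AB(alpha, beta, gamma):
    """Reduced AB state of (e^{ia}|100> + e^{ib}|010> + e^{ig}|001>)/sqrt(3)."""
    psi = np.zeros(8, dtype=complex)
    psi[0b100] = np.exp(1j * alpha)
    psi[0b010] = np.exp(1j * beta)
    psi[0b001] = np.exp(1j * gamma)
    psi /= np.sqrt(3)
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)
    return np.einsum('abcdec->abde', rho).reshape(4, 4)   # trace out qubit C

# A phase on |001> alone does not affect the AB marginal ...
assert np.allclose(rho_AB(0, 0, 0), rho_AB(0, 0, 1.3))
# ... but a relative phase between |100> and |010> does.
assert not np.allclose(rho_AB(0, 0, 0), rho_AB(0.7, 0, 0))
```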
Such W-phase boxes can thus be implemented by local unitaries acting on pre-shared W states. ## VII Conclusion We have considered the issue of non-locality beyond quantum mechanics, and have introduced the concept of Classical input - Quantum output (C-Q) non-signalling boxes. We have investigated the question of whether such boxes are genuinely new objects, or whether all of them can be constructed from objects already known, such as non-local non-signalling classical input - classical output boxes and pre-shared quantum entangled states. We have not been able to fully solve the problem, but our study showed that large classes of C-Q boxes are non-genuine and uncovered basic structures of the problem. There are a few questions that follow immediately. The first question concerns the use of resources when C-Q boxes can be constructed from a C-C box and unitaries acting on pre-shared entanglement. We have shown that even when the C-Q box seems relatively simple, the C-C box needed for its simulation has to have a significant amount of non-locality. Moreover, even small changes to the C-Q box could result in major changes in the non-locality of the C-C box needed (see the phase change box considered in sections III-C and III-D). One could presume that this is simply due to the fact that the output of the C-Q box, which is a quantum state, is in some sense "analog" (allows for phases that are given by real numbers), while the C-C box has a discrete number of outcomes, so more outcomes are necessary for simulating the continuous parameters in the definition of the quantum states. The problem, however, is not so simple, since in addition to the C-C box there are also the unitary transformations that are applied to the pre-shared quantum state, and unitaries are analog objects. Yet they are not enough. We found this behaviour in a particular case, but expect it to be generic. Second, and most important: what is the class of non-signalling C-Q boxes? We have encountered this problem when we attempted to decide whether _all_ tripartite C-Q boxes with pure-state outputs are genuine or not. But how can we know that we considered _all_ possible such boxes? If we call the set of output states of a non-signalling C-Q box a "non-signalling" set of quantum states, how can we find all such sets? That is, the non-signalling condition (1) is very clear, and if we are given a set of states we can easily check if they fulfil it. But how can we construct such a general set? What is its general structure? Crucially, the condition does not refer to the structure of each of the individual states separately but to the set as a whole. When dealing with multi-partite mixed states the problem is likely to be very difficult. Incidentally, we also note that there are a few other, and quite important, examples of sets of states whose nonlocal properties are defined globally. For example, a set of orthogonal direct product states that cannot be reliably identified by local measurements and classical communication [17]. Another is that of states of two non-identical spin \(\frac{1}{2}\) particles used to indicate a direction in 3D space. When dealing with a single direction, a state in which the two spins are parallel and pointing in the desired direction is as good at indicating that direction as a state in which the spins are anti-parallel, with the first pointing in the desired direction and the second pointing opposite.
But if we want to indicate many different directions, the set of anti-parallel spins is better than the set of parallel spins [18]. This type of problem has received relatively little attention; we believe it to be a very important general problem for understanding the structure of quantum mechanics. We believe the above are just the tip of an iceberg, and that considering C-Q boxes will lead to further insights into the issue of non-locality. ## VIII Acknowledgements We thank Paul Skrzypczyk for helpful discussions. Sandu Popescu and Daniel Collins acknowledge support of the ERC Advanced Grant FLQuant.
2302.07352
Reachability-based Trajectory Design with Neural Implicit Safety Constraints
Generating safe motion plans in real-time is a key requirement for deploying robot manipulators to assist humans in collaborative settings. In particular, robots must satisfy strict safety requirements to avoid self-damage or harming nearby humans. Satisfying these requirements is particularly challenging if the robot must also operate in real-time to adjust to changes in its environment. This paper addresses these challenges by proposing Reachability-based Signed Distance Functions (RDFs) as a neural implicit representation for robot safety. RDF, which can be constructed using supervised learning in a tractable fashion, accurately predicts the distance between the swept volume of a robot arm and an obstacle. RDF's inference and gradient computations are fast and scale linearly with the dimension of the system; these features enable its use within a novel real-time trajectory planning framework as a continuous-time collision-avoidance constraint. The planning method using RDF is compared to a variety of state-of-the-art techniques and is demonstrated to successfully solve challenging motion planning tasks for high-dimensional systems faster and more reliably than all tested methods.
Jonathan Michaux, Qingyi Chen, Yongseok Kwon, Ram Vasudevan
2023-02-14T21:32:42Z
http://arxiv.org/abs/2302.07352v1
# Reachability-based Trajectory Design with Neural Implicit Safety Constraints ###### Abstract Generating safe motion plans in real-time is a key requirement for deploying robot manipulators to assist humans in collaborative settings. In particular, robots must satisfy strict safety requirements to avoid self-damage or harming nearby humans. Satisfying these requirements is particularly challenging if the robot must also operate in real-time to adjust to changes in its environment. This paper addresses these challenges by proposing Reachability-based Signed Distance Functions (RDFs) as a neural implicit representation for robot safety. RDF, which can be constructed using supervised learning in a tractable fashion, accurately predicts the distance between the swept volume of a robot arm and an obstacle. RDF's inference and gradient computations are fast and scale linearly with the dimension of the system; these features enable its use within a novel real-time trajectory planning framework as a continuous-time collision-avoidance constraint. The planning method using RDF is compared to a variety of state-of-the-art techniques and is demonstrated to successfully solve challenging motion planning tasks for high-dimensional systems faster and more reliably than all tested methods. ## I Introduction Robotic manipulators will one day assist humans in a variety of collaborative tasks such as mining, farming, and surgery. However, to ensure that they operate robustly in human-centric environments, they must satisfy several important criteria. First, robots should be autonomous and capable of making their own decisions about how to accomplish specific tasks. Second, robots should be safe and only perform actions that are guaranteed to not damage objects in the environment, nearby humans, or even the robot itself. Third, robots should operate in real-time to quickly adjust their behavior as their environment or task changes. Modern model-based motion planning frameworks are typically hierarchical and consist of three levels: a high-level planner, a mid-level trajectory planner, and a low-level tracking controller. The high-level planner generates a sequence of discrete waypoints between the start and goal locations of the robot. The mid-level trajectory planner computes time-dependent velocities and accelerations at discrete time instances that move the robot between consecutive waypoints. The low-level tracking controller attempts to track the trajectory as closely as possible. While variations of this motion planning framework have been shown to work on multiple robotic platforms [1], there are still several limitations that prevent wide-scale, real-world adoption of this method. For instance, this approach can be computationally expensive especially as the robot complexity increases, which can make it impractical for real-time applications. By introducing heuristics such as reducing the number of discrete time instances where velocities and accelerations are computed at the mid-level planner, many algorithms achieve real-time performance at the expense of robot safety. Unfortunately, this increases the potential for the robot to collide with obstacles. To resolve this challenge, this paper proposes Reachability-based Signed Distance Functions (RDFs), a neural implicit representation for robot safety that can be effectively embedded within a mid-level trajectory optimization algorithm. 
RDF is a novel variation of the signed distance function (SDF) [2, 3, 4] that computes the distance between the swept volume (_i.e._, reachable set) of a robot manipulator and obstacles in its environment. We use RDFs within a receding-horizon trajectory planning framework to enforce safety by implicitly encoding obstacle-avoidance constraints. RDF has advantages over traditional model-based representations for obstacle-avoidance. First, by approximating the swept volume, RDF learns a continuous-time representation for robot safety. Second, RDF replaces the need to do computationally expensive collision checking at every planning iteration with a rapidly computable forward pass of the network. Third, RDF's fast inference and gradient computations make it ideal as a constraint in trajectory optimization problems. Fourth, as we illustrate in this paper, RDF scales better than existing planning algorithms with respect to the dimension of the system. Fig. 1: RDF is a neural implicit safety representation that computes distances between the swept volume of a robotic arm and obstacles within a receding-horizon trajectory planning framework. The top panels show two different orthographic views of a single planning iteration with several intermediate poses of the robot arm in collision with one of the obstacles (red cubes). The bottom panels show orthographic views of the 3D reconstruction of RDF's signed distance field (transparent blue) with one of the obstacle centers (red spheres) interior to RDF's zero-level set. ### _Related Work_ Our approach lies at the intersection of swept volume approximation, neural implicit shape representation, and robot motion planning. We summarize algorithms in these areas and highlight their computational tractability. Swept volume computation [5, 6, 7, 8, 9] has a rich history in robotics [10], where it has been used for collision detection during motion planning [11]. Because computing exact swept volumes of articulated robots is analytically intractable [10], many algorithms rely on approximations using convex polyhedra, occupancy grids, and CAD models [12, 13, 14]. However, these methods often suffer from high computational costs, are generally not suitable when generating complex robot motions, and as a result are only applied while performing offline motion planning [15]. To address some of these limitations, a recent algorithm was proposed to compute a single probabilistic road map offline whose safety was verified online by performing a parallelized collision check with a precomputed swept volume at run-time [16]. However, this method was not used for real-time trajectory generation. An alternative to computing swept volumes is to buffer the robot and perform collision-checking at discrete time instances along a given trajectory. This is common with state-of-the-art trajectory optimization approaches such as CHOMP [17] and TrajOpt [18]. Although these methods have been demonstrated to generate robot motion plans in real-time by reducing the number of discrete time instances where collision checking occurs, they cannot be considered safe as they enforce collision avoidance via soft penalties in the cost function. A recent approach called Reachability-based Trajectory Design (RTD) [19] combines both swept volume approximation and trajectory optimization to generate safe motion plans in real-time.
At runtime, RTD uses zonotope arithmetic to build a differentiable reachable set representation that overapproximates the swept volume corresponding to a continuum of possible trajectories the robot could follow. It then solves an optimization problem to select a feasible trajectory such that the subset of the swept volume encompassing the robot's motion is not in collision. Importantly, the reachable sets are constructed so that obstacle avoidance is satisfied in continuous-time. While extensions of RTD have demonstrated real-time, certifiably-safe motion planning for robotic arms [1, 20] with seven degrees of freedom, applying RTD to higher dimensional systems is challenging because, as we illustrate in this paper, it is unable to construct reach sets rapidly. A growing trend in machine learning and computer vision is to implicitly represent 3D shapes with learned neural networks. One seminal work in this area is DeepSDF [2], which was the first approach to learn a continuous volumetric field that reconstructs the zero-level set of entire classes of 3D objects. Gropp et al. [3] improved the training of SDFs by introducing an _Eikonal_ term into their loss function that acts as an implicit regularizer to encourage the network to have unit \(l2\)-norm gradients. Neural implicit representations have also been applied to robotics problems including motion planning [21, 22], mapping [23, 24], and manipulation [25, 26, 27]. Particularly relevant to our current approach is the work by Koptev et al. [28], which learned an SDF as an obstacle-avoidance constraint for safe reactive motion planning. Similar to the approaches described above, [28] only enforces safety constraints at discrete time points. ### _Contributions_ The present work investigates learned neural implicit representations with reachability-based trajectory design for fast trajectory planning. The contributions of this paper are as follows: 1. A neural implicit representation called RDF that computes the distance between a parameterized swept volume trajectory and obstacles; 2. An efficient algorithm to generate training data to construct RDF; and 3. A novel, real-time optimization framework that utilizes RDF to construct a collision avoidance constraint. To illustrate the utility of this proposed optimization framework, we apply it to perform real-time motion planning on a variety of robotic manipulator systems and compare it to several state-of-the-art algorithms. The remainder of the paper is organized as follows: Section II summarizes the set representations used throughout the paper; Section III describes the arm and environment models; Section IV defines several distance functions that are used to formulate RDF; Section V formulates the safe motion planning problem; Section VI describes how to build RDF and use it within a safe motion planning framework; Sections VII and VIII summarize the evaluation of the proposed method on a variety of different example problems. ## II Preliminaries This section describes the representations of sets and operations on these representations that we use throughout the paper. Note that all norms that are not explicitly specified are the 2-norm. ### _Zonotopes and Polynomial Zonotopes_ We begin by defining zonotopes, matrix zonotopes, and polynomial zonotopes.
**Definition 1**.: _A zonotope \(Z\subset\mathbb{R}^{n}\) is a convex, centrally-symmetric polytope defined by a center \(c\in\mathbb{R}^{n}\), generator matrix \(G\in\mathbb{R}^{n\times n_{g}}\), and indeterminate vector \(\beta\in\mathbb{R}^{n_{g}}\):_ \[Z=\{z\in\mathbb{R}^{n}\ |\ z=c+G\,\beta,\,||\beta||_{\infty}\leq 1\} \tag{1}\] _where there are \(n_{g}\in\mathbb{N}\) generators. When we want to emphasize the center and generators of a zonotope, we write \(Z=(c,G)\)._ **Definition 2**.: _A matrix zonotope \(\mathcal{Z}\subset\mathbb{R}^{n\times m}\) is defined by a center matrix \(C\in\mathbb{R}^{n\times m}\), generator matrices \(G_{i}\in\mathbb{R}^{n\times m}\), and indeterminates \(\beta_{i}\in[-1,1]\):_ \[\mathcal{Z}=\left\{Z\in\mathbb{R}^{n\times m}\ \middle|\ Z=C+\sum_{i=1}^{n_{g}}\beta_{i}G_{i},\ |\beta_{i}|\leq 1\right\} \tag{2}\] _where there are \(n_{g}\in\mathbb{N}\) generator matrices._ **Definition 3** (Polynomial Zonotope).: _A polynomial zonotope \(\mathbf{P}\subset\mathbb{R}^{n}\) is given by its generators \(g_{i}\in\mathbb{R}^{n}\) (of which there are \(n_{g}\)), exponents \(\alpha_{i}\in\mathbb{N}^{n_{g}}\), and indeterminates \(x\in[-1,1]^{n_{g}}\) as_ \[\mathbf{P}=\mathcal{P}\mathcal{Z}\left(g_{i},\alpha_{i},x\right)=\left\{z\in \mathbb{R}^{n}\ |\ z=\sum_{i=0}^{n_{g}}g_{i}x^{\alpha_{i}},x\in[-1,1]^{n_{g}}\right\}. \tag{3}\] _We refer to \(x^{\alpha_{i}}\) as a monomial. A term \(g_{i}x^{\alpha_{i}}\) is produced by multiplying a monomial by the associated generator \(g_{i}\)._ Note that one can think of a zonotope as a special case of polynomial zonotope where one has an exponent made up of all zeros and the remainder of exponents only have one non-zero element that is equal to one. As a result, whenever we describe operations on polynomial zonotopes they can be extended to zonotopes. When we need to emphasize the generators and exponents of a polynomial zonotope, we write \(\mathbf{P}=\mathcal{P}\mathcal{Z}\left(g_{i},\alpha_{i},x\right)\). Throughout this document, we exclusively use bold symbols to denote polynomial zonotopes. ### _Operations on Zonotopes and Polynomial Zonotopes_ This section describes various set operations. #### Ii-B1 Set-based Arithmetic Given a set \(\Omega\subset\mathbb{R}^{n_{d}}\), let \(\partial\Omega\subset\mathbb{R}^{n_{d}}\) be its boundary and \(\Omega^{c}\subset\mathbb{R}^{n_{d}}\) denote its complement. **Definition 4**.: _The convex hull operator \(\texttt{conv}\colon\mathcal{P}(\mathbb{R}^{n_{d}})\to\mathcal{P}(\mathbb{R}^{n_{d}})\) is defined by_ \[\texttt{conv}(\Omega)=\bigcap_{\alpha}C_{\alpha} \tag{4}\] _where the intersection is taken over all convex sets \(C_{\alpha}\) containing \(\Omega\)._ Let \(U\), \(V\subset\mathbb{R}^{n}\). The _Minkowski sum_ is \(U\oplus V=\{u+v\ |\ u\in U,v\in V\}\); the _multiplication_ is \(UV=\{uv\ |\ u\in U,v\in V\}\), where all elements in \(U\) and \(V\) must be appropriately sized to ensure that their product is well-defined. #### Ii-B2 Polynomial Zonotope Operations As described in Tab. I, there are a variety of operations that we can perform on polynomial zonotopes (_e.g._, Minkowski sum, multiplication, etc.). The result of applying these operations is a polynomial zonotope that either exactly represents or overapproximates the application of the operation on each individual element of the polynomial zonotope inputs. The operations given in Tab. I are rigorously defined in [20]. A thorough introduction to polynomial zonotopes can be found in [29]. One desirable property of polynomial zonotopes is the ability to obtain subsets by plugging in values of known indeterminates.
For example, say a polynomial zonotope \(\mathbf{P}\) represented a set of possible positions of a robot arm operating near an obstacle. It may be beneficial to know whether a particular choice of \(\mathbf{P}\)'s indeterminates yields a subset of positions that could collide with the obstacle. To this end, we introduce the operation of "slicing" a polynomial zonotope \(\mathbf{P}=\mathcal{P}\mathcal{Z}\left(g_{i},\alpha_{i},x\right)\) by evaluating an element of the indeterminate \(x\). Given the \(j^{\text{th}}\) indeterminate \(x_{j}\) and a value \(\sigma\in[-1,1]\), slicing yields a subset of \(\mathbf{P}\) by plugging \(\sigma\) into the specified element \(x_{j}\): \[\texttt{slice}(\mathbf{P},x_{j},\sigma)=\left\{z\in\mathbf{P }\ \middle|\ z=\sum_{i=0}^{n_{g}}g_{i}x^{\alpha_{i}},\ x_{j}=\sigma\right\}\subset\mathbf{P}. \tag{5}\] One particularly important operation that we require later in the document is bounding the elements of a polynomial zonotope. It is possible to efficiently generate these upper and lower bounds on the values of a polynomial zonotope through overapproximation. In particular, we define the \(\texttt{sup}\) and \(\texttt{inf}\) operations which return these upper and lower bounds, respectively, by taking the absolute values of generators. For \(\mathbf{P}\subseteq\mathbb{R}^{n}\), these return \[\texttt{sup}(\mathbf{P}) =g_{0}+\sum_{i=1}^{n_{g}}\left|g_{i}\right|, \tag{6}\] \[\texttt{inf}(\mathbf{P}) =g_{0}-\sum_{i=1}^{n_{g}}\left|g_{i}\right|. \tag{7}\] Note that for any \(z\in\mathbf{P}\), \(\texttt{sup}(\mathbf{P})\geq z\) and \(\texttt{inf}(\mathbf{P})\leq z\), where the inequalities are taken element-wise. These bounds may not be tight because possible dependencies between indeterminates are not accounted for, but they are quick to compute.

\begin{table} \begin{tabular}{c c} Operation & Computation \\ \hline \(\mathbf{P}_{1}\oplus\mathbf{P}_{2}\) (PZ Minkowski Sum) ([20], eq. (19)) & Exact \\ \(\mathbf{P}_{1}\mathbf{P}_{2}\) (PZ Multiplication) ([20], eq. (21)) & Exact \\ \(\texttt{slice}(\mathbf{P},x_{j},\sigma)\) ([20], eq. (23)) & Exact \\ \(\texttt{inf}(\mathbf{P})\) ([20], eq. (24)) & Overapproximative \\ \(\texttt{sup}(\mathbf{P})\) ([20], eq. (25)) & Overapproximative \\ \(f(\mathbf{P}_{1})\subseteq\mathbf{P}_{2}\) (Taylor expansion) ([20], eq. (32)) & Overapproximative \\ \end{tabular} \end{table} TABLE I: Summary of polynomial zonotope operations.

## III Arm and Environment This section summarizes the robot and environmental model that is used throughout the remainder of the paper. ### _Robotic Manipulator Model_ Given an \(n_{q}\) degree of freedom serial robotic manipulator with configuration space \(Q\) and a compact time interval \(T\subset\mathbb{R}\), we define a trajectory for the configuration as \(q:T\to Q\subset\mathbb{R}^{n_{q}}\). The velocity of the robot is \(\dot{q}:T\to\mathbb{R}^{n_{q}}\). Let \(N_{q}=\{1,\ldots,n_{q}\}\). We make the following assumptions about the structure of the robot model: **Assumption 5**.: _The robot operates in an \(n_{d}\)-dimensional workspace, which we denote \(W\subset\mathbb{R}^{n_{d}}\). The robot is composed of only revolute joints, where the \(j^{\text{th}}\) joint actuates the robot's \(j^{\text{th}}\) link. The robot's \(j^{\text{th}}\) joint has position and velocity limits given by \(q_{j}(t)\in[q_{j,\text{lim}}^{-},q_{j,\text{lim}}^{+}]\) and \(\dot{q}_{j}(t)\in[\dot{q}_{j,\text{lim}}^{-},\dot{q}_{j,\text{lim}}^{+}]\) for all \(t\in T\), respectively.
Finally, the robot is fully actuated, where the robot's input is given by \(u:T\rightarrow\mathbb{R}^{n_{q}}\)._ One can make the one-joint-per-link assumption without loss of generality by treating joints with multiple degrees of freedom (_e.g._, spherical joints) as links with zero length. Note that we use the revolute joint portion of this assumption to simplify the description of forward kinematics; however, these assumptions can be easily extended to more complex joints using the aforementioned argument or can be extended to prismatic joints in a straightforward fashion. Note that the lack of input constraints means that one could apply an inverse dynamics controller [30] to track any trajectory of the robot perfectly. As a result, we focus on modeling the kinematic behavior of the manipulator. Note that the approach presented in this paper could also be extended to deal with input limits using a dynamic model of the manipulator; however, in the interest of simplicity we leave that extension for future work. #### Iii-A1 Arm Kinematics Next, we introduce the robot's kinematics. Suppose there exists a fixed inertial reference frame, which we call the _world_ frame. In addition, suppose there exists a _base_ frame, which we denote the \(0^{\text{th}}\) frame, that indicates the origin of the robot's kinematic chain. We assume that the \(j^{\text{th}}\) reference frame \(\{\hat{x}_{j},\hat{y}_{j},\hat{z}_{j}\}\) is attached to the robot's \(j^{\text{th}}\) revolute joint, and that \(\hat{z}_{j}=[0,0,1]^{\top}\) corresponds to the \(j^{\text{th}}\) joint's axis of rotation. Then for a configuration at a particular time, \(q(t)\), the position and orientation of frame \(j\) with respect to frame \(j-1\) can be expressed using homogeneous transformations [30, Ch. 2]: \[H_{j}^{j-1}(q_{j}(t))=\begin{bmatrix}R_{j}^{j-1}(q_{j}(t))&p_{j}^{j-1}\\ 0&1\end{bmatrix}, \tag{8}\] where \(R_{j}^{j-1}(q_{j}(t))\) is a configuration-dependent rotation matrix and \(p_{j}^{j-1}\) is the fixed translation vector from frame \(j-1\) to frame \(j\). With these definitions, we can express the forward kinematics of the robot. Let \(\text{FK}_{j}:Q\rightarrow\mathbb{R}^{4\times 4}\) map the robot's configuration to the position and orientation of the \(j^{\text{th}}\) joint in the world frame: \[\text{FK}_{j}(q(t))=\prod_{l=1}^{j}H_{l}^{l-1}(q_{l}(t))=\begin{bmatrix}R_{j} (q(t))&p_{j}(q(t))\\ 0&1\end{bmatrix}, \tag{9}\] where \[R_{j}(q(t))\coloneqq R_{j}^{0}(q(t))=\prod_{l=1}^{j}R_{l}^{l-1}(q_{l}(t)) \tag{10}\] and \[p_{j}(q(t))=\sum_{l=1}^{j}R_{l}(q(t))p_{l}^{l-1}. \tag{11}\] ### _Arm Occupancy_ Next, we define the forward occupancy of the robot by using the arm's kinematics to describe the volume occupied by the arm in the workspace. Let \(\mathbf{L_{j}}\subset\mathbb{R}^{3}\) denote a polynomial zonotope overapproximation to the volume occupied by the \(j^{\text{th}}\) link with respect to the \(j^{\text{th}}\) reference frame. The forward occupancy of link \(j\) is the map \(\text{FO}_{j}:Q\rightarrow\mathcal{P}(W)\) defined as \[\text{FO}_{j}(q(t))=p_{j}(q(t))\oplus R_{j}(q(t))\mathbf{L_{j}}, \tag{12}\] where the first expression gives the position of joint \(j\) and the second gives the rotated volume of link \(j\). The volume occupied by the entire arm in the workspace is given by the function \(\text{FO}:Q\rightarrow\mathcal{P}(W)\) that is defined as \[\text{FO}(q(t))=\bigcup_{j=1}^{n_{q}}\text{FO}_{j}(q(t))\subset W.
\tag{13}\] For convenience, we use the notation \(\text{FO}(q(T))\) to denote the forward occupancy over an entire interval \(T\). ### _Environment_ Next, we describe the arm's environment and its obstacles. #### Iii-C1 Obstacles The arm must avoid obstacles in the environment while performing motion planning. These obstacles satisfy the following assumption: **Assumption 6**.: _The transformation between the world frame of the workspace and the base frame of the robot is known, and obstacles are represented in the base frame of the robot. At any time, the number of obstacles \(n_{\mathcal{O}}\in\mathbb{N}\) in the scene is finite. Let \(\mathcal{O}\) be the set of all obstacles \(\{O_{1},O_{2},\ldots,O_{n_{\mathcal{O}}}\}\). Each obstacle is convex, bounded, and static with respect to time. The arm has access to a zonotope that overapproximates the obstacle's volume in the workspace. Each zonotope overapproximation of the obstacle has the same volume and is an axis-aligned cube._ A convex, bounded object can always be overapproximated as a zonotope [31]. In addition, if one is given a non-convex bounded obstacle, then one can overapproximate that obstacle by computing its convex hull. If one has an obstacle that is larger than the pre-fixed axis-aligned cube, then one can introduce several axis-aligned cubes whose union is an overapproximation to the obstacle. Note that because we use the zonotope overapproximation during motion planning, we conflate the obstacle with its zonotope overapproximation throughout the remainder of this document. Dynamic obstacles may also be considered within the RDF framework by introducing a more general notion of safety [32, Definition 11], but we omit this case in this paper to ease exposition. Finally, if a portion of the scene is occluded then one can treat that portion of the scene as an obstacle. We say that the arm is _in collision_ with an obstacle if \(\text{FO}_{j}(q(t))\cap O_{\ell}\neq\emptyset\) for any \(j\in N_{q}\) or \(\ell\in N_{\mathcal{O}}\) where \(N_{\mathcal{O}}=\{1,\ldots,n_{\mathcal{O}}\}\). ### _Trajectory Design_ Our goal is to develop an algorithm to compute safe trajectories in a receding-horizon manner by solving an optimization program over parameterized trajectories at each planning iteration. These parameterized trajectories are chosen from a pre-specified continuum of trajectories, with each uniquely determined by a _trajectory parameter_ \(k\in K\subset\mathbb{R}^{n_{k}}\), \(n_{k}\in\mathbb{N}\). \(K\) is compact and can be designed in a task-dependent or robot morphology-specific fashion [1, 19, 33, 34], as long as it satisfies the following properties. **Definition 7** (Trajectory Parameters).: _For each \(k\in K\), a parameterized trajectory is an analytic function \(q(\,\cdot\,;k):T\to Q\) that satisfies the following properties:_ 1. _The parameterized trajectory starts at a specified initial condition_ \((q_{0},\dot{q}_{0})\)_, so that_ \(q(0;k)=q_{0}\)_, and_ \(\dot{q}(0;k)=\dot{q}_{0}\)_._ 2. \(\dot{q}(t_{\text{f}};k)=0\) _(i.e., each parameterized trajectory brakes to a stop and has zero velocity at the final time_ \(t_{\text{f}}\)_)._ The first property allows for parameterized trajectories to be generated online. In particular, recall that RDF performs real-time receding horizon planning by executing a desired trajectory computed at a previous planning iteration while constructing a desired trajectory for the subsequent time interval.
The first property allows parameterized trajectories that are generated by RDF to begin from the appropriate future initial condition of the robot. The second property ensures that a fail-safe braking maneuver is always available. ## IV Reachability-based Signed Distance Functions This section defines the signed distance function between sets. Signed distance functions are used in robotics in a variety of applications, including representing collision avoidance constraints. This section describes how to extend the signed distance function to a distance function between the forward occupancy of a robot and an obstacle. This novel distance function, which we call the reachability-based signed distance function (RDF), enables us to formulate the collision avoidance problem between a parameterized reachable set and an obstacle as an optimization problem. ### _Overview of Signed Distance Fields_ We begin by defining an unsigned distance function: **Definition 8**.: _Given a set \(\Omega\subset\mathbb{R}^{n_{d}}\), the distance function associated with \(\Omega\) is defined by_ \[d(x;\Omega)=\min_{y\in\partial\Omega}\|x-y\|. \tag{14}\] _The distance between two sets \(\Omega_{1},\Omega_{2}\subset\mathbb{R}^{n_{d}}\) is defined by_ \[d(\Omega_{1},\Omega_{2})=\min_{\begin{subarray}{c}x\in\partial\Omega_{1}\\ y\in\partial\Omega_{2}\end{subarray}}\|x-y\|. \tag{15}\] Notice that this distance function is zero for sets that have non-trivial intersection. As a result, this distance function provides limited information for such sets (i.e., it is unclear how much they are intersecting with one another). To address this limitation, we consider the following definition: **Definition 9**.: _Given a subset \(\Omega\) of \(\mathbb{R}^{n_{d}}\), the signed distance function \(s\) between a point and \(\Omega\) is a map \(s:\mathbb{R}^{n_{d}}\rightarrow\mathbb{R}\) defined as_ \[s(x;\Omega)=\begin{cases}d(x,\partial\Omega)&\text{if }x\in\Omega^{c}\\ -d(x,\partial\Omega)&\text{if }x\in\Omega.\end{cases} \tag{16}\] _The signed distance between two sets \(\Omega_{1},\Omega_{2}\subset\mathbb{R}^{n_{d}}\) is defined as_ \[s(\Omega_{1},\Omega_{2})=\begin{cases}d(\Omega_{1},\Omega_{2})&\text{if } \Omega_{1}\cap\Omega_{2}=\emptyset\\ -d(\Omega_{1},\Omega_{2})&\text{otherwise}.\end{cases} \tag{17}\] Note that signed distance functions are continuous [35], differentiable almost everywhere [35, 36], and satisfy the _Eikonal_ equation: **Definition 10**.: _Suppose \(s\) is the signed distance function associated with a set \(\Omega\subset\mathbb{R}^{n_{d}}\). Then the gradient of \(s\) satisfies the Eikonal equation, which is defined as_ \[\|\nabla s(x)\|=1. \tag{18}\] We use this property to construct our loss term in Sec. VI-C. ### _Reachability-Based Signed Distance Functions_ This subsection describes the reachability-based distance function as the signed distance function associated with the forward occupancy of a robot. **Definition 11**.: _The reachability-based distance function associated with the forward occupancy reachable set \(\operatorname{FO}(q(T;k))\) is a mapping defined by_ \[r(x;\operatorname{FO}(q(T;k)))=\min_{j\in N_{q}}r_{j}(x; \operatorname{FO}_{j}(q(T;k))) \tag{19}\] _where \(r_{j}\) is the signed distance function associated with the \(j^{th}\) forward occupancy \(\operatorname{FO}_{j}\) such that_ \[r_{j}(x;\operatorname{FO}_{j}(q(T;k)))=s(x;\operatorname{FO}_{j}(q(T;k))).
\tag{20}\] _The reachability-based distance between an obstacle \(O\subset\mathbb{R}^{n_{d}}\) and the robot's forward occupancy reachable set \(\operatorname{FO}\) is defined by_ \[r(O,\operatorname{FO}(q(T;k)))=\min_{j\in N_{q}}s(O,\operatorname{FO} _{j}(q(T;k))). \tag{21}\] One can use this distance function to formulate trajectory optimization problems as we describe next. ## V Formulating the Motion Planning Problem Using Polynomial Zonotopes To construct a collision free trajectory in a receding-horizon fashion, one could try to solve the following nonlinear optimization problem at each planning iteration: \[\min_{k\in K} \operatorname{cost}(k) \tag{22}\] \[q_{j}(t;k)\in[q_{j,\text{lim}}^{-},q_{j,\text{lim}}^{+}] \forall j\in N_{q},t\in T\] (23) \[\dot{q}_{j}(t;k)\in[\dot{q}_{j,\text{lim}}^{-},\dot{q}_{j,\text{lim }}^{+}] \forall j\in N_{q},t\in T\] (24) \[r(O_{\ell},\operatorname{FO}(q(T;k)))>0 \forall\ell\in N_{\mathscr{O}} \tag{25}\] The cost function (22) specifies a user-defined objective, such as bringing the robot close to some desired goal. Each of the constraints guarantees the safety of any feasible trajectory parameter. The first two constraints ensure that the trajectory does not violate the robot's joint position and velocity limits. The last constraint ensures that the robot does not collide with any obstacles in the environment. Note that in this optimization problem, we have assumed that the robot does not have to deal with self-intersection constraints. Implementing a real-time optimization algorithm to solve this problem is challenging for several reasons. First, the constraints associated with obstacle avoidance are non-convex. Second, the constraints must be satisfied for all time \(t\) in an uncountable set \(T\). To address these challenges, a recent paper proposed to represent the trajectory and the forward occupancy of the robot using a polynomial zonotope representation [20]. We summarize these results below. ### _Time Horizon and Trajectory Parameter PZs_ We first describe how to create polynomial zonotopes representing the planning time horizon \(T\). We choose a timestep \(\Delta t\) so that \(n_{t}:=\frac{t_{\text{f}}}{\Delta t}\in\mathbb{N}\), where \(t_{\text{f}}\) is the duration of \(T\). Let \(N_{t}:=\{1,\ldots,n_{t}\}\). Divide the compact time horizon \(T\subset\mathbb{R}\) into \(n_{t}\) time subintervals. Consider the \(i^{\text{th}}\) time subinterval corresponding to \(t\in[(i-1)\Delta t,i\Delta t]\). We represent this subinterval as a polynomial zonotope \(\mathbf{T_{i}}\), where \[\mathbf{T_{i}}=\left\{t\in T\mid t=\tfrac{(i-1)+i}{2}\Delta t+\tfrac{1}{2} \Delta tx_{t_{i}},\ x_{t_{i}}\in[-1,1]\right\} \tag{26}\] with indeterminate \(x_{t_{i}}\in[-1,1]\). Now we describe how to create polynomial zonotopes representing the set of trajectory parameters \(K\). In this work, we choose \(K=K_{1}\times\cdots\times K_{n_{q}}\), where each \(K_{j}\) is the compact one-dimensional interval \(K_{j}=[-1,1]\). We represent the interval \(K_{j}\) as a polynomial zonotope \(\mathbf{K_{j}}=x_{k_{j}}\), where \(x_{k_{j}}\in[-1,1]\) is an indeterminate. ### _Parameterized Trajectory and Forward Occupancy PZs_ The parameterized position and velocity trajectories of the robot, defined in Def. 7, are functions of both time \(t\) and the trajectory parameter \(k\).
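Before composing these trajectory PZs, it may help to see the primitive operations numerically. The following is a minimal sketch of slice (5), sup/inf (6)-(7), and the time partition (26); the `PZ` class and all names are illustrative assumptions, not the authors' implementation, and zero-exponent terms are left in the generator list rather than folded into the constant term.

```python
import numpy as np

class PZ:
    """Minimal polynomial zonotope: z = g0 + sum_i G[i] * prod_j x_j**alpha[i, j], x in [-1, 1]^m."""
    def __init__(self, g0, G, alpha):
        self.g0 = np.asarray(g0, dtype=float)       # constant generator, shape (n,)
        self.G = np.asarray(G, dtype=float)         # generators, shape (n_g, n)
        self.alpha = np.asarray(alpha, dtype=int)   # exponent rows, shape (n_g, m)

    def sup(self):
        # Overapproximative upper bound, eq. (6): g0 + sum_i |g_i|
        return self.g0 + np.abs(self.G).sum(axis=0)

    def inf(self):
        # Overapproximative lower bound, eq. (7): g0 - sum_i |g_i|
        return self.g0 - np.abs(self.G).sum(axis=0)

    def slice(self, j, sigma):
        # Eq. (5): fix indeterminate x_j = sigma; sigma**alpha_ij folds into each generator.
        scale = np.power(float(sigma), self.alpha[:, j])
        alpha = self.alpha.copy()
        alpha[:, j] = 0
        return PZ(self.g0, self.G * scale[:, None], alpha)

def time_partition(t_f, n_t):
    # Eq. (26): T_i has center ((2i - 1)/2) * dt and one generator dt/2 on indeterminate x_{t_i}.
    dt = t_f / n_t
    return [PZ(g0=[(2 * i - 1) * dt / 2.0], G=[[dt / 2.0]], alpha=[[1]])
            for i in range(1, n_t + 1)]

# The first subinterval of a 1 s horizon split into 10 pieces covers [0, 0.1];
# slicing its indeterminate at 0 collapses it to the midpoint 0.05.
T1 = time_partition(1.0, 10)[0]
print(T1.inf(), T1.sup())        # [0.] [0.1]
print(T1.slice(0, 0.0).sup())    # [0.05]
```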
Using the time partition and trajectory parameter polynomial zonotopes described above, we create polynomial zonotopes \(\mathbf{q_{j}}(\mathbf{T_{i}};\mathbf{K})\) that overapproximate \(q_{j}(t;k)\) for all \(t\) in the \(i^{\text{th}}\) time subinterval and \(k\in K\) by plugging the polynomial zonotopes \(\mathbf{T_{i}}\) and \(\mathbf{K}\) into the formula for \(q_{j}(t;k)\). Recall that \(\mathbf{T_{i}}\) and \(\mathbf{K_{j}}\) have indeterminates \(x_{t_{i}}\) and \(x_{k_{j}}\), respectively. Because the desired trajectories only depend on \(t\) and \(k\), the polynomial zonotopes \(\mathbf{q_{j}}(\mathbf{T_{i}};\mathbf{K})\) and \(\mathbf{\dot{q_{j}}}(\mathbf{T_{i}};\mathbf{K})\) depend only on the indeterminates \(x_{t_{i}}\) and \(x_{k}\). By plugging in a given \(k\) for \(x_{k}\) via the slice operation, we obtain a polynomial zonotope where \(x_{t_{i}}\) is the only remaining indeterminate. Because we perform this particular slicing operation repeatedly throughout this document, if we are given a polynomial zonotope, \(\mathbf{q_{j}}(\mathbf{T_{i}};\mathbf{K})\), we use the shorthand \(\mathbf{q_{j}}(\mathbf{T_{i}};k)=\texttt{slice}(\mathbf{q_{j}}(\mathbf{T_{i} };\mathbf{K}),x_{k},k)\). Importantly, one can apply [20, Lemma 17] to prove that the sliced representation is overapproximative, as we restate below: **Lemma 12** (Parameterized Trajectory PZs).: _The parameterized trajectory polynomial zonotopes \(\mathbf{q_{j}}(\mathbf{T_{i}};\mathbf{K})\) are overapproximative, i.e., for each \(j\in N_{q}\) and \(k\in K,\)_ \[q_{j}(t;k)\in\mathbf{q_{j}}(\mathbf{T_{i}};k)\quad\forall t\in\mathbf{T_{i}}. \tag{27}\] _One can similarly define \(\mathbf{\dot{q_{j}}}(\mathbf{T_{i}};\mathbf{K})\) that are also overapproximative._ Next, we describe how to use this lemma to construct an overapproximative representation of the forward occupancy. In particular, because the rotation matrices \(R_{j}^{j-1}(q_{j}(t;k))\) depend on \(\cos{(q_{j}(t;k))}\) and \(\sin{(q_{j}(t;k))}\), one can compute \(\cos{(\mathbf{q_{j}}(\mathbf{T_{i}};\mathbf{K}))}\) and \(\sin{(\mathbf{q_{j}}(\mathbf{T_{i}};\mathbf{K}))}\) using Taylor expansions as in ([20], eq. (32)). By using this property and the fact that all operations involving polynomial zonotopes are either exact or overapproximative, the polynomial zonotope forward occupancy can be computed and proven to be overapproximative: **Lemma 13** (PZ Forward Occupancy).: _Let the polynomial zonotope forward occupancy reachable set for the \(j^{\text{th}}\) link at the \(i^{\text{th}}\) time step be defined as:_ \[\mathbf{FO_{j}}(\mathbf{q}(\mathbf{T_{i}};\mathbf{K}))=\mathbf{p_{j}}( \mathbf{q}(\mathbf{T_{i}};\mathbf{K}))\oplus\mathbf{R_{j}}(\mathbf{q}(\mathbf{ T_{i}};\mathbf{K}))\mathbf{L_{j}}, \tag{28}\] _then for each \(j\in N_{q}\), \(k\in K\), \(\text{FO}_{j}(q(t;k))\subseteq\mathbf{FO_{j}}(\mathbf{q}(\mathbf{T_{i}};k))\) for all \(t\in\mathbf{T_{i}}\)._ For convenience, let \[\mathbf{FO}(\mathbf{q}(\mathbf{T_{i}};\mathbf{K}))=\bigcup_{j=1}^{n_{q}} \mathbf{FO_{j}}(\mathbf{q}(\mathbf{T_{i}};\mathbf{K})).
\tag{29}\] ### _PZ-based Optimization Problem_ Rather than solve the optimization problem described in (22)-(25), [20] uses these polynomial zonotope overapproximations to solve the following optimization problem: \[\min_{k\in K} \quad\texttt{cost}(k) \tag{30}\] \[\quad\mathbf{q_{j}}(\mathbf{T_{i}};k)\subseteq[q_{j,\lim}^{-},q_{ j,\lim}^{+}] \qquad\forall j\in N_{q},i\in N_{t}\] (31) \[\quad\mathbf{\dot{q_{j}}}(\mathbf{T_{i}};k)\subseteq[\dot{q}_{j,\lim}^{ -},\dot{q}_{j,\lim}^{+}] \qquad\forall j\in N_{q},i\in N_{t}\] (32) \[\quad r(O_{\ell},\mathbf{FO}(\mathbf{q}(\mathbf{T_{i}};k)))>0 \qquad\forall\ell\in N_{\mathscr{O}},i\in N_{t}. \tag{33}\] This formulation of the trajectory optimization problem has the benefit of being implementable without sacrificing any safety requirements. In fact, as shown in [20, Lemma 22], any feasible solution to this optimization problem can be applied to generate motion that is collision free. Though this method can be applied to 7 degree of freedom systems in real-time, applying this method to perform real-time planning for more complex systems is challenging as we show in Sec. VIII. ## VI Modeling RDF with Neural Networks This section presents RDF, a neural implicit representation that can encode obstacle-avoidance constraints in continuous-time. In particular, RDF predicts the distance between obstacles and the entire _reachable set_ of a robotic arm. To construct this neural implicit representation, we require training data. Unfortunately, computing the exact distance to the reachable set of a multi-link articulated robotic arm is intractable because that reachable set is non-convex. To build this training data, we rely on the polynomial zonotope-based representations presented in the previous section. Importantly, we show that we can conservatively approximate the distance between an obstacle and the sliced polynomial zonotope-based representation as the solution to a convex program. This allows us to efficiently generate the training data required to construct our neural implicit representation. Subsequently, we give an overview of the neural network architecture and loss function used for training. Finally, we describe how to reformulate the trajectory optimization problem using the neural network representation of the reachability-based signed distance function. ### _Derivation of RDF Approximation_ This subsection derives an approximation to the reachability-based signed distance function defined in Def. 11. The core idea is to approximate the distance between an obstacle and the polynomial zonotope forward occupancy \(\mathbf{FO}(\mathbf{q}(\mathbf{T_{i}};\mathbf{K}))\) (29) over time for an entire trajectory. Note that slicing a polynomial zonotope at all of its _dependent_ indeterminates results in a zonotope [29]. This allows RDF to approximate both positive and negative (_i.e._ signed) distances by leveraging the zonotope arithmetic described in Appendix A-A. We now present the main theorem of the paper, whose proof can be found in the supplementary material (Appendix A-A). **Theorem 14**.: _Suppose a robot is following a parameterized trajectory \(q(t;k)\) for all \(t\in T\). Consider an obstacle \(O\) with center \(c_{O}\) and generators \(G_{O}\), and \(\mathbf{FO_{j}}(\mathbf{q}(\mathbf{T_{i}};k))\) with center \(c_{F}\) and generators \(G_{F}\)._ [The remainder of the theorem statement, the training-data generation procedure (Alg. 1), and the description of the network architecture are garbled in this extraction. Theorem 14 conservatively bounds the reachability-based signed distance between \(O\) and the sliced forward occupancy via the solution of a convex (quadratic) program, and Alg. 1 uses this program to label sampled tuples \((q_{0},\dot{q}_{0},k,c_{O})\) with per-link distances.] ### _Network Architecture and Loss Function_ The training loss combines a mean squared error term with an Eikonal loss term similar to [3]. The mean squared error loss forces the network to learn to predict the distance, while the Eikonal loss regularizes the gradient of the RDF prediction. Given an input, ground-truth RDF distance pair \(x=(q,\dot{q},k,c_{o})\), \(y=(\tilde{r}_{1},\tilde{r}_{2},\cdots,\tilde{r}_{n_{q}})\) from a batched dataset sample \((X_{batch},Y_{batch})\), our network, parameterized by its weights \(\mathbf{\theta}\), computes the output batch \(\hat{Y}_{batch}=\{\hat{y}|\hat{y}=f_{\mathbf{\theta}}(x),x\in X_{batch}\}\) and results in the loss: \[\mathcal{L}=\mathcal{L}_{MSE}+\alpha\cdot\mathcal{L}_{Eikonal} \tag{42}\] where \[\mathcal{L}_{MSE}=\frac{1}{|\hat{Y}|}\sum_{\hat{y}\in\hat{Y}}\left(\frac{1}{n_{q}} \sum_{i=1}^{n_{q}}(\hat{r}_{i}-\tilde{r}_{i})^{2}\right) \tag{43}\] \[\mathcal{L}_{Eikonal}=\frac{1}{|\hat{Y}|}\sum_{\hat{y}\in\hat{Y}}\left(\frac{1}{n_{q }}\sum_{i=1}^{n_{q}}(\|\nabla_{c_{o}}\hat{r}_{i}\|-1)^{2}\right), \tag{44}\] while \(\alpha\) is a hyperparameter that denotes the coefficient of the Eikonal loss \(\mathcal{L}_{Eikonal}\) in the total loss \(\mathcal{L}\). ### _RDF-based Trajectory Optimization_ After training, we generate a model \(\tilde{r}_{NN|\mathbf{\theta}}\) that takes in \((q_{0},\dot{q}_{0},k,c_{O,\ell})\) and predicts the reachability-based distance between the obstacle and the robot's forward occupancy.
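For concreteness, a minimal PyTorch-style sketch of such a model and the loss (42)-(44) follows. The MLP architecture, layer widths, and all names are assumptions for illustration; the paper's exact architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class RDFNet(nn.Module):
    """Sketch of r_NN: maps (q0, dq0, k, c_O) to the per-link distances (r_1, ..., r_nq)."""
    def __init__(self, n_q, n_d, hidden=256):
        super().__init__()
        in_dim = 3 * n_q + n_d  # q0, dq0, k each in R^{n_q}; obstacle center c_O in R^{n_d}
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_q),
        )

    def forward(self, q0, dq0, k, c_o):
        return self.net(torch.cat([q0, dq0, k, c_o], dim=-1))

def rdf_loss(model, q0, dq0, k, c_o, r_true, alpha=1e-3):
    """Eq. (42): MSE on labeled distances plus an Eikonal penalty on the gradient w.r.t. c_O."""
    c_o = c_o.clone().requires_grad_(True)   # Eikonal term differentiates w.r.t. the obstacle center
    r_hat = model(q0, dq0, k, c_o)           # shape (batch, n_q)
    mse = ((r_hat - r_true) ** 2).mean()     # eq. (43)
    eik = 0.0
    for i in range(r_hat.shape[-1]):         # eq. (44), one gradient-norm term per link
        g_i = torch.autograd.grad(r_hat[:, i].sum(), c_o,
                                  create_graph=True, retain_graph=True)[0]
        eik = eik + ((g_i.norm(dim=-1) - 1.0) ** 2).mean()
    return mse + alpha * eik / r_hat.shape[-1]
```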
Using this representation, we can reformulate the motion planning optimization problem described by (30)-(33) into: \[\min_{k\in K}\quad\texttt{cost}(k) \tag{45}\] \[\mathbf{q_{j}}(\mathbf{T_{i}};k)\subseteq[q_{j,\text{lim}}^{-},q_{j,\text{lim}}^{+}]\qquad\forall j\in N_{q},i\in N_{t}\] (46) \[\mathbf{\dot{q_{j}}}(\mathbf{T_{i}};k)\subseteq[\dot{q}_{j,\text{lim}}^{-},\dot{q}_{j,\text{lim}}^{+}]\qquad\forall j\in N_{q},i\in N_{t}\] (47) \[\tilde{r}_{NN|\mathbf{\theta}}(q_{0},\dot{q}_{0},k,c_{O,\ell})>\delta\qquad\forall\ell\in N_{\mathscr{O}} \tag{48}\] where \(\delta\geq 0\) buffers the learned distance against network prediction error (see Sec. VIII). [The remainder of this subsection and the opening of Section VII, including the hardware used for the experiments in Secs. VIII-A and VIII-B, are garbled in this extraction.] ## VII Experimental Setup A computer with 12 Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz and an NVIDIA RTX A6000 GPU was used for the motion planning experiment in Sec. VIII-C. ### _Simulation and Simulation Environment_ Each simulation environment has dimensions characterized by the closed interval \([-1,1]^{n_{d}}\) and the base of the robot arm is located at the origin. For each 2D environment, every link of the robotic arm is of the same size and is adjusted according to the number of links \(n_{q}\) to fit into the space. We consider planar robot arms with 2, 4, 6, 8, and 10 links, respectively. In each environment all obstacles are static, axis-aligned, and of fixed size, where each side has length \(\frac{0.2}{1.2n_{q}}\). For example, the 2D 6-DOF arm has a link length of 0.139m while obstacles are squares with side-length 0.028m. In 3D we use the Kinova Gen3 7-DOF serial manipulator [42]. The volume of each of the Kinova's links is represented as the smallest bounding box enclosing the native link geometry. We also ensure that each robot's initial configuration is feasible and does not exceed its position limits. Each planning trial is considered a success if the \(l2\)-distance between the arm's configuration and the goal configuration is within 0.1 rad. A planning trial is considered a failure if the robot arm collides with an obstacle, if the trajectory planner fails to find a feasible trajectory in two successive steps, or if the robot does not reach the goal within 400 planning steps. ### _Desired Trajectory_ We parameterize our trajectory with piece-wise linear velocity. We design the _trajectory parameter_ \(k=(k_{1},\cdots,k_{n_{q}})\in\mathbb{R}^{n_{q}}\) as a constant acceleration over \([0,t_{p})\).
Then, the rest of the trajectory takes a constant braking acceleration over \([t_{p},t_{\text{f}}]\) so that the arm comes to rest at \(t_{\text{f}}\). Given an initial velocity \(\dot{q}_{0}\), the parameterized trajectory is given by \[\dot{q}(t;k)=\begin{cases}\dot{q}_{0}+kt,&t\in[0,t_{p})\\ \frac{\dot{q}_{0}+kt_{p}}{t_{\text{f}}-t_{p}}(t_{\text{f}}-t),&t\in[t_{p},t_{\text{f}}].\end{cases} \tag{49}\] ### _Dataset_ We compute the dataset for RDF by randomly sampling data points consisting of the initial joint position \(q_{0}\), initial joint velocity \(\dot{q}_{0}\), and trajectory parameter \(k\). For each initial condition, we then randomly sample \(n_{o}=16\) obstacle center positions in \([-1,1]^{n_{d}}\) and compute the ground truth distances between the forward reachable set of the robot and obstacles using Alg. 1. The input to the network is then \(x=(q_{0},\dot{q}_{0},k,c_{O})\), where \((q_{0},\dot{q}_{0},k)\) specifies the desired trajectory and the reachable set and \(c_{O}\) defines the position of the center of the obstacle. The corresponding label is \(y=(\bar{r}_{1},\bar{r}_{2},\cdots,\bar{r}_{n_{q}})\), where \(\bar{r}_{j}\) is the approximation of the reachability-based signed distance to each link outlined in Alg. 1. For the 2D tasks, the datasets consist of 2.56 million samples while the 3D datasets consist of 5.12 million samples. 80% of the samples in each case are used for training and 20% are used for validation. Another set of the same size as the validation set is generated for testing. ### _Network Hyperparameters_ We train models using all combinations of the following hyperparameters: learning rates \(lr=(0.001,0.0001)\), Eikonal loss coefficients \(\alpha=(0.0,0.001,0.0001)\), and \(\beta=(0.9,0.999)\) and weight decay \(\gamma=0.01\) for the Adam optimizer. We train the 2D and 3D models for 300 and 350 epochs, respectively. The model that performs best on the validation set is chosen for further evaluation. ## VIII Results This section evaluates the performance of the trained RDF network in terms of its accuracy, inference time, the time required to compute its gradient, and its ability to safely solve motion planning tasks. We compare RDF's safety and success rate on motion planning tasks to ARMTD [1], CHOMP [17], and the method presented in [43]. ### _RDF Accuracy and Runtime Compared to Alg. 1_ This section compares RDF's distance prediction accuracy to the distances computed by Alg. 1. We perform these comparisons for 2D planar multi-link robot arms and the Kinova Gen3 7-DOF arm on the test sets that were not used to either train or validate RDF. As shown in Table II, each model has a mean prediction error of \(<1\)cm in the \(l1\)-norm. These results are supported by Fig. 3, which shows that RDF's zero-level sets are smooth approximations to those of Alg. 1. We then compared the mean runtime of RDF's inference and gradient computations to the computation time of Alg. 1 and its first-order numerical gradient. These comparisons are done over a random sample of 1000 feasible data points \((q_{0},\dot{q}_{0},k,c_{o})\). As shown in Table III, RDF computes both distances and gradients at least an order of magnitude faster than Alg. 1. This result holds even when considering only the time required to solve the quadratic program in Alg. 1. Note also that RDF's runtime appears to grow linearly with the DOF of the system, while Alg. 1's grows quadratically.

\begin{table} \begin{tabular}{|c|c|c|} \hline Env. Dim. & DOF & Mean Error (cm) \(\downarrow\) \\ \hline \multirow{5}{*}{2} & 2 & 0.16 \(\pm\) 0.15 \\ \cline{2-3} & 4 & 0.26 \(\pm\) 0.30 \\ \cline{2-3} & 6 & 0.37 \(\pm\) 0.36 \\ \cline{2-3} & 8 & 0.39 \(\pm\) 0.48 \\ \cline{2-3} & 10 & 0.51 \(\pm\) 0.52 \\ \hline 3 & 7 & 0.45 \(\pm\) 0.48 \\ \hline \end{tabular} \end{table} TABLE II: Mean \(l1\)-norm error of each RDF model evaluated on the test set.

Fig. 4: Real-time receding-horizon trajectory planning with RDF in a cluttered environment. (Left) The arm safely moves from the start pose (purple) through intermediate configurations (grey) to reach the goal (green) while avoiding obstacles (red cubes). (Right) During the highlighted planning iteration all obstacle centers (red spheres) remain outside RDF's zero-level set (blue).

### _Accuracy & Runtime Comparison with SDF_ We compared RDF's distance prediction accuracy and runtime to that of an SDF-based model similar to [43] over an entire trajectory in 3D. To train a discrete-time, SDF-based model similar to that presented in [43], we generated a dataset of 5.12 million examples. Each input to this SDF takes the form \(x=(q_{d}(t;k),c_{O})\) and the corresponding label is \(y=(\bar{s}_{1},\bar{s}_{2},\cdots,\bar{s}_{n_{q}})\), where \(\bar{s}_{j}\) is the distance between \(c_{O}\) and a polynomial zonotope overapproximation of the \(j^{\text{th}}\) link of the robot. Note that, in principle, this is equivalent to evaluating RDF at stationary configurations by specifying \(\dot{q}_{0}=0\) and \(k=0\). Following [43], we also ensure that the number of collision and non-collision samples is balanced for each link. For RDF, we generated 1000 samples where the \(i^{\text{th}}\) sample is of the form \((q_{0},\dot{q}_{0},k,c_{O})_{i}\). Because SDF is a discrete-time model, its corresponding \(i^{\text{th}}\) sample is the minimum distance between the obstacle and a set of robot configurations \(\{q_{d}(t_{n};k):\ n\in N_{t}\}\) sampled at timepoints \(t_{n}\) evenly separated by a given \(\Delta t\). Note that for SDF, we considered multiple time discretizations (\(\Delta t=0.01s,0.02s,0.1s,0.5s,1.0s\)). During the implementation of SDF, we allow the forward pass through the network to be batched and evaluate all time steps for a given discretization size simultaneously. As shown in Table IV, RDF has lower mean and max \(l1\)-norm error compared to SDF. Similarly, RDF has a lower runtime than SDF across all time discretizations. ### _Receding Horizon Motion Planning_ This subsection describes the application of RDF to real-time motion planning and compares its performance to several state-of-the-art algorithms. We evaluate each method's performance on a reaching task where the robot arm is required to move from an initial configuration to a goal configuration while avoiding collision with obstacles and satisfying joint limits. Note that the planner is allowed to perform receding-horizon trajectory planning. We evaluate each planner's success rate, collision rate, and mean planning times under various planning time limits. If the planner is unable to find a safe solution, the arm executes the fail-safe maneuver from the previous plan.
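To fix ideas before the experiments, the sketch below shows one way a single receding-horizon planning iteration could use the trained network in (45)-(48). A penalty method with projected gradient steps on \(k\) is an assumption here (the solver actually used is not specified in this excerpt), and the joint-limit constraints (46)-(47) are omitted for brevity.

```python
import torch

def plan_step(model, q0, dq0, obstacle_centers, k_init, cost_fn,
              delta=0.03, iters=50, lr=0.05, penalty=100.0):
    """One planning iteration: optimize k in K = [-1, 1]^{n_q} subject to
    r_NN(q0, dq0, k, c_O) > delta for every obstacle, as in (48)."""
    k = k_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([k], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = cost_fn(k)  # user cost (45), e.g. distance of the final configuration to the goal
        for c_o in obstacle_centers:
            r = model(q0, dq0, k, c_o).min()          # closest link of the reachable set
            loss = loss + penalty * torch.relu(delta - r)  # penalize any r <= delta
        loss.backward()
        opt.step()
        with torch.no_grad():
            k.clamp_(-1.0, 1.0)                       # project back onto K
    with torch.no_grad():
        safe = all(model(q0, dq0, k, c_o).min() > delta for c_o in obstacle_centers)
    # If no safe k is found, the arm falls back on the previous plan's braking maneuver.
    return k.detach(), safe
```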
#### Viii-C1 2D Results In 2D, we compare the performance of RDF to ARMTD [1] across a variety of different arms with varying degrees of freedom from 2-10 DOF to better understand the scalability of each approach. In each instance, the robot is tasked with avoiding 2 obstacles and is evaluated over 500 trials. In the interest of simplicity, we select \(\delta\) in (48) to be 3cm to 3.5cm, which is approximately 10 times larger than the mean RDF prediction error reported in Tab. II. Because our goal is to develop a planning algorithm that can operate in real-time, we also evaluate the performance of these algorithms when each planning iteration is restricted to compute a solution within 5, 0.3, and 0.033s. Note that each planning algorithm can only be applied for 400 planning iterations per trial. Tables V and VI summarize the results. Across all experiments, when both algorithms are given 5s per planning iteration, ARMTD arrived at the goal more frequently than RDF. A similar pattern seems to hold as the number of degrees of freedom increases when each algorithm is given 0.3s per planning iteration; however, in the 10 DOF case ARMTD's success rate drastically decreases while RDF's performance is mostly unaffected. This is because the computation time of ARMTD grows dramatically as the number of DOFs increases, as depicted in Table VII. This observation is more pronounced in the instance where both planning algorithms are only allowed to take 0.033s per planning iteration. In that instance, RDF's performance is unaffected as the number of DOFs increases, while ARMTD is unable to succeed beyond the 2-DOF case. Note that across all experiments, none of the computed trajectories ended in collision. #### Viii-C2 3D Results In 3D, we compare the performance of RDF to ARMTD and an SDF-based version of the obstacle-avoidance constraints within a receding-horizon trajectory planning framework. Note that for RDF and SDF, we buffer their distance predictions with buffer size 3cm, which is approximately five times larger than the mean prediction error in Tab. IV. We also compare the performance of each of the aforementioned methods in 3D to CHOMP [17]. Because our goal is to develop a planning algorithm that can operate in real-time, we also evaluate the performance of these algorithms when each planning iteration is restricted to compute a solution within 5, 0.3, 0.033, and 0.025s. We also consider the case of avoiding 5 obstacles and 10 obstacles. Each obstacle case was evaluated over 500 trials. Note that each planning algorithm can only be applied for 400 planning iterations per trial. Tables VIII and IX summarize the results of the performance of each algorithm across different time limits for the 5 and 10 obstacle cases, respectively. First, observe that as the number of obstacles increases, each algorithm's performance decreases. Note in particular that for a fixed number of obstacles, the ability of each method to reach the goal decreases as the time limit on planning decreases. Though ARMTD initially performs the best, when the time limit is drastically reduced, RDF begins to perform better. This is because ARMTD's computation time grows dramatically in this setting, as depicted in Table XII. The transition from when ARMTD performs best to when RDF performs best occurs when the time limit is restricted to 0.033s. Note that RDF, ARMTD, and SDF are collision free across all tested trials.
However, CHOMP has collisions in every instance where it is unable to reach the goal. An example of RDF successfully planning around 5 obstacles is shown in Fig. 4. ## IX Conclusion This paper introduces the Reachability-based Signed Distance Function (RDF), a neural implicit representation useful for safe robot motion planning. We demonstrate RDF's viability as a collision-avoidance constraint within a real-time receding-horizon trajectory planning framework. We show that RDF's distance computation is fast and accurate and, unlike model-based methods such as ARMTD, scales linearly with the dimension of the system. RDF is also able to solve challenging motion planning tasks for high-DOF robotic arms under strict planning time limits. Future work will aim to improve RDF's properties. First, bounding the network's approximation error will ensure that RDF can be used with guarantees on safety. Second, better architecture design and additional implicit regularization will allow a single RDF model to generalize to multiple robot morphologies. Finally, we will aim to extend RDF to handle dynamic obstacles.
2310.08270
Hilbert Space Embedding-based Trajectory Optimization for Multi-Modal Uncertain Obstacle Trajectory Prediction
Safe autonomous driving critically depends on how well the ego-vehicle can predict the trajectories of neighboring vehicles. To this end, several trajectory prediction algorithms have been presented in the existing literature. Many of these approaches output a multi-modal distribution of obstacle trajectories instead of a single deterministic prediction to account for the underlying uncertainty. However, existing planners cannot handle the multi-modality based on just sample-level information of the predictions. With this motivation, this paper proposes a trajectory optimizer that can leverage the distributional aspects of the prediction in a computationally tractable and sample-efficient manner. Our optimizer can work with arbitrarily complex distributions and thus can be used with output distribution represented as a deep neural network. The core of our approach is built on embedding distribution in Reproducing Kernel Hilbert Space (RKHS), which we leverage in two ways. First, we propose an RKHS embedding approach to select probable samples from the obstacle trajectory distribution. Second, we rephrase chance-constrained optimization as distribution matching in RKHS and propose a novel sampling-based optimizer for its solution. We validate our approach with hand-crafted and neural network-based predictors trained on real-world datasets and show improvement over the existing stochastic optimization approaches in safety metrics.
Basant Sharma, Aditya Sharma, K. Madhava Krishna, Arun Kumar Singh
2023-10-12T12:18:19Z
http://arxiv.org/abs/2310.08270v1
Hilbert Space Embedding-based Trajectory Optimization for Multi-Modal Uncertain Obstacle Trajectory Prediction ###### Abstract Safe autonomous driving critically depends on how well the ego-vehicle can predict the trajectories of neighboring vehicles. To this end, several trajectory prediction algorithms have been presented in the existing literature. Many of these approaches output a multi-modal distribution of obstacle trajectories instead of a single deterministic prediction to account for the underlying uncertainty. However, existing planners cannot handle the multi-modality based on just sample-level information of the predictions. With this motivation, this paper proposes a trajectory optimizer that can leverage the distributional aspects of the prediction in a computationally tractable and sample-efficient manner. Our optimizer can work with arbitrarily complex distributions and thus can be used with output distribution represented as a deep neural network. The core of our approach is built on embedding distribution in Reproducing Kernel Hilbert Space (RKHS), which we leverage in two ways. First, we propose an RKHS embedding approach to select probable samples from the obstacle trajectory distribution. Second, we rephrase chance-constrained optimization as distribution matching in RKHS and propose a novel sampling-based optimizer for its solution. We validate our approach with hand-crafted and neural network-based predictors trained on real-world datasets and show improvement over the existing stochastic optimization approaches in safety metrics. ## I Introduction Safety, or more precisely, collision avoidance, is a fundamental requirement in any autonomous driving system. It requires predicting how the world around the ego vehicle will evolve. As a result, trajectory prediction has become an extensively studied problem in the autonomous driving community. Our proposed work is focused on developing trajectory planners that can leverage the outputs of the current trajectory predictors in the best possible manner. To this end, we are inspired by a class of recent approaches that outputs a distribution of trajectories for the neighbouring vehicles (obstacles) instead of a single deterministic prediction. Works like [1, 2], and [3] are a few popular algorithms in this regard. The distributional aspect of the trajectory prediction is crucial to capture the underlying uncertainty stemming from sensors or the unknown/unobserved intentions of the neighboring vehicles. For example, uncertainty in the intent can create complex multi-modal predictions (see Fig.1). In this paper, we adopt a stochastic trajectory optimization perspective for motion planning of ego vehicles under uncertain obstacle trajectory prediction. There are two core challenges in this context. First, the analytical form of prediction distribution may be intractable or unknown. For example, works like [1] characterise the output distribution through a deep generative model, drastically different from the Gaussian form [4]. Thus, the convex approaches proposed in existing literature become unsuitable. Second, suppose we restrict our access to only samples drawn from the obstacle trajectory distribution. In that case, the optimizer should be able to compute high-probability collision avoidance maneuvers while considering only a handful of samples. This work addresses both the above-mentioned challenges. 
Our proposed optimizer works with only sample-level information and thus is agnostic to the underlying distribution of the obstacle trajectories. We achieve this by building on the concept of embedding distributions into Reproducing Kernel Hilbert Space [5]. In particular, given only drawn samples, we can represent the underlying distribution as a point in RKHS. This, in turn, opens up different possibilities. For example, it becomes straightforward to compute the difference between two distributions by embedding both in RKHS and computing the so-called Maximum Mean Discrepancy (MMD) measure [5]. Our optimizer brings in two core innovations based on RKHS embedding and MMD for stochastic optimization. These are summarized below along with the associated benefits. _Algorithmic Contribution:_ * We present a novel approach for selecting a subset of the most important/probable samples from the obstacle trajectory distribution. This subset is often referred to as the _reduced-set_ and lies at the very heart of our sample efficiency. * We reformulate stochastic optimization as a distribution-matching problem. The cost term is defined by the MMD measure and is conditioned on the trajectories of the ego-vehicle. * We present a custom sampling-based optimization for solving the distribution matching problem. We leverage a low-dimensional encoding of the trajectory sampling process and the introduction of a projection optimization to aid in constraint satisfaction. _State-of-the-art Performance:_ * We show that our one-shot _reduced-set_ selection method performs as well as or better than a (near) exhaustive search over possible reduced sets. * We perform extensive validation of our approach on multi-modal trajectory distributions from synthetic and real-world datasets. We show that our approach can handle uncertainty in manoeuvres like lane-change that stem from a complex mixture of discrete and continuous probability distributions. * We outperform recent work [6] in safety metrics when dealing with highly multi-modal trajectory distributions.

Fig. 1: Figure shows a scenario where an obstacle has multiple intents (lane-change vs. lane-following), each associated with a trajectory distribution. However, both intents have wildly different probabilities. In this particular example, the probability of lane change is higher. For safe navigation, the ego-vehicle needs to consider this multi-modal nature of obstacle trajectories while planning its own motions. Our proposed approach estimates the more likely samples (the reduced set) from a set of obstacle trajectories sampled from a black-box distribution. This allows us to plan probabilistically safe motions while appropriately discriminating the low and high-probability obstacle maneuvers.

## II Problem Formulation, Preliminaries and Related Works #### Symbols and Notations Scalars will be represented by normal-font lower-case letters, while bold-faced variants will represent vectors. We will use upper-case bold fonts to represent matrices. Symbols \(t\) and \(T\) will represent the time-stamp and transpose operators, respectively. We will use \(p_{(.)}\) to denote the probability density function of a random variable \((.)\). ### _Motion Planning in Frenet-Frame_ We formulate motion planning of the ego-vehicle in the road-aligned reference frame known as the Frenet frame. In this setting, curved roads can be treated as ones with straight-line geometry.
In other words, the longitudinal and lateral motions of the ego-vehicle are always aligned with the \(X\) and \(Y\) axes of the Frenet-frame, respectively. ### _Feature Map and Kernel Function_ A feature map \(\phi\) maps a feature \(\mathbf{z}\) to the Hilbert Space \(\mathcal{H}\) as \(\phi(\mathbf{z})\). A positive definite kernel function \(k\) is related to the feature map through the so-called kernel trick \(k(\mathbf{z},\mathbf{z}^{\prime})=\langle\phi(\mathbf{z}),\phi(\mathbf{z}^{ \prime})\rangle\). In this paper, we use the Gaussian kernel since it can capture all the possible moments of the underlying distribution [5]. ### _Stochastic Optimization_ Let \((x[k],y[k])\), \((x_{o}[k],y_{o}[k])\) be the ego-vehicle and obstacle trajectory waypoints at time step \(k\). The latter is assumed to be a random variable belonging to some unknown distribution \(p_{o}\). We can formulate stochastic trajectory optimization for the ego-vehicle in the following form, wherein \((.)^{(q)}\) represents the \(q^{th}\) derivative of the variable. We use \(P(.)\) to denote the probability of a random variable \((.)\). \[\min\sum_{k}\ddot{x}[k]^{2}+\ddot{y}[k]^{2}+\left(\dot{x}[k]-v_{des} \right)^{2} \tag{1a}\] \[(x^{(q)}[k_{0}],y^{(q)}[k_{0}],x^{(q)}[k_{f}],y^{(q)}[k_{f}])= \mathbf{b}\] (1b) \[\mathbf{g}(x^{(q)}[k],y^{(q)}[k])\leq 0,\forall k\] (1c) \[P(f(x[k],y[k],x_{o}[k],y_{o}[k])\leq 0)\geq\eta.\forall k \tag{1d}\] \[f=-\frac{(x[k]-x_{o}[k])^{2}}{a^{2}}-\frac{(y[k]-y_{o}[k])^{2}}{b^{2}}+1 \tag{2}\] The first term in the cost function (1a) minimizes the acceleration magnitude while the second term aims to drive the ego-vehicle forward at the desired speed at each time step. The equality constraints (1b) ensure boundary conditions on the \(q^{th}\) derivative of positions. For example, we consider the \(0^{th},1^{st},2^{nd}\) derivatives in our formulation. The inequality constraints (1c) model lane, velocity, and acceleration bounds. The inequalities (1d) are referred to as chance constraints. They are responsible for ensuring that the ego-vehicle trajectory avoids the obstacle trajectory distribution with some lower bound confidence \(\eta\). To this end, \(f(.)\) is a regular collision-avoidance constraint as shown in (2), wherein we have assumed that the ego-vehicle and obstacles are represented as axis-aligned ellipses with size \(\frac{a}{2},\frac{b}{2}\). Extension to more sophisticated models is straightforward. See Section III-D. Beyond simple cases where \(p_{o}\) is Gaussian, optimizations of the form (1a)-(1d) are computationally intractable. In this work, we focus on the case where the form of \(p_{o}\) is not known and we only have access to the samples drawn from it. With this setting in place, this paper's core problem can be summarized as follows. _Problem P: Let **O** be the matrix containing trajectory samples drawn from \(p_{o}\). Let \(\overline{\textbf{O}}\) be a matrix that contains \(m\) out of a total \(n\) obstacle trajectory samples in the matrix **O**. We call \(\overline{\textbf{O}}\) the reduced-set. In the limiting case, \(m=n\). However, in general, \(m\ll n\). Then:_ * _P.1: How can the subset of \(m\) samples be selected?_ * _P.2: How can the chance constraints (1d) be reformulated such that stochastic optimization formulated with \(m\) samples from \(\overline{\textbf{O}}\) generalizes well to the unseen samples from \(p_{o}\)?_ It is easy to deduce that both parts _P.1_ and _P.2_ are coupled.
In the next subsection, we discuss some existing approaches for solving _P.1_ and _P.2_.

### _Related Works_

**Chance Constraints Approximation:** We primarily focus on existing works that can work with sample-level descriptions of uncertainty. In this context, the most popular reformulation for chance constraints is the scenario approximation [7, 8]. Here, (1d) is replaced with deterministic scenario constraints of the form \(f(\cdot,x_{o,j}[k],y_{o,j}[k])\leq 0\), defined with the \(j^{th}\) sample of the obstacle trajectory prediction. A naive implementation of the scenario approximation can be overly conservative and display poor sample complexity. Both drawbacks, in turn, can be attributed to scenario approximations not considering the likelihood of an obstacle trajectory sample. In other words, collision avoidance constraints formulated with all the obstacle trajectory samples, irrespective of their likelihood, will be given equal importance during trajectory optimization. The concept of Sample Average Approximation [9] reduces the conservativeness as it allows some of the constraints to be violated. In a recent series of works [10, 11, 12], we motivated the relaxation of (1d) into costs, subsequently expressed in terms of the MMD between samples of \(f(\cdot,x_{o,j}[k],y_{o,j}[k])\) and those drawn from a specific proposal distribution.

**Reduced Set Selection:** One approach for reducing the conservativeness of the vanilla scenario approximation is to perform problem-specific rejection sampling [13]. For example, [14] presents a method where the vanilla scenario approach is first solved for many samples. Subsequently, the obtained solution is used to identify scenarios that can be discarded without affecting the confidence \(\eta\) in a significant way. Although effective, this approach is not suitable for a real-time planning application as it requires repeated optimization calls. An improvement has recently been presented in [6] that performs rejection sampling based on the problem geometry. To be precise, [6] rejects samples of \(f(\cdot,x_{o,j}[k],y_{o,j}[k])\) that are far away from the boundary of the feasible set. This rejection sampling is conceptually simple but requires evaluation of the rejection criteria on many samples. In other words, many obstacle trajectory samples must be drawn from \(p_{o}\). This, however, can be prohibitive if there is a non-negligible computational cost associated with sampling. For example, Table I presents the computation time required to draw various numbers of samples from Trajectron++ for a scene with five vehicles on an RTX 3060 i7 laptop with 16GB RAM. As can be seen, for real-time planning, drawing more than 100 samples could be challenging. Other deep neural network-based trajectory predictors show similar inference times. The rejection sampling of [6] can also give erroneous results when dealing with highly multi-modal distributions, as we also show later.

**Contribution Over Author's Prior Work:** Our current work extends [12, 10] to handle multi-step predictions of dynamic obstacle trajectories. In [10], the reduced set was formed by random sub-sampling of **O**. In contrast, we present a well-grounded approach that leverages some of the quintessential properties of RKHS embedding.
In the next section, we present our main algorithmic results: an improved reduced-set selection method and its use in a reformulation of (1a)-(1d) that can better handle multi-modal obstacle trajectory predictions.

## III Main Algorithmic Results

### _Proposed Reduced-Set Selection_

Let \(\boldsymbol{\tau}_{o,j}=(\mathbf{x}_{o,j},\mathbf{y}_{o,j})\) be the \(j^{th}\) obstacle trajectory formed by stacking the waypoints at different time steps \(k\). Furthermore, let each obstacle trajectory be an i.i.d. sample drawn from \(p_{o}\). Then, \(\sum_{j=1}^{j=n}\frac{1}{n}\phi(\boldsymbol{\tau}_{o,j})\) for some feature map \(\phi(.)\) represents the embedding of the obstacle trajectory distribution in the RKHS [5]. There are two key advantages of RKHS embedding. First, it can capture distribution-level information with a much smaller sample size \(n\). Second, we can change the weight of each sample from \(\frac{1}{n}\) to some \(\alpha_{j}\) and arrive at the new embedding \(\sum_{j=1}^{j=n}\alpha_{j}\phi(\boldsymbol{\tau}_{o,j})\) with minimal loss of information. The latter forms the backbone of our reduced-set selection.

Imagine that, in the process of re-weighting, some of the \(\alpha_{j}\)'s assume a much higher value than the rest; say, \(10\%\) of the samples of **O** have a substantially larger magnitude than the remaining \(90\%\). We then keep those specific \(10\%\) of samples and discard the rest to form our reduced-set \(\overline{\textbf{O}}\). We formalize our idea through the following optimization problem, wherein \(\boldsymbol{\alpha}=(\alpha_{1},\alpha_{2},\ldots,\alpha_{n})\):

\[\min_{\boldsymbol{\alpha}}\left\|\sum_{j=1}^{j=n}\frac{1}{n}\phi(\boldsymbol{\tau}_{o,j})-\sum_{j=1}^{j=n}\alpha_{j}\phi(\boldsymbol{\tau}_{o,j})\right\|_{2}^{2}-\beta\frac{\sum_{j}\lvert\overline{\alpha}_{j}(\boldsymbol{\alpha})\rvert}{\sum_{j}\lvert\widetilde{\alpha}_{j}(\boldsymbol{\alpha})\rvert}, \tag{3}\]

where \(\overline{\alpha}_{j}\) and \(\widetilde{\alpha}_{j}\) respectively represent the top \(m\) and bottom \(n-m\) elements of \(\boldsymbol{\alpha}\) in terms of magnitude. Thus, \(\overline{\alpha}_{j}\) and \(\widetilde{\alpha}_{j}\) are (non-analytical) functions of \(\boldsymbol{\alpha}\), and the parentheses in the second term of (3) signify this dependency. Note that the top and bottom samples are not defined beforehand. In contrast, the pattern itself is the output of (3). The first term in the cost (3), the so-called Maximum Mean Discrepancy (MMD), ensures that the re-weighting leads to an embedding close to the original one. The second term ensures that the re-weighting process creates a clear differentiation between the more probable samples and the rest by increasing the magnitude of \(\overline{\alpha}_{j}\) and vice-versa for \(\widetilde{\alpha}_{j}\). The constant \(\beta\) balances the two cost terms.

From here on, we will refer to \(\overline{\boldsymbol{\tau}}_{o,j}=(\overline{\mathbf{x}}_{o,j},\overline{\mathbf{y}}_{o,j}),\forall j=1,2,\ldots,m\) as samples from the reduced-set \(\overline{\textbf{O}}\). Consequently, \(\overline{\alpha}_{j}\) will be the weights of these samples. As we show later, these weights explicitly feature in our reformulation of (1d). Thus, we follow the practical suggestion from [15] (pp. 554) and refine the magnitude of \(\overline{\alpha}_{j}\) through (4) to further minimize the information loss due to the discarded samples. Note that (4) is performed over the just-formed reduced-set.
\[\min_{\overline{\alpha}_{j}}\overbrace{\left\|\sum_{j=1}^{j=n}\frac{1}{n}\phi(\boldsymbol{\tau}_{o,j})-\sum_{j=1}^{j=m}\overline{\alpha}_{j}\phi(\overline{\boldsymbol{\tau}}_{o,j})\right\|_{2}^{2}}^{MMD} \tag{4}\]

#### III-A1 Solving (3)

The proposed optimization (3) is very challenging, mainly because the second term does not have an analytical form. Thus, we use a sampling-based optimization called CEM [16] to minimize (3). To ensure fast computation, we parallelize the cost evaluation over GPUs. Furthermore, we leverage the so-called kernel trick to factor out and pre-store the parts that do not depend on \(\alpha_{j}\). Specifically, the first term reduces to the following form:

\[\frac{1}{2}\left\|\sum_{j=1}^{j=n}\frac{1}{n}\phi(\boldsymbol{\tau}_{o,j})-\sum_{j=1}^{j=n}\alpha_{j}\phi(\boldsymbol{\tau}_{o,j})\right\|_{2}^{2}=\frac{1}{2}\boldsymbol{\alpha}^{T}\textbf{K}\boldsymbol{\alpha}-\frac{1}{n}\mathbf{1}^{T}\textbf{K}\boldsymbol{\alpha}+\frac{1}{2n^{2}}\mathbf{1}^{T}\textbf{K}\mathbf{1}, \tag{5}\]

where **K** is the Gram matrix with \(\textbf{K}_{jl}=k(\boldsymbol{\tau}_{o,j},\boldsymbol{\tau}_{o,l})\) for some kernel function \(k(.)\), and \(\mathbf{1}\) represents a vector of ones. As can be seen, the computationally expensive construction of **K** needs to be done only once.

#### III-A2 Sanity Check

We now present a synthetic example to showcase the inner working of our reduced-set selection. We consider a setting where a vehicle can perform a lane-change or continue moving along its current lane. To model the uncertainty in the vehicle's motion, we sampled desired lateral offsets and velocity set-points from a discrete Binomial and a Gaussian distribution, respectively. These are then passed to a trajectory optimizer to generate a distribution over the vehicle's motions. The results are summarized in Fig.2(a),(b). Due to the discrete nature of the lateral offset distribution, we can precisely assign probabilities to each sample. It can be seen from Fig.2(a),(b) that the reduced-set samples are concentrated predominantly on the more probable manoeuvre. In Fig.2(a), the probability of lane change is higher, while in Fig.2(b), the vehicle is more likely to move along the current lane. For the sake of completeness, Fig.2(c) presents the reduced-set selection for a scene using Trajectron++ prediction. Unfortunately, we cannot assign probabilities to the samples since the underlying distribution is unknown 1. We provide more validation in Section IV-B.

Footnote 1: Trajectron++ and similar algorithms can predict a set of likely samples. But to the best of our knowledge, the exact probabilities of these samples are not known.
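Pulling Sec. III-A together, a minimal sketch (ours, not the paper's JAX implementation) of evaluating the cost (3) via the precomputed Gram matrix of (5), and of a bare-bones CEM loop over \(\boldsymbol{\alpha}\), could look as follows:

```python
import numpy as np

def reduced_set_cost(alpha, K, m, beta=1.0):
    """Cost (3) for one candidate weight vector alpha, using the Gram
    matrix K from (5); K can be precomputed once and reused."""
    n = len(alpha)
    # MMD^2 between the uniform embedding and the re-weighted one.
    diff = alpha - np.full(n, 1.0 / n)
    mmd = diff @ K @ diff
    # Differentiation term: ratio of top-m to bottom-(n-m) weight mass.
    mags = np.sort(np.abs(alpha))
    ratio = mags[-m:].sum() / max(mags[:-m].sum(), 1e-9)
    return mmd - beta * ratio

def cem_over_alpha(K, m, iters=50, pop=500, elite=50, beta=1.0, seed=0):
    """Bare-bones CEM minimization of (3); the Gram matrix K is assumed
    to be built from the n obstacle trajectories with a Gaussian kernel."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    mu, sigma = np.full(n, 1.0 / n), np.full(n, 0.1)
    for _ in range(iters):
        cand = mu + sigma * rng.standard_normal((pop, n))
        costs = np.array([reduced_set_cost(a, K, m, beta) for a in cand])
        best = cand[np.argsort(costs)[:elite]]
        mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-6
    return mu  # samples with the largest |mu_j| form the reduced set
```

In an actual implementation the cost evaluation over the CEM population would be vectorized and run on the GPU, as the authors describe; the weights of the retained samples would then be refined via (4).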
### _Reformulating Chance Constraints_

In this subsection, we formulate a surrogate for \(P(f(x[k],y[k],x_{o}[k],y_{o}[k])\leq 0)\): an estimate of the collision avoidance probability based on obstacle trajectory samples and conditioned on the ego-vehicle trajectory. To this end, we introduce the following relation:

\[\overline{f}=\max(0,f) \tag{6}\]

Since \(f\) is a random variable due to uncertain obstacle trajectories, so is \(\overline{f}\). Let \(p_{\overline{f}}\) be the probability distribution of \(\overline{f}\). Although we do not know the exact characterization of \(p_{\overline{f}}\), the definition of \(\overline{f}\) guarantees that the entire mass of \(p_{\overline{f}}\) will lie to the right of \(\overline{f}=0\). Moreover, as \(P(f(.)\leq 0)\) increases, \(p_{\overline{f}}\) will tend to the Dirac-delta distribution \(p_{\delta}\). Alternately, the difference between \(p_{\overline{f}}\) and \(p_{\delta}\) can be used as the measure of the probability of collision avoidance [12]. As mentioned earlier, one way to measure the difference between two probability distributions is to embed them into the RKHS and compute the MMD between the two. Let \(\mu_{p_{\overline{f}}}\) and \(\mu_{p_{\delta}}\) be the RKHS embeddings of \(p_{\overline{f}}\) and \(p_{\delta}\) respectively; then we use \(l_{dist}\) as defined in (7a) as the measure of the probability of collision avoidance.

\[l_{dist}=\overbrace{\left\|\mu_{p_{\overline{f}}}-\mu_{p_{\delta}}\right\|_{2}^{2}}^{MMD} \tag{7a}\]
\[\mu_{p_{\overline{f}}}=\sum_{j=1}^{j=m}\overline{\alpha}_{j}\phi(\overline{f}(x[k],y[k],\overline{x}_{o,j}[k],\overline{y}_{o,j}[k])) \tag{7b}\]
\[\mu_{p_{\delta}}=\sum_{j=1}^{j=m}\phi(\delta_{j})=\sum_{j=1}^{j=m}\phi(0) \tag{7c}\]

A few important points about (7a)-(7c) are in order:
* First, \(l_{dist}\) is a deterministic scalar entity that depends explicitly on the trajectory waypoints of the ego-vehicle.
* Second, \(\overline{x}_{o,j}[k],\overline{y}_{o,j}[k]\) are the obstacle trajectory samples from the reduced-set \(\overline{\textbf{O}}\). Similarly, \(\overline{\alpha}_{j}\) are the importance weights of these samples that were derived in Section III-A.
* Finally, (7c) leverages the fact that the samples from \(p_{\delta}\) are all zero.

We augment (7a) into the cost function (1a) in the following manner:

\[c_{aug}=\sum_{k=k_{0}}^{k=k_{f}}\ddot{x}[k]^{2}+\ddot{y}[k]^{2}+(\dot{x}[k]-v_{des})^{2}+w\left\|\mu_{p_{\overline{f}}}-\mu_{p_{\delta}}\right\|_{2}^{2} \tag{8}\]

where \(w\) is used to trade-off the primary cost with the MMD.
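A possible single-time-step transcription of the surrogate (7a)-(7c), expanded with the kernel trick, is sketched below (our code; the ellipse parameters and bandwidth are placeholders here):

```python
import numpy as np

def k1d(u, v, h=30.0):
    """Scalar Gaussian kernel matrix for 1-D inputs."""
    return np.exp(-(u[:, None] - v[None, :])**2 / (2.0 * h**2))

def l_dist(ego_xy, red_xy, wbar, a=4.0, b=2.0, h=30.0):
    """MMD surrogate (7a) at a single time step k.
    ego_xy: (2,) ego waypoint; red_xy: (m, 2) reduced-set obstacle
    waypoints; wbar: (m,) weights alpha_bar from Sec. III-A."""
    f = 1.0 - (ego_xy[0] - red_xy[:, 0])**2 / a**2 \
            - (ego_xy[1] - red_xy[:, 1])**2 / b**2
    fbar = np.maximum(0.0, f)        # eq. (6)
    zeros = np.zeros_like(fbar)      # samples of the Dirac delta, eq. (7c)
    # ||mu_{p_fbar} - mu_{p_delta}||^2 expanded with the kernel trick.
    return (wbar @ k1d(fbar, fbar, h) @ wbar
            - 2.0 * wbar @ k1d(fbar, zeros, h).sum(axis=1)
            + k1d(zeros, zeros, h).sum())
```

Summing `l_dist` over the horizon and weighting it by \(w\) yields the MMD term of the augmented cost (8).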
### _Solution Process Using Sampling-Based Optimization_

We minimize (8) subject to (1b)-(1c) through a sampling-based optimization that combines features from CEM [16], Model Predictive Path Integral (MPPI) [17], and its modern variants [18]. More importantly, our approach leverages the problem structure of autonomous driving by sampling trajectories in the Frenet frame. It also incorporates a projection optimization to push the sampled trajectories towards feasible regions before evaluating their cost. Our proposed optimizer is presented in Alg.1. Instead of directly sampling trajectories, we sample \(\overline{n}_{cem}\) behavioural inputs \(\textbf{d}_{r}\), such as desired lateral offsets and longitudinal velocity set-points, from a Gaussian distribution (line 4). These are then fed into a Frenet-space planner inspired by [19] in line 6, effectively mapping the distribution over behavioural inputs to one over trajectories. We present the details about the Frenet planner in Appendix VI. The obtained trajectories are then passed in line 8 to a projection optimization that pushes the trajectories towards a feasible region. Our projection problem is a special case of that proposed in [20, 21] and thus can be easily parallelized over GPUs. In line 9, we evaluate the constraint residuals \(c(.)\) over the projection outputs \((\widetilde{x}_{r}[k],\widetilde{y}_{r}[k])\). In line 10, we select the top \(n_{cem}(<\overline{n}_{cem})\) projection outputs with the least constraint residuals. We call this set the \(ConstraintEliteSet\), and in line 11, we evaluate \(c_{aug}(.)\) and \(c(.)\) over this set. In line 13, we extract the top \(n_{e}\) trajectory samples that led to the lowest combined cost and residuals. We call this the \(EliteSet\). In line 14, we update the sampling distribution of behavioural inputs based on the cost samples collected in the \(EliteSet\). The exact formulas for the mean and covariance updates are presented in (9a)-(9b). Herein, the constants \(\gamma\) and \(\eta\) are the temperature and learning rate, respectively. The notion of the \(EliteSet\) is brought from classic CEM, while the mean and covariance updates follow from MPPI [17] and [18]. An important feature of Alg.1 is that we have effectively encoded long-horizon trajectories (e.g., \(k=100\)) with a low-dimensional behavioural input vector \(\mathbf{d}\). This, in turn, improves the computational efficiency of our approach.

\[{}^{l+1}\boldsymbol{\mu}_{d}=(1-\eta)^{l}\boldsymbol{\mu}_{d}+\eta\frac{\sum_{r=1}^{r=n_{e}}s_{r}\mathbf{d}_{r}}{\sum_{r=1}^{r=n_{e}}s_{r}}, \tag{9a}\]
\[{}^{l+1}\boldsymbol{\Sigma}_{d}=(1-\eta)^{l}\boldsymbol{\Sigma}_{d}+\eta\frac{\sum_{r=1}^{r=n_{e}}s_{r}(\mathbf{d}_{r}-{}^{l+1}\boldsymbol{\mu}_{d})(\mathbf{d}_{r}-{}^{l+1}\boldsymbol{\mu}_{d})^{T}}{\sum_{r=1}^{r=n_{e}}s_{r}}, \tag{9b}\]
\[s_{r}=\exp\frac{-1}{\gamma}\Big(c_{aug}(\widetilde{x}_{r}[k],\widetilde{y}_{r}[k])+c(\widetilde{x}_{r}[k],\widetilde{y}_{r}[k])\Big) \tag{9c}\]

### _Extension to Multiple Obstacles and Complex Shapes_

Our approach presented in the previous sections can be trivially extended to multiple obstacles. This is because both the reduced-set optimization (3) and the MMD surrogate (7a) can be independently constructed over all the obstacles. For extending to complex shapes, we recommend adopting the approach of covering the obstacle footprint with multiple circles. We can formulate the MMD with respect to all the individual circles and stack them together. This is the approach adopted in our implementation.

```
1  \(N\) = Maximum number of iterations
2  Initialize mean \({}^{l}\boldsymbol{\mu}_{d}\) and covariance \({}^{l}\boldsymbol{\Sigma}_{d}\) at \(l=0\) for sampling Frenet-frame behavioural inputs
3  for \(l=1,\ l<N,\ l{+}{+}\) do
4    Draw \(\overline{n}_{cem}\) samples \((\mathbf{d}_{1},\mathbf{d}_{2},\ldots,\mathbf{d}_{r},\ldots,\mathbf{d}_{\overline{n}_{cem}})\) from \(\mathcal{N}({}^{l}\boldsymbol{\mu}_{d},{}^{l}\boldsymbol{\Sigma}_{d})\)
5    Initialize \(CostList\) = []
6    Query Frenet-planner for \(\forall\mathbf{d}_{r}\):
7      \((x_{r}[k],y_{r}[k])\) = Frenet Planner(\(\mathbf{d}_{r}\))
8    Project to Constrained Set:
     \[(\widetilde{x}_{r}[k],\widetilde{y}_{r}[k])=\arg\min_{\widetilde{x}_{r},\widetilde{y}_{r}}\frac{1}{2}\|\widetilde{x}_{r}[k]-x_{r}[k]\|_{2}^{2}+\frac{1}{2}\|\widetilde{y}_{r}[k]-y_{r}[k]\|_{2}^{2}\]
     \[(\widetilde{x}_{r}^{(q)}[k_{0}],\widetilde{y}_{r}^{(q)}[k_{0}],\widetilde{x}_{r}^{(q)}[k_{f}],\widetilde{y}_{r}^{(q)}[k_{f}])=\mathbf{b}\]
     \[\mathbf{g}(\widetilde{x}_{r}^{(q)}[k],\widetilde{y}_{r}^{(q)}[k])\leq 0\]
9    Define constraint residuals: \(c(\widetilde{x}_{r}[k],\widetilde{y}_{r}[k])\)
10   \(ConstraintEliteSet\) \(\leftarrow\) Select top \(n_{cem}\) samples of \(\mathbf{d}_{r},(x_{r}[k],y_{r}[k])\) with the lowest constraint residual norm
11   \(cost\) \(\leftarrow\) \(c_{aug}(\widetilde{x}_{r}[k],\widetilde{y}_{r}[k])+c(\widetilde{x}_{r}[k],\widetilde{y}_{r}[k])\), over the \(ConstraintEliteSet\)
12   append \(cost\) to \(CostList\)
13   \(EliteSet\) \(\leftarrow\) Select top \(n_{e}\) samples of \((\mathbf{d}_{r},x_{r}[k],y_{r}[k])\) with the lowest cost from \(CostList\)
14   \(({}^{l+1}\boldsymbol{\mu}_{d},{}^{l+1}\boldsymbol{\Sigma}_{d})\) \(\leftarrow\) Update distribution based on the \(EliteSet\) via (9a)-(9b)
15 end for
16 return Frenet parameter \(\mathbf{d}_{r}\) and \((x_{r}[k],y_{r}[k])\) corresponding to the lowest cost in the \(EliteSet\)
```
**Algorithm 1** Sampling-Based MMD-Augmented Trajectory Optimization

## IV Validation and Benchmarking

This section aims to answer the following questions:
* **Q.1** How well does our approach work with the commonly occurring multi-modal predictions of obstacle trajectories vis-a-vis the state-of-the-art (SOTA)?
* **Q.2** How well does our reduced-set selection capture the probable samples from obstacle trajectories?

### _Implementation Details_

We implemented our reduced-set selection optimization (3) and Alg.1 in Python using JAX as our GPU-accelerated numerical algebra library. The hyperparameters of Alg.1 were \(\overline{n}_{cem}=1000,n_{cem}=150,n_{e}=50\). We used \(\gamma=0.9\) and \(\eta=0.6\) in (9a)-(9b). We used a Gaussian kernel with a bandwidth of 30 in (3), (7a).

Fig. 2: Figs. (a) and (b) present a multi-modal trajectory distribution that captures the uncertainty in the intent of a lane change and how it will be executed. In Fig.(a) the vehicle is more likely to perform the lane change, while the situation is the opposite in Fig.(b). For safe autonomous driving, it is imperative that the planner be capable of handling these multi-modal uncertainties. In this paper, we build on RKHS embedding and propose a sample-efficient method. A key component of our approach is selecting probable samples (magenta) from just sample-level information. We call this the reduced set. In Figs.(a) and (b) the reduced set is primarily concentrated around the most probable manoeuvres. Fig. (c) shows the reduced-set selection for trajectories predicted by Trajectron++ on the NuScenes dataset.

#### IV-A1 Baselines

We compare our approach based on optimal reduced-set selection and Alg.1, henceforth referred to as **MMD-Opt**, with the scenario approximation of [6]. This baseline augments a vanilla scenario approach with a reduced-set strategy. Essentially, it identifies obstacle trajectories that lead to \(f(x[k],y[k],x_{o,j}[k],y_{o,j}[k])\) being zero for some given (initial guess) ego-vehicle trajectory. The chosen obstacle trajectory (reduced-set) samples are then used in a deterministic trajectory optimization. For a fair comparison, we use our sampling-based optimizer (Alg.1) to plan with the reduced-set samples; we just replace our MMD cost with a deterministic collision cost.

#### IV-A2 Benchmarks and Metrics

To evaluate **Q.1**, we used two types of datasets. An example of the first kind is shown in Fig.3, where we hand-crafted a scene with uncertainty in the lane-change maneuver of the obstacle. That is, the obstacle can shift to different lanes in front of the ego-vehicle or can continue moving along its current lane. To model this behavior, we sampled different lane offsets and forward velocity set-points from a discrete Binomial and a Gaussian distribution, respectively. We then passed them onto a Frenet-frame planner [17]. We varied the probability assigned to each lateral offset and the initial position and velocity of the obstacle to construct 100 different scenes. The second dataset we use is based on Trajectron++ predictions on NuScenes [22] (recall Fig.2). Trajectron++ is also capable of producing multi-modal predictions. However, the number of scenes with clear multi-modality is limited. We evaluated a total of 1300 scenes in this dataset. In each scene, we had access to a reference centerline and a designated ego vehicle. The obstacle was chosen as any other agent in the scene. For both datasets, we sampled 100 trajectories in each scene and further chose 10 samples from them to form the reduced set. We recall that the reduced-set selection differs between ours and the baseline [6]. We further sampled 1000 novel samples of obstacle trajectories to validate the performance of our **MMD-Opt** and [6]. We call this set the **Validation Set**. We use the number of collision-free trajectories obtained on the **Validation Set** as the metric for comparing **MMD-Opt** and [6].
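The validation metric itself is simple to state in code; a sketch under our own naming conventions (the ellipse sizes are placeholders):

```python
import numpy as np

def num_collision_free(ego_xy, val_samples, a=4.0, b=2.0):
    """Counts how many held-out obstacle trajectory samples (here 1000)
    the planned ego trajectory avoids at every time step.
    ego_xy: (K, 2); val_samples: (n_val, K, 2)."""
    f = 1.0 - (ego_xy[None, :, 0] - val_samples[:, :, 0])**2 / a**2 \
            - (ego_xy[None, :, 1] - val_samples[:, :, 1])**2 / b**2
    return int(np.sum(np.all(f <= 0.0, axis=1)))
```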
### _Benchmarking our Reduced-Set Selection_

In this section, we evaluate the goodness of our reduced-set selection. Since we do not have any ground truth to compare against, we take an indirect, evidence-based approach. A good reduced set is one which leads to fewer collisions on the **Validation Set**. Our overall process was as follows. For each scene, we constructed several reduced sets by just randomly sub-sampling from the obstacle trajectory set. In other words, we performed some sort of (near) exhaustive search in the space of reduced sets. We then constructed the MMD (7a) using the \(\overline{\alpha}_{j}\) associated with each of these reduced sets (recall (4)) and subsequently solved Alg.1. The results are summarized in Fig.4(a). It shows the mean and standard deviation of the number of collision-free trajectories achieved with our optimal reduced-set selection and that with exhaustive search. As can be seen, our approach performs as well as an exhaustive search (\(93.25\%\) vs. \(93.13\%\)). It should be noted that in real-world scenarios, an exhaustive search is not possible and is done here solely for benchmarking. Fig.4(b)-(c) presents a fine-grained perspective in three different scenes. As can be seen, different choices for the reduced set offer wildly different performances. Moreover, it is also not possible to know beforehand the performance of a random reduced set unless we have solved Alg.1 with that choice. In contrast, our proposed solution based on the minimization of (3) offers a one-shot solution.

Fig. 3: Comparison of our approach and [6] on the synthetic multi-modal dataset. Fig.(a) shows a scene with uncertainty in the lane-change intent and its execution. Fig.(b) shows the reduced set selected following the criteria of [6] and the resulting optimal trajectory. As can be seen, the reduced set captured only less probable samples, which led to the collision with a novel sample from the validation set in Fig.(c). In contrast, our reduced-set optimization (3) selects samples primarily from the high-probability trajectory, although some samples from the low-probability regions are also selected. As a result, our MMD-based collision surrogate (7a) gets a correct estimate of the risk. Fig.(e) shows that the trajectory computed through our **MMD-Opt** avoids the novel sample that [6] collided with.

### _Handling Multi-Modal Obstacle Trajectories_

#### IV-C1 A Qualitative Result

Let us assume that in the scene shown in Fig.3, there is a \(5\%\) probability of the obstacle merging into the lane of the ego-vehicle and a \(95\%\) probability of it choosing the adjacent lane. Interestingly, as shown, the less probable obstacle trajectory samples are in direct conflict with the current trajectory of the ego vehicle. In contrast, the high-probability maneuver takes the obstacle away from the ego vehicle.
In other words, the less probable samples are at the boundary of the feasible set of the collision avoidance constraints. Thus, if we apply the reduced-set criteria from [6], the most conflicting but low-probability samples (golden lines in Fig.3(b)) will be selected. Unfortunately, once fed to the optimizer, the resulting solution will lead the ego vehicle directly into conflict with the more probable obstacle trajectory samples. The ensuing collision is documented in Fig.3(c). Our approach operates in a strikingly different manner, both in this particular example and in general. First, our reduced-set selection correctly identifies the high-probability samples (magenta, Fig.3(d)). In fact, a small number of reduced-set samples are also chosen from the less probable ones. Furthermore, our MMD optimization leads to the right set of manoeuvres for the ego-vehicle, validated for a novel random sample in Fig.3(e).

#### IV-C2 Quantitative Validation

We constructed a total of 100 scenes similar to Fig.3(a). For each of these scenes, we evaluated the computed optimal trajectory (**MMD-Opt** and [6]) on the novel set of 1000 obstacle trajectory samples. The statistics of the number of collision-free trajectories observed across scenes are presented in Fig.5. The mean number of collision-free trajectories obtained by our approach was 972, as compared to 906 obtained by [6]. Thus, our **MMD-Opt** achieved an improvement of around \(7\%\) on average. However, a deeper insight can be obtained by looking at the variance of the data. As shown in Fig.5, the numbers obtained by **MMD-Opt** are heavily concentrated in the region between \(950-1000\). In fact, our lower-quartile number (\(967\)) is almost equal to the upper-quartile number (\(972\)) obtained by [6]. In other words, our worst-case performance is equal to the best-case performance of [6]. Furthermore, our **MMD-Opt**'s best-case number is almost \(25\%\) higher than the worst-case performance of [6] (neglecting the outliers).

Fig.6 shows the statistics of collision-free trajectories on the NuScenes dataset with Trajectron++ as the multi-modal trajectory predictor. We see a similar trend as obtained before for the synthetic dataset: lower variance and heavy concentration around the upper bound. Moreover, across the 1300 evaluated scenes, there was only one instance where our **MMD-Opt** obtained zero collision-free trajectories. In contrast, for [6], the lower quartile is concentrated at zero.

### _Computation Time_

Table II shows the computation time required for our reduced-set optimization (3) and Alg.1 for different sample sizes on an RTX 3080 Laptop with 16 GB RAM. As shown, the timing of (3) is independent of the number of samples we want to extract for the reduced set. However, Alg.1's computation time increases depending on how many reduced-set samples are used to form the MMD (7a).

Fig. 4: Fig. (a): Collision-free trajectories on the **Validation Set** obtained with our optimal reduced-set selection vis-a-vis an exhaustive search over several random reduced-set possibilities. As can be seen, our approach performs as well as an exhaustive search but in a tiny fraction of the computation time. Figs.(b)-(c) present some specific scene examples (\(S_{1},S_{2},S_{3}\)) to showcase a fine-grained perspective. As can be seen, each different random choice for a reduced set offers wildly different performance.
The average over all these performances is equal to or lower than that obtained with the proposed optimal selection (blue bars in Fig.(b)).

Fig. 5: Comparison of our approach and [6] on the synthetic multi-modal dataset. We achieve more collision-free trajectories (out of 1000) more consistently than [6].

Fig. 6: Comparison of our approach and [6] on the NuScenes dataset with Trajectron++ as the trajectory predictor. The trend in the number of collision-free trajectories is similar to that obtained in Fig.5: lower variance and heavy concentration around the top values.

Importantly, the timings are low enough to be considered real-time.

## V Conclusions and Future Work

For the first time, we presented a trajectory optimizer that can efficiently handle multi-modal uncertainty, including that over the discrete intents (lane-change vs. lane-keeping) of the dynamic obstacles. We showed how RKHS embedding can provide insights into the more probable samples from the obstacle trajectory distribution. This proves critical while estimating the likely maneuvers of the obstacles. Second, the same embedding leads to a surrogate for the collision probability conditioned on the ego vehicle's trajectory. We proposed a sampling-based optimization for minimizing this collision surrogate while considering the typical kinematic constraints on the vehicle. We extensively compared against a very recent work [6] and showed that our approach outperforms it in safety metrics on both hand-crafted as well as real-world datasets.

Our work has certain limitations. The hyperparameters of the kernel function have a very strong effect on the overall performance of both the reduced-set selection and the minimization of the collision probability. One possible workaround is to use Bayesian optimization for tuning these parameters.

## VI Appendix

Let \(\textbf{d}=(y_{d},v_{d})\) be the lateral offset and desired velocity set-points. Then the Frenet planner [19] boils down to solving the following trajectory optimization:

\[\min\sum_{k}c_{s}+c_{l}+c_{v} \tag{10a}\]
\[(x^{(q)}[k_{0}],y^{(q)}[k_{0}],x^{(q)}[k_{f}],y^{(q)}[k_{f}])=\textbf{b} \tag{10b}\]

\[c_{s}(\ddot{x}[k],\ddot{y}[k])=\ddot{x}[k]^{2}+\ddot{y}[k]^{2} \tag{11a}\]
\[c_{l}(\ddot{y}[k],\dot{y}[k])=(\ddot{y}[k]-\kappa_{p}(y[k]-y_{d})-\kappa_{v}\dot{y}[k])^{2} \tag{11b}\]
\[c_{v}(\dot{x}[k],\ddot{x}[k])=(\ddot{x}[k]-\kappa_{p}(\dot{x}[k]-v_{d}))^{2} \tag{11c}\]

The first term \(c_{s}(.)\) in the cost function (10a) ensures smoothness in the planned trajectory by penalizing high accelerations at discrete time instants. The last two terms (\(c_{l}(.),c_{v}(.)\)) model the tracking of the lateral offset (\(y_{d}\)) and forward velocity (\(v_{d}\)) set-points, respectively, with gains \((\kappa_{p},\kappa_{v})\).
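A direct transcription of the per-step cost terms (11a)-(11c), exactly as printed above, could look as follows (our sketch; the gains, time step, and numerical differentiation are our choices):

```python
import numpy as np

def frenet_costs(x, y, y_d, v_d, dt=0.1, kp=2.0, kv=1.0):
    """Evaluates the Frenet planner objective (10a) for one candidate
    trajectory (x[k], y[k]), using the per-step terms (11a)-(11c)."""
    xd, yd = np.gradient(x, dt), np.gradient(y, dt)      # first derivatives
    xdd, ydd = np.gradient(xd, dt), np.gradient(yd, dt)  # second derivatives
    c_s = xdd**2 + ydd**2                           # smoothness, (11a)
    c_l = (ydd - kp * (y - y_d) - kv * yd)**2       # lateral tracking, (11b)
    c_v = (xdd - kp * (xd - v_d))**2                # velocity tracking, (11c)
    return float((c_s + c_l + c_v).sum())           # objective (10a)
```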
2302.08061
Response: Commentary: Is the moon there if nobody looks? Bell inequalities and physical reality
We reject unjustified criticism of our published article [2209.07992] by Gill and Lambare [arXiv:2211.02481, arXiv:2208.09930]. They completely misinterpret the content and conclusions of this article. They construct a counterfactual probabilistic model in which random variables representing outcomes of four experiments performed using incompatible experimental settings are jointly distributed. Thus, CHSH inequalities trivially hold for all finite samples generated by their model. Their model defines a probabilistic coupling for our model describing only the raw data from Bell tests. The existence of this coupling does not invalidate the derivation of the contextual probabilistic model describing the final data from Bell tests. Only these final data are used to test Bell inequalities. Inequalities cannot be derived because our model violates statistical independence. Our contextual model allows to explain in a local and causal way the violation of inequalities and the apparent violation of no-signaling reported in these experiments.
Marian Kupczynski
2023-02-16T03:52:27Z
http://arxiv.org/abs/2302.08061v1
# Response: "Commentary: Is the moon there if nobody looks? Bell inequalities and physical reality" ###### Abstract Bell's theorem, local realism, quantum entanglement, contextuality, Bell-CHSH inequality is the first to prove that the probability of a Bell inequality is a function of the joint probability of a Bell inequality. In this paper, we show that the probability of a Bell inequality is a function of the joint probability of a Bell inequality. ## 2 Locally causal description of Bell tests Statistical inference is based on finite experimental samples. Inequalities can be violated by pseudo-random samples generated using various probabilistic models, including local realistic models (see an excellent review by Larsson[11]). They are violated by experimental data in physics and cognitive science. The important questions we wanted to answer in [3] are 1. Can we explain the data from Bell tests without evoking quantum non-locality and quantum magic? 2. What metaphysical conclusions, if any, may be made if CHSH inequalities are violated in a given experiment? Raw data from Bell tests are obtained by converting two distant time-series of clicks into samples containing paired outcomes (a, b), with a = \(\pm 1\) or 0 and b = \(\pm 1\) or 0, coding clicks in some synchronized time windows. From raw data, final data are extracted with only non-vanishing pairs (a, b), and pairwise expectations of random variables may be described as conditional expectations [5, 9, 10]: \[E\big{(}A,B_{y}\big{|}A,\neq 0,B_{y}\neq 0\big{)}=\sum_{k\in A_{y}}A_{x}( \lambda_{1},\lambda_{x})B_{y}\big{(}\lambda_{2},\lambda_{x}\big{)}p,(\lambda, \lambda_{y})p(\lambda_{x})p(\lambda_{y})p(\lambda_{x})p(\lambda_{x})p(\lambda_{ x})p(\lambda_{x}), \tag{1}\] where \(\Lambda_{xy}=\Lambda_{12}\times\Lambda_{x}\times\Lambda_{y}\) and \(A_{xy}=\big{\{}\lambda e\Lambda_{xy}|A_{x}\big{\{}\lambda_{1},\lambda_{x} \big{\}}\neq 0\big{\}}\). It explains, in a locally causal way, the apparent violation of no-signaling reported in [12, 13, 14, 15, 16]: \[E\big{(}A_{x}[A_{x}B_{y}\neq 0\big{)}\neq E\big{(}A_{x}[A_{x}B_{y}\neq 0 \big{)};E\big{(}B_{y}[A_{x}B_{y}\neq 0\big{)}\neq E\big{(}B_{y}[A_{x}B_{y}\neq 0 \big{)}. \tag{2}\] A procedure for extracting non-vanishing paired outcomes is not unambiguous and is setting-dependent. Therefore, discussing the detection loophole is misleading. One should rather discuss the _photon identification loophole_[17, 18]. Because of an apparent violation of no-signaling (2), in the contextuality-by-default (CDD) approach of Dzhafarov and Kujala [19, 20, 21], final data from Bell tests are described by eight binary random variables (A\({}_{xy}\), B\({}_{xy}\), A\({}_{xy}\), B\({}_{xy}\), A\({}_{xy}\), B\({}_{xy}\)), instead of four variables, and pairwise expectations are evaluated using a new probabilistic model [9]: \[E\big{(}A_{xy}B_{xy}\big{)}=\sum_{k\in A_{xy}}A_{x}(\lambda_{1},\lambda_{x})B_ {y}(\lambda_{2},\lambda_{x})p_{xy}(\lambda_{x},\lambda_{y})p(\lambda_{1}, \lambda_{2}), \tag{3}\] where A\({}_{xy}=\pm 1\) and B\({}_{yy}=\pm 1\). It is clear that neither the GL probabilistic model nor Bell averaging over instrument variables may be used to prove CHSH inequalities for random experiments described by the probabilistic models (1,3). Correlations between distant outcomes in Bell tests, often called _non-local_, may be explained using models (1,3). The experimental protocol used in \((1,2)\) is consistent with the experimental protocol of Weihs et al. [22]. 
The Delft experiment [23] used a different experimental protocol, but the use of time windows and post-selection could not be avoided [5, 15, 16]. As we explained in [4], _"entanglement swapping"_ may also be understood without evoking quantum magic. Contrary to what Aspect claimed, namely that _"Mixing two photons on a beam splitter and detecting them in coincidence entangles the electron spins on the remote NV centers"_ [24], the observation of a particular coincidence signal gives only the information that "correlated signals" in distant laboratories were created and measurements were carried out in specific synchronized time slots [4].

## 3 Conclusion

There are no false mathematical claims and false assertions in our paper [3] _around which our work is built_. Signals arriving at measuring stations are described by setting-independent random variables, which are statistically dependent and causally independent. Measuring instruments are described by random variables which are setting-dependent [8, 10]. They are causally independent, but they may be statistically dependent (1, 3). We are not looking for an escape route for local realism. Hidden variables describing measuring instruments are explicitly incorporated in the models (1, 3). Thus, they do not suffer from a theoretical _contextual loophole_ [25, 26]. Setting-dependence of a hidden variable has nothing to do with the lack of free will and should be called _contextuality_ [8, 9, 10].

Metaphysical conclusions which may be drawn from the violation of inequalities in Bell tests are quite limited [3, 27]. The violation of inequalities does not prove the completeness of QM, which was the subject of the Bohr-Einstein quantum debates [4]. The contextual character of quantum observables and the active role played by measuring instruments were explained by Bohr many years ago. Speculations about _quantum non-locality_ are rooted in incorrect interpretations of QM and/or in incorrect "mental pictures" and models trying to provide a more detailed explanation of quantum phenomena [3, 28, 29, 30, 31, 32]. The violation of inequalities and the apparent violation of no-signaling in Bell tests may be explained in a locally causal way without evoking quantum magic. Nevertheless, the research stimulated by Bell-CHSH inequalities [33] and the beautiful experiments designed and performed to test them, rewarded recently with a Nobel Prize, paved the way for important applications of "non-local quantum correlations" in quantum information and quantum technologies.

## Author contributions

The author confirms being the sole contributor of this work and has approved it for publication.

## Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

## Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
2307.15188
High School Enrollment Choices -- Understanding the STEM Gender Gap
Students' high school decisions will always impact efforts to achieve gender parity in STEM at the university level and beyond. Without a comprehensive understanding of gendered disparities in high school course selection, it will be impossible to close completely the gender gap in many STEM disciplines. This study examines eleven years of detailed administrative data to determine gendered enrolment trends in university-stream secondary school STEM courses. Male and female enrolments for all publicly funded secondary schools across the province of Ontario (N = 844) were tracked from the 2007/08 academic year to 2017/18. The data reveal a clear trend of growing enrolment in STEM disciplines, with the increase in female students continuing their STEM education significantly outpacing males in almost all courses. However, these results also demonstrate the disparities that persist across STEM disciplines. The existing gender gap in physics remains large - in 2018, the median grade 12 physics class was only $36.5\pm0.05%$ female - with virtually no progress having been made to close this gap. By tracking individual student cohorts, we also demonstrate a newly discovered result showing the continuation rate of male students in biology stream courses has experienced a precipitous drop-off. The proportion of male students continuing from grade 10 science to grade 12 biology two years later has seen an average yearly decline of $-0.44\pm0.08$ percentage points, potentially foreshadowing the emergence of another significant gender gap in STEM. We suggest that researchers and educators cease treating STEM as a monolith when addressing gender disparities, as doing so obscures significant differences between disciplines. Future efforts, particularly those aimed to support women in STEM, must instead adopt a more targeted approach to ensure that they solve existing problems without creating new ones.
Eamonn Corrigan, Martin Williams, Mary A. Wells
2023-07-27T20:34:15Z
http://arxiv.org/abs/2307.15188v1
# High School Enrolment Choices -- Understanding the STEM Gender Gap

###### Abstract

**Background:** Students' high school decisions will always impact efforts to achieve gender parity in Science, Technology, Engineering, and Mathematics (STEM) at the university level and beyond. Without a comprehensive understanding of gendered disparities in high school course selection, it will be impossible to close completely the gender gap in many STEM disciplines.

**Results:** This study examines eleven years of detailed administrative data to determine gendered enrolment trends in university-stream secondary school STEM courses. Male and female enrolments for all publicly funded secondary schools across the province of Ontario (\(N\approx 844\)) were tracked from the 2007/08 academic year to 2017/18. The data reveal a clear trend of growing enrolment in STEM disciplines, with the increase in female students continuing their STEM education significantly outpacing males in almost all courses. However, these results also demonstrate the significant disparities that persist across STEM disciplines. The existing gender gap in physics remains large - in 2018, the median grade 12 physics class was only \(36.5\pm 0.05\%\) female - with virtually no progress having been made to close this gap. By tracking individual student cohorts, we also demonstrate a newly discovered result showing the continuation rate of male students in biology stream courses has experienced a precipitous drop-off. The proportion of male students continuing from grade 10 science to grade 12 biology two years later has seen an average yearly decline of \(-0.44\pm 0.08\) percentage points, potentially foreshadowing the emergence of another significant gender gap in STEM.

**Conclusions:** We suggest that researchers and educators cease treating STEM as a monolith when addressing gender disparities, as doing so obscures significant differences between disciplines. Future efforts, particularly those aimed to support women in STEM, must instead adopt a more targeted approach to ensure that they solve existing problems without creating new ones.

Keywords: STEM Education, Gender Gap in STEM, Women in Physics, Physics Education, Biology Education, High School Enrolment, Longitudinal Analysis, Enrolment Trends

## I Introduction

Excellence in Science, Technology, Engineering, and Mathematics (STEM) is built on a diversity of ideas and people. Women have historically been underrepresented in STEM fields, and increasing gender diversity is more than a moral imperative; it is essential to maximize innovation, creativity, and competitiveness in Ontario and across Canada. The research is clear: heterogeneous groups are better at problem-solving than groups lacking diversity [1], while organizations with diverse work-forces are more economically productive than those without [2]. Despite decades of focused effort by multiple agencies to promote women in STEM (for a list of some initiatives in Canada, see Canada STEM), we continue to see a significant under-representation of women enrolled in many university STEM programs. Progress to improve outcomes at the undergraduate level is significantly constrained by enrolments in grade 12 high school STEM courses; to apply and gain entry to almost any undergraduate STEM program across Canada, students must complete several grade 12 STEM courses that satisfy mandatory admission prerequisites.
For example, almost all engineering programs across Canada require having completed grade 12 Physics and Chemistry, as well as grade 12 Functions and Calculus. If we hope to disrupt our future talent pool and ensure an increased representation of women, we need to understand better the historical enrolment patterns in high school science courses across Ontario and how these differ between genders.

### Gender Disparities in STEM

For decades, the gender disparity in STEM, which has historically led to the underrepresentation of women, has been an area of extensive research and intervention (see, e.g., [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]). Currently, we have a broader and somewhat more complete grasp of this issue and its root causes. As a result, tremendous progress has been made toward achieving gender parity in STEM as a whole. In 2020/21, Ontario universities reported that over 48% of STEM majors were female [17], up from 43.6% in 2010 [18], but growth has not been uniform across all STEM disciplines. Since the mid-1990s, female enrolment in STEM undergraduate programs like engineering and physics has plateaued at around 20% [19]. This difference is even more pronounced in the workplace, where only 13% of licensed Canadian engineers are female [20]. In Ontario, the Canadian province with the largest number of licensed professional engineers, women make up 12% of licensed professional engineers [21]. Similarly, a volunteer-based demographic survey undertaken by the Canadian Association of Physicists found that only 35% of physics faculty did not identify as men [22]. However, due to limitations of the survey's snowball distribution method, these results almost certainly overestimate this proportion. In contrast, the 2015 ratio of female to male students majoring in the physical/life sciences was 1.3:1 [15], similar to the ratio found in Canadian medical schools, where 63% of students are women [23]. In general, the proportion of Canadian undergraduate women who pursue a degree in STEM is now higher compared with the proportion of undergraduate men, but only for "less math-intensive STEM fields" or health sciences [24]. In mathematics and computer science, the percentage of female undergraduates is below 30% [15; 25]. These trends are mirrored within STEM subdisciplines. In undergraduate engineering programs such as biomedical or bioresources engineering and environmental engineering, the participation of men and women is close to parity [26]. In contrast, fewer than 20% of students in mechanical, electrical, and computer engineering - disciplines that are viewed as being more closely related to physics - are women [26]. Overall, the increase in women's participation in STEM fields has not been uniform.

### Causes of Underrepresentation

There are a multitude of factors that contribute to unequal participation between genders. In a 2017 review, Cheryan et al. examined the most common explanations presented in the literature to evaluate the available evidence and understand the gender gaps between the STEM disciplines [13]. They identify three broad categories supported by research: insufficient early experience; gender gaps in self-efficacy; and the masculine culture of some STEM fields, i.e., stereotypes about who participates in a field, assumptions about a woman's ability to succeed, as well as the lack of effective female role models [13]. These stereotypes regarding women and STEM develop relatively early in childhood.
In one study, teachers were shown to view fictional 8-year-old students as less academically capable in physics and taught them less scientific material when they thought the student was a girl [27]. Stereotypes about brilliance and intellectual ability have also been shown to appear in children as young as six [12]. A perceived necessity for brilliance to succeed in a given academic field has subsequently been shown to promote masculine cultures within those fields [28], reinforcing these gendered stereotypes about who belongs in STEM. Gendered attitudes and preconceptions of STEM fields can also have a significant effect. For instance, women who interact with a computer science major who conforms to previously held stereotypes (e.g., nerdy, socially awkward) are less likely to want to pursue a major in computer science [29]. Women, compared with men, have also been shown to be more interested in people-oriented vs. thing-oriented occupations [30], a disposition which helps explain the participation gaps in the social or life sciences compared with engineering and physics [31; 32]. Similarly, a study of high school students in the United Kingdom found female students, on average, lack interest in technical details while having the desire to make a positive world impact with their work; both these attributes were negatively correlated with the students' intentions to continue pursuing physics [33] while potentially explaining women's growing engagement with the biological or life sciences.

### The Intent of this Work

This paper aims to address two gaps in the existing literature. First, most research to date treats STEM as one homogeneous discipline or only focuses on a single subject. Typically, data describing the state of women in STEM across Canada combine multiple subjects into broad categories such as "Physical and Life Sciences" or "Science and Technology" [15; 34]. As outlined above, these groupings conceal important differences between STEM disciplines. Consequently, this research examines trends in the various high school science disciplines and mathematics independently. Most work to date has also focused on post-secondary education, e.g., [32; 35; 36; 37], or the labour force, e.g., [20; 38; 21]. A study of 932 undergraduate physics students found that high school was the most influential time for female students choosing to pursue a degree in physics, even among those who originally had no intention of studying STEM [39]. Other research examining a cohort of Ontario students transitioning from high school to university found that students' STEM readiness, i.e., having taken the necessary STEM courses in high school, accounted for 84% of the gender gap seen in undergraduate STEM programs [40]. To effectively address this issue on a broader scale, it is paramount to develop a better understanding of gendered enrolment patterns in high school. For this work, we have conducted a case study on gendered STEM enrolment in Ontario secondary schools. Ontario was chosen for its large population size (15 million inhabitants; 38% of Canada's population) and its diverse population as measured by various demographics, such as racial and ethnic composition, socioeconomic status, and a mix of urban and rural geographies, among others [41; 42; 43; 44]. There is a growing body of research that emphasizes the importance of adopting an intersectional perspective when examining gender gaps in STEM, including the interactions of race or socioeconomic status.
For example, research on secondary school students in the US found that African-American women hold weaker gendered stereotypes about STEM participation compared to their European counterparts, with a much smaller disparity observed between men [45]. Moreover, a longitudinal study looking at the academic outcomes of eighth-grade students found larger racial and gender disparities among students from higher socioeconomic backgrounds compared to those within lower socioeconomic status groups [46]. This finding underscores how economic factors can further exacerbate gendered or race-based stereotypes. Finally, an intersectional examination of participation in Advanced Placement Physics courses in high school reported the largest gender gap in participation for women from traditionally underrepresented ethnic minorities [47]. While our study did not allow for a direct exploration of these nuanced intersectional effects, our decision to focus on Ontario aims to identify broader, general trends across a diverse range of school populations. Through this work, we can begin to better understand gendered differences in high school STEM across Canada, laying the groundwork for more detailed intersectional analysis in the future.

## II Methods

### Ontario Ministry of Education Data

We obtained the data used in this work from the Ontario Ministry of Education (OME). The dataset includes the total number of male and female [48] students who enrolled in each university-track science and math course from grade 10 to grade 12, for every public secondary school in the province. The exact courses included are shown in Fig. 1. The data span eleven academic years, from 2007/08 to 2017/18. For this date range, between 92.9-94.5% of students in Ontario attended a publicly funded school, while almost all other students attended private institutions [49]. Analysis by Statistics Canada found higher levels of academic achievement for private school students, including increased scores on standardized tests and a higher likelihood to obtain a post-secondary degree. These differences, however, were no longer significant once controlling for socioeconomic characteristics and peer effects, finding no fundamental difference in the students themselves [50]. Thus, this sample of only public schools is likely representative of the province as a whole. To protect student confidentiality, the OME chose to suppress all enrolment counts with fewer than 10 students. Unless otherwise stated, schools with suppressed data have been removed from all analyses in this paper. This research project was reviewed by the University of Guelph Research Ethics Board, which deemed the project did not need full ethics approval as we were only seeking to collect aggregate, not individual, data for analysis (REB #19-06-015). The research department at the OME subsequently reviewed the project, coming to the same conclusion.

### Statistical Analysis Methods

To characterize gendered enrolment trends in senior-level STEM courses, four different analyses were conducted. These included 1) calculating total enrolment in all senior STEM courses across the province and measuring the change in enrolment over time, 2) calculating the median female participation rate for each STEM course over time, 3) calculating the average Student Continuation Rate (\(SCR\)) for male and female students, and 4) examining how \(SCR\) for male and female students has been changing over time.
The details for each method are explained below.

#### II.B.1 Total Enrolment in Science Courses

The total number of male/female high school students enrolled in STEM courses across Ontario was calculated for each of the eleven years of available data, 2008-2018. To account for suppressed courses with fewer than 10 students in a given year, several different imputation methods were evaluated using a similar data set used in previous work [19]. This second data set, which contained overlapping years from 2007/08-2014/15, reported total enrolment rates for entire school boards instead of individual schools. As each board is much larger and was unlikely to have a total enrolment below ten, almost no data were suppressed in this second data set, allowing us to evaluate which imputation technique provided the most accurate estimates. It was determined that imputing a value of 9 for each suppressed school most closely matched the enrolment counts obtained from board-level data. Since courses with extremely low enrolment rates are unlikely to be offered due to limited resources, it is reasonable to assume that the true value of the majority of suppressed cells is significantly closer to ten than zero. Figure 2 plots the total enrolments of male and female students in all STEM courses between grades 10 and 12 using this imputation method to fill in the missing data from suppressed schools. To quantify the average change in total enrolment over time, a simple linear regression to model total enrolment (male and female combined) was calculated using time as a predictor variable. Figure 2 includes the resulting regression lines and estimates for the average annual change in enrolment.

#### II.B.2 Median Female Enrolment Over Time

The percentage of students who were female and enrolled in each STEM course at each school from grade 10 to grade 12 was calculated over the eleven years for which data were available. The median over all schools was then computed for each year to plot the median female proportion in all senior STEM courses vs. time (Fig. 3). In addition to the visual representation, we sought to quantify the change in median female participation over time. A weighted least squares linear regression was performed comparing the median female proportion versus time for each course. The weights used were \(1/SD^{2}\), where \(SD\) is the sample standard deviation of the median female proportion.

#### II.B.3 Average Student Continuation Rates

To track the progression of students through the high school STEM courses, each school's enrolment data was grouped into three-year cohorts. Then, these linked cohorts were used to calculate _Student Continuation Rates_ (\(SCR\)). For science courses, \(SCR\) is defined as the number of male and female students enrolled in each of the grade 11 courses divided by the number of male and female students who were enrolled in grade 10 science the previous year. Grade 12 courses were calculated the same way but using grade 10 enrolment from two years prior. In both cases, grade 10 science was used as the reference category as it is the highest-level science course required for graduation. For mathematics courses, \(SCR\) for grade 12 Functions and grade 12 Calculus were defined as the ratio of male and female students enrolled in each course divided by the enrolment in grade 11 Functions from one year prior. This was selected as the reference since grade 11 is the highest-level math course required for graduation.
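A minimal sketch of this cohort-linking computation, assuming a tidy enrolment table with hypothetical columns school, year, course, sex, and count (the column and course labels below are ours, not the OME's):

```python
import pandas as pd

def continuation_rate(df, base_course, base_year, target_course, lag):
    """SCR per school and sex: enrolment in target_course at
    base_year + lag divided by enrolment in base_course at base_year."""
    base = df[(df.course == base_course) & (df.year == base_year)]
    targ = df[(df.course == target_course) & (df.year == base_year + lag)]
    merged = base.merge(targ, on=["school", "sex"],
                        suffixes=("_base", "_targ"))
    merged["scr"] = merged["count_targ"] / merged["count_base"]
    return merged[["school", "sex", "scr"]]

# e.g., grade 10 science (2010) -> grade 12 biology (2012), a lag of 2:
# scr = continuation_rate(enrol, "grade 10 science", 2010,
#                         "grade 12 biology", 2)
```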
It should be noted that this method does not perfectly describe student cohorts - some students will move between schools and others may not take their courses in a linear progression - but it should still provide a good approximation for the vast majority of students. To understand the continuation or attrition of students between the last mandatory STEM course and grade 12, \(SCR\) was calculated for each of the four STEM streams for both male and female students. This was then averaged across all years of data to quantify the average loss of potential students during this pivotal period.

#### ii.2.4 Changes in Male and Female Student Continuation Rates

The previous three analyses helped to characterize how male and female enrolments have or have not been changing over time, but do not explain the underlying mechanisms driving these trends. For example, the median proportion of female students could be increasing because female participation is rising, male participation is falling, etc. To determine the mechanism(s) underlying these trends, the average yearly change in \(SCR\) (\(\Delta SCR\)) for different courses was measured using a mixed effects linear regression model. The model includes fixed effects variables for year (\(X_{1}\)) and student sex (\(X_{2}\)), as well as an interaction term between student sex and year (\(X_{1}X_{2}\)). A random effect variable which accounts for fluctuations in \(\Delta SCR\) between schools was also included. The resulting model has the following form:

\[Y_{i}=\beta_{0}+\beta_{1}X_{1}+\beta_{2}X_{2}+\beta_{3}X_{1}X_{2}+S_{i}X_{1}+\epsilon. \tag{1}\]

Here, \(Y_{i}\) is the mean Student Continuation Rate in one of the senior STEM courses at school \(i\). The fixed effect regression estimates for year and student sex are represented by \(\beta_{1}\) and \(\beta_{2}\) respectively, while the interaction term regression estimate is \(\beta_{3}\). The random effect for school ID is represented by \(S_{i}\), i.e., the average variation in \(\Delta SCR\) from the provincial average for school \(i\). Finally, \(\beta_{0}\) is the estimated intercept.

Figure 1: Course pathway for Ontario schools showing all university-track, senior STEM courses included in this analysis. The top row is grade 10, the middle grade 11, and the bottom grade 12. Courses in grey represent the highest-level course required for graduation. All other courses are colour coded by stream, with blue for physics, green for biology, red for chemistry, and yellow for mathematics. Black arrows indicate a prerequisite transition and a greyed arrow with a dotted outline indicates a course transition which can be taken as either a pre-requisite or co-requisite.

For this analysis, we are only interested in measuring \(\Delta SCR\) for both male and female students and in whether there is a statistically significant difference in \(\Delta SCR\) between the sexes, i.e., the slope estimates \(\beta_{1}\) and \(\beta_{3}\) and their \(p\)-values. The average difference in \(SCR\) between the sexes, while included in the model, was not the main focus, as this was previously examined by calculating average \(SCR\). A \(t\)-test was performed to determine an associated \(p\)-value for \(\beta_{1}\) and \(\beta_{3}\). The model was calculated twice for this analysis, once with the coding (\(X_{2}:=\{male=1,female=0\}\)) and once with the opposite.
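A hedged sketch of how Eq. (1) could be fit with statsmodels is shown below; the column names (scr, year, sex, school) are illustrative assumptions rather than the study's actual variable names, and the random-effects specification encodes the per-school slope \(S_{i}X_{1}\) with no random intercept.

```python
import statsmodels.formula.api as smf

def fit_delta_scr(df, male_coded_one: bool):
    d = df.copy()
    # Coding {male=1, female=0} (or the reverse), so that beta_1, the
    # `year` main effect, is Delta-SCR for whichever sex is coded 0.
    d["sex_code"] = (d["sex"] == ("M" if male_coded_one else "F")).astype(int)
    model = smf.mixedlm(
        "scr ~ year * sex_code",  # beta_0 + beta_1*year + beta_2*sex + beta_3*year*sex
        data=d,
        groups=d["school"],       # one random effect per school
        re_formula="0 + year",    # random slope on year (S_i * X_1), no random intercept
    )
    return model.fit(reml=True)

res_a = fit_delta_scr(df, male_coded_one=True)   # beta_1 reads as female Delta-SCR
res_b = fit_delta_scr(df, male_coded_one=False)  # beta_1 reads as male Delta-SCR
```

Fitting twice with the coding flipped changes nothing mathematically, but lets the year coefficient be read off directly, with its own \(t\)-test, for each sex in turn.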
Mathematically, the two codings are equivalent - the regression estimates do not change except for switches in sign - but this enabled us to run a \(t\)-test on the regression estimate \(\beta_{1}\) when it represented \(\Delta SCR\) for male and female students in turn. This regression model was run for all senior STEM courses in our dataset.

## III Results

### Total Enrolment in Science Courses

Total enrolment for all senior STEM courses in our dataset, along with the regression estimates and associated standard errors quantifying the average yearly change in enrolment, is shown in Fig. 2. The results of our model show a decreasing trend in the total number of students enrolled in grade 10 Science (\(\beta=-885\pm 167\), \(p=0.00049\)), which is likely linked to the known decline in total enrolment across Ontario secondary schools [51, 52]. Despite this overall drop in enrolment, most courses in Ontario were found to have either stable or increasing total enrolment, implying a growing interest in STEM. The sole exception is 11U Biology, the only course other than grade 10 science which has seen a large yearly decrease in total student enrolment. Fig. 2 also demonstrates a stark contrast between the total enrolment rates in physics stream courses compared with all other STEM disciplines. Both grade 11 and grade 12 physics have the lowest total enrolment in their respective grades, severely limiting the undergraduate talent pool for physics and engineering. Total enrolment in 12U Calculus is comparable to enrolment in 12U Biology and Chemistry, challenging the notion that students' inability to perform advanced mathematics is a substantial deterrent to enrolling in physics. In 12U Calculus, the ratio of male to female students also appears significantly more balanced than in 12U Physics, which is largely male-dominated. In contrast, there appears to be a greater proportion of female students in 12U Biology than male students.

### Median Female Enrolment Over Time

The calculated values of median female proportion over time are plotted in Fig. 3. Note, grade 10 Science and Grade 11 Functions are left off Fig. 3 for readability. For all eleven years, grade 10 Science had a median proportion of female students around 52.5% (min: \(52.3\pm 0.2\%\); max: \(52.7\pm 0.3\%\)) while Grade 11 Functions was just over 50% (min: \(50.0\pm 0.3\%\); max: \(51.2\pm 0.3\%\)). The results presented in Fig. 3 show that Grade 11 and 12 Chemistry both have median female proportions just above 50%, consistent with the female proportion seen in Grade 10 Science. The two mathematics courses are the next closest to parity. Grade 12U Advanced Functions is approximately 47% female (min: \(46.4\pm 0.3\%\); max: \(48.3\pm 0.3\%\)) while 12U Calculus and Vectors sits around 44% (min: \(43.5\pm 0.4\%\); max: \(45.5\pm 0.4\%\)). The participation rate of female students in these courses is lower than that of male students, but the gap is comparatively small. This once again contradicts the argument that the gender gap in physics is primarily linked to ability in advanced mathematics. In contrast to chemistry and mathematics, enrolment disparities between male and female students are notably pronounced in biology and physics. The former is predominantly female while the latter is predominantly male, and the disparity is of roughly equal magnitude in both subjects. For example, in 2017/18, grade 11 Physics had a median male proportion of \(58.3\pm 0.4\%\) while grade 11 Biology was \(63.5\pm 0.3\%\) female.
In the same year, grade 12U Physics was \(63.6\pm 0.5\%\) male and grade 12U Biology was \(66.7\pm 0.4\%\) female. Visually, the median female proportions also appear fairly stable over the eleven years of available data. Except for 12U Biology and 12U Chemistry, most courses appear to show only small changes in the proportion of female students relative to the size of the standard errors. The results of our weighted least squares regressions, which quantify the change in median female proportion over time, are presented in Table 1. Grade 10 Science and Grade 11 Functions are the only courses found to have no statistically significant change in median female proportion. Since both courses fulfill graduation requirements, it is not surprising that these courses appear more resistant to change. For grade 11 Functions, it should also be noted that the \(p\)-value is equal to only 0.09. While the evidence for a non-zero change in the median female proportion is not as strong as in the other STEM courses, it would be a mistake to claim with certainty there has been no change over time. Of the remaining STEM courses, all show an _increase_ in the median proportion of female students. The effect size, however, varies widely between STEM courses. Both grade 12 mathematics courses show evidence of a slow increase in the median female participation rate, with regression estimates around 0.15. This corresponds to a one percentage point increase in the median female proportion every 6-7 years. Grade 11 Chemistry, Grade 11 Physics, and Grade 12 Physics also show comparably small effect sizes (all regression estimates \(\leq 0.2\)). The slow rate of growth for 12U Physics stands in stark contrast to the other 12U science courses.

\begin{table} \begin{tabular}{l c c c} Course & Estimate & Standard Error & \(p\)-value \\ \hline 10D Science & \(-\)0.013 & 0.013 & .339 \\ 11U Biology & \(+\)0.272 & 0.054 & \(<\).001 \\ 12U Biology & \(+\)0.594 & 0.090 & \(<\).001 \\ 11U Chemistry & \(+\)0.141 & 0.036 & .004 \\ 12U Chemistry & \(+\)0.352 & 0.045 & .001 \\ 11U Physics & \(+\)0.203 & 0.041 & \(<\).001 \\ 12U Physics & \(+\)0.157 & 0.054 & .017 \\ 11U Functions & \(+\)0.086 & 0.046 & .093 \\ 12U Functions & \(+\)0.153 & 0.032 & .001 \\ 12U Calculus & \(+\)0.140 & 0.046 & .014 \\ \end{tabular} \end{table} Table 1: Linear regression estimates for the change in median female proportion over time.

Figure 2: Total enrolment in university-track STEM courses from grade 10 through grade 12 for all secondary schools across Ontario. Sub-figure a. shows grade 10 and grade 11 courses while sub-figure b. shows grade 12 courses. The lower, red, solid-coloured bars represent the total enrolment of female students while the upper, green, patterned bars represent the total enrolment of male students. The black lines depict the slope estimate from a simple linear regression of total enrolment vs. time. Solid lines indicate the regression estimates were statistically significant (\(p<0.05\)), while dashed lines indicate they were not. The text above each line shows the regression estimate for the average yearly change in total enrolment \(\pm\) the standard error. Sub-figures a and b were plotted so that the scale of a. is exactly twice the scale of b., to ease comparison while still selecting scales which are most appropriate for each.
The average yearly change in median female participation for 12U Biology is 3.8x greater than for 12U Physics; for 12U Chemistry the average yearly change is 2.2x greater. This means both chemistry and biology, which already have median female proportions above 50%, are continuing to widen this gap - particularly biology. In addition, while there is evidence for a statistically significant increase in the median female proportion of 12U Physics, the overall effect has been minimal. Extrapolating from the 2018 level of \(36.4\pm 0.5\%\) and assuming a constant annual change of 0.16 percentage points, it would take 85 years - until 2103 - for grade 12U Physics to reach a median female proportion of 50%.

### Average Student Continuation Rates

Total enrolments and median female participation rates can only describe the students who _do_ enrol in senior STEM courses - a minority of the student population. Fig. 4 plots male and female students' average \(SCR\) for all four STEM streams, quantifying the levels of attrition from mandatory STEM courses through to the end of high school. The results clearly show that mathematics courses have higher continuation rates than any of the sciences, especially Grade 12U Advanced Functions. More than 77% of female students and 85% of male students who take functions in grade 11 appear to continue to 12U Functions the following year. This is likely due to the ubiquity of 12U Functions as an admission requirement to numerous undergraduate programs. Mathematics courses, as well as chemistry, also have the smallest male/female gaps in \(SCR\), matching the median female proportions (Fig. 3). Biology and physics possess the largest gender gaps in \(SCR\). Almost twice as many male students do not continue from grade 10 Science to grade 11 Biology compared with female students - a 23 percentage point gap in \(SCR\). A larger proportion of the male students who do take grade 11 Biology appear to remain until grade 12, as the gap in \(SCR\) between male and female students narrows to 16 percentage points for grade 12. Male and female students in physics show comparable differences in \(SCR\); the gap in \(SCR\) is 17 percentage points in grade 11 and 15 percentage points in grade 12. In addition, physics has the greatest overall loss of potential students compared with any of the other three STEM disciplines. The data show that 66% of male students and 81% of female students in grade 10 science will, on average, not continue to grade 12 physics. The closest comparable loss is the 71% student attrition rate for male students continuing to grade 12 Biology.

Figure 3: The median proportion of female students enrolled in science courses across all Ontario secondary schools. The date on the horizontal axis represents the second half of each academic year, e.g., 2008 is the academic year spanning from September 2007 until June 2008. Colour indicates the STEM stream: green for Biology, red for Chemistry, blue for Physics, and yellow for Mathematics. Shape and shade indicate the level of each course relative to the last mandatory STEM course. Light-coloured circles represent the first post-mandatory course (11U for the science courses and 12U Functions for Mathematics); dark-coloured triangles represent the second post-mandatory course (grade 12U for the science courses and 12U Calculus for Mathematics). The error bars shown depict standard error.
### Changes in Male and Female Student Continuation Rates

The most important results of the mixed effects linear regression model defined in Eq. (1) are plotted in Fig. 5, i.e., \(\Delta SCR\) for male and female students and the gap in \(\Delta SCR\) between the sexes. A table with the full regression results from this analysis is available in the appendix (Table 2). These data show significant growth in \(SCR\) for female students in almost all STEM courses (\(p<0.05\); Table 2). The sole exception was grade 12 Functions (\(\beta_{1}=0.14\pm 0.08;p=0.066\)). While the \(t\)-test comparing the slope associated with \(\Delta SCR\) for grade 12 Functions to zero did not have a \(p\)-value smaller than the widely used cut-off value of 0.05, there is still some evidence to suggest \(SCR\) for female students increased in this course as well (\(p=0.066\)). These data show that the growth in the median proportion of female students taking STEM courses (Table 1) is primarily attributable to an increase in the \(SCR\) for female students. The rate of growth in the \(SCR\) of female students has also been quite substantial. For example, the regression estimates of \(\Delta SCR\) for female students in grades 11 and 12 Chemistry were found to be \(\Delta SCR_{11}=1.11\pm 0.06\) percentage points per year and \(\Delta SCR_{12}=1.04\pm 0.08\) percentage points per year, respectively. For context, since \(\sim 50,000\) female students take grade 10 Science every year, a \(\Delta SCR\cong 1\) amounts to an extra five hundred female students taking chemistry, compounding annually. For male students, there has been similar, though smaller, yearly growth in \(SCR\) for all STEM courses but three. 12U Functions (\(\beta_{1}=-0.19\pm 0.08;p=0.015\)) as well as 11U (\(\beta_{1}=-0.34\pm 0.06;p=3.36\times 10^{-8}\)) and 12U Biology (\(\beta_{1}=-0.44\pm 0.08;p=1.79\times 10^{-7}\)) have all seen an average decline in male continuation over the eleven years of available data. When comparing between sexes, female students were found to have greater estimates for \(\Delta SCR\) than male students in almost all STEM courses. In only one subject, 12U Physics, male students had a greater regression estimate for \(\Delta SCR\), but the interaction term was not statistically significant (\(p=0.39\)); thus, there is insufficient evidence to conclude that there is a genuine difference between the sexes. Similarly, the interaction term for 12U Calculus was not statistically significant (\(p=0.33\)), while all other courses had non-zero differences in \(\Delta SCR\) between the sexes. The magnitude of the gap in \(\Delta SCR\) between the sexes also varies widely between the different STEM courses, with the largest gap appearing in biology stream courses.

Figure 4: Student Continuation Rate, averaged over all schools in Ontario across all eleven years of available data. Student Continuation Rate tracks student cohorts year to year in each school. Course colour coding is consistent with Fig. 3, where sub-figure a. (blue) shows Physics stream courses, b. (green) shows Biology stream courses, c. (red) represents Chemistry stream courses, and d. (yellow) depicts Mathematics courses. For all panels, the dark-coloured, patterned columns represent male students while the light-coloured, solid columns represent female students. Arrows indicate the student attrition rate, the converse of the student continuation rate, i.e., how many students are lost, on average, since the last mandatory STEM course.
For example, for 12U Biology, the interaction term which captures the average difference in \(\Delta SCR\) between male and female students was \(-1.19\pm 0.09\) percentage points per year. To get a sense of scale for these estimates, we can extrapolate by assuming 50,000 male and female students took grade 10 science every year. Over the eleven years of data, assuming constant \(\Delta SCR\) as estimated from the model, 29,040 fewer male students and 49,500 more female students took 12U Biology between 2007 and 2018 compared with a hypothetical scenario where \(\Delta SCR\) was equal to zero for both. This explains the rapidly growing median female proportion seen in Fig. 3. The growth in the median percentage of female students is driven from both directions, as more female students are continuing their biology education while more male students are choosing not to continue. In contrast, the largest positive estimates for \(\Delta SCR\) found for male students appear in the chemistry and physics stream courses, STEM subjects already known to have increased male participation in post-secondary education and beyond [13].

Figure 5: Barbell plot of regression estimates modelling the average change in male/female \(SCR\) per year, controlling for the random effect of individual schools and including sex as an interaction term. Red circles indicate female students and green diamonds male students. Filled shapes indicate the regression estimate was found to have a statistically significant difference from zero (\(p<0.05\)); unfilled shapes indicate the estimate was not found to be statistically different from zero. The connecting lines between male/female students are coloured and solid when the interaction term showed a statistically significant difference in \(\Delta SCR\) between male and female students (\(p<0.05\)); points not connected with lines were found to have a difference in \(\Delta SCR\) which was not statistically different from zero.

## IV Discussion

### Growing Enrolment in STEM

In this study we used enrolment data from the Ontario Ministry of Education, separated by student sex, to quantify differences between male and female students in university-track, high school STEM courses from 2007-2018. We found clear evidence for a general trend of growing engagement in STEM for both male and female students. Despite an overall decline in the province's student body, total enrolment in nearly all senior STEM courses remained flat or increased. The continuation rate of students from STEM courses which are mandatory for graduation to optional, senior-level courses has also increased in almost all cases. For the eleven years of available data, the results are clear: a larger share of high school students are choosing to study STEM subjects at a senior secondary school level now compared with a decade ago. This mirrors the trend seen in Canada's post-secondary institutions. From 2010 to 2019, the total number of students studying STEM increased by over 40%, outpacing the increase in total post-secondary enrolment [53]. There are many possible explanations for this growth in high school STEM enrolment. As the labour force continues to specialize and advance, STEM training is increasingly required to remain competitive in the job market [53]. As students, teachers, and families become more aware of this growing demand, it is logical that this would influence students' course decisions. Students represented in this dataset have also come of age during or after the 2008 recession.
Previous work has shown a strong positive correlation between students growing up during periods of economic uncertainty and selecting majors with high potential earnings like STEM [54]. Both the provincial and federal governments have also implemented policies which implicitly and explicitly further this push for STEM education [55, 56].

### Uneven Growth in Male and Female Enrolment

While high school student enrolment in STEM has increased overall, a closer look at the data reveals considerable variations between fields and between the sexes. At the same grade level, overall enrolment in physics courses is at least 30 percent lower than in biology, chemistry, and mathematics courses. This arises from the extremely low continuation rates of high school students from grade 10 Science: 66% of male students and 81% of female students are lost by grade 12 physics. As grade 12 physics is a prerequisite for nearly all university physics and engineering programs in Canada, this represents the greatest loss of potential talent throughout the education pipeline. Physics also has one of the greatest gender gaps among high school STEM courses. In 2018, the median percentage of female students enrolled in grade 12 Physics courses was only \(36.4\pm 0.5\%\). While there is statistically significant evidence for a modest increase in the proportion of female students enrolled in grade 12 physics as well as an increase in the percentage of female students continuing from grade 10 science to grade 12 physics, both of these effects are minimal. As discussed in the results section, it would take 85 years to close the gender gap in 12U Physics at the current rate of change. These statistics mirror those found in other settings. The American Physical Society reported that from 2016-2020, only 22% of undergraduate degrees in physics or engineering went to women [57]. In Ireland, an analysis of the secondary to post-secondary transition found 24% of male students but only 7% of female students studied physics as part of their final Leaving Certificates [58]. The gap in physics participation between male and female students is a well-known and regularly discussed problem within the existing literature. However, we have been unable to find any direct mention of the opposite problem which we discovered for biology stream courses, though data do indicate similar imbalances elsewhere. In the US, 63% of bachelor's degrees in biology were earned by women [57], while the previously discussed Irish dataset found only 54% of male students but 78% of female students completed Biology at the end of secondary school [58]. We found that in 2018, 12U Biology had a median female proportion of \(66.7\pm 0.4\%\), a female majority three percentage points greater than the male majority seen in 12U Physics. And although physics is narrowing the gender gap, albeit slowly, the gender gap in biology has been expanding; the average yearly increase in the median female proportion was 3.8 times higher for 12U Biology than for 12U Physics. Previous work looking at Ontario data tracked the cohort of students entering Grade 9 in 2005 and calculated the percentage of these students who took a Grade 12 science course within the next five years. Their results found 25.7% of female and 15.3% of male students took biology, while 9.1% of female and 16.6% of male students took physics [40]. These rates are all smaller than our calculated average \(SCR\)s for these courses (these are not equivalent but are analogous measures).
This reinforces our overall conclusions of growing interest in STEM while showing differing rates of growth between male and female students. Through a linear mixed effects model calculating the yearly average change in Student Continuation Rates, we examined what has been driving these changing gender imbalances. In biology, change is occurring in both directions. Compared with a decade earlier, a significantly higher proportion of women are now continuing from grade 10 science to 12U Biology while significantly fewer men are (\(+0.75\pm 0.08\) and \(-0.44\pm 0.08\) percentage points per year, respectively). This, we believe, is evidence of a positive feedback loop; when the percentage of female students in biology classrooms increases, other female students experience a greater sense of belonging in these spaces, leading to a rise in enrolment. The corollary is that male students may then perceive these settings as less welcoming, resulting in decreasing participation. If educators and researchers are not mindful of this new, widening division, in a few decades we will face the exact same issue in biology-related STEM fields as we currently do in physics, engineering, and computer science. As a result, the same problems that stem from the lack of gender diversity, e.g., decreased productivity, a dearth of new ideas, and human rights issues from bias or a lack of equitable opportunities, may also begin to manifest themselves in the biological sciences.

## V Conclusions

These results suggest there have been great strides made to increase enrolment in STEM, particularly for women. However, we have also shown that many issues in equitable participation in STEM education still exist, including the large attrition of all students between grade 10 science and grade 12 physics, the negligible progress made to close the existing gender gap in physics, and the now widening gender gap in biology. As researchers and educators have developed a growing understanding of how to design successful initiatives to promote STEM education, particularly for women, we believe a second generation of more targeted interventions is now required. As such, we have four recommendations based on our results.

1. Our results add to a growing literature that has compared enrolment trends and gender gaps across different STEM disciplines [13]. The results are clear: there are large discrepancies across different STEM fields, and the tendency for researchers, educators, or governments to talk about STEM as one subject hides this fact. This hinders progress to address existing challenges such as the large, persistent gender gap in physics and engineering; a problem which needs to be corrected to properly meet the rising demands for STEM-educated individuals in the 21\({}^{\text{st}}\) century.

2. Similar to the flaw in conceptualizing STEM as a single entity, women are not one homogeneous group. As briefly summarized when explaining the intent of this work (sec. 1c), examining the intersectionality of gender with other demographic factors such as race or socioeconomic status will provide a clearer picture of who does or does not choose to participate in STEM education. A subsequent research project using these data examines the intersectionality of gender and other demographic factors for Ontario schools (Corrigan et al., in preparation) and we encourage other researchers to also consider these important distinctions in their future work.
3. High-quality data on gendered enrolment in secondary schools are currently not widely available. As we have outlined, this period is pivotal in shaping the future education and employment opportunities of students, and a lack of transparency at this level acts as a hindrance to organizations seeking to improve equitable outcomes for STEM education. Greater access would also help flag enrolment trends such as the growing disparity between male and female students in biology which we have discovered through this analysis.

4. Future initiatives or interventions to improve or promote STEM education need to be designed with a nuanced consideration of both STEM and gender. Without clearly targeted efforts, e.g., an intervention to promote women in high school physics instead of a more general intervention to promote women in STEM, we will not ameliorate the persistent lack of diversity that hinders progress in these fields.

## Appendix A - Regression Results

\begin{table} \begin{tabular}{l c c c} & \multicolumn{3}{c}{Male Students} \\ \cline{2-4} Course & Estimate & Standard Error & \(p\)-value \\ \hline 11U Biology & \(-0.340\) & 0.061 & \(<\).001 \\ 11U Chemistry & \(+0.640\) & 0.062 & \(<\).001 \\ 11U Physics & \(+0.505\) & 0.053 & \(<\).001 \\ 12U Biology & \(-0.440\) & 0.084 & \(<\).001 \\ 12U Chemistry & \(+0.417\) & 0.079 & \(<\).001 \\ 12U Physics & \(+0.555\) & 0.060 & \(<\).001 \\ 12U Functions & \(-0.192\) & 0.079 & .015 \\ 12U Calculus & \(+0.304\) & 0.065 & \(<\).001 \\ \hline & \multicolumn{3}{c}{Female Students} \\ \cline{2-4} Course & Estimate & Standard Error & \(p\)-value \\ \hline 11U Biology & \(+0.583\) & 0.059 & \(<\).001 \\ 11U Chemistry & \(+1.114\) & 0.061 & \(<\).001 \\ 11U Physics & \(+0.720\) & 0.055 & \(<\).001 \\ 12U Biology & \(+0.749\) & 0.078 & \(<\).001 \\ 12U Chemistry & \(+1.037\) & 0.077 & \(<\).001 \\ 12U Physics & \(+0.490\) & 0.069 & \(<\).001 \\ 12U Functions & \(+0.144\) & 0.079 & .066 \\ 12U Calculus & \(+0.383\) & 0.067 & \(<\).001 \\ \hline & \multicolumn{3}{c}{Interaction Terms (Female as Reference)} \\ \cline{2-4} Course & Estimate & Standard Error & \(p\)-value \\ \hline 11U Biology & \(-0.922\) & 0.072 & \(<\).001 \\ 11U Chemistry & \(-0.473\) & 0.071 & \(<\).001 \\ 11U Physics & \(-0.215\) & 0.062 & \(<\).001 \\ 12U Biology & \(-1.189\) & 0.087 & \(<\).001 \\ 12U Chemistry & \(-0.620\) & 0.081 & \(<\).001 \\ 12U Physics & \(+0.065\) & 0.075 & .387 \\ 12U Functions & \(-0.336\) & 0.100 & \(<\).001 \\ 12U Calculus & \(-0.079\) & 0.082 & .333 \\ \end{tabular} \end{table} Table 2: Mixed effects linear regression estimates predicting the yearly change in \(SCR\) for senior STEM courses.
2306.13808
Improving the selection of changing-look AGNs through multi-wavelength photometric variability
We present second epoch optical spectra for 30 changing-look (CL) candidates found by searching for Type-1 optical variability in a sample of active galactic nuclei (AGNs) spectroscopically classified as Type 2. We use a random-forest-based light curve classifier and spectroscopic follow-up, confirming 50 per cent of candidates as turning-on CLs. In order to improve this selection method and to better understand the nature of the not-confirmed CL candidates, we perform a multi-wavelength variability analysis including optical, mid-infrared (MIR) and X-ray data, and compare the results from the confirmed and not-confirmed CLs identified in this work. We find that most of the not-confirmed CLs are consistent with weak Type 1s dominated by host-galaxy contributions, showing weaker optical and MIR variability. On the contrary, the confirmed CLs present stronger optical fluctuations and experience a long (from five to ten years) increase in their MIR fluxes and the colour W1-W2 over time. In the 0.2-2.3 keV band, at least four out of 11 CLs with available SRG/eROSITA detections have increased their flux in comparison with archival upper limits. These common features allow us to select the most promising CLs from our list of candidates, leading to nine sources with similar multi-wavelength photometric properties to our CL sample. The use of machine learning algorithms with optical and MIR light curves will be very useful to identify CLs in future large-scale surveys.
E. López-Navas, P. Sánchez-Sáez, P. Arévalo, S. Bernal, M. J. Graham, L. Hernández-García, D. Homan, M. Krumpe, G. Lamer, P. Lira, M. L. Martínez-Aldama, A. Merloni, S. Ríos, M. Salvato, D. Stern, D. Tubín-Arenas
2023-06-23T23:00:35Z
http://arxiv.org/abs/2306.13808v1
# Improving the selection of changing-look AGNs through multi-wavelength photometric variability

###### Abstract

We present second epoch optical spectra for 30 changing-look (CL) candidates found by searching for Type-1 optical variability in a sample of active galactic nuclei (AGNs) spectroscopically classified as Type 2. We use a random-forest-based light curve classifier and spectroscopic follow-up, confirming 50 per cent of candidates as turning-on CLs. In order to improve this selection method and to better understand the nature of the not-confirmed CL candidates, we perform a multi-wavelength variability analysis including optical, mid-infrared (MIR) and X-ray data, and compare the results from the confirmed and not-confirmed CLs identified in this work. We find that most of the not-confirmed CLs are consistent with weak Type 1s dominated by host-galaxy contributions, showing weaker optical and MIR variability. On the contrary, the confirmed CLs present stronger optical fluctuations and experience a long (from five to ten years) increase in their MIR fluxes and the colour _W1-W2_ over time. In the 0.2-2.3 keV band, at least four out of 11 CLs with available _SRG_/eROSITA detections have increased their flux in comparison with archival upper limits. These common features allow us to select the most promising CLs from our list of candidates, leading to nine sources with similar multi-wavelength photometric properties to our CL sample. The use of machine learning algorithms with optical and MIR light curves will be very useful to identify CLs in future large-scale surveys.

keywords: galaxies: active - accretion, accretion discs - quasars: emission lines

## 1 Introduction

For the past few years, a growing (\(>\)200) population of active galactic nuclei (AGNs) with emerging or disappearing optical broad emission lines (BELs) has been found, arousing great interest among the astrophysics community (see review by Ricci & Trakhtenbrot, 2022). Most studies favour an accretion rate change as the origin of such dramatic changes in unobscured AGNs, so these sources are often called changing-state (CS) AGNs. Other mechanisms such as variable absorption and tidal disruption events (TDEs) are also expected to produce variations in the BELs, so the term changing-look (CL) is generally used to refer to all AGNs that show such spectral transitions, regardless of the physical mechanisms driving these changes. This term is borrowed from the X-ray community, where a CL event is driven by extremely variable X-ray absorption, causing a switch between Compton-thin (\(N_{\rm H}<10^{24}\) cm\({}^{-2}\)) and Compton-thick (\(N_{\rm H}\gtrsim 10^{24}\) cm\({}^{-2}\)) states in AGNs (e.g. Matt et al., 2003). The CL phenomenon is characterised by drastic changes in the optical BELs. The BELs consist of permitted and semi-forbidden emission lines with typical line widths FWHM \(\geq\) 1000 km s\({}^{-1}\), formed in high-density gas clouds, collectively called the broad line region (BLR), located close to the central engine (e.g. Netzer, 2015). Therefore, most of the effort to find CL AGNs has focused on systematic searches for broad Balmer line variations (generally \(>\)3\(\sigma\) flux change in broad H \(\beta\)) in sources with multi-epoch spectroscopy (although other lines such as Mg ii are also possible, see MacLeod et al., 2016; Ross et al., 2018; Guo et al., 2019).
In particular, some sizable samples have been found by comparing repeated spectra from different surveys such as the Sloan Digital Sky Survey (SDSS) and the Large Sky Area Multi Object Fiber Spectroscopic Telescope (LAMOST) (Yang et al., 2018; Green et al., 2022). CL events are often accompanied by large photometric changes in the optical and ultraviolet (UV) bands, and this has been used as a selection criterion to find new CL AGNs (i.e., \(|\Delta g|>1\) mag, \(|\Delta r|>0.5\) mag in MacLeod et al., 2016, 2019). However, the link between extreme spectroscopic and photometric changes is uncertain, since just 10-15 per cent of photometrically variable AGNs have been found to display CL behaviour (MacLeod et al., 2019). This uncertainty can be affected by the time-scales involved in CL events, which have been constrained in just a few sources (Trakhtenbrot et al., 2019), and by the fact that the CL behaviour has been found to occur repeatedly in some sources (e.g. depending on the Eddington ratio, Guolo et al., 2021). More recently, some studies have concentrated on the search for CL events based on the physical expectations for an accretion-state change. In this scenario, gradual changes are expected in the optical and mid-infrared (MIR) flux and colours, associated with monotonically varying BEL strengths and/or continuum changes, as the AGN goes to bright (AGN dominated) or dim (host dominated) states (e.g., Sheng et al., 2017; Yang et al., 2018; Lyu et al., 2022). In the optical, CS AGN candidates have been selected by searching for anomalous variability (Sanchez-Saez et al., 2021) and bluer optical colours in turning-on AGNs (Hon et al., 2022), where the latter method shows a higher success rate for confirmed CLs compared to other selection techniques. In the MIR band, individual CL AGNs have been found by identifying highly MIR-variable quasars in the _Wide-field Infrared Survey Explorer_ (_WISE_) and Near-Earth Object WISE Reactivation (NEOWISE) data stream (Stern et al., 2018; Assef et al., 2018). In Graham et al. (2020), 111 CS quasars were found by applying two different criteria: strongly enhanced optical variability over some time-scales and a large absolute difference between the initial and final state in the _WISE_ light curve (i.e., \(|\Delta W1|>0.2\) or \(|\Delta W2|>0.2\)). That work led to a CS sample at higher luminosity than previous CL AGNs in the literature. Moreover, individual CL events have been associated with changes in the soft X-ray/UV emission, responsible for photoionizing the BLR gas, as in the case of Mrk 1018 (Cohen et al., 1986; McElroy et al., 2016; Noda and Done, 2018). More extreme X-ray spectral and flux variability was found in the CL source 1ES 1927+654, which has been suggested to be caused by a TDE in an AGN (Trakhtenbrot et al., 2019; Ricci et al., 2020, 2021) or a magnetic flux inversion event (Laha et al., 2022). Recently, in a CL search of sources with multi-epoch optical spectroscopy within the _Swift_-BAT AGN Spectroscopic Survey (BASS), it was reported that five out of nine events with _Swift_-BAT data available could be associated with significant flux changes in the 14-195 keV hard X-ray band (Temple et al., 2023). With the advent of deep, large sky-coverage monitoring surveys such as the Zwicky Transient Facility (ZTF, Bellm et al., 2019) and the upcoming Legacy Survey of Space and Time (LSST, Ivezic et al., 2019), the identification of CL AGNs will be possible using machine-learning techniques. In Lopez-Navas et al.
(2022, hereafter LN22), we present a method specifically looking for turn-on events using a balanced random forest algorithm with the ZTF alert stream (Sanchez-Saez et al., 2021), confirming CL behaviour in four out of six sources that we re-observed based on follow-up spectroscopy. Extending this work further, we obtained second epoch spectra of 30 additional CL candidates, confirming \(\sim\)50 per cent as CLs. In this paper, we present the new observations and perform a multi-wavelength (optical, MIR and X-ray) variability analysis of the CL sources. This effort enables us to improve the selection technique and reinforces the common features of these CL events. Throughout this work, we assume a standard cosmological model with \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{m}=0.3\), and \(\Omega_{\Lambda}=0.7\).

## 2 Selection of the sample

According to the classic view of AGNs, the optical variability in Type 2 sources is highly suppressed due to obscuration of the continuum coming from the central source by the dusty torus (Antonucci, 1993). Based on this consideration, looking for Type 1-like optical flux variability (coming from the accretion disc) in spectrally classified Type 2 AGNs (whose accretion disc _should_ be hidden) has led to the finding of turning-on CL sources. We note that in these cases, the previous Type 2 classification is due to the absence of significant BELs in their spectra, and not due to the identification of the viewing angle of the system. This is the selection strategy we followed in LN22 to find potential CL candidates. Here, we update the candidate list reported in LN22 and clean it further. Our initial sample consists of all the spectrally classified Type 2 AGNs included in the Million Quasars Catalog (MILLIQUAS, Version 7.7, N and K types, Flesch, 2021). We removed sources classified as Seyfert 1, Low Ionization Nuclear Emission Region (LINER) or blazar in any other study according to the SIMBAD Astronomical Database, and those included in the Type 1 AGN catalogues from Oh et al. (2015) and Liu et al. (2019). We also required the sources to have a public 'GALAXY AGN' or 'QSO AGN' spectrum in SDSS DR17, and discarded objects with a BROADLINE classification. This led to 20834 Type 2 AGNs. We also checked that none of these sources were included in the Roma-BZCAT Multifrequency Catalogue of Blazars (Massaro et al., 2015). To select potential CL candidates, we searched for _current_ Type 1 optical flux variability in our Type 2 sample. In particular, we used the host- and core-dominated AGN classifications given by the Automatic Learning for the Rapid Classification of Events (ALeRCE, Forster et al., 2021) broker light curve classifier (LCC, Sanchez-Saez et al., 2021). The ALeRCE broker is currently processing the alert stream from the Zwicky Transient Facility (ZTF, Bellm, 2014; Bellm et al., 2019) to provide a fast classification of variable objects by applying a balanced random forest algorithm. In particular, the LCC computes a total of 174 features, including colours obtained from AllWISE and ZTF photometry and variability features, for all objects that had generated at least 6 alerts in either \(g\) or \(r\) band in the ZTF alert stream. Since each alert is produced when a 5\(\sigma\) variation in the template-subtracted image occurs, only sufficiently variable objects are detected and classified by the LCC.
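As an illustration of this selection step, the sketch below shows how LCC classifications and features can be retrieved with the public ALeRCE python client; the call signatures and parameter names follow our reading of the client documentation and should be treated as assumptions, and the object identifier is a placeholder.

```python
from alerce.core import Alerce

client = Alerce()

# Objects whose top light-curve-classifier class is AGN (host-dominated);
# an analogous query with class_name="QSO" covers core-dominated sources.
agn_candidates = client.query_objects(classifier="lc_classifier",
                                      class_name="AGN",
                                      page_size=1000,
                                      format="pandas")

# The LCC features computed for a single ZTF object:
features = client.query_features("ZTF18xxxxxxx", format="pandas")  # placeholder oid
```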
In the case of AGNs, only 10 per cent of known Type 1s with \(r<20.5\) mag exhibit variations reaching this threshold and produce alerts in ZTF. Therefore, using this method we expect to select the 10 per cent most variable CLs in the parent sample. We performed a sky crossmatch within 1 arcsec between our Type 2 parent sample and the sources classified primarily as AGN or QSO by the LCC (updated on 2022 November 30), which led to 71 matches. Ten sources had two different identification names in ZTF, resulting in 61 CL candidates. Of these, \(\sim\)30 objects were identified as bad candidates from a visual examination of the optical light curves and the SDSS spectra. In the first place, some sources show BELs in their optical spectra and were apparently misclassified in the catalogues. This generally occurs in the lower-luminosity and lower-black-hole-mass regimes, where the BELs fail to meet the FWHM \(\geq\) 1000 km s\({}^{-1}\) criterion (Liu et al., 2019), and also in intermediate Type 1.8/1.9 AGNs, which show weak broad Balmer lines (e.g. Hernandez-Garcia et al., 2017). Secondly, other sources appear misclassified by the ALeRCE LCC due to a small number of data points or transient events in the reference image used to construct the difference images (mostly supernovae, Sanchez-Saez et al., 2023). We note that the ZTF alert light curves only contain alerts generated since ZTF started its operations in 2018, but the actual length of the alert light curves depends on how variable the object is. We stress that the first ZTF alert does not necessarily mark the time of the change of state, considering that most known Type 1 AGNs have not triggered alerts or have taken years after the start of ZTF to show their first alert. This method does not give information about the time the CL transition occurred, other than that it happened at some point between the SDSS spectrum, taken 10-20 years ago, and the ALeRCE LCC classification as a Type 1 AGN. In order to confirm the selected candidates as CLs, we need to perform optical follow-up spectroscopy that quantifies the changes in the BELs with respect to the SDSS spectrum. In this work, we further investigate the variability and spectral properties of the most promising CL candidates to improve the selection method and shed light on the origin of these phenomena.

## 3 Spectroscopic follow-up

### 3.1 Data

We obtained second epoch spectra for 36 of our 61 CL candidates, allowing us to confirm CL objects by quantifying BEL changes with respect to archival SDSS spectra. These sources were selected via visual examination of the optical light curves and archival spectra of the CL candidates. Six out of the 36 sources were reported in LN22. In this paper, we present spectral analysis for the remaining 30 objects. The new optical spectra were taken during February and April 2022 using either the Double Spectrograph (DBSP) on the Palomar 200-inch Hale Telescope (P200) or the Low Resolution Imaging Spectrometer (LRIS) spectrograph on the Keck I telescope at the W. M. Keck Observatory, as specified in the Appendix, Table 1. The spectra were obtained using a blue and a red arm with the 600 and 316 lines/mm gratings, respectively, 1.5" or 2" slit widths and 1x1 binning, and processed using standard procedures. All the observed sources fall within the redshift range 0.04 \(\leq z\leq\) 0.22, so H \(\alpha\) is covered for all sources.
In some of the cases, the H \(\beta\) and [O iii] emission lines fell at the border between the blue (2500-5700 Å) and red (4800-10700 Å) useful regions, leading to uncertain fits. Thus, we compared the broad H \(\alpha\) and the H \(\alpha\)/[S ii] ratio between epochs, instead of the broad H \(\beta\) or the H \(\alpha\)/[O iii] ratio as performed in other studies (e.g. in Graham et al., 2020).

### 3.2 Spectroscopic fitting

We fit the archival DR17 SDSS spectra and the second epoch spectra from the Keck I and P200 telescopes using the Penalized Pixel-Fitting (pPXF) software (Cappellari, 2017). To account for the stellar continuum component we used the E-MILES library (Vazdekis et al., 2010), and to model the AGN emission we added the following components:

* A power-law template for the accretion disc contribution of the form \((\lambda/\lambda_{N})^{\alpha}\), where \(\lambda\) is the wavelength, \(\lambda_{N}=5000\) Å is a normalization factor and \(\alpha\) goes from \(-\)3 to 0 in steps of 0.1.
* One kinematic component with both permitted and forbidden emission lines, with free normalizations, to model the narrow lines.
* One kinematic component with permitted emission lines, with free normalizations, to model the possible BELs.
* One component for possible outflows, with velocity dispersion values from 400 to 1000 km s\({}^{-1}\).

To fit the second epoch spectra we used the same stellar population templates obtained from the SDSS fit and we left the normalization free during the fitting process. We obtained errors for each spectrum by performing Monte Carlo simulations using the best-fitting model and simulating random noise generated from the standard deviation of the best-fitting residuals. Then we fit the simulated spectra using the same procedure described for the SDSS and second epoch spectra, providing an error on the model parameters. Table 1 shows the results of the spectral fitting for broad H \(\alpha\) and the 1\(\sigma\) error from the simulations. We identify 13 confirmed CL AGNs with \(>\)3\(\sigma\) change in the EW of broad H \(\alpha\) _and_ \(>\)3\(\sigma\) change in the H \(\alpha\)/[S ii] ratio. We also identify as CL two sources with a \(>\)3\(\sigma\) change in the H \(\alpha\)/[S ii] ratio, whose change can be confirmed via visual inspection. In total, we find 15 CL sources (highlighted in bold, see Table 1). Their optical spectra are shown in the Appendix, Fig. 11. Some of the sources show bluer continuum emission and/or asymmetric and complex BEL profiles (e.g., ZTF18acbzrill, ZTF19aavydin and ZTF20aavwsqq) during the high state, as found in previous CL works (Oknyansky et al., 2021). In one case, ZTF19aafczyn, the changes in the BELs according to the spectral fits are significant but they are not obvious when looking at the difference spectrum, so we speculate these changes could be driven by differences in the spectroscopy (that is, different instruments and set-up) and not by physical changes. For the 36 observed sources from our CL candidates, we distinguish the 19 confirmed CL AGNs as the CL sample (including the four CLs reported in LN22) and the 17 not confirmed CLs as the NOT CL sample (including two such sources reported in LN22).

## 4 Results

### 4.1 Improvement of the selection method: ALeRCE features for the alert light curves

All the CL candidates considered in this work have generated at least 6 ZTF alerts in either \(g\) or \(r\) band and have been classified by the ALeRCE LCC.
An alert is generated when a 5\(\sigma\) variation in the template-subtracted image occurs. In this section, we analyse the variability of the sample to determine whether the CL phenomenon is related to any physical parameter, or whether we can make other improvements to the selection method and thereby to the CL candidate list. The LCC uses a total of 174 features, most of them computed solely with the public ZTF \(g\) and \(r\) data. The complete set of features is described on the ALeRCE website1, and can be requested using the ALeRCE python client. In this work, we separate the features that dominate the classifier (both the 'Top' level and the 'Stochastic' level of the LCC), as reported in Sanchez-Saez et al. (2021), from the secondary, not-ranked features. For comparison, we obtained the features for the known AGNs that were used to train the LCC, which include the Weak Type 1 sources from Oh et al. (2015) and the host-dominated AGNs (class 'A') from MILLIQUAS, totalling 4612 sources.

Footnote 1: [http://alerce.science/features/](http://alerce.science/features/)

#### 4.1.1 Top-ranked variability and colour features

Most of the features that dominate the LCC consist of ZTF and AllWISE colours and variability features related to the amplitude and time-scale of the variability and to a decrease/increase of the luminosity. In order to evaluate the difference in distribution of these features between the CL and NOT CL samples, we applied the Kolmogorov-Smirnov (KS) test to all their ranked features. In Table 2 we present the features that dominate the LCC and have a p-value <0.05, that is, where we can reject the null hypothesis that the two distributions (from the CL and the NOT CL samples) are identical. We note that the determination of the DRW parameters is generally biased for light curve lengths shorter than 10 times the true \(\tau\) value (Kozlowski, 2017; Sanchez et al., 2017), which is the case for our ZTF data. Therefore, the DRW parameters obtained in this work should be considered just as variability features and not as physically correct estimations. In Fig. 1 we show the distribution of some of the variability features computed in the \(g\) band that present different distributions between the CL and NOT CL sources. In particular, the DRW relaxation time for the NOT CL objects peaks at the minimum data sampling (\(\sim\) 1 d), and spreads up to >1000 d, longer than the maximum light curve length. This indicates that a DRW model is unable to properly model the optical variability for some NOT CLs, and thus their variability is unlike that of Type 1 AGNs. For the CLs, however, the DRW relaxation time peaks at 10-100 days, as expected for Type 1 AGNs. In terms of the amplitude of the variability, from the GP_DRW_sigma distribution we see that some NOT CL objects reach much smaller values (log10(GP_DRW_sigma) \(<\) -6), which again indicates they have most likely been misclassified as Type 1 AGN by the LCC. Interestingly, the amplitude of the variations for all our objects peaks at a smaller value than the distribution for the AGN training set, suggesting their variability could be diluted by the host galaxy contribution. On the other hand, the autocorrelation of the light curves, given by the IAR_phi parameter, reaches smaller values for the NOT CL sample than for the CL sample. These features could be used to further clean the CL candidate list. Apart from the variability features, the classifier is also dominated by ZTF and AllWISE colours and the morphological properties of the images.
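The distribution comparisons above (and the colour comparisons that follow) reduce to two-sample KS tests; a minimal sketch with scipy, assuming the CL and NOT CL feature tables are pandas data frames with the ALeRCE feature names as columns:

```python
from scipy.stats import ks_2samp

def distinct_features(cl_df, notcl_df, features, alpha=0.05):
    """Return the features whose CL and NOT CL distributions differ
    at the `alpha` level according to a two-sample KS test."""
    kept = []
    for feat in features:
        stat, pvalue = ks_2samp(cl_df[feat].dropna(), notcl_df[feat].dropna())
        if pvalue < alpha:  # reject the null hypothesis of identical distributions
            kept.append((feat, pvalue))
    return kept

# e.g. distinct_features(cl, notcl, ["GP_DRW_tau", "GP_DRW_sigma", "IAR_phi"])
```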
In general, the optical colours for both samples look similar to each other but show a redder tendency than the AGN training set distribution, as shown in Fig. 2a. In Fig. 2b we present the 2010-2011 AllWISE _W1_-_W2_ versus _W2_-_W3_ colours for the CL and NOT CL objects, in comparison with the Type 2 parent sample and the AGN training set. Most of our objects have fairly similar AllWISE colours, but the distribution of the _W1_-_W2_ colour peaks at a lower value than for the AGN training set and closer to the Type 2 distribution, which implies that the MIR colours are generally dominated by the stellar populations. We note that the AllWISE observations were taken between 2010 and 2011, so these features are not indicative of their _current_ MIR colour. We further investigate the behaviour of the MIR colours in Section 4.2.4, including contemporaneous _WISE_ observations.

#### 4.1.2 Not-ranked features

The top-ranked features are the most important features that the LCC uses for the classification of variable sources. However, there are secondary features that could potentially allow us to evaluate whether a source is a good CL candidate or not. In Table 2 we present other secondary features showing distinct distributions between the CL and NOT CL samples. A particular example is the number of positive detections in the alert light curves (n_pos, see Fig. 3), which reaches higher values for the CL sample, meaning that CL objects tend to increase their flux more with respect to the template image (as expected for turning-on events).

### 4.2 Characteristics of the CL vs NOT CL AGNs

#### 4.2.1 BPT diagnostics

To investigate the emission-line properties of the samples, we calculated the BPT (Baldwin et al., 1981) diagnostics from the archival SDSS spectra. We used the classification system defined by Kewley et al. (2006) for different ionisation mechanisms utilizing the three BPT diagnostic criteria ([N ii], [S ii], and [O i]) and found all sources are consistent with a Seyfert classification. The KS-test leads to large p-values >0.4 for the three cases, showing that these samples are indistinguishable in terms of the emission-line properties from their old (pre-CL) optical spectra. Similar results were found by analysing the second epoch spectra. As an example, the [S ii] BPT diagram for the CL and NOT CL samples is plotted in Fig. 4.

#### 4.2.2 Eddington ratio estimates

We estimated the black hole masses (\(M_{\rm BH}\)) and continuum luminosity at 5100 Å (\(L_{5100}\)) using the full width at half maximum (FWHM) and luminosity of broad H \(\alpha\), as outlined in Reines et al. (2013), with the values obtained from the new spectra. Then, we computed the Eddington ratios for the old and new spectra, \(\lambda_{\rm Edd}=L_{\rm bol}/L_{\rm Edd}\), where \(L_{\rm Edd}=1.5\times 10^{38}(M_{\rm BH}/\rm M_{\odot})\) erg s\({}^{-1}\) is the Eddington luminosity and \(L_{\rm bol}\) is the bolometric luminosity, defined as \(L_{\rm bol}=40\,(L_{5100}/10^{42}\,{\rm erg\,s^{-1}})^{-0.2}\,L_{5100}\) according to Netzer (2019). In Fig. 5 we present \(\lambda_{\rm Edd}\) for both samples computed from the old and the new spectra, and the difference in accretion rate (\(\Delta\lambda_{\rm Edd}\)). We find that the old accretion rate is similar for both samples, but in the second epoch spectra the distribution shifts towards higher values for the CLs.
These are the expected results for turning-on AGNs, since both the method to compute \(\lambda_{\rm Edd}\) and the criteria to confirm the sources as CL (e.g. >3\(\sigma\) change in the EW of broad H \(\alpha\)) use the properties of broad H \(\alpha\).

\begin{table} \begin{tabular}{l l l} **Name** & **Filter** & **Description** \\ \hline \multicolumn{3}{c}{Top-ranked variability and colour features used in the LCC} \\ **SF\_ML\_amplitude** & \(g\) & rms magnitude difference of the structure function, computed over a one-year time-scale \\ **MHPS\_low** & \(g\) & Variance associated with the 100-day time-scale ('low' frequency) \\ **SPM\_tau\_rise** & \(g\) & Initial rise time-scale from the supernova parametric model \\ **GP\_DRW\_sigma** & \(g\) & Amplitude of the variability at short time-scales, from the damped random walk (DRW) model \\ GP\_DRW\_tau & \(g\) & Relaxation time from the DRW model \\ IAR\_phi & \(g\) & Level of autocorrelation from an irregular autoregressive (IAR) model \\ positive\_fraction & \(g\),\(r\) & Fraction of detections in the difference images that are brighter than the template \\ delta\_mag\_fid & \(g\) & Difference between the maximum and minimum observed magnitudes in a given band \\ _r–W3_ & & Colour computed using the ZTF mean \(r\) magnitude and the AllWISE _W3_ filter \\ g\_r\_mean\_corr & & ZTF \(g\)-\(r\) colour using the mean magnitudes of each band \\ g\_r\_max & & ZTF \(g\)-\(r\) colour using the brightest magnitudes of each band \\ \hline \multicolumn{3}{c}{Other features used in the LCC} \\ n\_pos & \(g\),\(r\) & Number of positive detections in the alert light curve \\ n\_neg & \(g\),\(r\) & Number of negative detections in the alert light curve \\ iqr & \(g\) & Difference between the 3rd and the 1st quartile of the light curve \\ **MHPS\_ratio** & \(g\) & Ratio between the variances at 100-day and 10-day time-scales for a given band, applying a Mexican hat filter \\ \hline \multicolumn{3}{c}{Variability features for the forced-photometry light curves} \\ SPM\_A & \(r\) & Amplitude from the supernova parametric model \\ LinearTrend & \(g\),\(r\) & Slope of a linear fit to the light curve \\ ExcessVar & \(g\) & Intrinsic variability amplitude \\ Meanvariance & \(g\) & Ratio of the standard deviation to the mean magnitude \\ Std & \(g\) & Standard deviation of the light curve \\ Amplitude & \(g\) & Half of the difference between the median of the maximum 5 per cent and of the minimum 5 per cent magnitudes \\ **SPM\_tau\_rise** & \(g\),\(r\) & See above \\ **MHPS\_low** & \(r\) & See above \\ **MHPS\_ratio** & \(r\) & See above \\ **GP\_DRW\_sigma** & \(g\),\(r\) & See above \\ **SF\_ML\_amplitude** & \(g\),\(r\) & See above \\ \end{tabular} \end{table} Table 2: Features that show different distributions for the CL and the NOT CL samples. Features recovered for both the alert light curves and the forced-photometry light curves are highlighted in bold.

#### 4.2.3 ALeRCE features for the forced-photometry light curves

To compare with the alert light curves, we also analysed the variability in the full forced-photometry light curves, which are produced based on all the ZTF difference images available. We requested the most updated forced photometry (up to 2022 September 26) of the entire sample from the ZTF forced-photometry service and generated the cleaned light curves according to the recommendations outlined in Masci et al. (2019). A python library to extract variability features in astronomical light curves is publicly available2.
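As a hedged illustration (based on the feature descriptions in Table 2, not on the lc_classifier source code), a few of the amplitude-related features can be computed as follows; the excess-variance normalization is one common convention and is an assumption here:

```python
import numpy as np

def std_feature(mag):
    # Standard deviation of the light curve (Std in Table 2)
    return np.std(mag, ddof=1)

def meanvariance(mag):
    # Ratio of the standard deviation to the mean magnitude
    return np.std(mag, ddof=1) / np.mean(mag)

def amplitude(mag):
    # Half the difference between the median of the brightest 5 per cent
    # and the median of the faintest 5 per cent of the magnitudes
    mag = np.sort(np.asarray(mag))
    n = max(1, int(0.05 * len(mag)))
    return 0.5 * (np.median(mag[-n:]) - np.median(mag[:n]))

def excess_variance(flux, err):
    # Intrinsic (noise-corrected) variability amplitude, normalized by the
    # squared mean flux -- one common definition of ExcessVar
    flux, err = np.asarray(flux), np.asarray(err)
    return (np.var(flux, ddof=1) - np.mean(err ** 2)) / np.mean(flux) ** 2
```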
The forced-photometry light curves have a mean of 376 (453) data points in the \(g\) (\(r\)) filter, in comparison with the 30 (27) detections from the alert light curves. As a result, the variability features can be better constrained, and we find more top-ranked features that show different distributions according to the KS-test between the CL and NOT CL samples, as shown in Table 2. We recover many variability features related to the amplitude of the variations and the deviations from the mean (e.g. ExcessVar, Meanvariance or Std), which are missing in the comparison of features from the alert light curves. Fig. 6 shows the amplitude and the standard deviation in the \(g\) filter for the forced-photometry light curves for both samples. The CL objects present generally higher values for both features, indicating that their variability is more similar to the expected Type 1 behaviour than the NOT CL sample.

Footnote 2: [https://github.com/alercebroker/lc_classifier](https://github.com/alercebroker/lc_classifier)

Figure 1: Distributions of the alert light curves top-ranked variability features for the CL and NOT CL samples and the AGN LCC training set: (a) relaxation time and amplitude of the variability on short time-scales, obtained from the damped random walk (DRW) model; and (b) level of autocorrelation from an irregular autoregressive (IAR) model. These features show distinct distributions between the CL and NOT CL samples and could be used to select the most promising CL candidates.

Figure 2: Alert light curves top-ranked colour features for the CL and NOT CL samples and the AGN LCC training set: (a) \(g\)–\(r\) colour obtained with ZTF data and (b) AllWISE MIR colours in comparison with the Type 2 parent sample. Both the CL and NOT CL samples show optical and MIR colours more host-galaxy dominated than the AGN LCC training set.

#### 4.2.4 Mid-infrared variability

Some of the features that the LCC uses to classify variable objects are computed with AllWISE data, which are indicative of the state of the sources ten years ago. To investigate whether the CL and NOT CL samples show distinct MIR behaviour, we downloaded all the AllWISE multi-epoch and NEOWISE-R single exposure (L1b) photometric data spanning from 2010 to 2021 and averaged every six months. For each source, we obtained the following variability features for the \(W1\) and \(W2\) bands: the maximum magnitude and colour variations (\(\Delta W1\), \(\Delta W2\), \(\Delta W1\)-\(W2\)), the colour from the last epoch (\(W1\)-\(W2_{f}\)), the intrinsic variability (\(\sigma_{m1}\) and \(\sigma_{m2}\)) computed as in Lyu et al. (2022), and the slopes of a linear trend fit to the MIR magnitude light curves (a\({}_{1}\) and a\({}_{2}\)) and to the \(W1\)-\(W2\) colour (a\({}_{12}\)). Table 3 shows the comparison between the median values with the 1\(\sigma\) errors for the CL and NOT CL samples. All the features except for the maximum colour variation \(\Delta W1\)-\(W2\) show distinct distributions according to the KS-test, with CL sources having a stronger variability. Moreover, the results from the linear fits indicate that the CLs have become brighter in both bands and have higher \(W1\)-\(W2\) values (see Fig. 7 and A1), whereas for the NOT CL the distributions peak closer to zero, resulting in no net increase or decrease in brightness or colour.
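The MIR features above reduce to simple operations on the binned light curves. A minimal sketch, assuming cleaned AllWISE/NEOWISE single-exposure photometry as input and omitting outlier rejection:

```python
import numpy as np

def mir_features(mjd, w1, w2):
    """Six-month averaging and linear-trend slopes a1, a2, a12 for the
    W1/W2 light curves (mags); negative slopes mean brightening."""
    edges = np.arange(mjd.min(), mjd.max() + 182.5, 182.5)
    idx = np.digitize(mjd, edges)
    t = np.array([mjd[idx == k].mean() for k in np.unique(idx)])
    m1 = np.array([w1[idx == k].mean() for k in np.unique(idx)])
    m2 = np.array([w2[idx == k].mean() for k in np.unique(idx)])
    a1 = np.polyfit(t, m1, 1)[0]          # slope of W1 (mag/day)
    a2 = np.polyfit(t, m2, 1)[0]          # slope of W2 (mag/day)
    a12 = np.polyfit(t, m1 - m2, 1)[0]    # slope of the W1-W2 colour
    return {"a1": a1, "a2": a2, "a12": a12,
            "dW1": m1.ptp(), "dW2": m2.ptp(),       # maximum variations
            "W1-W2_f": m1[-1] - m2[-1]}             # last-epoch colour
```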
#### 4.2.5 X-ray variability

In order to obtain the X-ray fluxes of our sources, we have used the individual eROSITA (extended ROentgen Survey with an Imaging Telescope Array, Predehl et al., 2021) All-Sky Surveys (eRASS1 to eRASS5). The data were processed with the eROSITA Standard Analysis Software System (eSASS, Brunner et al., 2022). We used the newest available pipeline processing version c020, which is an updated version of the software used for the first eROSITA Data Release (Merloni et al., 2023, in prep). The counterparts are determined using the same procedure adopted in the eROSITA/eFEDS field (Salvato et al., 2022), but applied to Legacy Survey DR103 and Gaia DR3 separately. After the identification of the CL candidates with the counterparts, we obtained the 0.2-2.3 keV flux from the corresponding eROSITA catalog (see Brunner et al., 2022 for a description of the eROSITA catalog processing).

Footnote 3: [https://www.legacysurvey.org/dr10/](https://www.legacysurvey.org/dr10/)

Figure 4: BPT diagram for the CL and NOT CL samples obtained from their archival SDSS spectra. The solid and dashed lines show the classification scheme from Kewley et al. (2001, 2006). The comparison indicates these samples are indistinguishable in terms of the BPT diagnostic criteria.

Figure 3: Number of positive detections in the alert light curves for the CL and NOT CL samples and the AGN LCC training set, where the CL sources tend to increase their optical flux.

From our list of 61 CL candidates, there are 28 sources within the eROSITA-DE footprint (Galactic longitude 179.9442 < \(l\) < 359.9442 deg): 11 CLs, seven NOT CLs and ten CL candidates without a second epoch optical spectrum. From the CLs, ten sources have at least one detection within the five different eRASS, and one (ZTF18aawoghx) has only upper limits. The upper limits are calculated based on X-ray photometry on the eROSITA standard pipeline data products (science image, background image, and exposure time) following the Bayesian approach described by Kraft et al. (1991). For details about eROSITA upper limits, see Tubin-Arenas et al. (2023, in prep). We consider a circular aperture with a radius given by a PSF encircled energy fraction of EEF = 0.75 (\(\sim 30\arcsec\)) and a single-sided 3\(\sigma\) confidence level. From the NOT CLs, four sources have been detected in at least one eRASS. The remaining three (ZTF18acgvmzbZ/ZTF18aclofgf, ZTF19aaixgjo, and ZTF20aaixgjo) have only upper limits. Interestingly, six CL sources show an X-ray flux increase between eROSITA scans by factors of 2 to 15. For two sources (ZTF19aavyjohn and ZTF21abcsvbr) the difference between the maximum and the minimum values is similar to the error of the minimum value. For the remaining four sources (ZTF18acdchx, ZTF19aaixugo, ZTF21aafkiyo and ZTF21aaqlazo) the difference is at least eight times the error (see Fig. 8).
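The Kraft et al. (1991) upper limits quoted above come from a Bayesian treatment of Poisson counts with a known background. A self-contained sketch of the count-space calculation follows; the conversion from counts to 0.2-2.3 keV flux (exposure, encircled-energy fraction, assumed spectral model) is omitted, and the integration bound is a heuristic of this sketch.

```python
import numpy as np
from scipy.stats import poisson
from scipy.integrate import quad
from scipy.optimize import brentq

def kraft_upper_limit(n_obs, bkg, cl=0.9987):
    """Single-sided Bayesian upper limit on source counts given n_obs total
    counts in the aperture and expected background bkg (Kraft et al. 1991).
    cl = 0.9987 corresponds to the single-sided 3-sigma level used in the text."""
    s_max = n_obs + 10.0 * np.sqrt(n_obs + 1.0) + 50.0      # integration bound
    post = lambda s: poisson.pmf(n_obs, s + bkg)             # flat prior, s >= 0
    norm = quad(post, 0.0, s_max)[0]
    f = lambda s_up: quad(post, 0.0, s_up)[0] / norm - cl
    return brentq(f, 0.0, s_max)

# e.g. 5 counts observed with 3.2 expected background counts
print(f"3-sigma count upper limit: {kraft_upper_limit(5, 3.2):.1f}")
```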
\begin{table}
\begin{tabular}{c c c}
**Feature** & **CL** & **NOT CL** \\
\hline
\(\langle\Delta W1\rangle\) (mag)* & \(0.5^{+0.3}_{-0.1}\) & \(0.4\pm 0.2\) \\
\(\langle\Delta W2\rangle\) (mag)* & \(0.7\pm 0.3\) & \(0.5^{+0.1}_{-0.3}\) \\
\(\langle\Delta(W1-W2)\rangle\) (mag) & \(0.3^{+0.2}_{-0.2}\) & \(0.3^{+0.2}_{-0.2}\) \\
\(\langle W1-W2_{f}\rangle\) (mag)* & \(0.5^{+0.1}_{-0.1}\) & \(0.4^{+0.1}_{-0.2}\) \\
\(\langle\sigma_{m1}\rangle\)* & \(0.15\pm 0.05\) & \(0.09\pm 0.07\) \\
\(\langle\sigma_{m2}\rangle\)* & \(0.18^{+0.00}_{-0.00}\) & \(0.14^{+0.00}_{-0.00}\) \\
\(\langle a_{1}\rangle\cdot 10^{-5}\) (mag)* & \(-4^{+0.05}_{-0.9}\) & \(0.5^{+0.09}_{-5.5}\) \\
\(\langle a_{2}\rangle\cdot 10^{-5}\) (mag)* & \(+9^{+0}_{-1.3}\) & \(2^{+4}_{-8}\) \\
\(\langle a_{12}\rangle\cdot 10^{-5}\) (mag)* & \(2^{+4}_{-4}\) & \(0^{+2}_{-2}\) \\
\hline
\end{tabular}
\end{table}
Table 3: Mid-infrared variability features. Asterisks indicate the features that show distinct distributions between the samples according to the KS-test (p-value < 0.05). The errors correspond to the 1\(\sigma\) deviation from the median.

We also checked archival X-ray fluxes from other missions to compare to the eROSITA fluxes. All the 11 CLs and 7 NOT CLs in the eROSITA-DE footprint have at least one X-ray upper limit from either the _XMM-Newton_ Slew (Saxton et al., 2008) or _ROSAT_ Survey (Boller et al., 2016). However, due to the low sensitivity of the data, most of the archival upper limits fall above the current eROSITA measurements. This hinders us from finding the possible changes, with the notable exception of four of the CLs that show a significant (\(\gtrsim 1.5\sigma\)) increase in the eROSITA 2021 flux with respect to the archival 1990-1993 _ROSAT_ \(1\sigma\) upper limits or fluxes, which are shown in Table 4. In the table, we converted the observed _ROSAT_ fluxes to the 0.2-2.3 keV band for a direct comparison, using an absorbed power law model with photon index \(\Gamma=2\) and column density \(N_{\rm H}=3\cdot 10^{20}\) cm\({}^{-2}\). These sources are also the CLs that experience a significant X-ray increase during the eROSITA monitoring, as shown in Fig. 8. To compare the eROSITA fluxes between the CL and NOT CL sources, we selected the last X-ray detection within the five eRASS, or the last eROSITA upper limit for the sources without detections. We also computed the ratio between the X-ray flux and the optical flux in the \(g\) band, obtained from the ZTF forced photometry light curves. To avoid spurious results coming from variability, we chose pairs of contemporaneous fluxes, i.e., fluxes that were taken within the same day or week in the X-ray and optical bands. The results are plotted in Fig. 9, which shows the CL sources are generally brighter in the X-ray band than the NOT CL sources, both in absolute terms and relative to their \(g\) band fluxes. The KS-test indicates the X-ray flux distribution is significantly distinct between the CL and NOT CL samples (p-value \(<0.05\)), both considering just detections (p-value \(=0.01\)) and considering detections and upper limits (p-value \(=0.01\)). However, the X-ray to optical ratio distributions are not significantly distinct according to the KS-test, either considering just detections (p-value \(=0.08\)) or considering detections and upper limits (p-value \(=0.10\)). Therefore, although the X-ray to optical ratios tend to be higher for the CLs than for the NOT CLs, the difference is not statistically significant for the sources considered in this work, and more extended samples are needed to improve the statistics in terms of the X-ray behaviour of CLs.
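The two-sample KS comparisons quoted throughout this section can be reproduced with `scipy`. The sketch below uses randomly generated placeholder arrays in place of the actual flux measurements; note that a plain KS-test has no rigorous treatment of censored data, so the "detections and upper limits" case requires a choice of how the limits are folded in.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Placeholders standing in for the 0.2-2.3 keV fluxes (erg s^-1 cm^-2)
flux_cl = 10**rng.normal(-12.3, 0.4, size=10)
flux_notcl = 10**rng.normal(-12.9, 0.4, size=7)

stat, p = ks_2samp(flux_cl, flux_notcl)
print(f"KS statistic = {stat:.2f}, p-value = {p:.3f}")
# p < 0.05 -> the two flux distributions are significantly distinct
```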
As a final step we also checked the harder, 2.3-5 keV eROSITA fluxes. Most of the sources have just upper limits in this band, thus we cannot draw further conclusions about the X-ray spectral shape.

\begin{table}
\begin{tabular}{c c c}
ZTF ID & _ROSAT_ flux & eROSITA flux \\
 & \(\cdot 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\) & \(\cdot 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\) \\
\hline
ZTF18accdlux & \(<\)2.853 & 5.0 \(\pm\) 0.8 \\
ZTF19axlusyo & \(<\)3.279 & 4.6 \(\pm\) 0.8 \\
ZTF21axfkiyq & \(<\)1.049 & 7.7 \(\pm\) 0.9 \\
ZTF21aql2ro & 0.51 \(\pm\) 0.06 & 11.1 \(\pm\) 0.1 \\
\hline
\end{tabular}
\end{table}
Table 4: 0.2–2.3 keV X-ray fluxes for four CL sources that show an increase between the archival 1990–1993 _ROSAT_ \(1\sigma\) upper limits or fluxes and the 2021 eROSITA data.

Figure 5: Eddington ratios for the CL and NOT CL samples obtained from the archival SDSS spectrum (left panel) and current spectra (middle panel), and their difference (\(\Delta\lambda_{\rm Edd}=\lambda_{\rm Edd,2}-\lambda_{\rm Edd,1}\), right panel). CL objects have increased their Eddington ratios and are now accreting at 1–5 per cent \(L_{\rm Edd}\).

Figure 6: Amplitude and standard deviation in the \(g\) filter for the CL and NOT CL forced-photometry light curves. CL objects present higher values for both features, indicating a stronger optical variability.

Figure 7: Distributions of the linear slope of the \(W1\) 10-years-long light curve (a\({}_{1}\)) and the last epoch colour (\(W1-W2_{f}\)) for the CL and NOT CL samples. The lower a\({}_{1}\) values for the CL sample indicate the sources are getting brighter in the MIR waveband, while the slopes for the NOT CL distribute around zero, indicating that as a sample they are neither brightening nor dimming. The higher last epoch colour (\(W1-W2_{f}\)) for the CLs is expected for AGN-dominated galaxies (\(W1-W2>0.5\)).

## 5 The origin of the NOT CL sources

The previous section showed that the CL and NOT CL samples are significantly different in terms of their optical, MIR and X-ray flux variability properties, with the CLs showing stronger optical and MIR variability with a tendency to increase their MIR flux and the colour _W1-W2_ over time. In order to understand the nature of the NOT CL sources we visually inspected the ZTF forced photometry and alert light curves and the reference images used by ZTF to compute the difference images. In Table 5 we present the characteristics of the variability in individual NOT CL objects, indicating the most probable cause of their variations. As a result, we find two sources that have most likely been misclassified by the LCC due to a small number of alerts (ZTF20aakreaa and ZTF20aacorzxv), and another two sources that are possibly false detections due to a bad template image subtraction (ZTF18acymzh/ZTF18aclfugd and ZTF18acustgpt/ZTF18adppkkj). Three sources apparently show a transient event (that is, a flare-type variation in an otherwise flat curve) in the optical light curves: two of them resemble SN events (ZTF18aaiwdzt and ZTF18aaajyon) and one shows a sharp rise in the optical, followed by an MIR echo, which we speculate could be due to a TDE. The occurrence of TDEs in turning-on AGNs is theorised to be more likely than in other galaxies, due to the possibility of 'Starfall' (McKernan et al., 2022).
This TDE candidate in a Type 2 AGN (shown in Fig. 10) could potentially be happening in a turning-on AGN whose BELs are still too weak to be detected, and merits further study which is beyond the scope of this paper. This discovery highlights the possibility of finding TDE candidates in AGNs, in order to compare their rate of occurrence to TDEs in quiescent galaxies. Notably, the remaining ten sources show small-amplitude, stochastic optical variations, characteristic of Type 1 AGNs. This is also consistent with their optical spectra, which show weak broad H \(\alpha\) emission lines, indicative of weak Type 1 AGNs. Interestingly, eight of these ten objects show a decrease in their optical flux along with a decrease in their MIR flux and the colour _W1-W2_, which suggests they are now transitioning to a dimmer state. Fig. 11 shows the optical and MIR light curves of two clear examples of this behaviour, ZTF19aapelvhs and ZTF20aagwvkl. Generally, the MIR colours from these NOT CL weak Type 1s are galaxy dominated (i.e., _W1-W2_ < 0.5), which suggests their weaker variability and BELs are not due to an orientation effect, but to an intrinsically lower AGN luminosity diluted by the emission of the host galaxy. The remaining two sources, ZTF19aavrjg and ZTF20abgnlgy, also show Type-1 like variability and stronger broad H \(\alpha\) emission (EW H \(\alpha\) SDSS > 40 Å), indicative of Type 1 AGNs.

## 6 Discussion

### Improvement of the CL selection method through ALeRCE

One of the main aspects of the selection of CL candidates through ALeRCE relies on the correct classification of their alert light curve variability by the LCC. As mentioned in Section 5, there are seven out of 17 NOT CLs that have been misclassified by the LCC due to a bad subtraction of the ZTF images used to compute the alerts, a small number of data points and/or transient events in the alert light curves. On the other hand, we also found that ten (\(\sim\)60 per cent) NOT CLs have been correctly classified by the LCC as Type 1 AGN and are spectroscopically consistent with being weak Type 1 AGNs whose optical and MIR properties are dominated by the host galaxy contribution.

Figure 8: ZTF forced-photometry light curves and contemporaneous eROSITA fluxes in the 0.2–2.3 keV band. The triangle in the last plot indicates an upper limit. These sources experience an increase in their X-ray flux during the eROSITA monitoring.

It is possible to increase the completeness of the CL candidate list as well as the purity by applying two additional criteria on the ALeRCE features simultaneously. The best combinations in the \(g\) band, with a loss of 0-5 per cent of CLs and removal of 50 per cent of NOT CLs, were found to be all the possible pairs between GP_DRW_sigma, GP_DRW_tau and delta_mag_fid, and the combinations of iqr with MHPS_low or delta_mag_fid. We note that our samples are too small to judge the universality of this additional cleaning with bootstrap methods. However, similar cleaning can be devised in the future for ZTF forced photometry and/or the classifications of the ZTF data release light curves (Sanchez-Saez et al., 2023), which would help to improve the selection of CL candidates.
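As an illustration, the following pandas sketch applies one such simultaneous pair of lower-limit cuts, using the 16th-percentile thresholds for GP_DRW_sigma and delta_mag_fid listed in Table 6. The DataFrame layout (one row per candidate, ALeRCE \(g\)-band feature columns) is an assumption of this sketch.

```python
import pandas as pd

# Table 6 lower limits (16th percentile of the CL sample), g band
CUTS = {"GP_DRW_sigma": 4e-5, "delta_mag_fid": 0.17}

def clean_candidates(df: pd.DataFrame,
                     feat_a: str = "GP_DRW_sigma",
                     feat_b: str = "delta_mag_fid") -> pd.DataFrame:
    """Keep only candidates passing BOTH lower-limit cuts simultaneously."""
    keep = (df[feat_a] >= CUTS[feat_a]) & (df[feat_b] >= CUTS[feat_b])
    return df[keep]
```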
### Characterizing the turning-on CL properties

Apart from the ALeRCE LCC features, we have analysed other multi-wavelength properties of the sample to investigate what can increase the likelihood of finding a turning-on transition.

Figure 9: Distributions of the 0.2–2.3 keV eROSITA fluxes (left) and ratios between the 0.2–2.3 keV flux and the \(g\)-band flux from the ZTF forced photometry light curves, taken within the same day or week (right). _UP_ indicates the eROSITA upper limits.

Figure 10: Optical ZTF forced-photometry light curves and evolution of the _WISE_ MIR fluxes and \(W1-W2\) colour for a TDE candidate belonging to the NOT CL sample. Note the different time-scales of the ZTF and _WISE_ data: the optical monitoring starts at the end of the MIR light curves.

\begin{table}
\begin{tabular}{c c c c c c}
ZTF ID & Optical flux variability & MIR flux trend & MIR colour & 0.2–2.3 keV flux & Most likely cause \\
\hline
ZTF18aaiescp & yes, decr. & decr. & 0.5–0.3 & / & Weak Type 1 \\
ZTF18aaivdzt & transient & decr. & 1–0.7 & / & Transient event in the optical light curve \\
ZTF18aajyoon & transient & – & \(<\) 0.3 & / & Transient event in the optical light curve \\
ZTF19aabyvtvz/ZTF18aazyapub & yes, decr. & decr. & decr. \(<\)0.4 & / & Weak Type 1 \\
ZTF18acgvmzh/ZTF18acftigf & no & decr. & decr. \(<\) 0.5 & \(<\) 9.9 & Bogus, bad image subtraction \\
ZTF18acbadyv & yes, decr. & decr. & decr. \(<\) 0.5 & \(8\pm 3\) & Weak Type 1 \\
ZTF18acbadyf2TF18adeppkj & no & var. & \(<\) 0.3 & \(8\pm 3\) & Bogus, bad image subtraction \\
ZTF19aixgoj & yes, decr. & decr. & decr. \(<\) 0.5 & \(<\)11 & Weak Type 1 \\
ZTF19aapehvs & yes, decr. & decr. & decr. \(<\) 0.5 & / & Weak Type 1 \\
ZTF19aavrige & yes & – & \(\sim\)0.45 & \(6\pm 3\) & Type 1 (BELs) \\
ZTF20aaetruz & yes, decr. & decr. & decr. \(<\) 0.5 & \(13\pm 5\) & Weak Type 1 \\
ZTF20aaetruzk & yes, decr. & decr. & decr. \(0.8-0.4\) & / & Weak Type 1 \\
ZTF20aaetreaa & no & var. & 0.2–0.6 & / & Misclassified, small number of alerts \\
ZTF20aaetzv & no & – & \(\sim\) 0.2 & \(<\) 9.5 & Misclassified, small number of alerts \\
ZTF20abnglyz & yes & var. & 0.4–0.6 & / & Type 1 (BELs) \\
ZTF18abtze * & yes & var. & 0.4–1 & / & Possible TDE \\
ZTF19aaxdini * & yes, decr. & decr. & decr. 0.6–0.2 & / & Weak Type 1 \\
\hline
\end{tabular}
\end{table}
Table 5: Characterization of the NOT CL sources analysed in this work. _decr._ denotes a decreasing trend in the optical forced-photometry or MIR light curve. _var._ denotes a variable trend. Dashes denote the light curve is fairly flat. Slashes denote no available eROSITA-DE data. Asterisks denote the two NOT CLs reported in LN22.

#### 6.2.1 BPT diagrams

While most of the reported CLs in the literature are Seyferts, some of them have been found to lie on the borderline between a LINER and Seyfert classification, with extreme order-of-magnitude changes in continuum and emission-line flux compared to less dramatic CLs occurring in Seyferts (Frederick et al., 2019). Here, we tested whether the CL transitions could be related to the emission-line properties of the sources by computing the line-ratio diagnostic diagrams involving the line ratios [O iii]/H \(\beta\), [N ii]/H \(\alpha\), [O i]/H \(\alpha\), and [S ii]/H \(\alpha\) (Baldwin et al., 1981; Kewley et al., 2006). By selection, all our CL candidates are consistent with a Seyfert classification, and the BPT criteria indicate that the line ratios from both their archival and new optical spectra distribute similarly, regardless of whether they experience a CL transition or not.
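For reference, the [S ii] diagnostic shown in Fig. 4 can be written down compactly. The demarcation curves below are the Kewley et al. (2001, 2006) lines for the [S ii] diagram; treating them as a classifier of individual points is a simplification that ignores the measurement errors on the line ratios.

```python
def sii_bpt_class(log_sii_ha, log_oiii_hb):
    """Classify one point on the [S II] BPT diagram (Kewley et al. 2001, 2006).
    Inputs are log10([S II]/H-alpha) and log10([O III]/H-beta)."""
    # Star-forming / AGN boundary (Kewley et al. 2001)
    kewley = 0.72 / (log_sii_ha - 0.32) + 1.30
    # Seyfert / LINER division (Kewley et al. 2006)
    sy_liner = 1.89 * log_sii_ha + 0.76
    if log_sii_ha < 0.32 and log_oiii_hb < kewley:
        return "star-forming"
    return "Seyfert" if log_oiii_hb > sy_liner else "LINER"

print(sii_bpt_class(-0.3, 0.8))   # -> "Seyfert"
```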
#### 6.2.2 Forced photometry variability

Due to a higher number of data points, analysing the complete light curves (instead of the alert curves) gives us a much more reliable determination of the variability of the sources. In general, we find a stronger variability typical of luminous Type 1 AGNs in the CL sample (during or post transition), which indicates that most of the candidates with non-variable BELs (i.e., the NOT CL sample) belong to a different population. Recently, ZTF light-curve variability analysis has been found to be a powerful tool to seek new CL candidates. In Lopez-Navas et al. (2023), by analysing ZTF forced photometry light curves of \(\sim\) 15000 Type 2 AGNs, we were able to distinguish between weak Type 1 and Type 2 sources and select CL candidates that have also been found through ALeRCE in the work presented here. Sanchez-Saez et al. (2021) used light curves from ZTF data release 5 (ZTF DR5) instead, and applied deep learning techniques to find anomalous behaviour in AGNs, leading to the identification of 75 promising CS candidates.

#### 6.2.3 MIR variability

The study of the photometric variability in the MIR band can help us understand the physical mechanisms that may be involved in CL events. This emission is believed to come from dust thermally heated by the AGN, which emits relatively unimpeded by dust extinction. Indeed, simple \(W1-W2\) (i.e., \([3.4]-[4.6]\,\mu\)m) colour cuts with _WISE_ data are able to reliably differentiate AGNs from stars and galaxies (e.g. \(W1-W2\geq 0.8\), Stern et al., 2012). Along these lines, an important feature in our analysis is a\({}_{12}\), which indicates the linear trend of the \(W1-W2\) colour. Our CL sample shows a general upward trend during the last \(\sim\)11 years, indicating a redder colour in the MIR band. This behaviour is consistent with a turning-on transition, where the sources change from galaxy-like (i.e., \(W1-W2<0.5\)) to AGN-like (i.e., \(W1-W2>0.8\)) MIR colour, as has been found in previous CL studies (Sheng et al., 2017; Yang et al., 2018; Sheng et al., 2020; Lyu et al., 2022). This phenomenon can be explained by an increase in the brightness of the AGN, which illuminates the torus and produces a stronger contribution of hot dust in the \(W1\) and \(W2\) bands, with a larger effect on the latter. This scenario is supported by the observed \(W1\) and \(W2\) fluxes in our CL sample, which also show an upward trend given by the slopes a\({}_{1}\) and a\({}_{2}\). To estimate the variability of the CLs in comparison with the NOT CL sample, we computed the parameters \(\langle\sigma_{m1}\rangle\) and \(\langle\sigma_{m2}\rangle\) (indicative of the intrinsic variability) and the maximum variations of \(W1\) and \(W2\) (\(\langle\Delta W1\rangle\) and \(\langle\Delta W2\rangle\)) as in Lyu et al. (2022). Those authors present a study of the variability of the \(W1\) and \(W2\) light curves of a population of CLs in comparison with low-luminosity AGN and QSO samples. They also use the AllWISE multi-epoch photometry and NEOWISE single exposure (L1b) source table data, though we average the data every six months. Our results, presented in Table 3, are consistent with their conclusions. On the one hand, we find that the CL population has a higher \(\langle\sigma_{m1}\rangle\) and \(\langle\sigma_{m2}\rangle\) variability than the NOT CL population.
On the other hand, the CL sample exhibits maximum variations of \(W1\) and \(W2\) (\(\langle\Delta W1\rangle\) and \(\langle\Delta W2\rangle\)) greater than 0.3 mag. In the scenario of variable obscuration, Yang et al. (2018) estimate that a variation in the \(W1\) band due to dust extinction would yield a factor of \(\sim\)21 change in the \(g\) band magnitude, assuming the extinction curve in the optical and MIR and considering micrometer-sized grains (Wang et al., 2015). A variation of 0.3 mag in the \(W1\) band would lead to a \(\sim\)6.3 mag difference in the \(g\) band, which is not consistent with the observed properties of CLs. Therefore, the variability and general increase in the MIR flux and the colour \(W1\)-\(W2\) of the CL sample, over the last 11 years, are most likely due to an intrinsic brightening of the AGN and cannot be solely explained by the motion of absorbing clouds within our line of sight.

\begin{table}
\begin{tabular}{c c c}
**Feature** & **Limiting value** & **\% cleaning** \\
\hline
positive\_fraction & 0 & 56 \\
SF\_ML\_amplitude & \(-\)0.2 & 56 \\
SPM\_tau\_rise & 44 & 50 \\
GP\_DRW\_sigma & 4\(\cdot\)10\({}^{-5}\) & 56 \\
GP\_DRW\_tau & 9 & 56 \\
IAR\_phi & 0.97 & 44 \\
MHPS\_low & 0.012 & 63 \\
delta\_mag\_fid & 0.17 & 56 \\
\(r\)–_W3_ & 6.7 & 6 \\
g\_r\_mean\_corr & 0.60 & 18 \\
g\_r\_max & \(-\)0.10 & 12 \\
n\_pos & 0 & 56 \\
iqr & 19.7 & 63 \\
\hline
\end{tabular}
\end{table}
Table 6: Variability and colour features in the \(g\) band that can be used to clean the CL candidate list. The limiting lower values correspond to the 16th percentile obtained for the CL sample. The third column indicates the percentage of NOT CL sources that can be discarded by limiting each feature to its limiting value.

Figure 11: Optical ZTF forced photometry light curves and evolution of the _WISE_ MIR fluxes and \(W1-W2\) colour for two NOT CL sources. Note the different time-scales of the ZTF and _WISE_ data: the optical monitoring starts at the end of the MIR light curves, where there is a dimming in the MIR emission. This evolution suggests the sources are now transitioning to a dimmer state.

#### 6.2.4 X-rays

The CL sample shows higher X-ray fluxes and higher X-ray to \(g\)-band flux ratios than the NOT CL sample in the recent eROSITA scans. We note that the AGN continuum in the optical spectra of these sources is sub-dominant or undetectable under the starlight contribution, so a significant fraction of the \(g\)-band flux can be associated with the host galaxy. For a given host-galaxy flux, the CL sources are X-ray brighter, as expected if their AGNs are more dominant than in the NOT CL sample. There is a relatively small number of CL sources with contemporaneous X-ray data in the literature, so it is still unclear whether these events are always accompanied by a clear change in the X-ray band. We note that we are talking about optical CLs, not X-ray CLs where the changes are due to obscuration. In the well-studied case of the CL Mrk 1018, a strong soft X-ray excess was found when the source was in the bright state, which dropped by much more than the hard X-rays when the source transitioned to the dimmest state (McElroy et al., 2016; Husemann et al., 2016). In this case, the changes in the BELs were associated with an increase/decrease of ionizing photons from the soft X-ray excess (Noda & Done, 2018). Additionally, Temple et al. (2023) found that five out of nine CL events with _Swift_-BAT data available could be associated with significant X-ray changes in the hard 14-195 keV band.
The hard band is less affected by obscuration and thus rules out a varying column density as the origin of the flux changes. In this work, we find further evidence that at least some of the CL transitions could be associated with significant X-ray flux changes. In particular, four out of the 11 CLs with eROSITA data have experienced an X-ray flux rise with respect to archival _ROSAT_ data taken 30 years earlier. These sources also show variable flux within the eROSITA 2019-2022 light curves, with the X-ray flux being four, eight and up to 15 times higher than the minimum value (see Fig. 8). However, due to the small number of data points and the intrinsic X-ray variability in unobscured AGNs, we are unable to discard the possibility that these changes are purely stochastic and not related to the CL event. The CL AGN that has been most extensively observed in X-ray monitoring is 1ES 1927+654. This source was found to show dramatic changes in its X-ray spectral shape, showing the disappearance and re-appearance of the X-ray corona after the CL event (Trakhtenbrot et al., 2019; Ricci et al., 2020, 2021). The authors suggest that this particular phenomenon was caused by a TDE in an AGN. This strengthens the importance of multi-wavelength campaigns studying CLs, which enable constraints on the evolution of the AGN components during the transitions.

### Promising CL candidates selected through multi-wavelength flux variability

The multi-wavelength photometric analysis performed in this work provides a powerful tool to identify turning-on CL events, allowing us to select the most promising CL events from the part of the candidate list we have not re-observed. In Table 7 we show the sources from our CL candidates that are currently varying in the optical according to the refined ZTF feature criteria (all the possible pairs of best combinations mentioned in Section 6.1), and that are experiencing a MIR flux and colour \(W1\)-\(W2\) increase over time. For the sources with eROSITA-DE data, their X-ray fluxes were below the archival limits, so we did not use X-ray data as a selection criterion. In Fig. A2 we present the optical and MIR light curves for these sources. Some of them were not selected to be re-observed because they already showed obvious BELs in their archival optical spectra, such as ZTF18aadaie and ZTF21abmjobi. However, according to their increase in the MIR fluxes and colour, it is likely that these sources now present stronger BELs and are therefore worth re-observing. We strongly encourage follow-up spectroscopy to confirm the CL behaviour in the promising candidates presented here, which will improve the statistics and refine further the selection criteria of CL candidates. We note that the cleaning criteria in the optical and the evolution of the MIR flux and colour used to select the most promising candidates are fairly independent of redshift over the range considered in this work, from \(z=0\) to \(z=0.4\).

### Key questions about the CL phenomena

#### 6.4.1 Frequency and time-scales

CL events have been found to occur on time-scales as short as one or two months (Trakhtenbrot et al., 2019; Zeltyn et al., 2022).
However, most research to date uses archival spectral and/or photometric data to look for CL transitions, making it difficult to obtain a proper estimate of the typical CL time-scales and thus understand the drivers of such extreme variations (Stern et al., 2018). The time difference between the first and last spectral epochs for our CL sample lies between 10 and 20 years (in the observed frame), which must be taken as an upper limit on the time-scale for these transitions. Given their low incidence rate (discussed below), it is difficult to find a large number of CL events as they take place. However, much larger and/or intensively sampled spectroscopic campaigns, such as SDSS-V (Kollmeier et al., 2017) and 4MOST (de Jong et al., 2019), will provide better constraints on the typical CL time-scales, potentially finding shorter (weeks and even days) CL events. Another key question we can address with our results is the occurrence rate of turning-on transitions, following the discussion in LN22. There are 30,333 AGNs classified as Type 2 (N and K types in MILLIQUAS, cleaned for weak Type 1s and LINERs) that can be detected by ZTF (Dec \(>-28^{\circ}\) and \(r<21\) mag). Of these, 178 have alerts and are classified as AGN or QSO by the LCC. The 50 per cent confirmation rate found in this work implies that 0.3 per cent of transitions can be detected by this method over an average timespan of 15 years. Taking into account that just 10 per cent of Type 1 AGNs are variable enough to generate alerts in ZTF, we estimate a lower limit of 3 per cent of turning-on events every 15 years (i.e., 0.2 per cent per year). These values are consistent with the results from Hon et al. (2022), who find a minimum turn-on CL AGN rate of 3 per cent every 15 years, and from Temple et al. (2023), who report a CL rate of 0.7-6.2 per cent on 10-25 year time-scales (including both turning-on and turning-off events). In contrast to the present work, LN22 selected the most promising candidates via visual inspection of the light curves and archival spectra, reducing the number of candidates and therefore the ratio of CL candidates to the number of objects in the parent sample. Therefore, even though the confirmation rate was higher than in this manuscript, they obtained a lower estimate of the total rate of change (0.12 per cent per year). In the present work we did not further restrict the candidate list to the most promising candidates, so the fraction of the parent sample that was considered as a candidate is larger. Even if our success rate is slightly smaller, the product of both factors leads to a somewhat larger total rate of change. We note that in both cases we are estimating lower limits to the rate of change.

#### 6.4.2 Physical origin

In principle, several mechanisms could lead to variations in the BELs, including variable obscuration and changes in the accretion rate. Here, we find that the MIR emission, which is dominated by the dust response (i.e., from the torus) to the UV-optical variations of the central engine, tends to increase in the CL sources. Since the MIR emitting region is too large to be obscured/unobscured on time-scales of <20 years, and the emission is much less affected by dust obscuration, we infer that the changes in the MIR waveband, and by extension the CL transitions, are due to changes in the accretion rate.
To explain the changes in the accretion flow at these time-scales, the proposed scenarios include instabilities in the accretion disc and major disc perturbations such as those caused by TDEs. In general, the light curves of our CL sources do not follow the power-law decay \(t^{-5/3}\) from the peak brightness as traditionally expected for TDEs (Rees, 1990). On the other hand, the values of \(\lambda_{\mathrm{Edd}}\) obtained for the CL sample in their bright state are similar to the results for CL AGNs and QSOs reported in recent works (\(-2\lesssim\log\lambda_{\mathrm{Edd}}\lesssim-1\), MacLeod et al., 2019; Frederick et al., 2019; Graham et al., 2020; Temple et al., 2023), which is consistent with the changes in accretion rate occurring preferentially in lower-activity systems. Interestingly, our sources have an Eddington ratio (post-CL) in the range 1-5 per cent \(L_{\mathrm{Edd}}\), which is in agreement with the expectations from a hydrogen ionization disc instability (Noda & Done, 2018; Ruan et al., 2019; Sniegowska et al., 2023). In this scenario, by analogy to the spectral transitions in black hole X-ray binaries, the changes in the structure of the inner accretion disc occur around a critical value of \(\lambda_{\mathrm{Edd}}\sim 0.02\), which is in agreement with the most recent CL studies (Graham et al., 2020; Guolo et al., 2021; Temple et al., 2023). Furthermore, our CL sources show a redder-when-brighter tendency in the MIR, with AGN-like MIR colours (i.e. \(W1\)-\(W2>0.5\)) when they enter the bright state, which has also been found in other CL sources (Yang et al., 2018; Lyu et al., 2022) and supports the accretion state transition. In this scenario, either the BELs appear due to the increase of ionizing photons that excite the BLR (LaMassa et al., 2015), or the BLR itself re-appears according to the expectations in disc-wind BLR models (Nicastro, 2000; Elitzur & Ho, 2009). These models predict an evolutionary sequence for the BLR depending on the accretion rate, leading to different intermediate-type spectral transitions and the BLR disappearance at very low luminosities (\(L_{\rm bol}\lesssim 5\cdot 10^{39}M_{7}^{2/3}\) erg s\({}^{-1}\), where \(M_{7}=M/10^{7}\,\rm M_{\odot}\), Elitzur & Ho, 2009; Elitzur et al., 2014). According to the Eddington ratio estimates, none of our sources fell close to these limits, which suggests the BLR already existed in these sources but was too weakly ionised to produce detectable broad lines. On the other hand, the residual spectra in Fig. A1 show that the continuum of the CLs currently looks either flat or blue. This 'bluer when brighter' effect, although clearly real in quasars, could at least partially be due to differences in the relative contribution of the host galaxy as these lower-luminosity AGNs change their intrinsic flux.

## 7 Summary and conclusions

This paper is the continuation of work introduced in LN22, where we present a method to search for turning-on CL candidates. The selection method consists of searching for current Type-1 AGN variability in a sample of spectrally classified Type 2 AGNs, using classifications given by a random-forest based light curve classifier, the ALeRCE LCC (Sanchez-Saez et al., 2021). In order to refine the selection method, we obtained second epoch spectra for 36 of our 61 CL candidates, six of which were reported in LN22, which allows the confirmation of the CL objects by quantifying the change of the BELs in comparison with the archival SDSS spectra taken 15-20 years earlier.
As a result, we find 19 (\(\sim\) 50 per cent) turning-on CL confirmations (the CL sample), and 17 sources without significant changes in their BELs (the NOT CL sample). We have analysed the multi-wavelength properties of the CL sample in comparison with the NOT CL sample to investigate what would increase the likelihood of finding a turning-on transition and understand its origin. Firstly, we performed a variability analysis of the alert light curves from ZTF, finding several variability features that are distinct between the samples and that can be applied to select the most promising CL sources. We also find that the turning-on transitions are characterised by an increase in the _WISE_-MIR brightness and the MIR (\(W1\)-\(W2\)) colour, where the stronger H \(\alpha\) emission corresponds to an AGN-dominated MIR colour (\(W1\)-\(W2>0.5\) mag). The current Eddington ratio estimations for the CLs are lower than for the overall Type 1 population, falling between one and five per cent \(L_{\mathrm{Edd}}\). In the X-ray band, we find that the CLs tend to be X-ray brighter than the candidates that have not transitioned, and for four CLs we observe a significant flux increase during the 2019-2022 X-ray monitoring. These results are in agreement with previous CL/CS works, and with the expectations from an accretion-state transition as the origin of these phenomena. We also analyse the nature of the NOT CL sources according to their optical and MIR variability. For seven out of the 17 objects, the Type 2 sources were misclassified by the LCC due to a bad subtraction of the images, a small number of data points and/or transient events in the light curves such as SNe and one case of a likely TDE. Interestingly, we also find that ten sources are consistent with a Type 1 classification, where the optical and MIR emission is dominated by the host galaxy. This translates into lower amplitude variations in the optical and MIR wavebands, weaker BELs and galaxy-dominated MIR colours (i.e., \(W1\)-\(W2\) \(<\) 0.5).

\begin{table}
\begin{tabular}{c c c c c c c c}
Name & RA & Dec & z & MJD & fiberid & plate & Comments \\
\hline
ZTF18aalmtt & 136.365522 & 17.928647 & 0.40 & 562646 & 330 & 5769 & Increasing MIR fluxes and \(W1\)-\(W2\). \\
ZTF18aalmtt/ZTF21acdky & 191.572168 & 28.342741 & 0.10 & 54205 & 442 & 2238 & Increasing MIR fluxes and \(W1\)-\(W2\). \\
ZTF18aadopg & 221.957947 & 28.566697 & 0.16 & 53764 & 134 & 2141 & Increasing MIR fluxes and \(W1\)-\(W2\). \\
ZTF18aacryg/ZTF18aabdrk & 43.709715 & -2.797898 & 0.12 & 56978 & 181 & 7823 & Increasing MIR fluxes and \(W1\)-\(W2\). \\
ZTF19aLfrbr & 143.8233831 & 34.3260975 & 0.45 & 56336 & 727 & 8805 & Increasing MIR fluxes and \(W1\)-\(W2\). \\
ZTF20aepf-ggg & 339.999584 & 0.360655 & 0.38 & 52201 & 602 & 674 & Increasing MIR and optical fluxes and \(W1\)-\(W2\). \\
ZTF21aplprnu & 230.292534 & 20.544009 & 0.14 & 54328 & 421 & 2159 & Increasing MIR and optical fluxes. Decreasing \(W1\)-\(W2\), but still \(W1\)-\(W2>0.7\). \\
ZTF21abdrcqr & 335.843667 & 0.7717973 & 0.22 & 52140 & 454 & 375 & Increasing MIR fluxes and \(W1\)-\(W2\). \\
ZTF21aplwei & 358.3635382 & 0.123463 & 0.17 & 52523 & 520 & 684 & Increasing MIR and optical fluxes and \(W1\)-\(W2\). \\
\hline
\end{tabular}
\end{table}
Table 7: Potential CL sources from our list of CL candidates, identified through their current optical variability and the criteria mentioned in the comments.
Incidentally, we also find that seven of the NOT CL sources are currently decreasing their optical and MIR fluxes, suggesting they are transitioning to a dimmer state. The multi-wavelength differences between the CL and NOT CL sources allow us to select the most promising CL candidates from our list without spectroscopic follow-up, leading to nine sources that are worth re-observing. The use of machine learning algorithms on complete optical light curves from the ZTF or the upcoming LSST can be combined with MIR data to unequivocally identify CLs, improve the statistics and ultimately understand the underlying physics of these phenomena.

## Acknowledgements

ELN and SB acknowledge support from Agencia Nacional de Investigacion y Desarrollo (ANID) / Programa de Becas / Doctorado Nacional 21200718 and 21212344. ELN acknowledges the California Institute of Technology and the European Southern Observatory for their hospitality. PA, ELN, MLMA and PL acknowledge financial support from Millennium Nucleus NCN19_058 (TITANs). PA acknowledges financial support from the Max Planck Society through a Partner Group between MPA and the University of Valparaiso. LHG acknowledges funds by ANID - Millennium Science Initiative Program - ICN12_009 awarded to the Millennium Institute of Astrophysics (MAS). PL acknowledges partial support from FONDECYT through grant N\({}^{\circ}\) 1201748. PSS acknowledges funds by ANID grant FONDECYT Postdoctorado N\({}^{\circ}\) 3200250. MJG acknowledges partial support from the NSF grant AST-2108402. DT acknowledges support by DLR grant FKZ 50 OR 2203. MK acknowledges support from DFG grant number KR3338/4-1. Based on observations collected at the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. The ZTF forced-photometry service was funded under the Heising-Simons Foundation grant 12540303 (PI: Graham). This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This publication also makes use of data products from NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology, funded by the Planetary Science Division of the National Aeronautics and Space Administration. This work is based on data from eROSITA, the soft X-ray instrument aboard SRG, a joint Russian-German science mission supported by the Russian Space Agency (Roskosmos), in the interests of the Russian Academy of Sciences represented by its Space Research Institute (IKI), and the Deutsches Zentrum fur Luft- und Raumfahrt (DLR). The SRG spacecraft was built by Lavochkin Association (NPOL) and its subcontractors, and is operated by NPOL with support from the Max Planck Institute for Extraterrestrial Physics (MPE). The development and construction of the eROSITA X-ray instrument was led by MPE, with contributions from the Dr. Karl Remeis Observatory Bamberg & ECAP (FAU Erlangen-Nuernberg), the University of Hamburg Observatory, the Leibniz Institute for Astrophysics Potsdam (AIP), and the Institute for Astronomy and Astrophysics of the University of Tubingen, with the support of DLR and the Max Planck Society.
The Argelander Institute for Astronomy of the University of Bonn and the Ludwig Maximilians Universitat Munich also participated in the science preparation for eROSITA. The eROSITA data shown here were processed using the eSASS/NRTA software system developed by the German eROSITA consortium.

## Data Availability

The SDSS data underlying this article were accessed from SDSS DR17 ([http://skyserver.sdss.org/dr17](http://skyserver.sdss.org/dr17)). The second epoch spectra data will be shared on reasonable request to the corresponding author. The MIR data are publicly available at [https://irsa.ipac.caltech.edu/Missions/wise.html](https://irsa.ipac.caltech.edu/Missions/wise.html). The alert ZTF light curves, together with the LCC classifications, can be downloaded at [https://alerce.online](https://alerce.online). The forced-photometry light curves can be requested via the ZTF forced-photometry service. The eROSITA data underlying this article were provided by the eROSITA-DE collaboration by permission, and will be shared on request to the corresponding author with permission of the eROSITA-DE collaboration. eRASS-1 will be public from Fall 2023.

## References

* Antonucci (1993) Antonucci R., 1993, ARA&A, 31, 473
* Assef et al. (2018) Assef R. J., Stern D., Noirot G., Jun H. D., Cutri R. M., Eisenhardt P. R. M., 2018, ApJS, 234, 23
* Baldwin et al. (1981) Baldwin J. A., Phillips M. M., Terlevich R., 1981, PASP, 93, 5
* Bellm (2014) Bellm E., 2014, in Wozniak P. R., Graham M. J., Mahabal A. A., Seaman R., eds, The Third Hot-wiring the Transient Universe Workshop. pp 27-33 (arXiv:1410.8185)
* Bellm et al. (2019) Bellm E. C., et al., 2019, PASP, 131, 018002
* Boller et al. (2016) Boller T., Freyberg M. J., Trumper J., Haberl F., Voges W., Nandra K., 2016, A&A, 588, A103
* Brunner et al. (2022) Brunner H., et al., 2022, A&A, 661, A1
* Cappellari (2017) Cappellari M., 2017, MNRAS, 466, 798
* Cohen et al. (1986) Cohen R. D., Rudy R. J., Puetter R. C., Ake T. B., Foltz C. B., 1986, ApJ, 311, 135
* Elitzur & Ho (2009) Elitzur M., Ho L. C., 2009, ApJ, 701, L91
* Elitzur et al. (2014) Elitzur M., Ho L. C., Trump J. R., 2014, MNRAS, 438, 3340
* Flesch (2021) Flesch E. W., 2021, arXiv e-prints, p. arXiv:2105.12985
* Forster et al. (2021) Forster F., et al., 2021, AJ, 161, 242
* Frederick et al. (2019) Frederick S., et al., 2019, ApJ, 883, 31
* Graham et al. (2020) Graham M. J., et al., 2020, MNRAS, 491, 4925
* Green et al. (2022) Green P. J., et al., 2022, ApJ, 933, 180
* Guo et al. (2019) Guo H., Sun M., Liu X., Wang T., Kong M., Wang S., Sheng Z., He Z., 2019, ApJ, 883, L94
* Guolo et al. (2021) Guolo M., Ruschel-Dutra D., Grupe D., Peterson B. M., Storchi-Bergmann T., Schimoia J., Nemmen R., Robinson A., 2021, MNRAS, 508, 144
* Hernandez-Garcia et al. (2017) Hernandez-Garcia L., Masegosa J., Gonzalez-Martin O., Marquez I., Guainazzi M., Panessa F., 2017, A&A, 602, A65
* Hon et al. (2022) Hon W. J., Wolf C., Onken C. A., Webster R., Auchettl K., 2022, MNRAS, 511, 54
* Husemann et al. (2016) Husemann B., et al., 2016, A&A, 593, L9
* Ivezic et al. (2019) Ivezic Z., et al., 2019, ApJ, 873, 111
* Kewley et al. (2001) Kewley L. J., Dopita M. A., Sutherland R. S., Heisler C. A., Trevena J., 2001, ApJ, 556, 121
* Kewley et al. (2006) Kewley L. J., Groves B., Kauffmann G., Heckman T., 2006, MNRAS, 372, 961
* Kollmeier et al. (2017) Kollmeier J. A., et al., 2017, arXiv e-prints, p. arXiv:1711.03234
* Kozlowski (2017) Kozlowski S., 2017, A&A, 597, A128
* Kraft et al. (1991) Kraft R. P., Burrows D. N., Nousek J. A., 1991, ApJ, 374, 344
* LaMassa et al. (2015) LaMassa S. M., et al., 2015, ApJ, 800, 144
* Laha et al. (2022) Laha S., et al., 2022, ApJ, 931, 5
* Liu et al. (2019) Liu H.-Y., Liu W.-J., Dong X.-B., Zhou H., Wang T., Lu H., Yuan W., 2019, ApJS, 243, 21
* Lopez-Navas et al. (2022) Lopez-Navas E., et al., 2022, MNRAS, 513, L57
* Lopez-Navas et al. (2023) Lopez-Navas E., Arevalo P., Bernal S., Graham M. J., Hernandez-Garcia L., Lira P., Sanchez-Saez P., 2023, MNRAS, 518, 1531
* Lyu et al. (2022) Lyu B., Wu Q., Yan Z., Yu W., Liu H., 2022, ApJ, 927, 227
* MacLeod et al. (2016) MacLeod C. L., et al., 2016, MNRAS, 457, 389
* MacLeod et al. (2019) MacLeod C. L., et al., 2019, ApJ, 874, 8
* Masci et al. (2019) Masci F. J., et al., 2019, PASP, 131, 018003
* Massaro et al. (2015) Massaro E., Maselli A., Leto C., Marchegiani P., Perri M., Giommi P., Piranomonte S., 2015, Ap&SS, 357, 75
* Matt et al. (2003) Matt G., Guainazzi M., Maiolino R., 2003, MNRAS, 342, 422
* McElroy et al. (2016) McElroy R. E., et al., 2016, A&A, 593, L8
* McKernan et al. (2022) McKernan B., Ford K. E. S., Cantiello M., Graham M., Jermyn A. S., Leigh N. W. C., Ryu T., Stern D., 2022, MNRAS, 514, 4102
* Netzer (2015) Netzer H., 2015, ARA&A, 53, 365
* Netzer (2019) Netzer H., 2019, MNRAS, 488, 5185
* Nicastro (2000) Nicastro F., 2000, ApJ, 530, L65
* Noda & Done (2018) Noda H., Done C., 2018, MNRAS, 480, 3898
* Oh et al. (2015) Oh K., Yi S. K., Schawinski K., Koss M., Trakhtenbrot B., Soto K., 2015, ApJS, 219, 1
* Oknyansky et al. (2021) Oknyansky V. L., et al., 2021, MNRAS, 505, 1029
* Predehl et al. (2021) Predehl P., et al., 2021, A&A, 647, A1
* Rees (1990) Rees M. J., 1990, Science, 247, 817
* Reines et al. (2013) Reines A. E., Greene J. E., Geha M., 2013, ApJ, 775, 116
* Ricci & Trakhtenbrot (2022) Ricci C., Trakhtenbrot B., 2022, arXiv e-prints, p. arXiv:2211.05132
* Ricci et al. (2020) Ricci C., et al., 2020, ApJ, 898, L1
* Ricci et al. (2021) Ricci C., et al., 2021, ApJS, 255, 7
* Ross et al. (2018) Ross N. P., et al., 2018, MNRAS, 480, 4468
* Ruan et al. (2019) Ruan J. J., Anderson S. F., Eracleous M., Green P. J., Haggard D., MacLeod C. L., Runnoe J. C., Sobolewska M. A., 2019, ApJ, 883, 76
* Salvato et al. (2022) Salvato M., et al., 2022, A&A, 661, A3
* Sanchez-Saez et al. (2021a) Sanchez-Saez P., et al., 2021a, AJ, 161, 141
* Sanchez-Saez et al. (2021b) Sanchez-Saez P., et al., 2021b, AJ, 162, 206
* Sanchez-Saez et al. (2023) Sanchez-Saez P., et al., 2023, arXiv e-prints, p. arXiv:2304.08519
* Sanchez et al. (2017) Sanchez P., et al., 2017, ApJ, 849, 110
* Saxton et al. (2008) Saxton R. D., Read A. M., Esquej P., Freyberg M. J., Altieri B., Bermejo D., 2008, A&A, 480, 611
* Sheng et al. (2017) Sheng Z., Wang T., Jiang N., Yang C., Yan L., Dou L., Peng B., 2017, ApJ, 846, L7
* Sheng et al. (2020) Sheng Z., et al., 2020, ApJ, 889, 46
* Sniegowska et al. (2023) Sniegowska M., Grzedzielski M., Czerny B., Janiuk A., 2023, A&A, 672, A19
* Stern et al. (2012) Stern D., et al., 2012, ApJ, 753, 30
* Stern et al. (2018) Stern D., et al., 2018, ApJ, 864, 27
* Temple et al. (2023) Temple M. J., et al., 2023, MNRAS, 518, 2938
* Trakhtenbrot et al. (2019) Trakhtenbrot B., et al., 2019, ApJ, 883, 94
* Vazdekis et al. (2010) Vazdekis A., Sanchez-Blazquez P., Falcon-Barroso J., Cenarro A. J., Beasley M. A., Cardiel N., Gorgas J., Peletier R. F., 2010, MNRAS, 404, 1639
* Wang et al. (2015) Wang S., Li A., Jiang B. W., 2015, MNRAS, 454, 569
* Yang et al. (2018) Yang Q., et al., 2018, ApJ, 862, 109
* Zeltyn et al. (2022) Zeltyn G., et al., 2022, ApJ, 939, L16
* de Jong et al. (2019) de Jong R. S., et al., 2019, The Messenger, 175, 3

## Appendix A Light curves and spectra from the CL sample

Table A1 indicates the dates and instruments used for the second epoch spectra of all the CL candidates observed in this work. Fig. A1 shows the ZTF optical and _WISE_-MIR light curves (left) and optical spectra (right) for the 15 CL sources identified in this paper. The optical spectra are scaled to the flux of [S ii] in the earliest spectra and smoothed with a 10 Å box filter. The lower plots on the right-hand side show the difference between the new and the old spectra. In some cases, the new spectra were taken with separate blue and red arms, leading to very noisy regions that have been removed. The other four CL sources considered in this work are reported in Lopez-Navas et al. (2022). All the SDSS spectra analysed in this work were taken before the beginning of the _WISE_-MIR light curves, except for the case of ZTF02a9ayaugaug (MJD\({}_{\rm SDSS}\)=55673). The second epoch spectra were taken between MJD 59600 and 59700, at the end of the optical and MIR light curves. Fig. A2 shows the ZTF optical and _WISE_-MIR light curves for the most promising CL candidates selected according to their optical and MIR photometric variability.
2310.11721
Chain-of-Thought Tuning: Masked Language Models can also Think Step By Step in Natural Language Understanding
Chain-of-Thought (CoT) is a technique that guides Large Language Models (LLMs) to decompose complex tasks into multi-step reasoning through intermediate steps in natural language form. Briefly, CoT enables LLMs to think step by step. However, although many Natural Language Understanding (NLU) tasks also require thinking step by step, LLMs perform less well than small-scale Masked Language Models (MLMs). To migrate CoT from LLMs to MLMs, we propose Chain-of-Thought Tuning (CoTT), a two-step reasoning framework based on prompt tuning, to implement step-by-step thinking for MLMs on NLU tasks. From the perspective of CoT, CoTT's two-step framework enables MLMs to implement task decomposition; CoTT's prompt tuning allows intermediate steps to be used in natural language form. Thereby, the success of CoT can be extended to NLU tasks through MLMs. To verify the effectiveness of CoTT, we conduct experiments on two NLU tasks: hierarchical classification and relation extraction, and the results show that CoTT outperforms baselines and achieves state-of-the-art performance.
Caoyun Fan, Jidong Tian, Yitian Li, Wenqing Chen, Hao He, Yaohui Jin
2023-10-18T05:39:20Z
http://arxiv.org/abs/2310.11721v1
Chain-of-Thought Tuning: Masked Language Models can also Think Step By Step in Natural Language Understanding ###### Abstract Chain-of-Thought (CoT) is a technique that guides Large Language Models (LLMs) to decompose complex tasks into multi-step reasoning through intermediate steps in natural language form. Briefly, CoT enables LLMs to think step by step. However, although many Natural Language Understanding (NLU) tasks also require thinking step by step, LLMs perform less well than small-scale Masked Language Models (MLMs). To migrate CoT from LLMs to MLMs, we propose Chain-of-Thought Tuning (CoTT), a two-step reasoning framework based on prompt tuning, to implement step-by-step thinking for MLMs on NLU tasks. From the perspective of CoT, CoTT's two-step framework enables MLMs to implement task decomposition; CoTT's prompt tuning allows intermediate steps to be used in natural language form. Thereby, the success of CoT can be extended to NLU tasks through MLMs. To verify the effectiveness of CoTT, we conduct experiments on two NLU tasks: hierarchical classification and relation extraction, and the results show that CoTT outperforms baselines and achieves state-of-the-art performance.

## 1 Introduction

Chain-of-Thought (CoT) (Wei et al., 2022; Fu et al., 2023; Zhang et al., 2022) is a technique to help language models think step by step (Kojima et al., 2022). Through intermediate steps in natural language form, CoT can guide language models to decompose complex tasks into multi-step reasoning processes. Currently, CoT is mainly employed in Large Language Models (LLMs) (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023; Zeng et al., 2023; OpenAI, 2023), as LLMs demonstrate impressive complex reasoning capabilities (Wei et al., 2022; Zhao et al., 2023). However, although LLMs achieved state-of-the-art performance on a wide range of NLP tasks (Zhao et al., 2023), Yang et al. (2023); Kocon et al. (2023); Lai et al. (2023) found that LLMs were less competitive than small-scale Masked Language Models (MLMs) (Devlin et al., 2019; Liu et al., 2019; Lan et al., 2020) in many traditional Natural Language Understanding (NLU) tasks (e.g. tasks in GLUE (Wang et al., 2019) and SuperGLUE (Wang et al., 2019)). The reasons for this are manifold: on the one hand, LLMs are typically autoregressive models and are not well-suited for NLU tasks (Liu et al., 2021; Raffel et al., 2020); on the other hand, NLU tasks usually involve rich domain knowledge (Fan et al., 2023; Zhang et al., 2023) and require fine-tuning to master such knowledge (Fan et al., 2023, 2023). Therefore, we wonder whether the success of CoT in LLMs can be transferred to MLMs in NLU tasks. In fact, many NLU tasks also require thinking step by step. For example, in hierarchical classification (Silla Jr. and Freitas, 2011; Kowsari et al., 2017), each text should follow a pre-defined taxonomic hierarchy to generate multiple labels in turn, so that high-level labels can be considered as the intermediate step, as shown in Fig. 1(a); in relation extraction (Zhang et al., 2017; Alt et al., 2020; Stoica et al., 2021), the entity types of the subject and the object in each instance need to be determined in advance of annotating the relation (Zhou and Chen, 2021), so the entity types can be considered as the intermediate step, as shown in Fig. 1(b).

Figure 1: In NLU tasks, various forms of intermediate steps (drawn in gray) would exist as reasoning evidence between **Text** and **Label**.

Although previous studies Chen et al.
(2020); Han et al. (2021); Zhou and Chen (2021) have also attempted to incorporate intermediate steps into language models; from the perspective of CoT, these methods lack both a decomposition process for tasks (not multi-step reasoning) and explicit use of intermediate steps (not in natural language form). In this study, we propose Chain-of-Thought Tuning (CoTT), a two-step reasoning framework based on prompt tuning Sun et al. (2022); Liu et al. (2022), to implement step-by-step thinking for MLMs on NLU tasks. The two steps of CoTT are: **Step I**: _MLM generates the intermediate step \(\hat{I}\) based on the text \(x\)._ **Step II**: _MLM predicts the final result \(y\) based on the text \(x\) and the intermediate step \(\hat{I}\)._ CoTT effectively addresses the shortcomings of previous methods: on the one hand, CoTT's two-step framework enables MLM to implement task decomposition; on the other hand, CoTT is based on prompt tuning, which allows intermediate steps to be used in natural language form Schick and Schutze (2021); Gu et al. (2022). In order to inject/generate intermediate steps flexibly in both steps, we propose the _convertible slot_[C], a new type of template slot Petroni et al. (2019); Liu et al. (2022) in prompt tuning. To evaluate the effectiveness of CoTT, we conduct extensive experiments on two traditional NLU tasks: hierarchical classification and relation extraction, and the experimental results reveal that CoTT obtains superior performance compared with baselines and achieves state-of-the-art performance. Furthermore, due to the introduction of intermediate steps, CoTT is no longer an end-to-end method but can display the reasoning process. Therefore, we can further improve the ability of CoTT by monitoring the reasoning process. We summarize our contributions as follows: * Based on the philosophy of CoT, we propose a two-step reasoning framework to enable MLMs to think step by step in NLU tasks, and we call it Chain-of-Thought Tuning (CoTT). * We propose the _convertible slot_[C], a new type of slot in prompt tuning, which can flexibly inject or generate intermediate steps depending on scenarios. * We evaluate our CoTT on two NLU tasks: hierarchical classification and relation extraction, and the experimental results demonstrate the effectiveness of CoTT. ## 2 Related Work ### Chain-of-Thought Chain-of-Thought (CoT) Wei et al. (2022) is a prompt technique that elicits LLMs to produce intermediate reasoning steps leading to the final answer. Recent studies Zhou et al. (2023); Fu et al. (2023); Zhang et al. (2022) have confirmed that CoT can substantially improve the reasoning ability of LLMs. Studies Zhang et al. (2022) have shown that LLMs can perform CoT reasoning in zero-shot scenarios Kojima et al. (2022) or manually written few-shot scenarios Wei et al. (2022). In zero-shot scenarios, Kojima et al. (2022) showed that LLMs can generate decent intermediate steps by adding certain magic phrases like "Let's think step by step"; Zelikman et al. (2022) employed LLMs to generate many intermediate steps and chose those intermediate steps that can lead to the final answer. In few-shot scenarios, LLMs' reasoning ability can be improved by a few effective demonstrations on multi-step reasoning tasks. Some studies discussed how to select demonstrations efficiently: Fu et al. (2023) considered prompts with higher reasoning complexity (more intermediate steps) to be efficient demonstrations, while Rubin et al.
(2022) automatically constructed demonstrations based on the semantic similarity of texts. However, it is still unknown whether CoT can be applied to small-scale language models. ### Prompt Tuning Since the advent of GPT-3 Brown et al. (2020), prompt tuning Sun et al. (2022); Liu et al. (2022) has received considerable attention. Prompt tuning Schick et al. (2020); Gu et al. (2022) aims to transform the downstream tasks into the pre-training tasks of Pre-trained Language Models (PLMs) with appropriate manual prompts, which can bridge their gap and better utilize PLMs Han et al. (2021); Chen et al. (2022). Although the origin of prompt tuning is large-scale autoregressive language models, the following studies Schick and Schutze (2021); Liu et al. (2019) found that small-scale Masked Language Models (MLMs) Devlin et al. (2019); Liu et al. (2019); Lan et al. (2020) can also achieve competitive performance using prompt tuning. In practice, MLMs can implement prompt tuning in the form of cloze-style tasks Devlin et al. (2019); Liu et al. (2019). With MLMs, prompt tuning has been applied to a large variety of tasks such as factual probing Perez et al. (2021), text classification Gao et al. (2021); Hambardzumyan et al. (2021), relation extraction Chen et al. (2022), commonsense reasoning Ettinger (2020) and question answering Khashabi et al. (2020); Jiang et al. (2021), etc. ## 3 Preliminaries of Prompt Tuning Formally, a text classification dataset can be denoted as \(\mathcal{D}=\{\mathcal{X},\mathcal{Y}\}\), where \(\mathcal{X}\) is the text set and \(\mathcal{Y}\) is the class set. For each instance \(x\in\mathcal{X}\), it is made up of several words \(x=\big\{w_{1},w_{2},\ldots,w_{|x|}\big\}\), and is annotated with a label \(y\in\mathcal{Y}\). To bridge the gap between pre-training tasks and downstream tasks Schick et al. (2020); Gu et al. (2022), prompt tuning is proposed as a cloze-style task to tune MLMs. Prompt tuning consists of a template \(T\), a verbalizer \(\phi_{\mathcal{Y}}(\cdot)\) and an MLM \(\mathcal{M}\). The template Petroni et al. (2019); Liu et al. (2022) is a textual string with two slots: a _text slot_[T] for text \(x\) and an _answer slot_[A] (a <MASK> token) for the cloze-style prediction. The verbalizer is an injective mapping function \(\phi_{\mathcal{Y}}:\mathcal{Y}\rightarrow\mathcal{V}_{\mathcal{Y}}\) that bridges the class set \(\mathcal{Y}\) and the label word set \(\mathcal{V}_{\mathcal{Y}}\). Specifically, when the text \(x\) is injected into the _text slot_, we get the prompt \(T_{x}\). Then, we can formalize the label probability by feeding the prompt \(T_{x}\) into MLM \(\mathcal{M}\) as: \[\begin{split} p(y|x)&=p_{\mathcal{M}}(\texttt{<MASK> }=\phi_{\mathcal{Y}}(y)|T_{x})\\ &=\frac{\exp(e_{\phi_{\mathcal{Y}}(y)}\cdot h_{\texttt{<MASK>}}) }{\sum_{v\in\mathcal{V}_{\mathcal{Y}}}\exp(e_{v}\cdot h_{\texttt{<MASK>}})},\end{split} \tag{1}\] where \(h_{\texttt{<MASK>}}\) is the hidden vector of the _answer slot_, and \(e_{v}\) is the embedding of each label word \(v\in\mathcal{V}_{\mathcal{Y}}\). Hereafter, we abbreviate <MASK> as <MP>. In the training process, we can maximize the learning objective \(\sum_{x\in\mathcal{X}}\log p(\texttt{<MP>}=\phi_{\mathcal{Y}}(y)|T_{x})\) to tune MLM \(\mathcal{M}\).
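To make the cloze-style formulation of Eq. 1 concrete, the following is a minimal sketch of prompt-tuning inference with a Hugging Face MLM; the template string and the label words are illustrative stand-ins, not the paper's actual templates or verbalizers (those are described in Section 5).

```python
# A sketch of Eq. (1): cloze-style label prediction with an MLM.
# The template and label words below are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def label_probability(text, label_words):
    # Prompt T_x: text slot [T] filled with x, answer slot [A] left as <MASK>.
    prompt = f"{text} The topic is {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]   # scores over the vocabulary
    label_ids = [tokenizer.convert_tokens_to_ids(w) for w in label_words]
    # Softmax restricted to the label word set V_Y, as in Eq. (1).
    return torch.softmax(logits[label_ids], dim=-1)

probs = label_probability("The mitochondrion produces ATP for the cell.",
                          ["biology", "physics", "economics"])
```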
MLM first generates the intermediate step in step I (Section 4.2), and then uses the generated intermediate step to predict the final result in step II (Section 4.3). We design _convertible slot_[C], a new type of slot in templates, to introduce intermediate steps flexibly in both steps (Section 4.1). Finally, we fuse the information from both steps to rectify MLM's prediction (Section 4.4). ### Convertible Slot in Template In Section 3, the traditional template contains two types of slots ([T] and [A]) for injecting texts and generating predictions, respectively. However, this template cannot flexibly introduce intermediate steps in natural language form. To overcome this problem, we propose a new type of slot -- _convertible slot_[C], which can be converted between injecting and generating intermediate steps depending on the scenario. Specifically, we can incorporate the _convertible slot_[C] at the appropriate position of the template based on semantics. Here, we notate \(\mathcal{I}\) as the intermediate step set and prepare a verbalizer \(\phi_{\mathcal{I}}:\mathcal{I}\rightarrow\mathcal{V}_{\mathcal{I}}\) to establish a connection between \(\mathcal{I}\) and the intermediate step word set \(\mathcal{V}_{\mathcal{I}}\). When the intermediate step is unknown, MLM needs to generate the intermediate step. In this case, we simply fill in [C] with <MP>, allowing MLM to make a cloze-style prediction about \(\mathcal{V}_{\mathcal{I}}\) to get the intermediate step. [C] is analogous to [A]. When a specific intermediate step \(I\in\mathcal{I}\) is given, MLM needs to be provided with such information through the prompt. In this case, we can directly fill the intermediate step word \(v_{I}=\phi_{\mathcal{I}}(I)\) into [C] to inject the appropriate information. [C] is analogous to [T]. In brief, the flexibility of _convertible slot_[C] to convert between [T] and [A] allows MLM to combine intermediate steps to make predictions. ### Step I: Generate Intermediate Step The purpose of step I is for MLM to generate the intermediate step. Specifically, as shown in Fig. 2(a), we fill the text \(x\) into [T] and fill an additional <MP> into [C] to get the prompt. We still notate such a prompt as \(T_{x}\). Then, the probability of \(\mathcal{I}\) can be obtained by feeding \(T_{x}\) to MLM \(\mathcal{M}\) as: \[p(I|x)=p_{\mathcal{M}}(\texttt{[C]}=\phi_{\mathcal{I}}(I)|T_{x}). \tag{2}\] Eq. 2 implements the first step of CoT: generate the intermediate step based on the text (\(x\to I\)). Here, we denote MLM's prediction of the intermediate step in Eq. 2 as \(\hat{I}\). #### Predict Label in Parallel Due to the introduction of the convertible slot, the generation of intermediate steps and labels can be simultaneous (multiple <MP>s in the prompt). Therefore, using the same prompt \(T_{x}\), MLM can predict the label in parallel as: \[p(y|x)=p_{\mathcal{M}}(\texttt{[A]}=\phi_{\mathcal{Y}}(y)|T_{x}). \tag{3}\] We denote MLM's label prediction in Eq. 3 as \(\hat{y}_{x}\), which is independent of \(\hat{I}\). It is worth noting that Eq. 3 does not follow the generation process of CoT: MLM predicts the label in one step without any intermediate steps (\(x\to y\)). Due to the lack of a step-by-step reasoning process, we consider \(\hat{y}_{x}\) to be an intuitive prediction. ### Step II: Use Intermediate Step In step II, since the intermediate step \(\hat{I}\) is available, MLM can predict the label using both the text and the intermediate step, which is consistent with the second step of CoT (\(x\to y\gets I\)). As shown in Fig.
2(b), we inject \(x\) and \(\hat{I}\) into the proper slots of the template to obtain the prompt, and we notate such a prompt as \(T_{x,I}\). Similar to Eq. 1, the label probability with the intermediate step can be expressed as: \[p(y|x,\hat{I})=p_{\mathcal{M}}(\texttt{[A]}=\phi_{\mathcal{Y}}(y)|T_{x,I}). \tag{4}\] We denote MLM's label prediction in Eq. 4 as \(\hat{y}_{x,I}\). Compared to Eq. 3, MLM can perceive and combine the text and the intermediate step in Eq. 4, which makes a sophisticated reasoning process possible. Therefore, \(\hat{y}_{x,I}\) is considered to be a rational prediction. #### Counterfactual-based Contrastive Learning The core of Step II is that MLM needs to integrate the information from both the text \(x\) and the intermediate step \(\hat{I}\) to perform logical reasoning. However, this process of information integration lacks explicit guidance. Therefore, we propose counterfactual-based contrastive learning in step II, to guide MLM's information integration by contrasting the hidden vectors obtained from the factual/counterfactual intermediate step. In most cases, contrastive learning (Gao et al., 2021) requires an anchor as well as positive and negative samples for each instance. In CoTT, it is natural to consider the hidden vector \(h_{x}\) of [A] in step I as the anchor, and the hidden vector \(h_{x,I}\) of [A] in step II as the positive sample. To construct a negative sample, we sample the counterfactual intermediate step \(\hat{I}^{*}\) based on the probability distribution of intermediate steps in Eq. 2 as: \[\hat{I}^{*}\sim p_{\notin\hat{I}}(I|x)=\left\{\begin{array}{ll}\frac{p(I|x)} {1-p(\hat{I}|x)}&I\neq\hat{I},\\ 0&I=\hat{I},\end{array}\right. \tag{5}\] where \(p_{\notin\hat{I}}(I|x)\) refers to the normalized probability after masking the prediction of the intermediate step \(\hat{I}\). Then, similar to step II, we inject the counterfactual intermediate step \(\hat{I}^{*}\) as well as the text \(x\) into the template to obtain the counterfactual prompt \(T_{x,I^{*}}\), and feed \(T_{x,I^{*}}\) to MLM to get the hidden vector \(h_{x,I^{*}}\) of [A] as the negative sample. Following (Chen et al., 2020), we design a small neural network projection head \(g(\cdot)\) that maps each hidden vector into the projection space. In this study, we employ an MLP with one hidden layer to obtain the projection vector \(z\) as: \[z=g(h)=W^{(2)}\cdot\sigma(W^{(1)}\cdot h), \tag{6}\] where \(\sigma\) is a ReLU nonlinearity, \(W^{(1)}\) and \(W^{(2)}\) are the learnable parameters. Figure 2: Overview of Chain-of-Thought Tuning (CoTT). CoTT is a two-step reasoning framework: generate the intermediate step (step I) and use the intermediate step (step II). Then, probability rectification is proposed to rectify MLM's prediction based on the information from both steps. Here, \(T\) denotes the prompt, available information is drawn in blue, and generated information is drawn in orange. We take the cosine similarity of projection vectors as the similarity function because it implicitly normalizes each vector. The similarity between two projection vectors \(z_{i}\) and \(z_{j}\) can be described as: \[\text{sim}(z_{i},z_{j})=\frac{z_{i}^{\top}\cdot z_{j}}{\|z_{i}\|\cdot\|z_{j}\|}.
\tag{7}\] Then, the counterfactual-based contrastive loss \(\mathcal{L}_{c}\) is defined as: \[\mathcal{L}_{c}=-\log\frac{e^{\text{sim}(z_{x},z_{x,I})/\tau}}{e^{\text{sim}(z_{x},z_{x,I})/\tau}+e^{\text{sim}(z_{x},z_{x,I^{*}})/\tau}}, \tag{8}\] where \(\tau\) denotes the temperature parameter. By contrasting the similarity of \(h_{x}\) and \(\{h_{x,I},h_{x,I^{*}}\}\) in the projection space, MLM learns to distinguish whether \(x\) and \(I\) match: when they do, the hidden vectors of the two steps are similar, and vice versa. Overall, counterfactual-based contrastive learning forces MLM to perceive the relationship between texts and intermediate steps at a finer granularity, and integrates the perceived information into the hidden vector of [A]. ### Probability Rectification In the two-step process mentioned in Section 4.2 & 4.3, we combine \(x\) and \(\hat{I}\) to obtain the label probability \(p(y|x,\hat{I})\). However, this is actually an estimation of the label probability. From the perspective of the total probability theorem, with the consideration of intermediate steps, the exact label probability should be expressed as: \[p_{I}(y|x)=\sum_{I\in\mathcal{I}}p(I|x)\cdot p(y|x,I). \tag{9}\] In step II, \(p_{I}(y|x)\) in Eq. 9 is estimated to be equal to \(p(y|x,\hat{I})\) in Eq. 4. The meaning of this estimation is that even if \(I\neq\hat{I}\), \(p(y|x,I)=p(y|x,\hat{I})\) still holds true. There is a drawback to this estimation: large estimation errors will occur when MLM is ambiguous about intermediate steps (\(p(\hat{I}|x)\) is relatively low), which is known as exposure bias Yang et al. (2018). However, it is computationally costly to strictly follow Eq. 9 to calculate the exact label probability, as it requires MLM to repeat step II \(|\mathcal{I}|\) times. Therefore, our proposed probability rectification method aims to simplify Eq. 9 by efficiently estimating the label probability utilizing the information from two steps. Specifically, we assume that when \(I\neq\hat{I}\), we have: \[D_{KL}(p(y|x,I)||p(y|x))<D_{KL}(p(y|x,I)||p(y|x,\hat{I})), \tag{10}\] where \(D_{KL}\) refers to the Kullback-Leibler Divergence. The assumption implies that, compared to using an inconsistent intermediate step, the estimation without using intermediate steps is relatively more accurate. This is consistent with the human perception of intermediate steps. Therefore, we replace \(p(y|x,I)\) in Eq. 9 with \(p(y|x)\) for all cases satisfying \(I\neq\hat{I}\), then the label probability can be estimated more exactly as: \[p_{I}(y|x)=p(\hat{I}|x)\cdot p(y|x,\hat{I})+\sum_{I\neq\hat{I}}p(I|x)\cdot p(y|x,I)\approx p(\hat{I}|x)\cdot p(y|x,\hat{I})+(1-p(\hat{I}|x))\cdot p(y|x). \tag{11}\] Eq. 11 is the rectified label probability. Essentially, the probability rectification is an adaptive weighted probability based on the probability of the intermediate step \(p(\hat{I}|x)\), as shown in Fig. 2(c): when \(p(\hat{I}|x)\) is high, we consider that \(\hat{I}\) is more trustworthy, so \(p_{I}(y|x)\) is closer to the prediction in step II \(p(y|x,\hat{I})\), and vice versa, \(p_{I}(y|x)\) is closer to the prediction in step I \(p(y|x)\). Probability rectification is efficient and does not introduce additional computational complexity, but rather judiciously integrates known information from both steps.
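As a summary of the pipeline, here is a minimal sketch of two-step inference with probability rectification (Eq. 11); the probability tensors are assumed to come from the cloze-style predictions of Eqs. 2-4, and the function names are illustrative.

```python
# A sketch of CoTT inference: step I, parallel intuitive prediction, step II,
# and probability rectification (Eq. 11).
import torch

def cott_predict(p_I_given_x, p_y_given_x, step2_label_probs):
    # Step I: pick the intermediate step I_hat (Eq. 2).
    I_hat = int(torch.argmax(p_I_given_x))
    conf = p_I_given_x[I_hat]                       # p(I_hat | x)
    # Step II: rational prediction conditioned on I_hat (Eq. 4);
    # `step2_label_probs` is assumed to run the MLM on the prompt T_{x, I_hat}.
    p_y_given_xI = step2_label_probs(I_hat)
    # Probability rectification (Eq. 11): an adaptive mixture weighted by
    # the confidence in the intermediate step.
    p_rect = conf * p_y_given_xI + (1.0 - conf) * p_y_given_x
    return int(torch.argmax(p_rect)), p_rect
```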
### Training Details During the training process, each prediction made in the two steps should be optimized; therefore, the loss of CoTT consists of three prediction losses as well as a contrastive loss \(\mathcal{L}_{c}\). We employ the Cross-Entropy loss to calculate the prediction loss. The loss of CoTT \(\mathcal{L}\) is denoted as: \[\mathcal{L}=\frac{1}{|\mathcal{X}|}\sum_{x\in\mathcal{X}}\Big(\underbrace{-\alpha\cdot\log p(I|x)-\log p(y|x)}_{\text{step I}}+\underbrace{\beta\cdot\mathcal{L}_{c}-\log p(y|x,\hat{I})}_{\text{step II}}\Big), \tag{12}\] where \(\alpha,\beta\) are the weights to balance each loss respectively, and \(|\mathcal{X}|\) represents the number of instances in the dataset. ## 5 Experiments ### Datasets and Evaluation Metrics To verify the effectiveness of CoTT, we conducted extensive experiments on two traditional NLU tasks: Hierarchical Classification (HC) and Relation Extraction (RE). For the HC task, we conducted our experiments on the Web-of-Science (WOS) dataset [11]. WOS contains abstracts of published papers from _Web of Science_, and the labels of WOS have two hierarchies: 7 domains and 134 areas. We treated the domain label as the intermediate step of the reasoning process. We measured the results with Micro-\(F_{1}\) and Macro-\(F_{1}\). For the RE task, we experimented on three relation classification datasets: TACRED [22], TACREV [14], ReTACRED [23]. TACRED was a large-scale sentence-level relation extraction dataset, which was obtained via crowd-sourcing. TACREV corrected the errors in TACRED, and ReTACRED addressed some shortcomings of TACRED, refactoring its training set, development set and test set. ReTACRED also modified a few relation types. For all these datasets, we considered the NER types of the subject and the object as the intermediate step (refer to Han et al. (2021)), and we adopted the \(F_{1}\) score as the metric for evaluation. The statistical details of the datasets are illustrated in Table 1. ### Baselines For comparison with LLMs in NLU tasks, we employed two state-of-the-art LLMs in both tasks: text-davinci-003 and gpt-3.5-turbo via the OpenAI API. For the HC task, the focus of the recent methods is to exploit label semantics: HiAGM [14] introduced hierarchy-aware structure encoders for modeling label dependencies; HTCInfoMax [15] improved HiAGM by text-label mutual information maximization and label prior matching; HiMatch [13] matched the text semantics and the label semantics in a joint embedding space; HGCLR [26] directly embedded the hierarchy into a text encoder; HPT [26] handled this task from a multi-label MLM perspective based on prompt tuning. To better compare the performance of CoTT, we also compared our method with two vanilla prompt tuning methods: HardPrompt and SoftPrompt1. For the RE task, as fine tuning of MLMs achieved promising results, we fine tuned two traditional MLMs: BERT [14] and RoBERTa [15], as well as two knowledge-enhanced MLMs: SpanBERT [11] and KnowBERT [21], as baselines. Since CoTT is based on prompt tuning, we also employed two prompt tuning methods, HardPrompt and PTR [15], as baselines. Experimental details of LLMs can be found in Appendix A. Footnote 1: The concepts of HardPrompt and SoftPrompt originate from Wang et al. (2022). ### Implementation Details Following the setting of the previous study, we adopted bert-base-uncased and roberta-base in the Hugging Face library [12] as the base architecture for the HC task and the RE task, respectively.
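For reference, the following is a small sketch of the training objective of Eq. 12, combining the three cross-entropy terms with the counterfactual-based contrastive loss of Eq. 8; the tensor names are illustrative, and the projections \(z\) are assumed to be produced by the head \(g(\cdot)\) of Section 4.3.

```python
# A sketch of the CoTT objective (Eq. 12) with the contrastive loss (Eq. 8).
# z_x, z_xI and z_xIstar are the projected hidden vectors of [A]
# (anchor, factual positive, and counterfactual negative).
import torch
import torch.nn.functional as F

def cott_loss(logits_I, I_true, logits_y_step1, logits_y_step2, y_true,
              z_x, z_xI, z_xIstar, alpha=0.1, beta=0.1, tau=1.0):
    loss_I  = F.cross_entropy(logits_I, I_true)        # -log p(I|x), step I
    loss_y1 = F.cross_entropy(logits_y_step1, y_true)  # -log p(y|x), step I
    loss_y2 = F.cross_entropy(logits_y_step2, y_true)  # -log p(y|x,I_hat), step II
    # Eq. (8): pull the anchor toward the factual projection, push it away
    # from the counterfactual one.
    sim_pos = F.cosine_similarity(z_x, z_xI, dim=-1) / tau
    sim_neg = F.cosine_similarity(z_x, z_xIstar, dim=-1) / tau
    loss_c = -torch.log(sim_pos.exp() / (sim_pos.exp() + sim_neg.exp())).mean()
    return alpha * loss_I + loss_y1 + beta * loss_c + loss_y2
```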
We set the batch size to 8 and used Adam [10] as the optimizer; the learning rate was initially 1e-5 and decayed by 0.5 every 2 epochs. The weight decay was set to 1e-2. After training models for 10 epochs, we selected the best checkpoint on the training set for evaluation. For the weighting coefficients, we set \(\alpha\) = 0.1, \(\beta\) = 0.1. We set \(\tau\) = 1.0. The manual templates we designed for the HC task and the RE task are as follows: **HC:** [T], the domain is [C], the area is [A]. **RE:** [T], the SUBJ [C] is [A] of the OBJ [C], where SUBJ and OBJ refer to the subject and the object of each instance, respectively. Since it was a challenge to design appropriate label words to distinguish different labels in the verbalizer [15], following [15, 26], we created a learnable virtual label word \(v_{y}\) for each label \(y\in\mathcal{Y}\). ## 6 Results and Analysis ### Main Results Tables 2 & 3 exhibit the experimental results of CoTT as well as the compared baselines on HC and RE. It can be observed that our proposed CoTT achieved state-of-the-art results on all datasets. This reflects the superiority of CoTT in NLU tasks. In addition, we found that the performance of LLMs was much worse than that of MLMs on both tasks, which demonstrated that LLMs still cannot master some NLU tasks well. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Dataset** & \# train & \# dev & \# test & \# label \\ \hline **WOS** & 30070 & 7518 & 9397 & 141 \\ **TACRED** & 68124 & 22631 & 15509 & 42 \\ **TACREV** & 68124 & 22631 & 15509 & 42 \\ **ReTACRED** & 58465 & 19584 & 13418 & 40 \\ \hline \hline \end{tabular} \end{table} Table 1: The statistics of the four datasets, where # represents the number of instances in each set. ### Results on HC As shown in Table 2, CoTT outperformed all baselines on WOS in both metrics. Compared with vanilla fine tuning, CoTT achieved 1.83% and 3.42% improvement on Micro-\(F_{1}\) and Macro-\(F_{1}\), respectively. Besides, vanilla HardPrompt and SoftPrompt can exceed many fine tuning baselines, which revealed that the superiority of prompt tuning remains even in small-scale MLMs. On the basis of prompt tuning, CoTT further improved the performance of MLM, which shows the effectiveness of introducing intermediate steps. In Table 5, the first row shows how CoTT would behave under ideal conditions: in step I, MLM's prediction of the intermediate step _Bio._ (0.87) was correct with high confidence, which implied that MLM clearly understood this case, so the label prediction _Gene_. (0.86) in step II was more trustworthy. In this case, probability rectification can be disregarded. However, MLM was sometimes ambiguous about intermediate steps. In the second row, the probabilities of _Med._ (0.52) and _Bio._ (0.43) were close; according to the analysis in Section 4.4, the label prediction _H/A_ (0.30) can be significantly biased. Therefore, probability rectification is necessary here. In this case, the label prediction was rectified to _PCR_ (0.33). The situation in the third row was akin to that in the second row, with the main difference being that the rectified label prediction _Gene_. (0.37) remained unchanged. Essentially, \(p(I|x)\), \(p(y|x)\), and \(p(y|x,\hat{I})\) contain abundant information, while probability rectification can adaptively integrate the information embedded in these probabilities.
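As an illustration of how the convertible slot [C] switches roles between the two steps, here is a small sketch of prompt construction for the HC template above; the mask token string is a stand-in for the tokenizer's actual mask token, and the domain word is assumed to come from the verbalizer \(\phi_{\mathcal{I}}\).

```python
# How the convertible slot [C] is filled in each step of CoTT, using the HC
# template "[T], the domain is [C], the area is [A]".
MASK = "[MASK]"   # stand-in for the tokenizer's mask token

def hc_prompt_step1(text):
    # Step I: [C] behaves like [A] -- both the domain and the area are masked,
    # so the intermediate step and the label are predicted in parallel.
    return f"{text}, the domain is {MASK}, the area is {MASK}."

def hc_prompt_step2(text, domain_word):
    # Step II: [C] behaves like [T] -- the predicted domain word is injected.
    return f"{text}, the domain is {domain_word}, the area is {MASK}."
```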
### Reasoning Process Monitoring Because of the introduction of intermediate steps, CoTT is no longer an end-to-end method, but can additionally monitor the reasoning process (the distribution of the reasoning process can be found in Appendix C). Therefore, we can detect anomalies from the reasoning process, thus further improving the performance of CoTT. Monitor 1: Self-Consistency of Predictions In step I and step II of CoTT, MLM makes the label prediction without/with the intermediate step \(\hat{I}\), respectively. Ideally, the two predictions (intuitive prediction \(\hat{y}_{x}\) and rational prediction \(\hat{y}_{x,\hat{I}}\)) should be self-consistent. Therefore, we consider the inconsistency of label predictions as an anomaly: if the two predictions are contradictory, it indicates that MLM may have a misunderstanding of the intermediate step \(\hat{I}\), making the predictions relatively less reliable. Monitor 2: Correctness of Intermediate Steps When the true intermediate step is provided, it becomes an option to monitor whether the prediction of intermediate steps in step I is correct. Reasonably, we consider the incorrect prediction of intermediate steps as an anomaly: if MLM cannot predict the correct intermediate step, then the subsequent label prediction is less credible. Following these two monitors, we evaluated the performance of CoTT in monitoring the reasoning process on the WOS dataset. Hamming loss (Tsoumakas and Katakis, 2007) was introduced as an additional metric. As shown in Fig. 3, these monitors (M1 & M2) can determine a more reasonable decision scope for CoTT, which can significantly improve the performance of CoTT. In practice, this ability to monitor the reasoning process can be of great use when faced with risk-sensitive tasks (e.g., medical (Quinn et al., 2020), judicial (Laptev, 2021)), since reliability is even more important in these scenarios. ## 7 Conclusion In this study, we propose Chain-of-Thought Tuning (CoTT), a two-step reasoning framework based on prompt tuning to implement step-by-step thinking for MLMs on NLU tasks. Specifically, the two-step framework of CoTT enables MLM to implement task decomposition; and CoTT is based on prompt tuning, which allows intermediate steps to be used in natural language form. Experiments demonstrated that CoTT achieved state-of-the-art performance on two NLU tasks. In the future, we aim to scale up MLMs to fundamentally improve their language ability. In addition, using LLMs to implement multi-step reasoning in NLU tasks is also a possible research direction. Figure 3: Performance of CoTT in monitoring the reasoning process. M refers to the Monitor. Table 5: Case study on WOS: for each test text, the predicted domain (intermediate step) and the predicted area (label) are shown with their associated probabilities. ### Limitations There are two main limitations in CoTT. Firstly, the application of CoTT is relatively narrow, as CoTT can only handle NLU tasks with intermediate steps.
Secondly, compared to LLMs, CoTT struggles to implement interaction and cooperation between intermediate steps and to handle reasoning tasks with more steps. ## Acknowledgments This work was supported by the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), and the Fundamental Research Funds for the Central Universities.
2301.08531
Computation of Hilbert class polynomials and modular polynomials from supersingular elliptic curves
We present several new heuristic algorithms to compute class polynomials and modular polynomials modulo a prime $p$ by revisiting the idea of working with supersingular elliptic curves. The best known algorithms to this date are based on ordinary curves, due to the supposed inefficiency of the supersingular case. While this was true a decade ago, the recent advances in the study of supersingular curves through the Deuring correspondence motivated by isogeny-based cryptography has provided all the tools to perform the necessary tasks efficiently.
Antonin Leroux
2023-01-20T12:18:42Z
http://arxiv.org/abs/2301.08531v2
# Computation of Hilbert class polynomials and modular polynomials from supersingular elliptic curves ###### Abstract We present several new heuristic algorithms to compute class polynomials and modular polynomials modulo a prime \(P\). For that, we revisit the idea of working with supersingular elliptic curves. The best known algorithms to this date are based on ordinary curves, due to the supposed inefficiency of the supersingular case. While this was true a decade ago, the recent advances in the study of supersingular curves through the Deuring correspondence motivated by isogeny-based cryptography have provided all the tools to perform the necessary tasks efficiently. Our main ingredients are two new heuristic algorithms to compute the \(j\)-invariants of supersingular curves having an endomorphism ring contained in some set of isomorphism classes of maximal orders. The first one is derived easily from the existing tools of isogeny-based cryptography, while the second introduces new ideas to perform that task efficiently for a big number of maximal orders. From there, we obtain two main results. First, we show that we can associate these two algorithms with some operations over the quaternion algebra ramified at \(P\) and infinity to compute directly Hilbert and modular polynomials \(\mod P\). In that manner, we obtain the first algorithms to compute Hilbert (resp. modular) polynomials modulo \(P\) for a good portion of all (resp. all) primes \(P\) with a complexity in \(\tilde{O}(\sqrt{|D|})\) for the discriminant \(D\) (resp. \(\tilde{O}(\ell^{2})\) for the level \(\ell\)). Due to the (hidden) complexity dependency on \(P\), these algorithms do not outperform the best known algorithms for all primes \(P\) but they still provide an asymptotic improvement for a range of primes going up to a bound that is sub-exponential in \(|D|\) (resp. \(\ell\)). Second, we revisit the CRT method for both class and modular polynomials. We show that applying our second heuristic algorithm over supersingular curves to the CRT approach yields the same asymptotic complexity as the best known algorithms based on ordinary curves and we argue that our new approach might be more efficient in practice. The situation appears especially promising for modular polynomials, as our approach reduces the asymptotic cost of elliptic curve operations by a linear factor in the level \(\ell\). We obtain an algorithm whose asymptotic complexity is now fully dominated by linear algebra and standard polynomial arithmetic over finite fields. ## 1 Introduction Hilbert class polynomials and modular polynomials are central objects in number theory, and their computation has numerous applications. One field where these computations are of particular interest is cryptography. The main applications are to be found in elliptic curve cryptography and pairing-based cryptography, but we can also mention, more marginally, the recent field of isogeny-based cryptography. Class polynomials, for instance, play a central role in the CM method, which is the main approach to find ordinary curves with a prescribed number of points over a given finite field (see [1, 2]). This has applications to primality proving with the ECPP method and finding pairing-friendly curves with the Cocks-Pinch method. Modular polynomials are related to isogenies between elliptic curves.
Historically, they play a very important role in the SEA point counting algorithm [1, 2], which remains one of the main algorithms used in elliptic-curve cryptography to generate cryptographic curves. Moreover, the interest in isogenies has been renewed with the rise of isogeny-based cryptography. While most applications tend to use the more efficient Velu formulas [10], we can cite a few instances where modular polynomials have been considered. For example, they are used in the CRS key exchange [14, 2], the very first isogeny-based protocol, and we can also mention the OSIDH construction [15]. The goal of this work is to explore theoretical and practical improvements to the best-known algorithms to compute class polynomials and modular polynomials modulo prime numbers through the use of supersingular curves. Related work. One of the main problems behind the computation of class polynomials and modular polynomials is the huge size of their coefficients over \(\mathbb{Z}\). There exist several algorithms of quasi-linear complexity [1, 2, 13], but more often than not, memory is the real bottleneck in the concrete computations of those polynomials. In theory, size is less of an issue when the result is needed modulo some prime \(P\), but this is only true in practice if we have a way to skip entirely the computation over \(\mathbb{Z}\), which is not so easy to get. Nonetheless, Sutherland [16] proved that this could be done for class polynomials by a careful application of the CRT method. The result was later applied to modular polynomials by Broker, Lauter and Sutherland (BLS) [1]. The main advantage of the CRT method compared to other approaches is the low memory requirement (almost optimal in the size of the final output), and this is why this method has achieved the best results in terms of scaling. For both class and modular polynomials, the main tools used in the computations are ordinary elliptic curves. The ordinary curves are preferred to supersingular curves because the former have proven over time to lead to more efficient algorithms than the latter. The situation has changed with the recent interest in the connection between supersingular curves and quaternion algebras sparked by isogeny-based cryptography. Since the work of Deuring [1], it is known that endomorphism rings of supersingular curves in characteristic \(p\) are isomorphic to maximal orders in the quaternion algebra \(\mathcal{B}_{p,\infty}\) ramified at \(p\) and infinity, and that, conversely, every such maximal order type arises in this way. This is the first result of what is now called the _Deuring correspondence_. In this work, we are particularly interested in the task of computing the \(j\)-invariants of the (at most 2) supersingular elliptic curves over \(\mathbb{F}_{p^{2}}\) having a given maximal order type as endomorphism ring. The first concrete effort to realize that task is an algorithm of Cervino [1] to compute the endomorphism rings of all supersingular curves in characteristic \(p\). The complexity of this algorithm is \(O(p^{2+\varepsilon})\) and it becomes rapidly impractical. This algorithm was more recently improved by Chevyrev and Galbraith in [1] but the complexity is still \(O(p^{1.5+\varepsilon})\). As part of cryptanalytic efforts to understand the difficulty of various problems related to the Deuring correspondence, a heuristic algorithm of polynomial complexity in \(\log(p)\) was introduced by Eisentrager, Hallgren, Lauter, Morrison and Petit [2].
This algorithm builds upon the previous works of Kohel, Lauter, Petit and Tignol [13] and Galbraith, Petit and Silva [2]. More concretely, these works prove that an isogeny can be efficiently computed between two supersingular curves of known endomorphism ring by translating the problem over the quaternions with the Deuring correspondence, solving the translated problem over the quaternions, and then translating the solution back into an isogeny. This can be applied directly to compute the \(j\)-invariants of all curves with an endomorphism ring contained in a given maximal order type by using one starting curve \(E_{0}\) of known endomorphism ring (such a curve can always be computed efficiently with the CM method). Contributions. Our main contribution is to reintroduce the use of supersingular elliptic curves in the computation of Hilbert class polynomials and modular polynomials by using the recent progress on the algorithmic Deuring correspondence. The main sub-routine of our method is aimed at translating a set of isomorphism classes of maximal orders into their corresponding supersingular \(j\)-invariants under the Deuring correspondence. We introduce two algorithms, with different performance profiles, to perform that task. With these new algorithms, we obtain an improvement over the asymptotic complexity of the class and modular polynomial computations in a wide range of primes below some upper bounds that depend either on the discriminant of the class polynomial or the level of the modular polynomial. Moreover, we show that our new algorithm can also be used in the CRT method to reach the same complexity as ordinary curves, but with possibly better practical efficiency. ### Technical overview We start by looking at our main subroutine, which consists in the computation of the \(j\)-invariants of supersingular elliptic curves corresponding to some set of maximal order isomorphism classes (called maximal order types, see Definition 1). In the rest of this article, unless specified otherwise, a curve is considered to be a supersingular elliptic curve. Maximal orders to \(j\)-invariants. We propose two algorithms dedicated to that task. Let us consider that a set \(\mathfrak{S}\) of types is given as input, together with some prime \(p\). Our first algorithm is called \(\mathsf{OrdersTojInvariantSmall}()\) and it consists merely in a sequential execution of the sub-algorithm (that we call \(\mathsf{SingleOrderTojInvariant}()\)) from [1]. \(\mathsf{SingleOrderTojInvariant}()\) performs the desired translation for one type of maximal order. When everything is done carefully, it can be executed in \(O(\log(p)^{4+\varepsilon})\) under experimentally verified heuristics detailed in [1] and related to the probability for numbers represented by some quadratic form to be prime. Thus, since \(\mathsf{OrdersTojInvariantSmall}()\) consists in \(\#\mathfrak{S}\) executions of \(\mathsf{SingleOrderTojInvariant}()\), the total heuristic complexity of \(\mathsf{OrdersTojInvariantSmall}()\) is \(O(\#\mathfrak{S}\log(p)^{4+\varepsilon})\). For a generic \(p\) and set of maximal order types \(\mathfrak{S}\), we do not know how to do better than that. However, when \(\mathfrak{S}\) is close to maximal (the maximal size being upper-bounded by the number of supersingular curves), it becomes sub-optimal due to the amount of redundant computation performed along the way.
In that case, it becomes much more practical to use an algorithm designed to sieve through the entire set of types, only focusing on the ones in \(\mathfrak{S}\) when they are met along the way. It requires a bit of care to perform this task in the most efficient manner but it can be done, and this leads to the algorithm \(\mathsf{OrdersTojInvariant}()\) of complexity \(O(\#\mathfrak{S}\log(p)^{2+\varepsilon}+p\log(p)^{1+\varepsilon})\). This algorithm requires one heuristic that we detail in Section 3, as Claim 1. It is related to the expansion property of the supersingular isogeny graphs. We stress that both algorithms are designed to work (and analyzed) for a generic prime \(p\), which is why they are so interesting. A direct application. Our heuristic algorithms \(\mathsf{OrdersTojInvariantSmall}()\) and \(\mathsf{OrdersTojInvariant}()\) can be used directly to compute the roots of class and modular polynomials modulo \(P\) (under the assumption that these roots are supersingular). The method is pretty straightforward: find the maximal order types corresponding to the desired roots, then compute them with \(\mathsf{OrdersTojInvariantSmall}()\) and \(\mathsf{OrdersTojInvariant}()\). With the complexity we have stated, this is already enough to obtain an asymptotic improvement over existing generic methods when \(P\) is not too big (compared to the discriminant or level of the associated class or modular polynomial). If we write \(S\) for the "degree" of the polynomial (it is \(h(D)=O(\sqrt{|D|}\log(D)^{\varepsilon})\) for Hilbert polynomials of discriminant \(|D|\) and \(O(\ell^{2})\) for modular polynomials of level \(\ell\)), then we obtain the following complexity with \(\mathsf{OrdersTojInvariantSmall}()\): \[O(S\log(P)^{4+\varepsilon}+S\log(S)^{2+\varepsilon}\log(P)).\] With \(\mathsf{OrdersTojInvariant}()\), the complexity becomes \[O(S\log(P)^{2+\varepsilon}+P\log(P)^{1+\varepsilon}+S\log(S)^{2+\varepsilon} \log(P)).\] In both cases, the latter term comes from the polynomial reconstruction step that must be performed to recover the polynomial from its roots. Note that the size of the output is \(O(S\log(P))\). In terms of space, the requirement is optimal in both cases, namely \(O(S\log(P))\). It is clear that the second algorithm will be better when \(P=O(S\log(S))\). However, whenever \(S=o(P)\) (which is often the case in applications), it will be better to use the variant with \(\mathsf{OrdersTojInvariantSmall}()\). In comparison, the best previously known generic methods based on the CRT have complexity \(O(S^{1+\alpha}\log(S)^{3+\varepsilon})\) where \(\alpha=1\) for Hilbert polynomials and \(1/2\) for modular polynomials. Thus, our algorithm based on \(\mathsf{OrdersTojInvariantSmall}()\) will have a better asymptotic complexity when \(P=o(2^{S^{\alpha/4}})\). While this is not enough to give an improvement in all cases, this is still an improvement for a significant range of primes \(P\). When \(P\) becomes too big with respect to \(\ell\), it becomes better to use the CRT method, and we will see that our second algorithm \(\mathsf{OrdersTojInvariant}()\) can be applied to this approach as well. The context of the CRT typically makes use of several primes that are in \(O(S)\) and this is why it will be better to use \(\mathsf{OrdersTojInvariant}()\) than \(\mathsf{OrdersTojInvariantSmall}()\).
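As an illustration of the polynomial reconstruction step mentioned above, here is a naive sketch that rebuilds a monic polynomial modulo \(P\) from its roots. For simplicity the roots are taken in the prime field, whereas in the actual algorithm the supersingular \(j\)-invariants live in \(\mathbb{F}_{P^{2}}\) and Galois-conjugate pairs would be combined into quadratic factors with coefficients in \(\mathbb{F}_{P}\); a product tree would bring this quadratic loop down to the quasi-linear \(O(S\log(S)^{2+\varepsilon}\log(P))\) term quoted above.

```python
# Naive reconstruction of a monic polynomial mod P from its roots
# (illustrative assumption: the roots lie in F_P).
def poly_from_roots(roots, P):
    coeffs = [1]                         # the constant polynomial 1
    for r in roots:
        # Multiply the current polynomial by (X - r): new_k = f_{k-1} - r*f_k.
        coeffs = [(prev - r * cur) % P
                  for prev, cur in zip([0] + coeffs, coeffs + [0])]
    return coeffs                        # coefficients in increasing degree

# (X - 2)(X - 3) = X^2 - 5X + 6 = X^2 + 2X + 6 over F_7:
assert poly_from_roots([2, 3], 7) == [6, 2, 1]
```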
Below, we briefly explain the principle of the CRT method to compute class and modular polynomials, then we detail how to use our new algorithms in that context and outline the differences between our proposed method and the one from Sutherland and BLS. The CRT for class polynomials. Let us take a prime \(P\) and a discriminant \(D<0\). We want an efficient algorithm to compute \(H_{D}(X)\mod P\). Our main algorithm is essentially the same as the one introduced by Sutherland [10]. Let us write \(\mathfrak{O}\) for the quadratic imaginary order of discriminant \(D<0\). We give a brief outline of Sutherland's algorithm. We may assume that the factorization of \(D\) is known, as computing it is negligible compared to the rest of the computation. We define \(\mathcal{P}_{D}\) to be a set of primes. We write \(B_{D}\) for the bound on the bit-size of the coefficients of \(H_{D}\) over \(\mathbb{Z}\). Here is how the algorithm works: 1. Select some primes \(p_{1},\ldots,p_{n}\) in \(\mathcal{P}_{D}\) with \(\prod_{i=1}^{n}p_{i}>2^{B_{D}}\). 2. Compute a suitable representation of \(\operatorname{Cl}(D)\). 3. For each \(p_{i}\in\mathcal{P}_{D}\): (a) compute the coefficients of \(H_{D}\mod p_{i}\); (b) update the CRT sums for each coefficient of \(H_{D}\). 4. Recover the coefficients of \(H_{D}\mod P\). The only difference between the concrete method proposed by Sutherland and ours is in the choice of the set \(\mathcal{P}_{D}\). In [10], the set \(\mathcal{P}_{D}\) is made of primes of the form \((t^{2}-Dv^{2})/4\), whereas in our case \(\mathcal{P}_{D}\) is made of non-split primes that are coprime with the square part of \(D\) (note that those two conditions are mutually exclusive). In both cases, the \(H_{D}\mod p_{i}\) are constructed from their roots. These roots are always \(j\)-invariants of elliptic curves in characteristic \(p_{i}\), but this is where the similarity ends. In the former case, the elliptic curves are ordinary and are defined over \(\mathbb{F}_{p_{i}}\), whereas in the latter case, we obtain supersingular elliptic curves defined over \(\mathbb{F}_{p_{i}^{2}}\). The ordinary and supersingular cases are very different and the resulting algorithms are also very different. For the supersingular case, we recover the roots using our second algorithm \(\mathsf{OrdersTojInvariant}()\). We will see that, for the CRT, the size of the primes \(p_{i}\) is such that we are in the regime that favours \(\mathsf{OrdersTojInvariant}()\) over \(\mathsf{OrdersTojInvariantSmall}()\). Note that the case of non-split primes \(p_{i}\) has already been considered in the context of the CRT method by [1], but only for very small primes because of the inefficiency of Cervino's algorithm. For modular polynomials. Let \(\ell\) be a prime distinct from \(P\). We want an efficient algorithm to compute \(\Phi_{\ell}(X,Y)\mod P\). It can be done in a very similar fashion to class polynomials: compute \(\Phi_{\ell}\mod p_{i}\) for some \(p_{i}\) in a set \(\mathcal{P}_{\ell}\) and reconstruct the final polynomial with CRT. The idea introduced by Broker, Lauter and Sutherland (BLS in the rest of this article) is to use primes of the form \((t^{2}-v^{2}D\ell^{2})/4\) with \(t,v,D\in\mathbb{N}\) for which there is a very specific volcano structure.
This structure implies the existence of two distinct sets of ordinary curves defined over \(\mathbb{F}_{p_{i}}\): the curves with endomorphism ring isomorphic to \(\mathfrak{O}\) for some quadratic imaginary order \(\mathfrak{O}\) of discriminant \(D\) and class number bigger than \(\ell+2\) and the curves with endomorphism ring isomorphic to \(\mathbb{Z}+\ell\mathfrak{O}\). Since the latter are \(\ell\)-isogenous to the former, it is possible to recover the full \(\Phi_{\ell}\mod p_{i}\) by computing the \(j\)-invariants corresponding to these two sets of curves. The volcano structure allows for efficient computation by minimizing the number of \(\ell\)-isogeny computations. For supersingular curves, the choice of primes is even easier than for class polynomials: we can use any prime \(p_{i}\) that is big enough. As long as the number of supersingular curves is bigger than \(\ell+2\) we will be able to recover the full modular polynomial. This idea has already been considered by Charles and Lauter in 2005 [10] but in a rather direct way (where each of the \(\ell\)-isogenies involved is computed using the Velu formulas). We prove that using the Deuring correspondence and \(\mathsf{OrdersTojInvariant}()\), we can avoid entirely any \(\ell\)-isogeny computation and minimize the cost of elliptic curve operations. Generic improvements to the CRT method. There are several ways to improve the CRT method in practical applications. First, alternative class polynomials and modular functions (with smaller height bounds) can be used instead of the standard Hilbert class polynomial and modular polynomials for the same practical purpose. Second, for a number of applications such as the CM method and the SEA point counting algorithm, computing these polynomials is actually not necessary. What is really needed is the ability to evaluate them. Sutherland showed [12, 13] that it was possible to do better than compute-then-evaluate in both cases, providing, in particular, an additional improvement in terms of memory requirement for those applications. Using supersingular curves rather than ordinary ones should not prevent us from applying all these practical improvements. For clarity's sake we focus on the simpler computation of the standard polynomials and leave to the reader the task of adapting these improvements to our new setting, which should not be too daunting. Organisation of the article. The rest of this paper is organized as follows: in Section 2, we introduce some background on isogenies, quaternion algebras and the Deuring correspondence. Then, in Section 3, we introduce our main new algorithm to compute efficiently \(j\)-invariants corresponding to maximal order types. In Section 4, we explain in detail how this algorithm can be applied to the computation of class polynomials with the CRT method. In Section 5, we do the same for modular polynomials. Acknowledgement. We thank Andrew Sutherland for very useful feedback on this work. ## 2 Background material ### Notations Basic complexities. We write \(M_{\mathbb{Z}}(b)\) for the cost of multiplying two integers of less than \(b\) bits. For asymptotic complexities we consider \(M_{\mathbb{Z}}(b)=O(b^{1+\varepsilon})\). For instance, this covers the complexity of all arithmetic operations in a finite field \(\mathbb{F}_{p}\) of characteristic \(p\) of less than \(b\) bits. Similarly, we write \(M_{\mathbb{P}}(b)\) for the cost (in terms of arithmetic operations over \(k\)) of multiplying two polynomials of degree smaller than \(b\) over a base field \(k\).
Depending on the size of \(b\) we will either use \(M_{\mathbb{P}}(b)=O(b\log(b)^{1+\varepsilon})\) or \(O(b^{1+\varepsilon})\). Finally, the cost of the fast interpolation algorithm for a polynomial of degree \(b\) is \(O(M_{\mathbb{P}}(b)\log(b))\). ### Elliptic curves, quaternion algebras and the Deuring correspondence More precise references on the topics covered in this section are: the book of Silverman [10] for elliptic curves and isogenies, the book of John Voight [14] on quaternion algebras and theoretical aspects of the Deuring correspondence, the thesis of Antonin Leroux [11] for the algorithmic aspects of the Deuring correspondence. _Supersingular elliptic curves and isogenies._ An _isogeny_\(\varphi:E_{1}\to E_{2}\) is a non-constant morphism sending the identity of \(E_{1}\) to that of \(E_{2}\). The degree of an isogeny is its degree as a rational map (see [1] for more details). When the degree \(\deg(\varphi)=d\) is coprime to \(p\), the isogeny is necessarily _separable_ and \(d=\#\ker\varphi\). An isogeny is said to be cyclic when its kernel is a cyclic group. The Velu formulas [11] can be used to compute any cyclic isogeny from its kernel. For any \(\varphi:E_{1}\to E_{2}\), there exists a unique dual isogeny \(\hat{\varphi}:E_{2}\to E_{1}\), satisfying \(\varphi\circ\hat{\varphi}=[\deg(\varphi)]\). _Endomorphism ring._ An isogeny from a curve \(E\) to itself is an _endomorphism_. The set \(\operatorname{End}(E)\) of all endomorphisms of \(E\) forms a ring under addition and composition. For elliptic curves defined over a finite field \(\mathbb{F}_{q}\), \(\operatorname{End}(E)\) is isomorphic either to an order of a quadratic imaginary field or a maximal order in a quaternion algebra. In the first case, the curve is said to be _ordinary_ and otherwise _supersingular_. We focus on the supersingular case in this article. Every supersingular elliptic curve defined over a field of characteristic \(p\) admits an isomorphic model over \(\mathbb{F}_{p^{2}}\). This implies that there are only a finite number of isomorphism classes of supersingular elliptic curves. The Frobenius over \(\mathbb{F}_{p}\) is the only inseparable isogeny between supersingular curves and it has degree \(p\). We write \(\pi:E\to E^{p}\). For any supersingular curve \(E\), the property \(\operatorname{End}(E)\cong\operatorname{End}(E^{p})\) is satisfied but we have \(E\cong E^{p}\) if and only if \(E\) has an isomorphic model over \(\mathbb{F}_{p}\). _Quaternion algebras._ For \(a,b\in\mathbb{Q}^{*}\) we denote by \(H(a,b)=\mathbb{Q}+i\mathbb{Q}+j\mathbb{Q}+k\mathbb{Q}\) the quaternion algebra over \(\mathbb{Q}\) with basis \(1,i,j,k\) such that \(i^{2}=a\), \(j^{2}=b\) and \(k=ij=-ji\). Every quaternion algebra has a canonical involution that sends an element \(\alpha=a_{1}+a_{2}i+a_{3}j+a_{4}k\) to its conjugate \(\overline{\alpha}=a_{1}-a_{2}i-a_{3}j-a_{4}k\). We define the _reduced trace_ and the _reduced norm_ by \(\operatorname{tr}(\alpha)=\alpha+\overline{\alpha}\) and \(n(\alpha)=\alpha\overline{\alpha}\). _Orders and ideals._ A _fractional ideal_\(I\) of a quaternion algebra \(\mathcal{B}\) is a \(\mathbb{Z}\)-lattice of rank four contained in \(\mathcal{B}\). We denote by \(n(I)\) the _norm_ of \(I\), defined as the \(\mathbb{Z}\)-module generated by the reduced norms of the elements of \(I\). An order \(\mathcal{O}\) is a subring of \(\mathcal{B}\) that is also a fractional ideal. Elements of an order \(\mathcal{O}\) have reduced norm and trace in \(\mathbb{Z}\).
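To fix ideas, here is a small sketch of the quaternion arithmetic just defined: conjugation, reduced trace and reduced norm in \(H(a,b)\). The default choice \(i^{2}=-1\), \(j^{2}=-p\) with the toy value \(p=3\) is a standard presentation of \(\mathcal{B}_{p,\infty}\) for \(p\equiv 3\bmod 4\), taken here only as an illustrative assumption.

```python
# Quaternion arithmetic in H(a, b): conjugation, reduced trace and reduced
# norm. Default a = -1, b = -3 corresponds to the toy case p = 3.
from fractions import Fraction

class Quaternion:
    def __init__(self, a1, a2, a3, a4, a=-1, b=-3):
        # alpha = a1 + a2*i + a3*j + a4*k with i^2 = a, j^2 = b, k = ij
        self.c = tuple(Fraction(x) for x in (a1, a2, a3, a4))
        self.a, self.b = a, b

    def conjugate(self):
        a1, a2, a3, a4 = self.c
        return Quaternion(a1, -a2, -a3, -a4, self.a, self.b)

    def reduced_trace(self):             # tr(alpha) = alpha + conj(alpha)
        return 2 * self.c[0]

    def reduced_norm(self):              # n(alpha) = alpha * conj(alpha)
        a1, a2, a3, a4 = self.c
        # Expanding with i^2 = a, j^2 = b and k^2 = -a*b gives:
        return a1**2 - self.a * a2**2 - self.b * a3**2 + self.a * self.b * a4**2
```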
An order is called _maximal_ when it is not contained in any larger order. A suborder \(\mathfrak{O}\) of \(\mathcal{O}\) is an order of rank \(4\) contained in \(\mathcal{O}\). In this work, we will work with isomorphism classes of maximal orders in some quaternion algebra \(\mathcal{B}\) and this is why we introduce the notion of type. Definition 1: The type of an order \(\mathcal{O}\), written \(\operatorname{Typ}\mathcal{O}\), is the isomorphism class of \(\mathcal{O}\). The left order of a fractional ideal is defined as \(\mathcal{O}_{L}(I)=\{\alpha\in\mathcal{B}_{p,\infty}\mid\alpha I\subset I\}\) and similarly for the right order \(\mathcal{O}_{R}(I)\). A fractional ideal is _integral_ if it is contained in its left order, or equivalently in its right order. An integral ideal is _primitive_ if it is not the scalar multiple of another integral ideal. We refer to integral primitive ideals hereafter as ideals. The product \(IJ\) of ideals \(I\) and \(J\) satisfying \(\mathcal{O}_{R}(I)=\mathcal{O}_{L}(J)\) is the ideal generated by the products of pairs in \(I\times J.\) It follows that \(IJ\) is also an (integral) ideal and \(\mathcal{O}_{L}(IJ)=\mathcal{O}_{L}(I)\) and \(\mathcal{O}_{R}(IJ)=\mathcal{O}_{R}(J).\) The ideal norm is multiplicative with respect to ideal products. An ideal \(I\) is invertible if there exists another ideal \(I^{-1}\) verifying \(II^{-1}=\mathcal{O}_{L}(I)=\mathcal{O}_{R}(I^{-1})\) and \(I^{-1}I=\mathcal{O}_{R}(I)=\mathcal{O}_{L}(I^{-1}).\) The conjugate of an ideal \(\overline{I}\) is the set of conjugates of elements of \(I,\) which is an ideal satisfying \(I\overline{I}=n(I)\mathcal{O}_{L}(I)\) and \(\overline{I}I=n(I)\mathcal{O}_{R}(I).\) We define an equivalence on orders by conjugacy and on left \(\mathcal{O}\)-ideals by right scalar multiplication. Two orders \(\mathcal{O}_{1}\) and \(\mathcal{O}_{2}\) are equivalent if there is an element \(\beta\in\mathcal{B}^{\star}\) such that \(\beta\mathcal{O}_{1}=\mathcal{O}_{2}\beta.\) Two left \(\mathcal{O}\)-ideals \(I\) and \(J\) are equivalent if there exists \(\beta\in\mathcal{B}^{\star},\) such that \(I=J\beta.\) If the latter holds, then it follows that \(\mathcal{O}_{R}(I)\) and \(\mathcal{O}_{R}(J)\) are equivalent since \(\beta\mathcal{O}_{R}(I)=\mathcal{O}_{R}(J)\beta.\) For a given \(\mathcal{O},\) this defines equivalence classes of left \(\mathcal{O}\)-ideals, and we denote the set of such classes by \(\operatorname{Cl}(\mathcal{O}).\) The Deuring correspondence is an equivalence of categories between isogenies of supersingular elliptic curves and the left ideals over a maximal order \(\mathcal{O}\) of \(\mathcal{B}_{p,\infty},\) the unique quaternion algebra ramified at \(p\) and \(\infty,\) inducing a bijection between conjugacy classes of supersingular \(j\)-invariants and maximal orders (up to equivalence) [11]. Moreover, this bijection is explicitly constructed as \(E\rightarrow\operatorname{End}(E).\) Hence, given a supersingular curve \(E_{0}\) with endomorphism ring \(\mathcal{O}_{0},\) the pair \((E_{1},\varphi),\) where \(E_{1}\) is another supersingular elliptic curve and \(\varphi:E_{0}\to E_{1}\) is an isogeny, is sent to a left integral \(\mathcal{O}_{0}\)-ideal. The right order of this ideal is isomorphic to \(\operatorname{End}(E_{1}).\) One way of realizing this correspondence is obtained through the kernel ideals defined in [10].
Given an integral left-\(\mathcal{O}_{0}\)-ideal \(I\), we define the kernel of \(I\) as the subgroup \[E_{0}[I]=\{P\in E_{0}(\overline{\mathbb{F}}_{p^{2}}):\alpha(P)=0\text{ for all }\alpha\in I\}.\] To \(I\), we associate the isogeny \[\varphi_{I}:E_{0}\to E_{0}/E_{0}[I].\] Conversely, given an isogeny \(\varphi\), the corresponding _kernel ideal_ is \[I_{\varphi}=\{\alpha\in\mathcal{O}_{0}\ :\ \alpha(P)=0\text{ for all }P\in\ker(\varphi)\}.\] In Table 1, we recall the main features of the Deuring correspondence. Effective Deuring correspondence. After establishing the nice theoretical results of the Deuring correspondence, it is natural to ask if we can obtain efficient algorithms to perform the translation between the two sides of our correspondence. This trend of work was started by Kohel, Lauter, Petit and Tignol in [13], and developed by Galbraith, Petit and Silva in [14]. In [1], Eisentrager, Hallgren, Lauter, Morrison and Petit provided the first complete picture of the situation (at least heuristically). It turns out that if we start from the quaternion side (either as a maximal order or an ideal), there are polynomial-time algorithms to compute the corresponding element (\(j\)-invariant or isogeny). In particular, Eisentrager et al. introduced a heuristic polynomial-time algorithm that computes the \(j\)-invariant corresponding to a maximal order type given as input. Henceforth, we call this algorithm \(\mathsf{SingleOrderTojInvariant}()\). It will be a crucial building block in one of our algorithms. ## 3 Computing \(j\)-invariants corresponding to maximal orders. Let us fix some prime number \(p\). In this section, we introduce two algorithms to compute the \(j\)-invariants of supersingular curves over \(\mathbb{F}_{p^{2}}\) corresponding to a set \(\mathfrak{S}\) of maximal order types in \(\mathcal{B}_{p,\infty}\). By the Deuring correspondence, we know that each maximal order type in \(\mathcal{B}_{p,\infty}\) corresponds to one or two \(j\)-invariants of supersingular curves over \(\mathbb{F}_{p^{2}}\). We will explain in Section 3.1 how to represent efficiently maximal order types as elements in some set \(\mathcal{H}\). Concretely, the input \(\mathfrak{S}\) to our algorithms will be given as some subset of \(\mathcal{H}\). Our two algorithms presented in Section 3.2 target two opposite situations with respect to the relative size of \(p\) and \(\mathfrak{S}\). The first algorithm is called \(\mathsf{OrdersTojInvariantSmall}()\). As the name suggests, it targets the case where \(\#\mathfrak{S}/p\) is "small", and it is a quite direct application of standard results on the effective Deuring correspondence. Indeed, it consists in the sequential execution, for each element in \(\mathfrak{S}\), of the algorithm \(\mathsf{SingleOrderTojInvariant}()\) introduced in [1], which computes the \(j\)-invariants associated to one maximal order type. \(\mathsf{OrdersTojInvariantSmall}()\) will work best when \(\mathfrak{S}\) is made of a "small" portion of all possible types. Its asymptotic complexity is \(O(\#\mathfrak{S}\log(p)^{4+\varepsilon}+\log(p)^{6+\varepsilon})\) and it works for any prime \(p\) and set \(\mathfrak{S}\). The second algorithm is called \(\mathsf{OrdersTojInvariant}()\) and it is more involved in both design and analysis.
It targets the case where \(\mathfrak{S}\) is made of a significant portion of all \(O(p)\) possible types, and is based on the idea that, since \(\mathfrak{S}\) is big enough, the strategy that consists in going through the entire supersingular isogeny graph, collecting the \(j\)-invariants we want along the way, is close to optimal. Its complexity is \(O(\#\mathfrak{S}\log(p)^{2+\varepsilon}+p\log(p)^{1+\varepsilon})\). Hence, the cutoff between the two methods will be for some \(\mathfrak{S}\) with \(\#\mathfrak{S}=\Theta(p/\log(p)^{2+\varepsilon})\). Note that our second algorithm will be optimal when \(p/\#\mathfrak{S}=\Theta(\log(p)^{1+\varepsilon})\). In the rest of this work, we will sometimes refer to these two algorithms by calling them the "small" and "big" case respectively.

\begin{table} \begin{tabular}{l l} \hline Supersingular \(j\)-invariants over \(\mathbb{F}_{p^{2}}\) & Maximal orders in \(\mathcal{B}_{p,\infty}\) \\ \(j(E)\) (up to Galois conjugacy) & \(\mathcal{O}\cong\mathrm{End}(E)\) (up to isomorphism) \\ \hline \((E_{1},\varphi)\) with \(\varphi:E\to E_{1}\) & \(I_{\varphi}\) integral left \(\mathcal{O}\)-ideal and right \(\mathcal{O}_{1}\)-ideal \\ \hline \(\theta\in\mathrm{End}(E)\) & Principal ideal \(\mathcal{O}\theta\) \\ \hline \(\mathrm{deg}(\varphi)\) & \(n(I_{\varphi})\) \\ \hline \(\hat{\varphi}\) & \(\overline{I_{\varphi}}\) \\ \hline \(\varphi:E\to E_{1},\psi:E\to E_{1}\) & Equivalent ideals \(I_{\varphi}\sim I_{\psi}\) \\ \hline Supersingular \(j\)-invariants over \(\mathbb{F}_{p^{2}}\) & \(\mathrm{Cl}(\mathcal{O})\) \\ \hline \(\tau\circ\rho:E\to E_{1}\to E_{2}\) & \(I_{\tau\circ\rho}=I_{\rho}\cdot I_{\tau}\) \\ \hline \end{tabular} \end{table} Table 1: The Deuring correspondence, a summary from [1].

### Hashing to maximal order types

One of the important points for making our algorithms practical is to have a good way to handle sets of maximal order types and to test whether a type belongs to some set of types. For any maximal order \(\mathcal{O}\), we will represent \(\operatorname{Typ}\mathcal{O}\) by an invariant \(H(\mathcal{O})\). The purpose of this section is to introduce an efficiently computable invariant \(H(\mathcal{O})\) and the corresponding function \(H\). To derive an invariant for an isomorphism class of lattices, it is quite natural to look at the smallest elements of that lattice. This idea was introduced by Chevyrev and Galbraith [10] in a related context. Let us take a maximal order \(\mathcal{O}\). It can be shown (see [10]) that \(1,x_{1},x_{2},x_{3}\) is a basis of \(\mathcal{O}\), where \(2x_{1}-\operatorname{tr}(x_{1}),2x_{2}-\operatorname{tr}(x_{2}),2x_{3}-\operatorname{tr}(x_{3})\) realize the successive minima of the lattice \(\mathcal{O}^{T}=\{2x-\operatorname{tr}(x)\mid x\in\mathcal{O}\}\). Thus, if we take the Gram matrix of this basis (possibly reordering \(x_{1},x_{2},x_{3}\) so that \(\operatorname{tr}(x_{1}x_{2})\leq\operatorname{tr}(x_{1}x_{3})\leq\operatorname{tr}(x_{2}x_{3})\) if there are equalities of norms between \(x_{1},x_{2}\) and \(x_{3}\)), we obtain 10 values (the Gram matrix is symmetric) that uniquely represent the lattice \(\mathcal{O}\). This is enough to obtain an invariant of size \(O(\log(p))\), as it can be shown that \(\log(n(x_{i}))=O(\log(p))\) for all \(i\in\{1,2,3\}\). If needed, for compactness, one can then apply some kind of hash function \(h:\{0,1\}^{*}\to\mathcal{H}\) where \(\mathcal{H}\) is big enough to make the probability of a collision negligible over all our maximal order types.
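To make this concrete, here is a minimal Python sketch of the Gram-matrix-plus-hash computation (steps 4 and 5 of the function \(H\) defined next). The lattice-reduction step producing the reduced basis \(1,x_{1},x_{2},x_{3}\) is assumed to be already done, and all function names are illustrative rather than taken from an existing library.

```
from fractions import Fraction
import hashlib

# Quaternions in B_{p,oo} = (-q, -p | Q), written over the basis (1, i, j, k)
# with i^2 = -q, j^2 = -p, k = ij = -ji.  Elements are 4-tuples of Fractions.

def conj(x):
    a, b, c, d = x
    return (a, -b, -c, -d)

def mul(x, y, q, p):
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    # multiplication table for i^2 = -q, j^2 = -p, k = ij
    a = a1*a2 - q*b1*b2 - p*c1*c2 - q*p*d1*d2
    b = a1*b2 + b1*a2 + p*(c1*d2 - d1*c2)
    c = a1*c2 + c1*a2 - q*(b1*d2 - d1*b2)
    d = a1*d2 + d1*a2 + b1*c2 - c1*b2
    return (a, b, c, d)

def trd(x):
    return 2 * x[0]                              # reduced trace

def gram_invariant(basis, q, p):
    """basis: the (already reduced) basis 1, x1, x2, x3 of a maximal order.
    Hashes the 10 upper-triangular entries of the Gram matrix of the
    reduced-trace pairing <x, y> = trd(x * conj(y))."""
    g = [[trd(mul(basis[m], conj(basis[n]), q, p))
          for n in range(4)] for m in range(4)]
    entries = [g[m][n] for m in range(4) for n in range(m, 4)]
    return hashlib.sha256(",".join(str(e) for e in entries).encode()).hexdigest()

# Example: the maximal order <1, i, (i+j)/2, (1+k)/2> for q = 1 and p = 3 mod 4
F = Fraction
O0 = [(F(1), F(0), F(0), F(0)), (F(0), F(1), F(0), F(0)),
      (F(0), F(1, 2), F(1, 2), F(0)), (F(1, 2), F(0), F(0), F(1, 2))]
print(gram_invariant(O0, q=1, p=103))
```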
For a generic statement, we take an arbitrary function \(h\) (which might be the identity) and assume its computational cost is negligible. Then, we define \(H\) as:

1. Compute \(\mathcal{O}^{T}\).
2. Compute the three successive minima of \(\mathcal{O}^{T}\).
3. Derive the corresponding basis \(1,x_{1},x_{2},x_{3}\) of \(\mathcal{O}\).
4. Compute the Gram matrix \(M\) associated to this basis.
5. Output \(H(\mathcal{O})=h((M_{i,j})_{1\leq i\leq j\leq 4})\).

Proposition 1: _The hash function \(H\) presented above can be computed in \(O(\log(X)^{1+\varepsilon})\) when all the coefficients in the decomposition of the basis of \(\mathcal{O}\) over \(\langle 1,i,j,k\rangle\) are smaller than \(X\)._

Proof: This can be achieved by using the algorithm of [1] to reduce ternary quadratic forms, or the \(\tilde{L}^{1}\) algorithm from [14] to perform lattice reduction in small dimension, to compute the successive minima of \(\mathcal{O}^{T}\).

### Matching maximal orders and \(j\)-invariants

Let us fix a prime \(p\). We assume that a function \(H\) as introduced in Section 3.1 is defined, and we assume that the underlying hash function \(h\) is such that there are no collisions over all maximal order types in \(\mathcal{B}_{p,\infty}\).

_The case of small \(\mathfrak{S}\)._ The algorithm in the small case is quite simple to describe: it is made of consecutive executions of the sub-algorithm \(\mathsf{SingleOrderTojInvariant}()\) described as [1, Algorithm 12]. The complexity of this algorithm was analyzed by Galbraith, Petit and Silva in [14], and it is \(O(\log(p)^{6+\varepsilon})\) for the first execution and \(O(\log(p)^{4+\varepsilon})\) after that. The algorithm we obtain by applying \(\mathsf{SingleOrderTojInvariant}()\) to each element of the input set \(\mathfrak{S}\) is called \(\mathsf{OrdersTojInvariantSmall}()\), and its complexity is \(O(\#\mathfrak{S}\log(p)^{4+\varepsilon}+\log(p)^{6+\varepsilon})\). Note that this result holds under various heuristics that are detailed in [1, 14]. In terms of space, the complexity is optimal: \(O(\#\mathfrak{S}\log(p))\).

_The case of big \(\#\mathfrak{S}\)._ The nice thing about \(\mathsf{SingleOrderTojInvariant}()\) is that it allows us to target any specific maximal order type. This flexibility makes \(\mathsf{OrdersTojInvariantSmall}()\) a good candidate when \(\mathfrak{S}\) is not too big, but it becomes more and more costly as \(\#\mathfrak{S}/p\) increases. To overcome this problem when the ratio \(\#\mathfrak{S}/p\) increases, we can share as much computation as possible between the different types by using an approach that explores the entire isogeny graph and only targets the specific elements of \(\mathfrak{S}\) along the way. Since our exploration of the isogeny graph is completely generic, we will be able to do this more efficiently than applying \(\mathsf{SingleOrderTojInvariant}()\) to each maximal order type. More concretely, our idea is the following: take a smooth degree \(L\) such that all supersingular curves are \(L\)-isogenous to some starting curve \(E_{0}\) whose endomorphism ring lies in \(\operatorname{Typ}\mathcal{O}_{0}\) for some maximal order \(\mathcal{O}_{0}\). Compute all the \(\mathcal{O}_{0}\)-ideals of norm \(L\) and their right orders, and select the ones whose types lie in the set \(\mathfrak{S}\). Then, enumerate efficiently through all the corresponding isogenies of degree \(L\) and collect the \(j\)-invariants of the codomains.
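The structure of this smooth degree \(L\) is formalized in Lemma 1 below; as a preview, the following Python sketch greedily assembles such a factorization into coprime prime powers, using the count \(\Phi(\ell^{e})=(\ell+1)\ell^{e-1}\) of cyclic subgroups of order \(\ell^{e}\). The constant \(C_{0}=8\) and the greedy strategy are illustrative choices, not the exact construction of the lemma.

```
import math

def phi_cyclic(ell, e):
    # number of cyclic subgroups of order ell^e, i.e. of cyclic ell^e-isogenies
    return (ell + 1) * ell ** (e - 1)

def small_primes(bound):
    sieve = [True] * (bound + 1)
    sieve[0:2] = [False, False]
    for n in range(2, int(bound ** 0.5) + 1):
        if sieve[n]:
            sieve[n*n::n] = [False] * len(sieve[n*n::n])
    return [n for n, is_p in enumerate(sieve) if is_p]

def degree_basis(N, C0=8.0):
    """Greedily pick coprime prime powers L_i with
    sqrt(C0 log N) <= L_i < C0 log N until prod Phi(L_i) >= N."""
    hi = C0 * math.log(N)
    lo = math.sqrt(hi)
    basis, prod = [], 1
    for ell in small_primes(int(hi)):
        e = 1
        while ell ** (e + 1) < hi:       # largest power staying below the window top
            e += 1
        if ell ** e < lo:
            continue                     # this prime cannot reach the window
        basis.append((ell, e))
        prod *= phi_cyclic(ell, e)
        if prod >= N:
            return sorted(basis, reverse=True), prod   # largest primes first
    raise ValueError("increase C0: window too small for this N")

print(degree_basis(10**9))
```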
Since quaternion operations cost less, we will do the exhaustive part over the quaternions, while minimizing the cost of elliptic curve operations by "selecting" the \(L\)-isogenies that we cannot avoid computing. Note that we do the ideal and isogeny phases simultaneously in \(\mathsf{OrdersTojInvariant}()\). The good complexity we obtain will come from the care we take in computing all the required isogenies in the most efficient way possible, and in particular in avoiding as many useless isogeny computations as possible. For that, the choice of \(L\) will be very important. In particular, if \(L=L_{1}L_{2}\), by factorization of isogenies, we can compute all relevant \(L\)-isogenies by computing all \(L_{1}\)-isogenies and only a subset of all the \(O(L_{1}L_{2})\) \(L_{2}\)-isogenies. This subset obviously depends on \(\mathfrak{S}\), and these isogeny computations account for the \(\#\mathfrak{S}\log(p)^{2+\varepsilon}\) term in the complexity (because we will choose \(L_{2}\) to be smooth). The \(p\log(p)^{1+\varepsilon}\) term covers the costs of the quaternion operations (which do not depend on \(\mathfrak{S}\), since we cover all maximal order types) and of the \(L_{1}\)-isogeny computations. Before describing the algorithm itself, we need to provide several properties and one heuristic claim that are going to be crucial for analyzing the algorithm's complexity and proving its correctness. Note that our heuristic is different from the ones used in the analysis of \(\mathsf{SingleOrderTojInvariant}()\).

_Preliminary results and a heuristic assumption._ Our first results target the degree \(L\) of the isogenies that we will use. As we explained above, this degree is crucial for optimizing the algorithm. To enable the fast computation of a lot of \(L\)-isogenies at the same time, we need it to be power-smooth (for efficient application of the Velu formulas), but with coprime factors that are not too small. Let us write \(\Phi(N)\) for the number of cyclic isogenies of degree \(N\), for any \(N\in\mathbb{N}\).

Lemma 1: _There exist a bound \(B\) and constants \(C_{0},1<C_{1}<C_{2}\) such that for every number \(N>B\), there exists a set of \(n\) coprime factors \(L_{1},\cdots,L_{n}\) with \(\sqrt{C_{0}\log(N)}\leq L_{i}<C_{0}\log(N)\) for all \(1\leq i\leq n-1\), \(C_{0}/2\log(N)\leq L_{n}<C_{0}\log(N)\) and \(C_{1}N\leq\prod_{i=1}^{n}\Phi(L_{i})<C_{2}N\)._

The proof of Lemma 1 is not hard, but it is quite tedious, so it is given in Appendix 0.A.

Remark 1: In practice, for a given \(N\), we will call such a set of factors \(L_{1},\cdots,L_{n}\) a _degree-basis_ for \(N\). When each \(L_{i}\) is a prime power of the \((n-i+1)\)-th smallest prime number \(\ell_{n-i+1}\), we say that \(L_{1},\ldots,L_{n}\) is a **good** _degree-basis_ of \(N\). The choice of sorting the prime factors of the \(L_{i}\) in descending order will be important for our algorithm. We derive the following result, which will be useful in our analysis.

Proposition 2: _Take an integer \(N>B\) with good degree-basis \(L_{1},L_{2},\cdots,L_{n}\) with \(n>6\). Then, there exist a constant \(k\leq 6\) and a constant \(C_{3}\) such that_ \[\sum_{i=1}^{n-k}(n-i)\prod_{j=1}^{i}\Phi(L_{j})<C_{3}\frac{N}{\log(N)^{2}} \tag{3.1}\]

Proof: We are going to show the bound for \(k=6\). For that, we are going to use the equalities \(\sum_{i=0}^{m}X^{i}=\frac{X^{m+1}-1}{X-1}\) and \(\sum_{i=1}^{m}iX^{i}=X\left(\frac{mX^{m+1}-(m+1)X^{m}+1}{(X-1)^{2}}\right)\), which hold for any \(m\) and \(X\).
By our definition of a degree-basis in Lemma 1, we have \(\Phi(L_{j})\geq L_{j}\geq\sqrt{C_{0}\log(N)}\) for all \(1\leq j\leq n\). Thus, \(\prod_{j=1}^{i}\Phi(L_{j})<C_{2}N/\prod_{j=i+1}^{n}\Phi(L_{j})<C_{2}N/(\sqrt{C_{0}\log(N)})^{n-i}\). Let us take \(X=1/\sqrt{C_{0}\log(N)}\), so that \(\prod_{j=1}^{i}\Phi(L_{j})<C_{2}NX^{n-i}\) and \(X<1\) for \(N\) big enough. We have \(\sum_{i=1}^{n-6}(n-i)\prod_{j=1}^{i}\Phi(L_{j})<C_{2}N\sum_{i=1}^{n-6}(n-i)X^{n-i}\). Then, we have \(\sum_{i=1}^{n-6}(n-i)X^{n-i}=\sum_{i=6}^{n-1}iX^{i}=X^{6}\sum_{i=0}^{n-7}(i+6)X^{i}\). Then, we apply our two equalities to get \(\sum_{i=0}^{n-7}(i+6)X^{i}=6\frac{X^{n-6}-1}{X-1}+\frac{(n-7)X^{n-5}-(n-6)X^{n-6}+X}{(X-1)^{2}}=\frac{(n-1)X^{n-5}-nX^{n-6}-5X+6}{(X-1)^{2}}\). Since, asymptotically, \(X\) goes to \(0\) while \(n\) increases, this fraction is bounded by a constant, and so there exists \(C_{3}^{\prime}\) such that \(\sum_{i=1}^{n-6}(n-i)X^{n-i}\leq C_{3}^{\prime}X^{6}\leq C_{3}^{\prime}X^{4}\) (using \(X<1\)). Thus, \(\sum_{i=1}^{n-6}(n-i)\prod_{j=1}^{i}\Phi(L_{j})<\frac{C_{2}C_{3}^{\prime}}{C_{0}^{2}}\frac{N}{\log(N)^{2}}\).

Our heuristic claim is about the size of \(L\) required to meet the condition that all supersingular curves are \(L\)-isogenous to some curve \(E_{0}\). We rewrite this under the Deuring correspondence as a condition on maximal orders and ideals.

**Claim 1**: _There exists a constant \(C_{4}\) such that for any prime \(p\) and maximal order \(\mathcal{O}_{0}\) in \(\mathcal{B}_{p,\infty}\), given any number \(N>pC_{4}\), every maximal order type in \(\mathcal{B}_{p,\infty}\) is obtained as the type of the right order of a left integral \(\mathcal{O}_{0}\)-ideal of norm \(N\)._

Remark 2: Claim 1 is consistent with experiments regarding the diameter of the graph of supersingular \(2\)-isogenies made in [ACNL\({}^{+}\)21]. We also made some small experiments that seem to be consistent with that idea. In case our claim fails, it is possible to use several starting curves \(E_{0}\) to decrease the probability of missing some curve. If this is still not enough, and a few types are not obtained in this manner, it is always possible to apply \(\mathsf{SingleOrderTojInvariant}()\) to compute the remaining \(j\)-invariants without damaging the complexity too much.

For a given input \(p\) to \(\mathsf{OrdersTojInvariant}()\), we assume the knowledge of a good degree-basis \(L_{1},\cdots,L_{n}\) (see Remark 1) of \(C_{4}p\) (where \(C_{4}\) is the constant in Claim 1). In \(\mathsf{OrdersTojInvariant}()\), we will use the \(L_{i}\)-torsion points for all \(i\). For supersingular curves, these points can always be defined over an extension \(\mathbb{F}_{p^{m_{i}}}\). We have the following lemma to bound the value of all the \(m_{i}\) in terms of \(L_{i}\).

**Lemma 2**: _Let \(p\) be a prime number and \(E_{0}\) a supersingular curve over \(\mathbb{F}_{p^{2}}\). For any integer \(N\), coprime with \(p\), the torsion subgroup \(E_{0}[N]\) is defined over an extension \(\mathbb{F}_{p^{2m}}\) of degree \(m\leq N\) over \(\mathbb{F}_{p^{2}}\)._

```
0: A prime \(p\). A good degree-basis \(L_{1},\cdots,L_{n}\), and a set \(\mathfrak{S}\) of maximal order types in \(\mathcal{B}_{p,\infty}\).
0: The set of \(j\)-invariants corresponding to the maximal orders of \(\mathfrak{S}\).
1: Compute a supersingular curve \(E_{0}\) over \(\mathbb{F}_{p^{2}}\) with known endomorphism ring \(\mathcal{O}_{0}\).
2: Compute \(I_{0}=\mathcal{O}_{0}\langle\alpha_{0},L\rangle\), a left integral \(\mathcal{O}_{0}\)-ideal of norm \(L\).
3: Find \(\alpha\in\operatorname{End}(E_{0})\) such that \(\gcd(n(x+y\alpha),L)=1\) for all \(x,y\) with \(\gcd(x,y,L)=1\).
4: for \(i\in[1,\ldots,n]\) do
5: Compute a basis \(P_{0,i},Q_{0,i}\) of \(E_{0}[L_{i}]\) over \(\mathbb{F}_{p^{m_{i}}}\).
6: Compute the generator \(R_{0,i}\) of \(E_{0}[I_{0}+\mathcal{O}_{0}L_{i}]\) from \(P_{0,i},Q_{0,i}\).
7: Compute \(S_{0,i}=\alpha(R_{0,i})\).
8: end for
9: Set \(\mathtt{List}=[\{E_{0},\mathcal{O}_{0}\langle 1\rangle,[R_{0,1},\ldots,R_{0,n}],[S_{0,1},\ldots,S_{0,n}]\}]\).
10: Set \(M=\emptyset\) and \(m=0\).
11: if \(H(\mathcal{O}_{0})\in\mathfrak{S}\) then
12: \(M=M\cup\{(H(\mathcal{O}_{0}),j(E_{0}))\}\) and \(m=m+1\).
13: end if
14: for \(i\in[1,\ldots,n]\) do
15: \(\mathtt{NewList}=[]\).
16: for \(x\in\mathtt{List}\) do
17: Parse \(x\) as \(E,I,[R_{i},\ldots,R_{n}],[S_{i},\ldots,S_{n}]\).
18: for all cyclic subgroups \(\langle[C]R_{i}+[D]S_{i}\rangle\) of order \(L_{i}\) in \(\langle R_{i},S_{i}\rangle\) do
19: Compute \(P:=[C]R_{i}+[D]S_{i}\).
20: Compute \(J\), the ideal \(\mathcal{O}_{0}\langle\alpha_{0}(C+D\overline{\alpha}),L_{i}\rangle\).
21: Set \(K:=I\cap J\).
22: Let \(\mathcal{O}=\mathcal{O}_{R}(K)\).
23: if \(i<n\) then
24: Compute \(\varphi:E\to E/\langle P\rangle\).
25: Compute \(\mathtt{List}_{1}=[\varphi(R_{i+1}),\ldots,\varphi(R_{n})]\).
26: Compute \(\mathtt{List}_{2}=[\varphi(S_{i+1}),\ldots,\varphi(S_{n})]\).
27: end if
28: if \(H(\mathcal{O})\in\mathfrak{S}\) and not contained in \(M\) already then
29: if \(i=n\) then
30: Compute \(\varphi:E\to E/\langle P\rangle\).
31: Set \(\mathtt{List}_{1}=[]\), \(\mathtt{List}_{2}=[]\).
32: end if
33: \(M=M\cup\{(H(\mathcal{O}),j(E/\langle P\rangle))\}\) and \(m=m+1\).
34: end if
35: if \(m=\#\mathfrak{S}\) then
36: Return \(M\).
37: end if
38: Append \([E/\langle P\rangle,K,\mathtt{List}_{1},\mathtt{List}_{2}]\) to \(\mathtt{NewList}\).
39: end for
40: end for
41: Set \(\mathtt{List}=\mathtt{NewList}\).
42: end for
43: return \(M\).
```
**Algorithm 1** \(\mathsf{OrdersTojInvariant}()\)

**Proposition 3**: _Assuming Claim 1, \(\mathsf{OrdersTojInvariant}(p)\) is correct._

Proof: By Claim 1 and our choice of \(L\), we see that each maximal order type in \(\mathcal{B}_{p,\infty}\) is obtained as the right order of an \(L\)-ideal. Thus, we need to prove that our algorithm goes through every possible \(L\)-ideal and that it correctly computes the \(j\)-invariant associated with the right orders of those ideals. We will make our reasoning over isogenies, and the Deuring correspondence will allow us to conclude the result over ideals. Every cyclic \(L\)-isogeny can be factored as \(\varphi_{n}\circ\ldots\circ\varphi_{1}\) where \(\varphi_{i}\) is an isogeny of degree \(L_{i}\). When \(R_{i},S_{i}\) is a basis of \(E[L_{i}]\), all cyclic subgroups of order \(L_{i}\) are generated by an element \([C]R_{i}+[D]S_{i}\), and there is a 1-to-1 correspondence between cyclic subgroups of order \(L_{i}\) and cyclic isogenies of degree \(L_{i}\). Since \(R_{0,i},S_{0,i}\) is a basis of \(E_{0}[L_{i}]\) and \(\varphi_{i-1}\circ\cdots\circ\varphi_{1}\) has degree coprime with \(L_{i}\), the two points \(R_{i},S_{i}\) are a basis of \(E[L_{i}]\), and so this proves that our enumeration covers all possible isogenies of degree \(L_{i}\) at each iteration of index \(i\) of the loop in line 18. Thus, at the end of the loop we have covered all \(L\)-isogenies, and this means that if our ideal computation is correct, then we have covered all maximal order types and our algorithm is correct.
It remains to prove that the ideal \(I_{1}\cap\ldots\cap I_{i}\), where each \(I_{j}=\mathcal{O}_{0}\langle\alpha_{0}(C_{j}+D_{j}\overline{\alpha}),L_{j}\rangle\), is the ideal corresponding to the isogeny \(\varphi_{i}\circ\ldots\circ\varphi_{1}\), where each \(\varphi_{j}\) has kernel generated by \(\varphi_{j-1}\circ\cdots\circ\varphi_{1}([C_{j}]R_{0,j}+[D_{j}]S_{0,j})\), or equivalently that the kernel of \(\varphi_{i}\circ\ldots\circ\varphi_{1}\) is \(\langle\sum_{j=1}^{i}([C_{j}]R_{0,j}+[D_{j}]S_{0,j})\rangle\). For that, it suffices to prove the result for each coprime factor of the degree, so we need to prove that \(E_{0}[I_{j}]=\langle[C_{j}]R_{0,j}+[D_{j}]S_{0,j}\rangle\). Let us go back to the definition of a kernel ideal given in Section 2. We have \(E_{0}[I_{j}]=\{P\in E_{0}:\beta(P)=0\text{ for all }\beta\in I_{j}\}\). Since \(I_{j}\) contains \(L_{j}\mathcal{O}_{0}\), it is clear that the kernel must be a subgroup of \(E_{0}[L_{j}]\). Since multiplication in \(\mathcal{B}_{p,\infty}\) amounts to composition of the corresponding isogenies, it suffices to verify that \(\ker\alpha_{0}(C_{j}+D_{j}\overline{\alpha})\cap E_{0}[L_{j}]=\langle[C_{j}]R_{0,j}+[D_{j}]S_{0,j}\rangle\). First, note that we have \(\ker\alpha_{0}\cap E_{0}[L_{j}]=\langle R_{0,j}\rangle\) by definition of \(\alpha_{0}\) and \(R_{0,j}\). Then, using \(S_{0,j}=\alpha(R_{0,j})\), we have \((C_{j}+D_{j}\overline{\alpha})([C_{j}]R_{0,j}+[D_{j}]S_{0,j})=(C_{j}+D_{j}\overline{\alpha})(C_{j}+D_{j}\alpha)(R_{0,j})=[n(C_{j}+D_{j}\alpha)]R_{0,j}\). This proves that we have \(\langle[C_{j}]R_{0,j}+[D_{j}]S_{0,j}\rangle\subset\ker\alpha_{0}(C_{j}+D_{j}\overline{\alpha})\cap E_{0}[L_{j}]\). By definition of \(\alpha\), the scalar \(n(C_{j}+D_{j}\alpha)\) is coprime with \(L_{j}\), and so the endomorphism \(C_{j}+D_{j}\alpha\) is a bijection on \(E_{0}[L_{j}]\). Thus, there cannot be another subgroup than \(\langle[C_{j}]R_{0,j}+[D_{j}]S_{0,j}\rangle\) that is sent to \(\langle R_{0,j}\rangle\), and this concludes the proof that \(\ker\alpha_{0}(C_{j}+D_{j}\overline{\alpha})\cap E_{0}[L_{j}]=\langle[C_{j}]R_{0,j}+[D_{j}]S_{0,j}\rangle\). With that last fact, we have proven the point.

_Complexity analysis._ Below, we give as Theorem 1 a complexity statement for Algorithm 1. We derive this result from smaller statements for all the main steps of Algorithm 1. The proofs of these statements include a more detailed description of the steps when needed. When a step has a complexity that will end up being negligible in the total cost, we will sometimes not bother with a precise statement. Note that all operations involving manipulations of the list \(M\) can be done efficiently in \(O(1)\) using an adequate data structure, so we do not analyze this part of the computation. Since we look for an asymptotic statement, we assume that the size \(n\) of the good degree-basis used is bigger than \(6\), so we can apply Proposition 2.

Proposition 4: _Under GRH, Step 1 can be executed in \(O(\mathsf{poly}(\log(p)))\)._

Proof: The first step can be performed using an algorithm that was described as part of the proof of [1, Proposition 3]. The idea is the following. Select the smallest fundamental discriminant \(d\) such that \(p\) is inert in the ring of integers \(R_{d}\) of \(\mathbb{Q}(\sqrt{d})\). Note that it can be proven that \(d=O(\log(p)^{2})\) under GRH. Then, we know that there are supersingular curves that admit an embedding of \(R_{d}\) in their endomorphism ring. The \(j\)-invariants of these curves are the roots of the Hilbert class polynomial \(H_{d}\).
It suffices to find one root of \(H_{d}\) over \(\mathbb{F}_{p}\) to get a supersingular curve \(E_{0}\) whose endomorphism ring will contain the Frobenius \(\pi\) and an endomorphism \(\iota\) of norm \(d\). This endomorphism can be found by computing all the isogenies of degree \(d\) with the Velu formulas. Since the suborder \(\langle 1,\iota,\pi,\iota\circ\pi\rangle\) has index \(O(d)=O(\mathsf{poly}(\log(p)))\) inside \(\operatorname{End}(E_{0})\), recovering the full endomorphism ring can be done in \(O(\mathsf{poly}(\log(p)))\). This proves the result for Step 1.

Proposition 5: _Steps 2 and 3 can be executed in \(O(\mathsf{poly}(\log(p)))\), and the outputs \(\alpha_{0}\) and \(\alpha\) have coefficients in \(O(\mathsf{poly}(p))\) over the canonical basis of \(\mathcal{B}_{p,\infty}\)._

Proof: First, note that we can fix a basis of \(\mathcal{O}_{0}\) with coefficients (over the canonical basis of \(\mathcal{B}_{p,\infty}\)) in \(O(p)\). Steps 2 and 3 can be solved in a similar manner, despite final goals that are quite different. For Step 2, it is sufficient to find a \(\beta_{0}\) such that the matrix of the action of \(\beta_{0}\) on a basis of the \(L\)-torsion has two distinct eigenvalues. Then, we can take \(\alpha_{0}=\beta_{0}-\lambda\), where \(\lambda\) is one of the two eigenvalues. Note that this is equivalent to saying that \(\beta_{0}\) needs to have two distinct eigenvalues mod \(L_{i}\) for all \(1\leq i\leq n\). For Step 3, a sufficient condition to obtain \(\alpha\) such that \(\gcd(n(x+y\alpha),L)=1\) for all \(x,y\) with \(\gcd(x,y,L)=1\) is that the matrix of \(\alpha\) over a basis of the \(L_{i}\)-torsion has no eigenvalues for all \(1\leq i\leq n\). The existence and values of these eigenvalues \(\bmod L_{i}\) for a quaternion element \(\beta\) can be verified directly by computing the roots of the polynomial \(X^{2}-\operatorname{tr}(\beta)X+n(\beta)\bmod L_{i}\). Thus, to solve the two steps, we can apply the following method. First, for each \(L_{i}\) dividing \(L\), find one element \(\beta_{0,i}\) (resp. \(\alpha_{i}\)) in \(\mathcal{O}_{0}/L_{i}\mathcal{O}_{0}\) with two distinct (resp. no) eigenvalues \(\bmod L_{i}\). Since the ring \(\mathcal{O}_{0}/L_{i}\mathcal{O}_{0}\) is isomorphic to \(\mathbb{M}_{2}(\mathbb{Z}/L_{i}\mathbb{Z})\), it is clear that a solution can be found by enumerating over the \(L_{i}^{4}\) elements of \(\mathcal{O}_{0}/L_{i}\mathcal{O}_{0}\). Then, the final element \(\beta_{0}\) (resp. \(\alpha\)) can be obtained by CRT (doing this coefficient-wise over the basis of \(\mathcal{O}_{0}\)). The coefficients of \(\beta_{0}\) (resp. \(\alpha\)) will have size \(O(L)\) over the basis of \(\mathcal{O}_{0}\), and so we get the desired result by taking into account the coefficients of this basis over the canonical basis of \(\mathcal{B}_{p,\infty}\). For each \(L_{i}\), we have to enumerate at most \(L_{i}^{4}\) quaternion elements, so the final complexity statement follows from the complexity of computing modular square roots and the complexity of CRT.

Proposition 6: _The FOR loop in line 4 can be executed in \(O(\mathsf{poly}(\log(p)))\)._

Proof: The loop is repeated \(O(\log(p))\) times. Computing a basis is a very standard task, and it can be done in \(O(\mathsf{poly}(\log(p)))\) since \(m_{i}=O(\log(p))\). The generator \(R_{0,i}\) can be computed in \(O(\mathsf{poly}(\log(p)))\) using the algorithm described in [1] to find the kernel of an ideal.
Evaluating the endomorphism \(\alpha\) can be done by evaluating a basis of \(\operatorname{End}(E_{0})\) and then performing the scalar multiplications corresponding to the coefficients of \(\alpha\) in this basis. The first part can be done in \(O(\mathsf{poly}(\log(p)))\) by choice of \(E_{0}\), and the second can also be done in \(O(\mathsf{poly}(\log(p)))\) by the size bound given on the coefficients of \(\alpha\) in Proposition 5.

The main computational task of \(\mathsf{OrdersTojInvariant}()\) is quite clearly performed during the loop in line 14. This FOR loop contains two inner loops. We will incrementally provide complexity statements for each of these loops in order to clearly decompose the cost of each operation (below, we write \(\mathrm{llog}\) for \(\log\log\)).

Proposition 7: _At index \(i<n-6\), Step 24 produces a polynomial over \(\mathbb{F}_{p^{2}}\) of degree smaller than \(L_{i}\) in \(O(M_{\mathbb{P}}(L_{i})\log(L_{i}))\) operations over \(\mathbb{F}_{p^{m_{i}}}\). This polynomial uniquely defines the isogeny \(\varphi\). At index \(n-6\leq i\leq n\), \(L_{i}=\ell_{i}^{e_{i}}\) for some prime \(\ell_{i}\leq 13\) (the \(6\)-th prime), and Step 24 produces \(e_{i}\) polynomials of degree \(\ell_{i}\) over \(\mathbb{F}_{p^{2}}\) in \(O(e_{i}^{2})\) operations over \(\mathbb{F}_{p^{m_{n}}}\). These polynomials uniquely define the isogeny \(\varphi\)._

Proof: It can be shown with the Velu formulas [15] that, to represent an isogeny, it suffices to compute its kernel polynomial, i.e., the polynomial whose roots are the \(x\)-coordinates of the points of the kernel, and that this polynomial is always defined over \(\mathbb{F}_{p^{2}}\) even if the kernel points are not. From a single kernel point, the \(O(L_{i})\) points of the kernel can be generated in \(O(L_{i})\) operations over \(\mathbb{F}_{p^{m_{i}}}\). Then, the kernel polynomial can be constructed from its roots with complexity \(O(M_{\mathbb{P}}(L_{i})\log(L_{i}))\). When \(n-6\leq i\leq n\), we can write \(L_{i}=\ell_{i}^{e_{i}}\) for some prime \(\ell_{i}=O(1)\), and then we can factor our isogeny of degree \(L_{i}\) into \(e_{i}\) isogenies of degree \(\ell_{i}\). All these isogenies can be computed in time \(O(e_{i}^{2})\) from a kernel generator (see [1] for more on this topic).

Proposition 8: _There exists a constant \(C\) such that, at any index \(i\leq n-6\), the number of \(\mathbb{F}_{p}\)-operations executed in Steps 25, 26 of the FOR loop in line 18 is upper bounded by \(C(n-i)\log(p)M_{\mathbb{P}}(\log(p))\). When \(i>n-6\), we have the upper bound \(C(n-i)\mathrm{llog}(p)^{2}M_{\mathbb{P}}(\log(p))\)._

Proof: For each \(i<j\leq n\), we need to evaluate the polynomial produced by Step 24 on the points \(R_{i+1},\ldots,R_{n}\) and \(S_{i+1},\ldots,S_{n}\). By Proposition 7, when \(i\leq n-6\), each evaluation costs \(O(L_{i})\) operations over \(\mathbb{F}_{p^{m_{j}}}\). The cost of arithmetic over \(\mathbb{F}_{p^{m_{j}}}\), in operations over \(\mathbb{F}_{p}\), is upper bounded by \(C^{\prime}M_{\mathbb{P}}(m_{j})\) for some constant \(C^{\prime}\). So we get the result from \(L_{i}=O(\log(p))\) and \(m_{j}=O(\log(p))\) by Lemma 2. When \(i>n-6\), there are \(e_{i}=O(\mathrm{llog}(p))\) polynomials of degree \(\ell_{i}=O(1)\), and we get the desired result.

Remark 3: The Velusqrt algorithm from [1] cannot be applied here, because the kernel of the isogenies and the points on which the evaluation is performed do not live in the same extension.
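As a concrete illustration of the kernel-polynomial computation from Proposition 7, here is a Python sketch in the simplest possible setting: a kernel point defined over a prime field. The algorithm above actually works over extensions \(\mathbb{F}_{p^{m_{i}}}\), and the fast product tree behind the \(M_{\mathbb{P}}(L_{i})\log(L_{i})\) bound is replaced here by a naive product.

```
# Kernel polynomial of the isogeny with kernel <P> on y^2 = x^3 + a*x + b over F_p:
# the monic polynomial whose roots are the x-coordinates of the nonzero kernel points.

def ec_add(P, Q, a, p):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                              # opposite points: result is infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def kernel_polynomial(P, order, a, p):
    xs, Q = set(), P
    for _ in range(order - 1):                   # x-coords of P, 2P, ..., (order-1)P
        xs.add(Q[0])
        Q = ec_add(Q, P, a, p)
    assert Q is None                             # `order` really annihilates P
    poly = [1]                                   # expand prod (X - x), lowest degree first
    for x in xs:
        nxt = [0] * (len(poly) + 1)
        for i, c in enumerate(poly):
            nxt[i + 1] = (nxt[i + 1] + c) % p    # contribution of c * X^(i+1)
            nxt[i] = (nxt[i] - c * x) % p        # contribution of -x * c * X^i
        poly = nxt
    return poly

print(kernel_polynomial((0, 0), 2, a=1, p=11))   # 2-torsion point: prints [0, 1], i.e. X
```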
Proposition 9: _There exists a constant \(C\) such that, at any index \(i\leq n-6\), the number of binary operations executed in each execution of the FOR loop in line 18 is upper bounded by \(C\Phi(L_{i})(n-i)\log(p)M_{\mathbb{P}}(\log(p))M_{\mathbb{Z}}(\log(p))\). When \(n-6<i<n\), the number of binary operations is smaller than \(C\Phi(L_{i})\mathrm{llog}(p)^{2}M_{\mathbb{P}}(\log(p))M_{\mathbb{Z}}(\log(p))\)._

Proof: For each execution of this loop, the number of iterations is exactly \(\Phi(L_{i})\). Arithmetic over quaternion orders and ideals, such as intersection and right order computation, can be performed in \(O(M_{\mathbb{Z}}(\log(p)))\) (because the coefficients have size in \(O(\log(p))\), and these operations are simple linear algebra in dimension 4). Then, the hash can be computed in \(O(\log(p)^{1+\varepsilon})\). Thus, the total cost of the loop is \(O(\Phi(L_{i})\log(p)^{1+\varepsilon})\) for quaternion operations, for any \(i\). This is negligible compared to other operations when \(i<n\). The verification that \(H(\mathcal{O})\in\mathfrak{S}\) and the insertion in the hash map can be done in \(O(1)\) with the appropriate hash-map structure. Then, there is the cost of operations over \(\mathbb{F}_{p}\) for the isogeny computation. To derive the total complexity, we apply Propositions 7 and 8. Arithmetic over \(\mathbb{F}_{p}\) takes \(O(M_{\mathbb{Z}}(\log(p)))\) binary operations. For any \(i\), the kernel computation is negligible. When \(i\leq n-6\), the isogeny computation takes \[O(\Phi(L_{i})\log(p)M_{\mathbb{P}}(\log(p))M_{\mathbb{Z}}(\log(p))),\] and the evaluations take \(O((n-i)\Phi(L_{i})\log(p)M_{\mathbb{P}}(\log(p))M_{\mathbb{Z}}(\log(p)))\). Similarly, when \(n-6<i<n\), this cost is replaced by \[O(\Phi(L_{i})\mathrm{llog}(p)^{2}M_{\mathbb{P}}(\log(p))M_{\mathbb{Z}}(\log(p))).\]

Proposition 10: _There exists a constant \(C\) such that, at any index \(i\leq n-6\), the number of binary operations executed in the FOR loop in line 16 is upper bounded by \(C\prod_{j=1}^{i}\Phi(L_{j})(n-i)\log(p)^{3+\varepsilon}\). When \(n-6<i<n\), it is upper bounded by \(C\prod_{j=1}^{i}\Phi(L_{j})(n-i)\log(p)^{2+\varepsilon}\). When \(i=n\), it is upper bounded by \(C(\#\mathfrak{S}\log(p)^{2+\varepsilon}+p\log(p)^{1+\varepsilon})\)._

Proof: There are \(\Phi(L_{i})\) cyclic subgroups of order \(L_{i}\). Thus, at index \(i<n\), the size of \(\mathtt{List}\) is \(\prod_{j=1}^{i-1}\Phi(L_{j})\), and the result follows directly from Proposition 9. When \(i=n\), we perform the quaternion computations (intersection, right order and computation of the hash value) for all \(\prod_{j=1}^{n}\Phi(L_{j})\) subgroups. Thus, since we have \(\prod_{j=1}^{n}\Phi(L_{j})=O(p)\) by Lemma 1 and Claim 1, and the cost of quaternion operations is \(O(\log(p)^{1+\varepsilon})\) as in the proof of Proposition 9, we get that the cost for quaternion operations is \(O(p\log(p)^{1+\varepsilon})\). The isogeny computation is only performed when the right order is contained in \(\mathfrak{S}\); thus, we can upper bound the number of times an isogeny is computed by \(\#\mathfrak{S}\). As for all the \(i\geq n-6\), the cost of each \(L_{n}\)-isogeny computation is \(O(\operatorname{llog}(p)^{2}M_{\mathbb{P}}(\log(p))M_{\mathbb{Z}}(\log(p)))\), and this proves the result.
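Stepping back for a moment to Steps 2 and 3 of Algorithm 1: the eigenvalue conditions used in the proof of Proposition 5 amount to counting the roots of \(X^{2}-\operatorname{tr}(\beta)X+n(\beta)\bmod\ell\). A minimal Python sketch of this test for odd primes \(\ell\) follows; lifting to prime powers via Hensel's lemma, and the case \(\ell=2\), are omitted, and the function names are illustrative.

```
def legendre(a, ell):
    # Legendre symbol (a | ell) for an odd prime ell, via Euler's criterion
    a %= ell
    if a == 0:
        return 0
    return 1 if pow(a, (ell - 1) // 2, ell) == 1 else -1

def eigenvalue_count(tr, nrm, ell):
    """Number of roots mod an odd prime ell of X^2 - tr*X + nrm, i.e. of
    eigenvalues of a quaternion element with reduced trace tr and norm nrm."""
    return {0: 1, 1: 2, -1: 0}[legendre(tr * tr - 4 * nrm, ell)]

# Step 2 wants two distinct eigenvalues mod every prime ell | L; Step 3 wants none.
def good_for_step_2(tr, nrm, ells):
    return all(eigenvalue_count(tr, nrm, l) == 2 for l in ells)

def good_for_step_3(tr, nrm, ells):
    return all(eigenvalue_count(tr, nrm, l) == 0 for l in ells)

print(good_for_step_3(0, 1, [3, 7, 11]))   # X^2 + 1 has no roots mod 3, 7, 11: True
```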
Proposition 11: _The loop in line 14 can be executed in \(O(\#\mathfrak{S}\log(p)^{2+\varepsilon}+p\log(p)^{1+\varepsilon})\) binary operations._

Proof: The total cost of the loop is directly obtained by summing the bounds of Proposition 10 over all \(i\). We start by summing over all \(i\leq n-6\). We get an upper bound on the number of binary operations of \(C\sum_{i=1}^{n-6}(n-i)\prod_{j=1}^{i}\Phi(L_{j})\log(p)^{3+\varepsilon}\). So we get \(O(p\log(p)^{1+\varepsilon})\) after applying Proposition 2. For \(n-6<i<n\), we get the upper bound \[C\sum_{i=n-5}^{n-1}(n-i)\prod_{j=1}^{i}\Phi(L_{j})\log(p)^{2+\varepsilon}.\] Since, by Lemma 1, \(L_{n}>C_{0}/2\log(p)\), we have \(\prod_{j=1}^{i}\Phi(L_{j})\leq C^{\prime}p/\log(p)\) for some constant \(C^{\prime}\) and any \(i<n\). Thus, since there is a constant number of summands, the cost is in \(O(p\log(p)^{1+\varepsilon})\) for those indices. Finally, at \(i=n\), we can apply directly the bound from Proposition 10. The final cost is \(O(\#\mathfrak{S}\log(p)^{2+\varepsilon}+p\log(p)^{1+\varepsilon})\).

All the results above lead directly to the following theorem.

Theorem 1: _Under Claim 1, on input \(p\) and \(\mathfrak{S}\), \(\mathsf{OrdersTojInvariant}()\) can be executed in_ \[O(\#\mathfrak{S}\log(p)^{2+\varepsilon}+p\log(p)^{1+\varepsilon})\] _binary operations._

Remark 4: Similarly, we can show that the space requirement of \(\mathsf{OrdersTojInvariant}()\) is in \(O(\#\mathfrak{S}\log(p)+p\log(p))\).

_Good choice of primes._ The analysis we provided above for both \(\mathsf{OrdersTojInvariantSmall}()\) and \(\mathsf{OrdersTojInvariant}()\) does not assume anything about the prime \(p\). There are some "nice" choices of primes \(p\) for which we could basically gain a factor \(\log(p)\) over all elliptic curve operations by having all the required torsion points defined over \(\mathbb{F}_{p^{2}}\) (thus saving the cost of operations over big \(\mathbb{F}_{p}\)-extensions). Since we are interested in a generic statement, we do not bother with these marginal gains. In the context of applying \(\mathsf{OrdersTojInvariant}()\) to the CRT method, this idea will have its importance in the concrete choice of primes \(p_{i}\). However, due to the linear dependency in \(p\), it does not appear possible to select all CRT primes among these "nice" primes, and so the CRT complexity will depend on the worst-case complexity of our algorithm \(\mathsf{OrdersTojInvariant}()\).

## 4 Computation of the Hilbert class polynomial

Let \(D\) be a negative discriminant. As explained in the introduction, the CRT method (whether it is to compute \(H_{D}\) in \(\mathbb{Z}\) or modulo a big prime \(P\)) consists mainly in computing \(H_{D}\bmod p_{i}\) for a collection of small primes \(p_{i}\). In Section 4.1, we introduce an algorithm to perform that computation when the prime \(p_{i}\) is such that the roots of \(H_{D}\) over \(\overline{\mathbb{F}_{p}}\) are \(j\)-invariants of supersingular curves. This algorithm uses the algorithm \(\mathsf{OrdersTojInvariant}()\) from Section 3.2 as its main sub-routine. It has the best known complexity for a generic prime when \(P\) is "small" (relative to \(D\)). However, if the prime is big, it will be more efficient to use \(\mathsf{OrdersTojInvariantSmall}()\) instead, as explained in Section 3.
When modified to use \(\mathsf{OrdersTojInvariantSmall}()\) instead of \(\mathsf{OrdersTojInvariant}()\), the algorithm we present in Section 4.1 can also directly compute \(H_{D}\bmod P\) for any prime \(P\) that is non-split and coprime with the square factor of \(D\), and its complexity is in \(O(h(D)\log(P)^{4+\varepsilon})\). Depending on the respective sizes of \(P\) and \(D\) (in particular when \(\log(P)=o(|D|^{1/8})\)), this algorithm has a better asymptotic complexity than all known algorithms. In Section 4.2, we compare the gain of using our algorithm with supersingular curves in the CRT method over the best known algorithm from Sutherland [10] based on ordinary curves.

### Computing the class polynomial modulo a prime.

Let us fix some prime \(p\) and some negative discriminant \(D\). We write \(h\) for the class number of \(D\). Our goal in this section is to explain how to compute the Hilbert class polynomial \(H_{D}(X)\bmod p\). In all cases, this polynomial is reconstructed from its roots. When \(p\) can be written as \((t^{2}-Dv^{2})/4\) for two integers \(t,v\), the prime \(p\) is split in the quadratic order \(\mathfrak{O}\) of discriminant \(D\), and the roots of \(H_{D}\) are \(j\)-invariants of ordinary curves in \(\mathbb{F}_{p}\). This case was treated by Sutherland in [10]. In the opposite case, where \(p\) is non-split and coprime with the square part of \(D\), the roots are \(j\)-invariants of supersingular curves over \(\mathbb{F}_{p^{2}}\), and we explain below how to compute them. In the ordinary case, the interesting curves are obtained in two main steps: start by identifying one interesting curve, and then enumerate through all the interesting curves using the action of the class group \(\operatorname{Cl}(\mathfrak{O})\). For supersingular curves, we have four main steps. We start by doing something very similar to what is done for ordinary curves, but working over the maximal orders of the quaternion algebra \(\mathcal{B}_{p,\infty}\). The idea is that this step is much more efficient when working directly over \(\mathcal{B}_{p,\infty}\), because the operations are much simpler. The Deuring correspondence states that the maximal orders we obtain in this manner are isomorphic to the endomorphism rings of the elliptic curves we want to compute. Thus, we constitute a set \(\mathfrak{S}_{D}(p)\) of maximal order types, and we can apply \(\mathsf{OrdersTojInvariant}()\) on this set to compute the \(j\)-invariants we need. This execution constitutes our third step. The fourth and final step in our algorithm is common with the third step of the ordinary case: recover the polynomial \(H_{D}\) from its roots. Note that this is done using standard polynomial arithmetic. At a high level, our algorithm works in the following way:

1. Find a maximal order \(\mathcal{O}\) in \(\mathcal{B}_{p,\infty}\) with \(\mathfrak{O}\hookrightarrow\mathcal{O}\).
2. Use the action of \(\operatorname{Cl}(D)\) to find \(\mathfrak{S}_{D}(p)\), the set of isomorphism classes of maximal orders in \(\mathcal{B}_{p,\infty}\) with an optimal embedding of \(\mathfrak{O}\), and record their multiplicities.
3. Compute the roots of \(H_{D}\bmod p\) with \(\mathsf{OrdersTojInvariant}(p,\mathfrak{S}_{D}(p))\).
4. Recover \(H_{D}\bmod p\).

_Step 1._ This task has already been solved in the context of generating backdoor curves for the SIDH scheme [11] and generating keys for the Seta encryption scheme [10].
First, we need to solve a quadratic equation over \(\mathbb{Q}\) to find \(a,b,c,d\in\mathbb{Q}\) such that \(\mathbb{Z}[a+ib+jc+kd]\) is the quadratic order of discriminant \(D\). This can be done using Simon's algorithm [14] for solving quadratic forms in dimension \(4\). The complexity of Simon's algorithm is polynomial once the factorization of the determinant is known. In our case, the quadratic form we consider is basically \(b,c,d,e\mapsto qb^{2}+p(c^{2}+qd^{2})-e^{2}D\), and its determinant is equal to \(p^{2}q^{2}D2^{f}\) for some small integer \(f\). Hence, the full factorization is easy to compute, because we know the factorization of \(D\). Now that \(\theta=a+ib+jc+kd\) has been computed, we need to find a maximal order \(\mathcal{O}\) containing it. Let us take \(A\) as the smallest common denominator of \(a,b,c,d\); we have \(A=O(\mathsf{poly}(\log(p|D|)))\). Then \(A\theta\in\mathcal{O}_{0}\), where \(\mathcal{O}_{0}\) is any maximal order containing the suborder \(\langle 1,i,j,k\rangle\). Since \(A\theta\in\mathcal{O}_{0}\), the right order of the ideal \(I=\mathcal{O}_{0}A\theta+\mathcal{O}_{0}C\) contains \(\theta\). We can set \(\mathcal{O}=\mathcal{O}_{R}(I)\), and \(\mathcal{O}\) can be computed in \(O(\mathsf{poly}(\log(p|D|)))\). Hence, this step can be performed in \(O(\mathsf{poly}(\log(p|D|)))\) and is negligible compared to the rest of the computation.

_Step 2._ We go from one maximal order type to all maximal order types of interest by using the group action of the class group, in a manner similar to what is used by Sutherland in [12]. But, in our case, instead of isogeny computations, we can simply use arithmetic over quaternions, through the action of ideals of the form \(\mathcal{O}(\theta-\lambda)+\mathcal{O}\ell\) on the set of maximal orders containing \(\theta\), which covers all the maximal order types we need. Any group action computation for an ideal of norm \(\ell\) takes \(O(\log(\ell))\). Thus, using the same estimates as in [12], we see that this part can be performed in \(O(h\log(|D|)^{\varepsilon})=O(\sqrt{|D|}\log(|D|)^{\varepsilon})\). We can hash (with the function introduced in Section 3.1) all the maximal order types obtained in this manner to create the set \(\mathfrak{S}_{D}(p)\) in \(O(\sqrt{|D|}\log(|D|)^{\varepsilon}\log(p)^{1+\varepsilon})\).

_Step 3._ This step simply consists in the execution of \(\mathsf{OrdersTojInvariant}()\) on the set \(\mathfrak{S}_{D}(p)\) computed in Step 2. Thus, by Theorem 1 and the estimates on \(h\), the complexity of this step is \(O(\sqrt{|D|}\log(|D|)^{\varepsilon}\log(p)^{2+\varepsilon}+p\log(p)^{1+\varepsilon})\). Alternatively, it is possible to use the \(\mathsf{OrdersTojInvariantSmall}()\) algorithm and obtain a complexity of \(O(\sqrt{|D|}\log(|D|)^{\varepsilon}\log(p)^{4+\varepsilon})\).

_The reconstruction step._ The complexity of this step is \(O(\sqrt{|D|}\log(|D|)^{2+\varepsilon}\log(p)^{1+\varepsilon})\), as was proven in [10].

_The total complexity._ Putting together all the results above, we get that, when using the \(\mathsf{OrdersTojInvariant}()\) algorithm, the complexity is \[O(\sqrt{|D|}(\log(|D|)^{2+\varepsilon}\log(p)^{1+\varepsilon}+\log(|D|)^{\varepsilon}\log(p)^{2+\varepsilon})+p\log(p)^{1+\varepsilon}).\] With \(\mathsf{OrdersTojInvariantSmall}()\), the complexity becomes \[O(\sqrt{|D|}(\log(|D|)^{\varepsilon}\log(p)^{4+\varepsilon}+\log(|D|)^{2+\varepsilon}\log(p)^{1+\varepsilon})).\] Thus, we see that the first algorithm will be better for small values of \(p\); the sketch below illustrates the reconstruction step before we quantify the cut-off.
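Concretely, the reconstruction exploits the fact that non-rational roots come in Frobenius-conjugate pairs (a point developed in Section 4.2 below): a root \(j=a+b\omega\in\mathbb{F}_{p^{2}}\) with \(\omega^{2}=d\) contributes the \(\mathbb{F}_{p}\)-factor \((X-j)(X-j^{p})=X^{2}-2aX+(a^{2}-db^{2})\). Here is a minimal Python sketch, with naive products in place of the fast arithmetic assumed by the bound above; the representation of \(\mathbb{F}_{p^{2}}\) as \(\mathbb{F}_{p}(\omega)\) is an illustrative choice.

```
def poly_mul(f, g, p):
    # schoolbook product of coefficient lists (lowest degree first) mod p
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for k, b in enumerate(g):
            h[i + k] = (h[i + k] + a * b) % p
    return h

def product_tree(factors, p):
    # balanced pairwise products keep the degrees of the operands close
    while len(factors) > 1:
        paired = [poly_mul(factors[i], factors[i + 1], p)
                  for i in range(0, len(factors) - 1, 2)]
        if len(factors) % 2:
            paired.append(factors[-1])
        factors = paired
    return factors[0]

def hilbert_poly_mod_p(roots, d, p):
    """roots: list of (a, b) with j = a + b*omega and omega^2 = d in F_{p^2};
    each Frobenius-conjugate pair must be listed only once (with b != 0),
    and rational roots have b == 0.  Returns H_D mod p, lowest degree first."""
    factors = []
    for a, b in roots:
        if b % p == 0:
            factors.append([(-a) % p, 1])                          # X - a
        else:
            factors.append([(a * a - d * b * b) % p, (-2 * a) % p, 1])
    return product_tree(factors, p)
```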
There will be a cut-off around a value of \(p\) in \(O(\sqrt{|D|}\log(|D|)^{3+\varepsilon})\). In that range of primes, our algorithm with \(\mathsf{OrdersTojInvariant}()\) has the best known generic complexity. For primes bigger than that, it is better to use the variant with \(\mathsf{OrdersTojInvariantSmall}()\), due to the quasi-linear dependency in \(p\). For a range of medium-sized primes, this algorithm will have the best known complexity. The cut-off with the CRT method (whose complexity is \(O(|D|^{1+\varepsilon})\)) will happen for \(p=O(2^{|D|^{1/8}})\).

_Space complexity._ In terms of memory requirements, our two algorithms are optimal and require \(O(h(D)\log(p))\).

### Application to the CRT method and comparison with existing methods.

In this section, we analyze the benefit of using supersingular curves in the CRT method compared to ordinary curves, as done by Sutherland in [10]. We refer the reader to the algorithm outlined in Section 1.1.

_The choice of primes._ The final crucial parameter for stating a complexity estimate for the CRT method with the algorithm of Section 4.1 is the choice of primes \(p_{1},\ldots,p_{n}\in\mathcal{P}_{D}\). In fact, for supersingular curves, this part is quite easy: it suffices to take all non-split primes coprime with the square part of \(D\). For practical efficiency, some primes might be better than others, so it might be worth considering a finer metric than size, but for a first and simple estimate, it is easier to consider that we take the \(n\) smallest primes satisfying the residuosity constraint. Under GRH, it can be shown that we have \(n=O(B_{D}/\log(B_{D}))\) and \(\max_{1\leq i\leq n}p_{i}=O(B_{D})\). With the usual \(B_{D}=O(\sqrt{|D|}\log(|D|))\) that holds under GRH, we get that we can take \(O(\sqrt{|D|})\) primes with \(\max_{1\leq i\leq n}p_{i}=O(\sqrt{|D|}\log(|D|))\).

_The final heuristic complexity estimate._ Thus, under GRH and the heuristic of Section 3, if we sum the complexity estimate given in Section 4.1 over all \(p_{i}\), the final complexity estimate of the CRT method with supersingular curves is \(O(|D|\log(|D|)^{3+\varepsilon})\). This is the same asymptotic complexity as the CRT method for ordinary curves introduced by Sutherland [12]. The dominant step is also the same: the polynomial reconstruction (Step 4 in our algorithm). However, note that in practice this part might not be the bottleneck, due to better hidden constants.

_Comparison of supersingular and ordinary cases._ Let us start with the reconstruction step. We remind the reader that the concrete complexity of this step is \(O(\sqrt{|D|}\log(|D|)^{2+\varepsilon}\log(p))\). It is pretty similar in both cases, and we argue that the practical cost should be roughly the same. This is not completely obvious, since the primes will not have the same size in the two cases and the roots are defined over \(\mathbb{F}_{p}\) for ordinary curves against \(\mathbb{F}_{p^{2}}\) for supersingular curves. First, the size of the primes does not really matter, because the product \(\prod_{i=1}^{n}p_{i}\) has roughly the same size in the two cases and the complexity of the reconstruction is linear in \(\log(p_{i})\) for all \(i\).
Second, since in the supersingular case the Galois conjugate (by the action of the Frobenius) of a root of \(H_{D}\bmod p_{i}\) is also a root, by building the remainder tree from polynomials of the form \((X-j)(X-j^{p_{i}})\in\mathbb{F}_{p_{i}}[X]\), we see that we can make the entire computation over \(\mathbb{F}_{p_{i}}\), as in the ordinary case (and thus avoid the constant overhead brought by multiplications over \(\mathbb{F}_{p_{i}^{2}}\)). We conclude from this brief reasoning that the reconstruction cost will be essentially the same in the two cases. Now, if we forget the reconstruction step, we see that using supersingular curves offers an asymptotic advantage. Indeed, in Steps 1, 2 and 3 of our algorithm, the dominant step is the execution of \(\mathsf{OrdersTojInvariant}()\) in Step 3, which has an \(O(|D|\log(|D|)^{2+\varepsilon})\) complexity (if we consider the executions over all primes \(p_{i}\in\mathcal{P}_{D}\) and we use \(\log(p_{i})=O(\log(|D|))\)). In particular, this is smaller than the \(O(|D|\log(|D|)^{5/2+\varepsilon})\) that dominates that part of the computation in Sutherland's algorithm (corresponding to the computation of one curve with the correct endomorphism ring). This is the first reason that suggests that the supersingular case might be more efficient than the ordinary one, but it is not the main one. The main reason behind the practical speed-up we hope to obtain is that we can use smaller primes. Indeed, the expected maximum of our primes is in \(O(\sqrt{|D|}\log(|D|))\) (against \(O(|D|\log(|D|)^{1+\varepsilon})\) for ordinary curves). Moreover, we can take all the small primes that satisfy the residuosity condition. In particular, we will be able to use a good portion of primes significantly smaller than \(\sqrt{|D|}\). We hope that the very small primes will give a nice improvement in practice, because for these primes some of the roots will have big multiplicities, which should help perform every step more efficiently (for example, there will be fewer than \(O(\sqrt{|D|})\) \(j\)-invariants to compute in that case). Note that there is also good potential for practical improvement by carefully selecting the primes in \(\mathcal{P}_{D}\) and choosing the degree-basis used for each of those primes, in order to minimize the degree of the extensions required to compute the isogenies in \(\mathsf{OrdersTojInvariant}()\). A selection is also performed in the algorithm of Sutherland, to help reduce the cost of finding one curve with the right cardinality, so it would be natural to do the same thing in our case. Even if \(\mathsf{OrdersTojInvariant}()\) proves to be too slow to beat the version of Sutherland by using only supersingular primes, it is clear that it is worth considering a hybrid set of primes \(\mathcal{P}_{D}\) containing a mix of supersingular and ordinary primes to obtain the best efficiency, as the computation will definitely be very fast for a lot of small non-split primes.

_Further improvement: batching class polynomial computation._ \(\mathsf{OrdersTojInvariant}()\) can easily be modified to handle several sets \(\mathfrak{S}_{1},\ldots,\mathfrak{S}_{k}\) more efficiently than \(k\) executions of \(\mathsf{OrdersTojInvariant}()\) for each \(\mathfrak{S}_{i}\). Thus, if we have several discriminants \(D_{1},\ldots,D_{k}\) and a prime \(p\) in \(\bigcap_{1\leq i\leq k}\mathcal{P}_{D_{i}}\), a good part of the computations performed to compute \(H_{D_{1}},\ldots,H_{D_{k}}\bmod p\) can be done at the same time, at a reduced cost.
Moreover, if some \(H_{D_{i}}\) have common roots, the common divisors could be constructed once and for all. Thus, our new method could be used to efficiently batch the computation of several class polynomials at the same time.

## 5 Computation of the modular polynomials

Let \(\ell\) be a prime number. As explained in the introduction, the CRT method (whether it is to compute \(\Phi_{\ell}\) in \(\mathbb{Z}\) or modulo a big prime \(P\)) consists mainly in computing \(\Phi_{\ell}\bmod p_{i}\) for a collection of small primes \(p_{i}\). In Section 5.1, we introduce an algorithm to do so for any big enough prime \(p_{i}\). This algorithm uses the algorithm \(\mathsf{OrdersTojInvariant}()\) from Section 3.2 as its main sub-routine, and it has the best known complexity when \(P\) is "small" (relative to \(\ell\)). However, if the prime is big, it will be more efficient to use \(\mathsf{OrdersTojInvariantSmall}()\), as explained in Section 3. When modified to use \(\mathsf{OrdersTojInvariantSmall}()\) instead of \(\mathsf{OrdersTojInvariant}()\), the algorithm we present in Section 5.1 can also directly compute \(\Phi_{\ell}\bmod P\) for any prime \(P\), and its complexity is in \(O(\ell^{2}\log(P)^{4})\). Depending on the respective sizes of \(P\) and \(\ell\) (in particular when \(\log(P)=o(\ell^{1/4})\)), this algorithm has a better asymptotic complexity than all known algorithms. Note that this method works for every \(p\) and \(\ell\) (as soon as \(p\) is big enough). In particular, it can be used to compute \(\Phi_{\ell}\bmod p\) in applications where we need to find ordinary curves that are \(\ell\)-isogenous. This was not the case for Hilbert polynomials, where the roots are either all ordinary or all supersingular. In Section 5.2, we compare the gain of using our algorithm with supersingular curves in the CRT method over the best known algorithm from Broker, Lauter and Sutherland [1] based on ordinary curves.

### Computing the modular polynomial modulo a prime

The idea of our algorithm for modular polynomials follows the same principle as the class polynomial algorithm. There is a slight difference, because modular polynomials are bivariate, but it does not change the generic principle of the algorithm. Indeed, the full polynomial \(\Phi_{\ell}\) is interpolated from \(\Phi_{\ell}(j,Y)\) for enough \(j\)-invariants \(j\). The univariate polynomial \(\Phi_{\ell}(j,Y)\) is reconstructed from its roots, which are the \(j\)-invariants of curves \(\ell\)-isogenous to \(j\). Once again, we start by identifying the interesting curves through the Deuring correspondence. More concretely, for a set of prescribed maximal orders, we will compute all the \(\ell\)-ideals associated to these maximal orders and compute their right orders. Then, we apply \(\mathsf{OrdersTojInvariant}()\) to find the \(j\)-invariants corresponding to the maximal orders computed during the previous steps, and finally we interpolate the modular polynomial \(\Phi_{\ell}\bmod p\). Here is how it can be done concretely:

1. Compute a set of maximal order types \(\mathcal{O}_{1,0},\mathcal{O}_{2,0},\ldots,\mathcal{O}_{m,0}\subset\mathcal{B}_{p,\infty}\) for \(m\geq\ell+2\).
2. For each \(1\leq i\leq m\), compute the types \(\mathcal{O}_{i,1},\ldots,\mathcal{O}_{i,\ell+1}\) of the maximal orders connected to \(\mathcal{O}_{i,0}\) by an ideal of norm \(\ell\).
3. Create the set \(\mathfrak{S}_{\ell}\) made of the hashed values of all types \(\mathcal{O}_{i,k}\) for \(1\leq i\leq m\) and \(0\leq k\leq\ell+1\).
4. Compute each \(j\)-invariant associated to the elements of \(\mathfrak{S}_{\ell}\) with \(\mathsf{OrdersTojInvariant}(p,\mathfrak{S}_{\ell})\).
5. For each \(1\leq i\leq m\), compute \(\Phi_{\ell}(j_{i,0},X)\) from its roots \(j_{i,k}\) for \(1\leq k\leq\ell+1\).
6. Reconstruct \(\Phi_{\ell}(X,Y)\bmod p\) from the \(\Phi_{\ell}(j_{i,0},X)\) (a sketch of Steps 5 and 6 is given at the end of this subsection).

With what we saw in Section 4, all the steps of the algorithm above are pretty straightforward. Note that we have \(m=O(\ell)\), and we assume that \(p\) is big enough so that there exist more than \(m\) maximal order types over \(\mathcal{B}_{p,\infty}\). We briefly recall the complexities of each step:

1. By Claim 1, the complexity is \(O(\ell\log(p)^{1+\varepsilon})\), by computing the \(m\) distinct types \(\mathcal{O}_{i,0}\) as right orders of \(L\)-ideals for some \(L=O(p)\) (computing right orders can be done in \(O(\log(p)^{1+\varepsilon})\) in that case).
2. \(O(\ell^{2}(\log(p)+\log(\ell))^{1+\varepsilon})\), as there are \((\ell+1)m\) ideal computations and each one takes \(O((\log(p)+\log(\ell))^{1+\varepsilon})\).
3. \(O(\min(p,\ell^{2})(\log(p)+\log(\ell))^{1+\varepsilon})\), as there are at most \(O(\min(p,\ell^{2}))\) maximal order types in \(\mathfrak{S}_{\ell}\) and it takes \(O((\log(p)+\log(\ell))^{1+\varepsilon})\) to compute their hash with the function \(H\).
4. \(O(p\log(p)^{1+\varepsilon}+\min(p,\ell^{2})\log(p)^{2+\varepsilon})\), as \(\#\mathfrak{S}_{\ell}=O(\min(p,\ell^{2}))\).
5. \(O(\ell^{2}\log(\ell)^{2+\varepsilon}\log(p)^{1+\varepsilon})\), as shown in [1].
6. \(O(\ell^{2}\log(\ell)^{2+\varepsilon}\log(p)^{1+\varepsilon})\), as shown in [1].

_Total complexity._ In conclusion, the complexity of the algorithm to compute \(\Phi_{\ell}\bmod p\) with \(\mathsf{OrdersTojInvariant}()\) is \[O(\ell^{2}(\log(\ell)^{2+\varepsilon}\log(p)^{1+\varepsilon}+\log(p)^{2+\varepsilon})+p\log(p)^{1+\varepsilon}).\] When using \(\mathsf{OrdersTojInvariantSmall}()\) instead, we obtain the following asymptotic complexity: \[O(\ell^{2}(\log(p)^{4+\varepsilon}+\log(\ell)^{2+\varepsilon}\log(p)^{1+\varepsilon})).\] Thus, we see that the first algorithm, based on \(\mathsf{OrdersTojInvariant}()\), will be better for small values of \(p\). We can estimate a cut-off for a value of \(p\) in \(O(\ell^{2+\varepsilon})\). In that range of primes, our algorithm with \(\mathsf{OrdersTojInvariant}()\) has the best known generic complexity to compute \(\Phi_{\ell}\bmod p\). For primes bigger than that, it is better to use the variant with \(\mathsf{OrdersTojInvariantSmall}()\), to avoid the quasi-linear dependency in \(p\). For a range of medium-sized primes, this algorithm will have the best known complexity. The cut-off with the CRT method (whose complexity is \(O(\ell^{3+\varepsilon})\)) will happen for \(p=O(2^{\ell^{1/4}})\).

_Space complexity._ In terms of memory requirements, our two algorithms are optimal and require \(O(\ell^{2}\log(p))\).
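To make Steps 5 and 6 concrete, here is a Python sketch that rebuilds a bivariate polynomial mod \(p\) from its specializations: each \(\Phi_{\ell}(j_{i},Y)\) is expanded from its roots, and the coefficient of each power of \(Y\) is then interpolated as a polynomial in \(X\) through the points \(j_{i}\) by Lagrange interpolation. The fast algorithms of [1] are replaced by quadratic-time ones for clarity, and the \(j_{i}\) are taken in \(\mathbb{F}_{p}\) for simplicity (the real algorithm works over \(\mathbb{F}_{p^{2}}\)).

```
def poly_from_roots(roots, p):
    f = [1]                                       # lowest degree first
    for r in roots:                               # multiply by (Y - r)
        f = [(prev - r * c) % p for c, prev in zip(f + [0], [0] + f)]
    return f

def lagrange(xs, ys, p):
    # interpolating polynomial through (xs[i], ys[i]) mod p, lowest degree first
    n, result = len(xs), [0] * len(xs)
    for i in range(n):
        num, denom = [1], 1
        for m in range(n):
            if m != i:
                num = [(prev - xs[m] * c) % p for c, prev in zip(num + [0], [0] + num)]
                denom = denom * (xs[i] - xs[m]) % p
        scale = ys[i] * pow(denom, -1, p) % p
        result = [(r + scale * c) % p for r, c in zip(result, num)]
    return result

def interpolate_phi(js, root_lists, p):
    """js: the m >= ell+2 chosen j-invariants j_i; root_lists[i]: the ell+1
    roots of Phi(j_i, Y).  Returns coeffs with coeffs[s] = the X-polynomial
    multiplying Y^s in Phi(X, Y) mod p."""
    cols = [poly_from_roots(r, p) for r in root_lists]   # Phi(j_i, Y) expanded
    return [lagrange(js, [c[s] for c in cols], p) for s in range(len(cols[0]))]
```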
### Applications to the CRT method and comparison with existing methods

In this section, we analyze the benefit of using supersingular curves in the CRT method and compare it with the algorithm described by Broker, Lauter and Sutherland in [1]. We refer the reader to the algorithm outlined in Section 1.1. Since the CRT is typically based on a lot of very small primes, we use the variant with \(\mathsf{OrdersTojInvariant}()\).

_Choice of primes._ The only constraint on our CRT primes \(p_{i}\) is their size: we need to have enough maximal order types, which means that \(p/12\) must be slightly bigger than \(\ell\). As for Hilbert polynomials, we analyze the case where we simply select the set of primes \(\mathcal{P}_{\ell}\) as made of the smallest primes satisfying the constraint and such that \(\prod_{i=1}^{n}p_{i}>2^{B_{\ell}}\), where \(B_{\ell}\) is the bound on the bit-size of the coefficients of \(\Phi_{\ell}(X,Y)\). In practice, it might be better to select primes that enable an efficient execution of \(\mathsf{OrdersTojInvariant}()\), but this is harder to analyze. Since we have \(B_{\ell}=O(\ell\log(\ell))\), we expect to have \(n=O(\ell)\) primes with \(\max_{1\leq i\leq n}p_{i}=O(\ell\log(\ell))\).

_The final heuristic complexity estimate._ With everything we said above, under GRH and the heuristic of Section 3.2, the asymptotic cost of the CRT method with our algorithm is \(O(\ell^{3}\log(\ell)^{3+\varepsilon})\). This is the same as the method based on ordinary curves. In both cases, the asymptotic bottleneck is the polynomial reconstruction step.

_Practical comparison between supersingular and ordinary cases._ Let us start with the reconstruction step. We remind the reader that the complexity of this step is \(O(\ell^{2}\log(\ell)^{2+\varepsilon}\log(p))\). As for the class polynomial case, these steps are pretty similar in the supersingular and ordinary cases, and we expect the practical cost to be the same. It is less easy to see in the modular polynomial case, but the symmetry of \(\Phi_{\ell}\) with respect to the action of the Galois group of \(\mathbb{F}_{p^{2}}\) allows us to perform almost all computations over \(\mathbb{F}_{p_{i}}\). The linear dependency in \(\log(p_{i})\) concludes the claim. For the rest, we expect our algorithm to outperform the BLS method. There are various explanations behind that claim, but all of them are basically implications of the following fact: we can consider primes \(p_{i}\in\mathcal{P}_{\ell}\) with \(p_{i}=O(\ell\log\ell)\). For each \(p_{i}\), there are \(O(p_{i})\) supersingular curves (and so \(O(p_{i})\) supersingular \(j\)-invariants over \(\mathbb{F}_{p^{2}}\)), which is enough to reconstruct the modular polynomial \(\Phi_{\ell}\), even though, intuitively, it should require \(O(\ell^{2})\) distinct points. The first implication of that fact is that we drastically reduce the asymptotic cost of the expensive elliptic curve operations in the execution of the CRT method. Indeed, with supersingular curves, this part only requires \(O(\ell^{2+\varepsilon})\) operations over the various \(\mathbb{F}_{p_{i}}\) (against \(O(\ell^{3+\varepsilon})\) in BLS). The polynomial reconstruction should also be positively impacted by the fact that the \(O(\ell)\) univariate polynomials required to interpolate \(\Phi_{\ell}\) have a lot of common roots. Moreover, if we remove the common polynomial reconstruction part, we see that the asymptotic cost is also in favour of the supersingular case, with \(O(\ell^{3}\log(\ell)^{1+\varepsilon})\) against \(O(\ell^{3}\log(\ell)^{3+\varepsilon})\) in BLS. And the hidden constants should also be in favour of the supersingular case, since the dominant step in our algorithm consists in basic operations over lattices in the quaternion algebra \(\mathcal{B}_{p,\infty}\), while these are \(\mathbb{F}_{p}\) operations in BLS. All in all, it is very likely that using our algorithm in the CRT method will bring a practical improvement over the BLS algorithm.
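For completeness, the CRT recombination that glues the \(\Phi_{\ell}\bmod p_{i}\) together is entirely standard; a minimal sketch using symmetric representatives, so that (possibly negative) integer coefficients are recovered once \(\prod_{i}p_{i}>2^{B_{\ell}}\):

```
from math import prod

def crt_symmetric(residues, moduli):
    """Representative in (-P/2, P/2] of the integer determined by the given
    residues modulo the pairwise-coprime moduli, with P = prod(moduli)."""
    P = prod(moduli)
    c = 0
    for r, m in zip(residues, moduli):
        Nm = P // m
        c = (c + r * Nm * pow(Nm, -1, m)) % P
    return c if c <= P // 2 else c - P

# e.g. a coefficient equal to -7, recovered from its residues mod 11, 13, 17:
print(crt_symmetric([(-7) % 11, (-7) % 13, (-7) % 17], [11, 13, 17]))   # -7
```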
Contrary to the class polynomial case, where we need to be more careful, we do not expect that it could be favourable to use a mix of supersingular and ordinary curves instead of supersingular curves only. However, this improvement will be less and less noticeable as the value of \(\ell\) increases, since the asymptotically dominant step is the polynomial reconstruction, which is the same in both methods. Batching the computation. Similarly to the class polynomial case, the set of small primes can be reused in the computations over various \(\ell\). Once again, the situation is even better in the modular case because, apart from size, there are no restrictions on the primes. Thus, if we want to compute \(\Phi_{\ell_{i}}\) for primes \(\ell_{1},\ell_{2},\ldots,\ell_{k}\) of the same size, we will be able to use the same exact set of primes for all the computations. Thus, only the polynomial reconstruction phase will be specific to each \(\ell_{i}\), and the rest needs to be done only once. On the computation of \(\Phi_{\ell}(E,Y)\). In an application like the SEA algorithm, computing the full \(\Phi_{\ell}\) is actually unnecessary. What we need to do is evaluate \(\Phi_{\ell}(E,Y)\) for some curve \(E\) defined over \(\mathbb{F}_{p}\). Sutherland [13] showed how to adapt the CRT method to that purpose. The complexity is essentially the same as the one to compute the full polynomial, but the memory requirement is smaller and the practical complexity is better as well. Thus, our new algorithm yields an improvement for that task as well (when \(p\) is not too big). However, for a generic \(E\) (which can be ordinary, for instance), it is not clear that we can do better than the complexity to compute the full \(\Phi_{\ell}\). We have an obvious improvement to \(O(\ell\log(p)^{4})\) when \(E\) is a supersingular curve of known endomorphism ring, but this case is not so useful for the usual applications (in that case, we can simply apply the Deuring correspondence to circumvent the need for the modular polynomial). ## 6 Conclusion We have introduced several new algorithms to compute modular polynomials of level \(\ell\) and Hilbert polynomials of discriminant \(D\) modulo a generic prime number \(P\) from supersingular curves. When used directly, we see that we obtain the first algorithms of complexity in \(\ell^{2}\) and \(\sqrt{|D|}\) for generic primes. Depending on the relative sizes of \(\ell\), \(|D|\) and \(P\), we exhibit improvements over the best known asymptotic complexities for a significant range of primes. Moreover, when applied to the CRT method, we obtain an algorithm whose complexity is the same as the version with ordinary curves, but with the potential to give a practical improvement (in particular in the case of modular polynomials). It remains to be seen how efficient our new algorithms are in practice. There are several practical challenges to overcome before providing an implementation of the proposed algorithms (in particular related to the field extensions involved in the computations of some isogenies), and this is why we leave the concrete implementation to future work.
2308.04600
Model of models -- Part 1
This paper proposes a new cognitive model, acting as the main component of an AGI agent. The model is introduced in its mature intelligence state, and as an extension of previous models, DENN, and especially AKREM, by including operational models (frames/classes) and will. This model's core assumption is that cognition is about operating on accumulated knowledge, with the guidance of an appropriate will. Also, we assume that the actions, part of knowledge, are learned to be aligned with will, during the evolution phase that precedes the mature intelligence state. In addition, this model is mainly based on the duality principle in every known intelligent aspect, such as exhibiting both top-down and bottom-up model learning, generalization versus specialization, and more. Furthermore, a holistic approach is advocated for AGI design, and cognition under constraints or efficiency is proposed, in the form of reusability and simplicity. Finally, reaching this mature state is described via a cognitive evolution from infancy to adulthood, utilizing a consolidation principle. The final product of this cognitive model is a dynamic operational memory of models and instances. Lastly, some examples and preliminary ideas for the evolution phase to reach the mature state are presented.
Shimon Komarovsky
2023-08-08T21:56:52Z
http://arxiv.org/abs/2308.04600v2
# Model of models - Part 1 ###### Abstract This paper proposes a new cognitive model, acting as the main component of an AGI agent. The model is introduced in its mature intelligence state, and as an extension of previous models, DENN, and especially AKREM, by including operational models (frames/classes) and will. This model's core assumption is that cognition is about operating on accumulated knowledge, with the guidance of an appropriate will. Also, we assume that the actions, part of knowledge, are learned to be aligned with will, during the evolution phase that precedes the mature intelligence state. In addition, this model is mainly based on the duality principle in every known intelligent aspect, such as exhibiting both top-down and bottom-up model learning, generalization versus specialization, and more. Furthermore, a holistic approach is advocated for AGI design, and cognition under constraints or efficiency is proposed, in the form of reusability and simplicity. Finally, reaching this mature state is described via a cognitive evolution from infancy to adulthood, utilizing a consolidation principle. The final product of this cognitive model is a dynamic operational memory of models and instances. Lastly, some examples are presented, along with some preliminary ideas for the evolution phase to reach the mature state. ## 1 Introduction Our consistent goal is to construct a basic realistic model for _AGI_ (Artificial General Intelligence). First, we illustrate how AGI is perceived by us: it is a data-processing tool, without any awareness or consciousness. In addition, it thinks, perceives, and acts like a human. It also has different channels for input and output, again like a human. All this is for the purpose of making an agent that understands humans and that humans understand. To accomplish that, there exist many ways to approach the _AGI_ problem, e.g. via emulation. Although a human brain emulation (copying the brain into digital form) is a good idea, there are some deficiencies to this approach: 1. the brain is a very complex organ, especially when it is accessed in its mature state. 2. there is no direct access to the human experience (consciousness) in correlation with brain activity (or neural processing). It is mostly performed via a very indirect process, like questioning a human participant. Also, it is done at a very coarse resolution of the brain (large clusters of neurons). 3. it is difficult to model the brain, since it works throughout the whole network. Hence, simple models cannot be extracted out of it by local isolation. See more in https://medium.com/nontrivial/neuro-nonsense-2acc209d42c3 Hence, here the _AGI_ problem is tackled from a human design direction instead, though neuroscience can act as a source of ideas and inspiration [Zhao et al., 2023]. This design is a gradual process with many versions along the way. Therefore, this paper presents _MOM_ (Model Of Models), the next version of _AKREM_ (Associative Knowledge Representation). See its short version [Komarovsky, 2022b] or its full version [Komarovsky, 2022c]. This paper is actually the full version of a preliminary short paper [Komarovsky, 2023]. _AKREM_ is a mature-state knowledge representation model, based mainly on the assumption that communication is about encoding the sender's will into a sequence of words (a message), and then decoding it by the recipient.
The model proposes a representation of any message in a hierarchical form based on grouping, by generating some essence at a given level from details at the lower level. The lowest-level details are founded upon some DNN (Deep Neural Network), generating the basic concepts and actions (from which the details are made) from unstructured input. _AKREM_ is also backed by neuroscience, for example by memorizing techniques such as the Memory Palace, which turns random objects into memorable ones by converting them into some consistent sequence or tale. Also, [Hahamy et al., 2023] supports the idea that a story, or any message, is partitioned into micro-events, which are also inter-connected to allow comprehending the current part of the story based on previous parts of it (a phenomenon called "replay"). Finally, while the will concept exists in _AKREM_, it will be expanded upon in this paper. Will is very fundamental to our intelligence. It is the source of everything. It dictates our actions and our interpretation of reality, thus affecting the meaning of things to us. Nevertheless, it is almost never mentioned in the AI literature. Our main cognitive will is to comprehend reality, in order to manifest other types of will in it. More generally, it is expressed via a problem-solving state of mind, in which we react in reality. Nevertheless, will can also be affected by reality. Furthermore, we assume that will is a field, similar to the "will force" phrase, i.e. it is represented as a vector in some state space. Then, in order to affect reality, this will has to create some connection to knowledge. We assume that this connection is learned through experience, and it is manifested by alignment with the actions operating on the objects that comprise the knowledge. This way, whenever we have a will, the appropriate actions are triggered, similarly to when we want to raise our arm: the appropriate physical actions operate. However, will is an intriguing phenomenon. It can be realized via the laws of physics, or derived from emotions, or even derived deeply from high (moral) values, such as justice. This makes the full knowledge model of anything, as observed from outside, very stochastic, due to these hidden factors. Following this, new additions to the presented model, including new associations, operationability, modeling, consolidation, and reusability, are introduced. First, while _AKREM_ assumes that the learned _elements_ are either objects or static actions (verbs), new associations are introduced: an object's attributes and relations. Next, these connections are all static representations of knowledge, i.e., the hierarchies cannot be changed. Therefore, operationability introduces a new type of association to objects: actions that act upon them, thus producing new knowledge _elements_. This makes the connections in _AKREM_'s hierarchies dynamic; hence it allows the freedom to update and create new hierarchies. Next, modeling introduces some basic cognitive operations, e.g. abstraction and grouping1. Both gather many details into fewer. Grouping specifically is about connecting _elements_ via some common property. It could be, for example, a chronology in a plot, or other common properties/actions grouped into classes. Finally, consolidation is a process in time that collapses a huge number of possibilities into a small set of patterns, of any kind. Footnote 1: Philosophically, these operations drive us to a theory of everything, and religiously to the notion of God.
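As a toy illustration of these additions, the following minimal Python sketch (our own illustrative assumption, not part of the paper) models an operational knowledge element: an object carrying attributes and relations, plus actions that produce new elements, together with a simple grouping operation.

```python
# Hypothetical sketch of an operational knowledge element: an object holding
# attributes and relations, plus actions that, when applied, produce new
# elements -- making the hierarchy dynamic rather than static.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Element:
    name: str
    attributes: dict[str, object] = field(default_factory=dict)
    relations: dict[str, "Element"] = field(default_factory=dict)
    actions: dict[str, Callable[["Element"], "Element"]] = field(default_factory=dict)

    def apply(self, action_name: str) -> "Element":
        """Operationability: acting on an element yields a new element."""
        return self.actions[action_name](self)

def group(elements: list[Element], essence_name: str) -> Element:
    """Grouping: collapse several detail elements into a higher-level essence."""
    parent = Element(essence_name)
    for e in elements:
        parent.relations[f"part:{e.name}"] = e
    return parent
```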
All the operations above are considered to be bidirectional, i.e. everything lies in some range between extremes, i.e. everything has its inverse (dualism)2. In grouping, it is from the whole to its parts and vice versa, and in abstraction, it is from instances to classes and vice versa. Consolidation is an operation in time, creating models and memory, while its inverse is forgetting, which is also an operation in time. Will also lies in the dichotomy of determinism and randomness. Footnote 2: Can also be regarded as symmetries or invariants Moreover, if in the early epoch of AI symbolic reasoning was dominant, and nowadays connectionism dominates, then we come to a new era (Sheth and Thirunarayan, 2021), where we should combine and include many conflicting perspectives, in cooperation and competition. Our holistic perspective embraces this duality and other dualities as well (Reggia, 2013). An extensive but not complete list is presented: * top-down and bottom-up model learning, * consolidation versus forgetfulness, * generalization versus specialization, * grouping and dismantling, * determinism and randomness, * problem-solving versus designing perspectives, * connectionism versus symbolism, * soul and brain (idealism), * convergent versus divergent thinking, * object-oriented versus functional programming paradigms, * past versus present perceptions, * induction versus deduction, * recognition versus creativity, * serial thinking versus parallel thinking (as described by de Bono, e.g. in [De Bono, 2016]), * left-brain versus right-brain theories, * neuron excitation versus inhibition, * exploration versus exploitation, and more. The product of these cognitive operations and dualities is a dynamic memory of models, formed as a semantic network of _elements_. It is dynamic by evolving through time, by including spatio-temporal modeling, and by being operational, i.e. by possessing the possibility to update the models. Additionally, it encourages a holistic approach to _AGI_ design: one simple system for multiple functions, such as short-term memory (STM) and long-term memory (LTM), problem-solving, communication, learning, and any cognitive function. This approach is also encouraged in neuroscience, i.e., though the brain seems complex, its basic mechanism is simple [Mountcastle, 1997]. This approach is especially needed when resources are limited. By using this approach, our _MOM_ puts everything in one place, instead of the common cognitive architectures (CAs) approach of separating into modules. Here there is a dynamic memory, which is also used for thinking, planning, imagining, and more, while the Working Memory (WM) simply loads old memories of the LTM "onto the surface". Following this dynamic representation of knowledge, its evolution is presented. That is, operational modeling presents the _AGI_ agent's intelligence in its mature state, which is the state of how its knowledge should be represented. However, to accomplish this state, a cognitive evolution over time is required, which utilizes the consolidation principle. Along with this primary consolidation process, which acts on many different things, there are also some fundamental learning approaches. These approaches, learning from examples and learning from logic, use consolidation as a tool to reach the desired mature state we present in this paper. The two parts of the _MOM_ cognitive model are illustrated in Fig. 1. They represent the duality above, especially our Neuro-Symbolic approach.
Figure 1: _MOM_ two parts.
_MOM_ in theory includes objects, relations, actions, abstractions, etc., mostly logic and classic AI. On the other hand, _MOM_ in practice is based mostly on _DL_, and includes concepts/principles like consolidation, attention, splitting & merging as in _DENN_, and more. Eventually, this framework has a different approach to learning, compared to the usual _DL_ such as Large Language Models (LLMs), on the journey to reach _AGI_. _DL_ is based on a learn-everything-at-once kind of approach, or batch learning. In this approach, the expert AI agent is supposed to learn everything mixed up, with different topics and complexities. Hence, the way to reach _AGI_ is by larger models and data. Conversely, we support an incremental, curriculum type of learning, as in humans. The learning is built in layers or stages. When the current layer is established, then and only then can we proceed to a new layer on top of the previous layers. Usually it is done by extending analogies (distant connections), which can be regarded as breadth extension, and abstraction, which can be regarded as extension in height. Finally, after establishing a firm theoretical framework of _MOM_, a proposed implementation is presented, that is, the implementation of the evolution phase that strives to reach the mature intelligence state. It is based on the principle of model separation/isolation, then the use of the attention splittability feature to realize this, then the use of multi-versioning to allow flexibility in learning, finishing with program search to model the methods of consolidated classes/objects. ### Motivation and Problem definition #### 1.1.1 General motivation The general motivation is to reach _AGI_. We consider LLMs as the first step towards this goal. However, the deficiencies of these LLMs are what inspire us to succeed where they fail. For example, ChatGPT performs quite well in many tasks; hence GPT can be regarded as a general-purpose AI model, and it can generalize to new tasks. However, it hallucinates with confidence, and does not perform well in reasoning tasks that require logic and step-by-step thinking. One of the reasons for this is that masking or next-word prediction is not enough as the prior knowledge induced in the neural network architectures to align with humans, even though some studies in neuroscience showed correlation between brain activity and LLMs. Also, LLMs do not have a world model (unlike JEPA [Dawid and LeCun, 2023]). They simply mimic text or images (like Midjourney), see https://medium.com/@ignacio.de.gregorio.noblejas/world-models-the-next-frontier-in-our-path-to-agile- although some papers use probing of the LLM to prove that some sort of model/representation is learned in the LLM. Many papers have tried to improve reasoning. They used many methods, e.g. Chain-of-Thought (CoT) prompting, self-consistency, role prompting, voting by agreement, Tree-of-Thoughts (ToT), Graph-of-Thoughts (GoT) [Besta et al., 2023], and other prompting techniques. However, these do not show an ultimate solution. Another example: in [Ling et al., 2023], they perform CoT but accompany each step with an evaluation. Specifically, they perform deductive reasoning by evaluating each step. In deductive reasoning, a (premise, conclusion) pair is given, and the goal is to determine whether the conclusion follows from the premises.
This method also demonstrates the need for a much stronger prior, since the proposed method searches for "fixes" to the CoT method, though the CoT method in LLMs is a fundamentally wrong approach to modeling reasoning. Additionally, there has been an effort towards alignment with human intentions, e.g. via Reinforcement Learning from Human Feedback (RLHF) in LLMs such as GPT. However, it is mostly used for constraining ChatGPT's replies, e.g. being polite, filtering out harmful requests, being unbiased, avoiding hateful, violent, or rude speech, and more. A recent paper by Google Brain [Schuurmans, 2023] proves how adding a memory to LLMs can produce a Turing-complete machine, i.e. simulate any algorithm. Similar is the inclusion of vector databases with LLMs. This demonstrates that memory is definitely part of the needed prior, but it is not all of it. In our opinion, this prior should integrate the brain's different cognitive capabilities/functions. Such a prior would allow the agent to learn and act more rationally, thus producing the most valuable trait that is missing in today's (mostly black-box) AI: explainability (through external communication with it, not by analyzing its internal model). Hence, a more promising approach, which we advocate for and try to apply, is the hybrid approach of Neuro-Symbolic AI. It is a sub-field of AI where the strengths of the two disciplines are combined to overcome their weaknesses. Finally, the lack of understanding also affects vision comprehension, which sometimes results in a workaround like LIDAR sensors in autonomous driving vehicles; see https://futurism.com/the-byte/elon-musk-furious-autopilot-tried-kill-him. #### 1.1.2 Specific motivation This paper is the continuation of the previous paper's model, _AKREM_. The context of the previous and the current paper is the same: assuming some module that converts sub-symbolic data to symbolic data and vice versa. In _AKREM_, it is a temporal hierarchical DNN that processes the incoming data from the senses via different temporal rates, where the slowest processing layers are supposed to deal with thinking, and which finally produces the appropriate output. In this paper, however, this module is more vague. On the one hand, it can be part of the whole system proposed in the paper: sub-symbolic-to-symbolic conversion and vice versa can use the same mechanism of learning as described in Section 7.3. On the other hand, we can assume an external module similar to the _AKREM_ setting, where some DNN, either flat or temporally hierarchical, is responsible for some fundamental processing of incoming and outgoing raw data. During this processing, neurons are triggered in the DNN. These neurons stimulate classes of objects and actions, representing the agent's knowledge map. This is where a transition from connectionist AI to classic/logic AI occurs. Our main goal in this paper is to concentrate on the logical part, or the knowledge map, to model the thinking/cognitive processes, as it is, in our opinion, the bottleneck for _AGI_, or the most urgent issue in AI nowadays, as it appears in the problems revealed in LLMs. This modeling, _MOM_, presented as the mature state of an intelligent agent, is described in detail in Sections 4, 5, 6. Of course, this model is only the ideal intelligence state; hence how to reach this state is discussed in Section 7.1.
Additionally, we observe that thinking is very dependent upon will, either the will encoded in the perceived message or the agent's own will. This topic is elaborated in Section 3. Finally, we define _thought_ as operating on the knowledge map, e.g. by inferring class features/attributes, by performing actions on objects, or by overall supervising the thought process. ### Contribution The contribution of this paper is expressed within two comparisons: the comparison of the proposed model to its previous versions, see Section 9.3, and the comparison to other CAs, see Section 9.1. _What we propose:_ In essence, what is presented in this paper is a cognitive model, or a model describing how thinking works. In this model we describe how different thinking processes occur, including learning and knowledge representation. Even if it is not how our brain works, it can still be utilized to build an AI agent, like other virtual assistants: ChatGPT, MIT's ELIZA, Amazon's Alexa, or Apple's Siri. These assistants communicate with people and perform different tasks. _What exists:_ From the early days of AI, there are expert systems, which are based on classic AI, that is, mainly on logic. Nowadays, there are LLMs, which are based on Machine Learning (ML), and more specifically on Deep Learning (DL). Each of them has advantages and disadvantages. In one aspect relevant to us they differ: the perception of will and intention throughout communication. The logic-based systems are based almost solely on cold and precise logic. This is a mechanical and technical perspective on intelligence. The deep-learning models usually do learn will and intention from data. However, it is done very implicitly3, and they especially lack the logical structure prior; hence it cannot be generalized to any random and flexible conversation. This makes both of these approaches lack common sense, and especially full understanding and comprehension of the situation and of the requests from the human user, in both directions: both in understanding the human's intent and in replying in a human-like, logical way. Footnote 3: It can be learned when will is expressed in the data, e.g. for data that includes will in its input while the output is the appropriate response. Therefore, LLMs, for example, are good at task generalization, while image label/caption classifiers are not. See a good discussion about it in https://medium.com/swlh/what-makes-neural-networks-fragile-676fe7cf230a. _What is the novelty here:_ The new thing, compared to existing approaches in AI, is that we explicitly learn will in the models: the models that we construct in our mind while interacting throughout our lifetime, from inanimate objects like a basket or a table, through animals and humans, up to abstract theories. This will becomes an essential part of any model, and in order for it to be part of our physical reality and affect things, it is aligned with our knowledge, be it in a logic-based form or embedded in neural networks. This alignment is useful, since it creates intuition between problems and admissible actions. It provides a kind of heuristic for solving new problems. _MOM's assumptions:_ There are many assumptions or fundamental axioms in _MOM_, as in any model. Some of these are embedded and still hidden, while others are more explicit and scattered throughout this article. Some of them are also expressed within the different definitions.
Here are some assumptions: * Causality and action affecting the environment: the basic assumption here is that actions applied by any object or living thing affect things in the environment. Thus causality is derived. Moreover, due to this assumption, we do not necessarily need an explicit RL mechanism for learning. * Will and knowledge: the assumption here is that will and knowledge are two separate entities. However, they can be aligned for effective operation of the agent, to accomplish its goals. * Multiple modalities: the original vision and audio channels are assumed, but eventually this can be extended to any other modalities, such as the digital domain (sending/receiving data files), the action domain (what is/was done), and more. The action domain means that we can also give the AGI agent full experiences and skills as input, without it actually going through all the steps and all the training this involves. Similarly, instead of sketching on a piece of paper, hand-waving, and long explanations, the digital domain can transfer whole books, presentations, images, and any digital file. Perhaps, since this domain is so general, it can also contain the action domain within it. ## 2 Timeline The following sections are ordered as follows: * Section 3 starts with some terminology, then discusses will as it was presented in the previous model, _AKREM_, then continues with will expression in a constrained environment, especially in problem-solving and design, then presents will as a field and its consequences/implications, and finishes with different levels or types of will. * Section 4 briefly describes the new associations added to our current model. * Section 5 is about operationability and modeling, two extensions of the previous _AKREM_ model. First it describes them both as simple classes/frames, then it details each of them in our framework. Finally, a few definitions are introduced and then will derivation is discussed. * Section 6 engages in further details about _MOM_'s knowledge structure, and then displays several examples of the _MOM_ model. * Section 7 discusses the temporal processes of consolidation in its multi-faceted forms, the reusability principle as an additional form of consolidation, and finishes with learning approaches. * Section 8 proposes one possible implementation of the ideas presented in the previous chapters. It is founded upon the model separation capability, then the utilization of the attention mechanism to realize this capability. Then multi-versioning for flexibility is used, and finally program search for the actions is discussed. * Section 9 reviews relevant literature on different topics, e.g. Neuro-Symbolic AI, CAs, and hierarchical planning and AI. Finally, it compares previous models with the currently proposed _MOM_ model. * Section 10 is a summary of the paper, including some key takeaways, and later discusses future work. * Finally, Appendix A includes: a discussion of the advantages of adopting a hybrid approach for AGI; how will can be learned; existing knowledge representations compared to _MOM_; the issue of bias in any intelligence; the role of language in the AGI background; DNNs' equivalence to programming languages; general evolution in _AKREM_; the aspects of logic and creativity in _MOM_; and additional AGI characteristics. Note that the paper is not as rigorous as a typical academic paper. It is mostly philosophical, since this theory is foremost presented for the sake of acting as a starting point for discussion and adaptations.
It is not that important to define every detail of the theory just yet. Nevertheless, a few definitions are spread throughout the paper to ease the reading, i.e. to make things clearer and less ambiguous. Either way, the reader should come open-minded, since this is not a technical paper. We invite and welcome notes, suggestions, reviews, corrections, and any type of comments and discussion. ## 3 Will In this section, the importance of will as an essential element in human intelligence4 is elaborated upon, starting from some general motivation, then continuing from the previously presented model, _AKREM_. A detailed discussion of how will is expressed in a constrained environment is performed, via a problem-solving setting. Finally, some basic types of will are presented. Footnote 4: One clue for this is found in https://theconversation.com/great-mysteries-of-physics-5-will-we-ever-have-a-fundamental-theory-of-life-and-consciousness-203127, where they try to characterize how living structures differ from non-living ones. They show that living structures, for some reason, can defy entropy and can produce even more complex structures through evolution, while non-living ones cannot. This may indicate that the external factor of will is what causes this difference. We first start with some terminology, so that the reader has a clear context for the rest of this chapter and the following chapters. ### Terminology First we collect the relevant interpretations from the dictionary: Will = the power of the mind in choosing/controlling one's own actions/emotions. Goal = the result or achievement toward which effort is directed. Purpose = the reason for which something is done, exists, is created, made, used, etc; something set up as an object or end to be attained; a subject under discussion or an action in course of execution; an intended or desired result; end; aim; goal. Objective = something that one's efforts or actions are intended to attain or accomplish. Intention = an act or instance of determining mentally upon some action or result; a determination to act in a certain way; the end or object intended; meaning or significance; purpose. Similarly, our definition of will is: the source of actions. We refer to purpose as a goal. However, its other interpretations, such as the reason or meaning of something, or an action in course of execution, are related, since they describe how something came to be, the will that derives it, or the will in the process. And taken from social studies [10, 11, 12]: Will = refers to an individual's mental faculty or capacity to make choices and decisions. It involves the conscious and deliberate exercise of one's volition or intention in order to initiate or refrain from certain actions. Will is often associated with personal agency and the ability to exert control over one's behavior, thoughts, and emotions. Goal = a desired outcome or result that an individual or a group of individuals strive to achieve. It represents a specific target or destination towards which efforts are directed. Goals can be short-term or long-term and can be related to various aspects of life, such as personal development, career aspirations, relationships, health, or academic achievements. They provide a sense of direction and purpose, guiding individuals' actions and decisions. Purpose = a sense of meaning or significance in one's life. It involves having a clear understanding of one's values, passions, and overarching aims.
Purpose goes beyond specific goals or achievements and often involves a deeper reflection on one's identity, values, and contribution to the world. It provides a sense of fulfillment and a motivation to engage in activities that align with one's purpose. Objective = a specific, measurable, and time-bound step or milestone that is designed to contribute to the attainment of a larger goal. Objectives are concrete and quantifiable, allowing individuals or organizations to assess progress and success. They serve as the building blocks that help individuals or groups move closer to their overall goals. Intention = a conscious and deliberate mental state characterized by a planned or anticipated course of action. It involves the formation of a specific purpose or objective in one's mind and a commitment to act in accordance with that purpose. Intentions can range from simple everyday actions to more complex and long-term endeavors. They play a crucial role in guiding behavior and directing efforts towards the achievement of desired outcomes. Interestingly, these definitions are quite different from the dictionary's: here they are more human-inclined or social, while the dictionary's are more general. Finally, note that we use _logic_ very often throughout this paper. However, we mean the most basic type of logic, which is involved in all cognitive operations. This does not include the highly abstract, rigorous mathematical/scientific logic, which involves proofs, exact definitions, and more. This is because we assume that this type of logic is acquired or learned, i.e., not embedded as a necessary element in our cognition throughout all its different forms. ### Motivation One aspect of will is that it determines our behavior, or more precisely, our actions. Will somehow derives or creates a set of actions, i.e. something like a plan; for example, a will causes the construction of an algorithm such as a search in a knowledge base or in an _AGI_/brain network. Inversely, will can be inferred from an action or from a sequence of actions. This process of producing actions and retrieving their original will is expressed in the sequence encoding and decoding in _AKREM_, respectively. As seen in Fig. 2, will or purpose is expressed both in a single action (individuality) and in a sequence of actions or a trajectory (grouping). More generally, it is expressed in the different levels of _AKREM_. Another aspect of will is its effect on our knowledge. Philosophically, will + objective data produce meaning and bias (Nietzsche, 1974), or more generally perception and interpretation. Or: \[\text{will + objective data = subjective data.} \tag{1}\] In other words, meaning is not an objective concept in non-human reality; rather, it is "tailored" specifically to humans, or more accurately, it is a function of will, or derived from it. I.e., it is related to the function that the concept has for the human. For example, a "cat" is perceived as a pet, while a wild cat is excluded from this concept, since it is not under human control. Similarly, any tool, like a hammer or a screwdriver, has a meaning as a function for fulfilling some will, and not just as a particular shape with material properties. Along with this, it is natural that meaning would have a bias. Hence, the bias and fairness that researchers battle with are impossible to overcome, since there will always be some bias, i.e. some opinions preferred over others, again as a function of the will. The best thing we can do is externally change the will, which will yield different outcomes.
Moreover, meaning is the will's manifestation, or its destination. It is the end-product of will, like the role in the RRG model (Van Valin, 1995), or the target/objective in optimization (the destination of some process) (Lewis et al., 2012). Hence, meaning implies some designer's (accomplished) will; e.g. a chair is designed for sitting (and this is its meaning). See meaning as the end-product of a message in a conversation in Fig 3(b). Additionally, will is an important element in human intelligence that is almost never mentioned in the modern AI literature. It is expressed in today's machines via the requirement that the designer express his will in a very limited way. For example, in control systems or in software, everything must be designed in a very clear and accurate manner. It is expressed in the form of an objective function in control systems or in AI. In Natural Language Processing (NLP), will exists at the highest hierarchy, in speech acts, above descriptive-only logical forms. Speech acts contain the will in the message and not only its informative part, which forces the receiver to react accordingly (to a request/question/claim/promise, etc).
Figure 2: Actions representing a meaning/purpose
Deep Learning (_DL_) tries to override the inclusion of explicit will by supplying examples to express will. It is hidden in the need for big data and attention in _DL_ models, disguised as an implicit will that determines what the correct output should be. It is also hidden in curiosity-driven Reinforcement Learning (RL) models, where curiosity is only one specific type of will. Instead, will should be addressed directly and explicitly in AI. One intuitive way to insert will is externally, through a specific input channel designed only for that. This is the explicit way. Complementarily, or as an alternative, the communication channel can be used, together with its recent context, to infer the will from the message that an _AGI_ agent receives. This is the implicit way. It is practiced via prompting in language models (LMs), and it is supported in _AKREM_ and in _MOM_. ### Extending will beyond _AKREM_ Will in _AKREM_ is represented in the levels of any specific hierarchy, starting from the most detailed aspects of will at the lowest level and finishing at the most abstract will, or its essence, at the top. The top level represents some kind of experience uniqueness, to differentiate it from other memories that use the same low-level structures. This hierarchical will is especially demonstrated in a constrained environment, such as our reality, in topics like problem-solving and communication. In problem-solving, the main will produces sub-wills at lower levels, till it reaches the final solution at the bottom level (Fig 3(a)). The final result is a plan or a sequence of actions. See more in Section 3.4. Similarly, in communication, the sender encodes/converts his will into a sequence of actions (in language form), while the recipient on the other side decodes the intention/will from this sequence (Fig 3(b)). In both cases, evaluation is necessary; hence this top-down process is cyclic and non-linear. All the above are specific cases of will, but there is also a more general will. As recipients of reality, our main cognitive will is to find the most appropriate/simple model that fits all the pieces/details in the right place, or makes the most sense of them5.
This is similar to decoding a will from a message/mystery/riddle, and it can be rephrased as a general problem-solving task to comprehend reality (Black and Bower, 1980).
Figure 3: Two cases of will in a constrained environment
First, it is done internally, by reorganizing our models (mostly during a sleep phase), and later it is done externally, in any kind of problem-solving, or in understanding a message/story/riddle/situation/phenomenon. Moreover, it is claimed in psychology (https://www.psychologiststudy.com/um/hide/the-revisistent-brain/202302/th-samp-mental-benefits-of-dcluttering): _"Not only is it hard to physically function in a cluttered space, but clutter bombards the mind with excessive stimuli. Addressing the heaps of paperwork, laundry, and thoughtfully organizing helps to calm the mind."_ Meaning that physical clutter generates our discomfort, since our inner cognitive objective is to reduce entropy, or organize everything in our mind. The best model will allow us to move from place to place in it easily, perform new actions, and produce conclusions/solutions with less effort. In summary, everything can be considered a problem-solving task. This is also known as the problem-solving hypothesis, originating in the General Problem Solver (GPS). For example, in communication: in converting will into action and vice versa, in answering a question or in explaining, and in organizing knowledge to make sense of it. More about this in Section 3.4. This main cognitive will results in an understanding, or the ability to control any aspect of the complex model. So in a sense, we have two wills governing our cognition: controlling, and subsequently making sense. In other words, to accomplish what you want, you need to learn how to manipulate your environment. Obviously, these wills reinforce each other. This also relates to the learning process: either it is performed by learning some model first (e.g. by observation) and then learning its control model (by interaction/intervention), or they can be learned simultaneously, i.e. learning a model efficiently so that it can be utilized in the future. ### Problem-solving and Designing #### 3.4.1 Problem-solving **Problem-solving via a System 1&2 perspective** A particular topic that involves will is **problem-solving** (Blocks, 2010). It is the expression of human will in a constrained reality (otherwise any will could be realized immediately). It is a broad topic, which is about handling any given situation, and not only solving puzzles/mysteries/science. Unlike classical AI methods of solving problems, e.g. by searching for similar ones in memory (recorded cases), or by finding similar ones and adapting to them in various ways (case-based reasoning), here there is much more flexibility. Given a problem, a response could be either to recognize a previous similar pattern (System 16) and apply an automatic reaction, i.e. immediate resolution, or to try to generate a new solution (System 2). Footnote 6: Based on (Daniel, 2017) System 2, in our opinion, represents the cases when a wrong top-down prediction requires higher-level adjustments, see (Hawkins and Blakeslee, 2007); i.e., System 2 is the learning mode, tackling new or complex situations that require special attention. After enough training and repetition of the same problem in System 2, it becomes automatic and declines down to System 1 for a fast response.
Moreover, System 2 is generative. This means that, more generally, problem-solving is generative, while only when possible is it discriminative (in System 1). This generation can be expressed, for example, in explaining. One piece of evidence for the generative nature can be seen in _DL_, where explainability is sometimes expressed via a special DNN, e.g. in image captioning (Vinyals et al., 2015), where the caption is the explanation of the image. However, explanation may also include summarization, or finding the essence of a given group of elements. Hence, another piece of evidence for the generative nature is in dimensionality reduction techniques, such as Principal Component Analysis (PCA) (Abdi and Williams, 2010), where a high-dimensional input is mapped into a lower one, which in turn predicts the correct class; thus the low-dimensional space acts as the explanation space. Similarly, sparse or denoising auto-encoders (Bank et al., 2020) reproduce the input after corrupting and compressing it. These two examples from _DL_ strengthen our _AKREM_ (or _MOM_) hypothesis about grouping, since we regard the essence of a group as a lower dimension of a higher one that includes all the details in the group. Note that explanation or summarization in such a group hierarchy can be done at any level of the hierarchy: the most concise is at the highest level, while the most detailed is at its lowest level. System 1 can be viewed as an effortless bottom-up triggering system, coming externally to us, while System 2 requires our effort, i.e. it involves our own will coming from above, so it is a top-down system in this sense. Note that System 1 is not stimulus\(\rightarrow\)action, or simple model recognition and a response; rather, there is always a will present and a hierarchy representing the context of the situation, and we only decide whether to take the fast response (System 1) or the slow one (System 2). In summary, as illustrated in Fig. 4, you can either decide upon a set of actions without any cognitive processing, a kind of straight-away response (System 1), or you can apply possible cognitive operations before deciding upon the most proper response (System 2). System 2 can be regarded as problem-solving, i.e. first via divergent thinking, finding possible solutions/approaches, then applying convergent thinking, to decide upon the most appropriate solution.7 Footnote 7: Similarly to competing _AGI_ modules over WM attention in Global Workspace Theory (GWT). A side note is that we can view System 1 as an automatic first response that always occurs, while System 2 can be activated afterwards if necessary. Methods like the Relaxed Planning Graph (RPG) from classical AI can be used to find an unconstrained fast solution of the shortest path between the current and goal states, to illuminate the different paths from the initial state to a goal one. Divergent thinking could be counter-factual, i.e. "what if" planning, as in chess, where you contemplate what will happen if you decide to do some action.8 This is why in our modeling the knowledge is dynamic, and can always produce new outcomes via the set of admissible actions each model has. This thinking could obviously form tree-like branching, since we could perform a few steps of "what if" in series. It can also be hierarchical, as any memory, evolving not only in breadth (tree-like) but also in height. Footnote 8: In this case we are aware of it. In other cases it could be sub-conscious. As can be seen in Fig. 4, intuition is the shortest path for response.
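To make the System 1/System 2 dispatch above concrete, here is a minimal toy sketch (our own illustrative assumption, not the paper's implementation): a fast lookup of consolidated responses, with a slow generative fallback that proposes candidates (divergent thinking), picks the best (convergent thinking), and then consolidates the result down to System 1.

```python
# Hypothetical toy sketch of the System 1 / System 2 dispatch described above.
from typing import Callable, Hashable

def respond(problem: Hashable,
            system1: dict[Hashable, str],
            propose: Callable[[Hashable], list[str]],
            evaluate: Callable[[str], float]) -> str:
    if problem in system1:                 # System 1: pattern match, fast path
        return system1[problem]
    candidates = propose(problem)          # System 2: divergent thinking
    best = max(candidates, key=evaluate)   # System 2: convergent thinking
    system1[problem] = best                # consolidation: declines to System 1
    return best
```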
**Problem-solving in _MOM_** In this section, the System 2 response is discussed, since the System 1 response is simple pattern-matching. This process is modeled via a 2-phase process in state space: * Refinement: starting from a general will to get out of a problematic state, then deciding upon some goal states to be reached. This makes the will more definite (directional/purposeful), see Fig 5. * Realization: searching for a solution via a coarse-to-fine hierarchy of models (Fig 3(a)). In refinement, a vague will becomes more precise and directional. Since will derives action(s), it is represented similarly to an action in the state space: as a vector, transforming one situation into another. Meaning, will defines the direction in which the agent wants to move, before it has found the admissible/legitimate/allowable way to realize it in the given environment.
Figure 4: Short to long paths between inputs and action
Finally, the agent starts to plan how to solve the given problem, under the given constraints, i.e. where one cannot fulfill one's will directly, but instead looks for some legitimate way to accomplish it in the given circumstances. In summary, Fig 5 describes the gradual change, where at first the purpose is vague, hence a large space can represent the goal state. Then, as the will is refined, more constraints are added, or more descriptions of the goal. In other words, the goal shrinks from something general and vague to something more specific. In perception it is the opposite: we perform reverse engineering. The basic understanding is knowing what happened/was done. A better understanding is knowing the purpose (behind the actions). But the full understanding also includes the problem from which this purpose was yielded/created. Note that will, after refinement, is merely a sketch of the initial and goal states. To realize it, the will is used to scan for models at different levels. It is like attentive beacon(s) focusing on potential models; see more in 8.2. Fig 3(a) demonstrates a coarse-to-fine hierarchy, where the will, along with its refinement, is placed at the top level. This level is vague, since nothing is perceived clearly about the ground level. However, descending the levels reveals more and more details, and brings the will closer to realization. It is similar to the process of zooming in on a geographical map. Higher levels propose general models as potential stations in a possible trajectory from a problem state to a goal state. Then, on descending, finer models (with more details) are proposed, consistent with the upper levels, to move from a problem to a goal state. This idea is similar to an idea commonly used in problem-solving (Zelikman et al., 2023): breaking a problem into smaller sub-problems, solving each separately, and finally combining them into one whole solution. The search at any level can be performed by any heuristic/learned model, such as backward/forward chaining, Depth-First Search/Breadth-First Search techniques (Al-Ajlan, 2015), or any combination of those. However, more generally, it is dictated by will.
Figure 5: Gradual will refinement in problem-solving
Note the mismatch between the lowest model level (the proposed solution) and the data level (the detailed solution) in Fig 3(a). It is due to our inclination towards abstracting, i.e.
memorizing the essences and less so the details [1], which is essential for efficient learning.9 Footnote 9: Philosophically, it extends into separating models from reality entirely. For example, the assumption of real figures behind the shadows in Plato's cave, or idealism, where ideal objects are presumed to exist out of this world, and only coarse instances of those are experienced in our world. Or the space-time curvature of Einstein's general relativity, a model that cannot be detected directly. It is also described in 7.1, where it is better to learn several patterns than to lose yourself in a non-pattern realm, where all we see are details. Also note that there are other configurations of problem-solving, such as inference: given a specific start state and a specific end state, infer what happened in between. For example, at 12:00pm a child is left alone in a room with a closed cookie jar containing a cookie, and at 12:05pm it is observed that the jar is closed, but the cookie is nowhere to be seen. This inference also happens in every perception of information (as in a story), where we are required to fill in missing details. Also, in retrieval from memory, many details fade with time, e.g. time indexes; hence inference is used to extract the time something happened (e.g. if it happened in elementary school, with Sarah as the teacher, then I must have been in 4th grade). But most likely, inference occurs not only in interpolating (inner missing information), but also in extrapolating, i.e. forecasting how the plot develops, or proposing several plausible developments while eliminating the wrong ones as the plot is revealed, just like investigators. All these examples are also supported in neuroscience, e.g. [Hahamy et al., 2023], where, just as in _AKREM_, _MOM_ also infers by traveling the different hierarchies/memories, within them and between different ones. Another configuration could be cases where the problem requires a long-term process, such as succeeding in an educational course. These long-term goals can be partly active in the background, either to be solved (or partially solved) during mundane tasks, or through the acquisition of new data and the extraction of the relevant information from it for the background problem. In summary, this approach is non-local, i.e., similar to Means-Ends Analysis [Sweller and Levine, 1982], it looks simultaneously at the whole region, only at different resolutions. It is also cyclic and non-linear, both in the will-refining stage and in the realization stage. In will-refining, this is because sometimes the goal states cannot be reached, so other states need to be generated, sometimes as a compromise. In the realization stage, it is because descending in levels might result in conflicts or failures, due to misalignment between the lowest models of reality and the actual reality. #### 3.4.2 Designing While in problem-solving the will was generated from a problem, i.e. growing from an initial state, in **designing** it is the opposite. Here, instead, it grows from the final state(s), searching for the best state from which to start the full solution. It is like creating a story backward: starting from the end, to reach some beginning (Fig 6). In this case the final states are given, while, unlike in problem-solving, the initial states are extended into a region (of possibilities).
In designing there is a goal and a will to get there, but no specification of the problem or the initial states. Hence, it is an iterative process; see the general Algorithm 1. Each iteration starts by searching for a problem from which to reach the goal, then continues with a specific will connecting the problem with the goal, resulting in a problem to solve, thus formalizing it as a usual problem-solving task (Fig 6). ```
while not satisfied do
    search for a problem to reach the goal
    try to resolve the problem-solving task
end while
``` **Algorithm 1** Designing Algorithm #### 3.4.3 Summary of Problem-Solving and Designing One can spot a duality here as well. On the one hand, problem-solving is an analytical perspective, looking for a resolution or the finishing of a problem, usually via a systemic view: breaking it into parts and then looking for some appropriate solution that serves as a better state than the one we started with (the problem state). On the other hand, designing is based on a holistic perspective, where instead of finding a fast/analytical resolution to a problem ("to make everyone happy and go on with our lives") it is about empathy/consideration, i.e. it is about looking for the roots of the problem, and not just shutting it down quickly. It takes the opposite approach: instead of reducing the problem, it tries to track its sources and thus solve the causes that generated this problem state/situation in the first place. By doing so, it searches for a better problem to solve, a root problem, which solves multiple other problems too.
Figure 6: Designing approach turns into problem-solving task
Nevertheless, both problem-solving and design are non-linear in the sense that at each stage we can skip backward to previous stages or cycle through the same stage several times. They also both have an evaluation mode, where the agent stops what it does to evaluate the efficiency of the whole process, for example, by checking whether the current state can be considered the final state, i.e. whether it satisfies all the requirements for the solution, or by checking how far the current states of the different trajectories are from the solution. This non-linear cycling is like moving up and down in the hierarchy of wills. Additionally, we can see designing as a specific, private case of problem-solving, only now the "given" is different. In summary, will either starts from getting somewhere (a goal) or from getting out of something (a problem), similarly to our basic wills to feel good and to get away from what is considered bad. In terms of explaining, "problem" explains the will's outward direction, while "purpose" explains the will's inward direction (or its realization process). Side note: searching for a trajectory to solve a problem can also be performed by inversion, for example, proving by negation, or explaining something not via usual causality but in a contrastive manner, i.e. how other things are invalid in order to derive the desirable outcome (Gilo and Markovitch, 2022). ### Will Field As mentioned previously, if actions are vectors, then will is also a vector. This fact may imply that the direction property of an action is will, i.e. will is aligned with actions, to solve problems efficiently and fast. Thus it creates a field in the state space, which helps us solve the problems we encounter. Hence, we have separate attributes of an action: the consolidation measure and the directionality of the action.
It is like an electric/magnetic field, where the problem state can be considered a "+" charge, for example, see Fig 5(a), and a goal state can be considered a "-" charge, see Fig 6(a). Consequently, when both appear, this defines a problem-solving task, as seen in Fig 5(b). Of course, more complex structures are allowed, i.e. with multiple "particles". We can see a demonstration of the will field in the search for solutions in a given problem-solving task, as described in 3.4. See Fig 7(a) for the System 1 solution. It is the first, most intuitive solution that pops up, and it is almost the shortest path from problem to goal. However, it does not consider the constraints of the problem and of logic. Hence, for the real (or a better) solution, we try different paths, either close to the intuitive solution or far from it; see Fig 7(b) for possible System 2 solutions. Note that in this figure we see the alignment of will and knowledge models in the same state space (see the will field in the background). This alignment supports the organization of knowledge (models), which is necessary for effective problem-solving, instead of blind search (such as A*, BFS, DFS, means-ends analysis, and more). So the AI works hard at organizing/aligning the actions, in order to work with minimum effort in the future, for a fast response. The directionality can be in high dimension, since will can be a complex multi-faceted vector with different aspects. Similarly, it could be like Capsule Nets (Sabour et al., 2017). Still, you should note that we represent will as an arrow/vector, thus assuming its basic form: either moving away from something or moving towards something. This idea can be attached to different human feelings, such as disgust, horror, nostalgia, relief, surprise, and more. See more in Fig 8, where in green: "+" will, i.e. the will to go to (goal); in red: "-" will, i.e. the will to go from (problem); in blue: emotions unrelated to AGI, like the meaning of life, etc. Also the sense-of-self model, within AGI and other agents.
Figure 7: Planning – From Will to Knowledge models.
Figure 8: Human's emotions and their relation to will
This idea may explain why most of our problems are already aligned with will from previous experiences, mainly by being instructed or taught by other people. Only a very limited number of problems are actually solved from scratch, without alignment with will, which makes it feel like searching in the dark. It is an exhaustive search, where we are the first pioneers to create this alignment for others, as if we chart/pave a new path on a geographic map for others to use. This idea may be contemplated philosophically or religiously, claiming that the real human is a non-physical spirit/soul, attached to a physical body. This spirit has will. Thus, in order to manifest this will, the spirit must somehow align it with the models it learns from the physical reality10. Footnote 10: There is the materialist theory, claiming that consciousness is settled in the brain. It is supported by neuroscience studies that show correlations between mental experiences and specific neural activities. However, it can also support dualism, since the alignment would definitely yield such correlations. Another issue is what constitutes a state in the state space. Probably it is a collection of classes (and their states), but then how does a collection of objects acquire a direction, and how is it implemented for its items?
Another issue is what constitutes a state in the state space. Probably it is a collection of classes (and their states), but then how does a collection of objects acquire a direction, and how is that direction imparted to its items? One option to answer this question: actions already operate on several objects, so we could say that they have a direction, to change the state, even if only partially, towards the goal. Hence, it is fine to define a single action's direction. Alternatively, we can assign a direction to a sequential action, i.e. a recipe-like action (composed of a sequence of actions).

Regarding problem-solving: after a will is settled, the first reaction is of the System 1 type. This intuitive solution is usually very close to the real solution; it often needs only small tweaks. But if it is not enough, System 2 starts to search for partial paths (divergent thinking), which are finally combined into one solution (convergent thinking). This alignment is what enables the organization of knowledge (models), which is necessary for effective problem-solving instead of blind search. Hence we work hard at organizing/aligning the actions off-line, in order to work with minimum effort in real time (on-line), for a fast response. More basically, the idea of will being aligned with actions is evident in newborn infants: their basic learning is about relating the will to body control, e.g. when an infant wants to raise its arm.

Finally, as seen in Section 3.4, there are two separate spaces: the models' state space and the will space. However, just as complex processes occur in the models' state space, there could be complex processes in the will space. For example, the current will may derive from primitive emotions or from higher values, such as justice, respect, empathy towards others (consideration, cooperation), success, equality, and more. Nevertheless, our hypothesis that the knowledge map itself is dynamic and dependent on will also implies that anything can be justified. That is, the basic problem-solving platform can most often find a trajectory to accomplish this will. Hence, the will dictates our perception and conclusions, while rationality is merely a tool to realize and justify will.

### Basic Wills

Although there are wills to solve specific problems, as discussed in the previous sub-sections (will field and problem-solving), there are also fundamental wills that guide us throughout our whole lives in acquiring knowledge. This starts from the moment we are born and continues all the time. Our curiosity and wonder drive us and thus generate basic questions, e.g. Wh-questions (where, who, by whom, what, how, when, why, etc.) or any other type of question. These questions represent our basic wills to understand what we perceive, for example in the reading comprehension of young students. This idea resembles the RRG method for language parsing (Van Valin, 1995). It means that we induce these basic wills while perceiving a story/message, even when no goal states exist. However, they also guide us when a problem is conceived; in such a case they act as micro-wills, serving the macro will in the scope of the whole problem.

These basic wills serve as guiding wills in inference, especially for completing missing information and, more generally, for the most important purpose of all: understanding. It is as if anything we perceive were a mystery or a riddle, and our mind always tries to make sense of everything in the perceived message. Part of this process also covers cases where inference is confident and tries to predict what comes next, in order to evaluate how confident the current models really are (and if they are not, some resolution or model update is required).
With such an interpretation, we can picture these basic wills as small scanning/scouring devices/bots, or probes, that are sent out first to examine the surface (a fitting analogy is the _"Minority Report"_ movie (2002) with Tom Cruise, where spider robots are sent to scan people's eyes for identification). In our case, scanning the surface means gathering all the relevant facts and knowledge that can help solve the given mystery. These probes then fill in missing information and inconsistencies. Perhaps these basic wills are the most primeval wills, rooted deep in what defines us as human. Hence, they are usually aligned in early development, and frequently re-aligned throughout our lives. Moreover, since they can be thought of as preceding knowledge acquisition, they can be predefined in _MOM_, as prior knowledge of the system. See how inference is executed while perceiving data in Fig 9.

Figure 9: Inference while perceiving

Given a knowledge map (the gray network in the figure), we assume that at some instant we perceive an input, which triggers some object in this map (the circles in red). Then an inference process starts to characterize this object, by propagating and activating relevant features11, in one step or more. But this inference scanning is not blind: as mentioned above, it is usually also guided by will, the basic will, in the form of a simple question. This inference occurs not only in understanding, but also in planning, specifically to comprehend the problem and the goal.

Footnote 11: We are a bit ahead of ourselves: this is the Forms or _OOP_ representation of knowledge, with objects such as concepts, and the attributes/features that characterize them. More about this formulation is in 5.3.

As we will see later, these basic question-like wills guide us not only in the micro of inference, but also in stitching together larger components of a given story, to make sense in the macro as well. Meaning, their function is not only to fill in missing common data, but also to fill in missing parts of a story, finally resulting in a fully-explained story. This explains why children so often ask a lot of questions: to align the micro and macro inference processes with the will. As adults, we continue to ask questions, to solve bigger problems. Conversely, many psychological methods help people get out of problematic or stuck situations via guiding questions that they ask themselves. This may be how prior alignment illuminates knowledge that is hidden or obscured, yet already exists within us.

This also explains the motivation to design a curiosity loss in AI agents, e.g. by embedding it in their reward via RL, or by presenting an extra reward for asking questions in conversational settings, such as in a dialogue or in some game. Finally, there are many guides for optimal prompting of LLMs; an LLM is mostly tuned to respond in a service orientation, i.e. to accomplish what it was asked for. However, it is usually very hard to understand exactly what was asked, which is why prompt engineering is such a difficult skill to acquire. What humans do in these situations is simple: they do not assume that everything is clear right from the first interaction, and they ask questions for clarification. Similarly, an AI agent should ask questions to align itself with the user's will.
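As an illustration of these question-like probes, here is a minimal Python sketch over a toy knowledge map; the graph contents, feature names, and hop limit are hypothetical, for illustration only.

```
from collections import deque

# A sketch of a basic-will "probe" scanning a knowledge map (Fig 9),
# assuming the map is a dict of node -> features plus adjacency.

graph = {"fish": {"features": {"color": "gold", "lives-in": "water"},
                  "neighbors": ["water"]},
         "water": {"features": {"state": "liquid"}, "neighbors": []}}

def probe(start, question_feature, max_hops=2):
    """Spread from the triggered node and return the first value that
    answers the question-like basic will (e.g. a 'where' question)."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        node, hops = queue.popleft()
        value = graph[node]["features"].get(question_feature)
        if value is not None:
            return node, value
        if hops < max_hops:
            for nxt in graph[node]["neighbors"]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, hops + 1))
    return None

print(probe("fish", "lives-in"))   # ('fish', 'water')
```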
### Types of Will

Additionally, there are different categories of will, such as chronology, causation, and purposefulness. In stories they are heavily intertwined/mixed. This is because purposefulness is a higher manifestation of will (usually applied to humans), while causation is a lower one, usually applied to animals/objects (e.g. _"A causes B"_), and chronology is simply the way will is implemented: with a delay. You first want, and then you try to accomplish it. Or, in the case of causation, there is a law, as a fixed kind of will (e.g. gravity), and then it is realized. See more in Fig 10.

Figure 10: Types of will

These types of will differ qualitatively. Objects do not will at all: theirs is an enforced will, lacking any freedom to resist. Animals have instincts, which at least externally present some kind of choice, or at least some degrees of freedom, even if highly predictable and physically constrained. Humans, on the other hand, have much greater freedom (free will): to be aware of their will in the first place, and then to govern it. In other words, we could say that _Desire_ is "a strong wish", which is usually not controlled by you but instead controls you; hence it is closer to the physical level, since it is more predictable and less vague/random. Will, by contrast, is assumed to be free, i.e. uncontrolled by, and independent from, the state of the environment. Therefore, will can be an internal variable in a model (similar to Einstein's famous rejection of Quantum Mechanics, assuming that there are hidden variables): a hidden independent variable, due to the assumption that it is not affected by, or dependent on, other variables, such as the input variables perceived by the senses. But as discussed above, this independence comes in degrees (from being highly independent, in free will, to being highly dependent, in physical phenomena).

Finally, will is everywhere, in all of our daily interactions. We try to figure out animals' intentions or people's hidden will, to make sense of them, or in our case to make our model complete, i.e. to enable prediction12. Since a specific human has his own will, he tries to figure out other people's will in order to enforce his own will over theirs. Sometimes this happens via competition (when wills are in conflict), and sometimes through cooperation (a common will, when it is more beneficial to work together than alone or in competition). Hence the rise of the lying/deceiving phenomenon (either learned from observation or inherited), which introduces new models to learn. It is beyond good or bad; it is simply modeling for the purpose of control, i.e. the manifestation of my will, whatever it is.

Moreover, will might be a complex phenomenon. That is, it is dynamic like the wind (like spirit in spirituality). For example, it can often be a mix of wills, even wills in conflict (Heckhausen and Heckhausen, 2008). Nevertheless, in this paper will is considered a single entity at each moment, to simplify the discussion13.

Footnote 13: Although it can be modeled simply by superposition, i.e. by summing different wills, as with any mathematical field.

Also, will can be fundamental, i.e. in the background of all the instances of will discussed above. For example, for animals and humans: survival, which includes protection and gaining energy, mainly via food (again, the going-away-from and going-towards types). In humans it can be expressed via livelihood, or a comfortable and pleasant life, etc. These wills can be long-range, without explicit actions derived from them; such wills can be regarded as suspended, i.e. not always relevant. For example, they are ignored while solving some problem. However, how is will actually included in _MOM_?
It is discussed in 5.3 and Section 8.2. Another possible categorization of will is presented in Fig. 12; see the summary sub-section below. But we can have yet another categorization of will. In this case, we assume that a will, or some kind of objective, is always present in the human/agent, in all cases. Hence, we are not passive while hearing/learning/perceiving. Similarly, there is no separate encoding/decoding will, as illustrated in Fig 3(b), since our main will always exists, and the deciphered will is still subordinate to this main will. See this kind of categorization in Fig 11. An example of an outward will is explaining: it is like traversing the trajectory of a hierarchy (see _AKREM_), while acting is like performing physically, or executing a given hierarchy.

Figure 11: Cognitive operations under a category of the main will.

### Will chapter summary

To conclude this chapter, we can define cognition as shown in Fig. 12.

Figure 12: Cognition definition in _MOM_.

Following are some additional definitions:

* Will = what affects the actions in state space.
* State = the full world information at a given time instance.
* Action = transforms a given state into a new state.

All these elements coexist, aligned, in the same state space. These are typical definitions in many fields of science, such as classic AI, RL, and control theory. Ultimately, will and action are both vectors in the state space, only we assume that will usually affects the preference of which action is selected to change the current state. Note that the states are discrete, while the actions can be either continuous or discrete as well, and the will acts like a filter, reducing the number or the range of possible actions we should apply to change the current state. This allows more efficient and faster problem-solving, since the solution space is reduced significantly. Also note that the state represents only the relevant elements of knowledge, and not everything in the world, since we have a limited WM. Later, in 5.4, we will define the full model, which also defines the states and actions themselves, composed of basic objects/actions.
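As a toy illustration of will acting as a filter over actions, here is a minimal Python sketch; the action vectors and the alignment threshold are illustrative assumptions.

```
import numpy as np

# A sketch of will filtering the admissible actions: only actions whose
# direction aligns with the will vector survive, shrinking the search.

actions = {"go-left": np.array([-1.0, 0.0]),
           "go-right": np.array([1.0, 0.0]),
           "go-up": np.array([0.0, 1.0])}

def filter_actions(will, actions, threshold=0.5):
    """Keep only actions whose cosine alignment with will is high."""
    keep = {}
    for name, a in actions.items():
        align = np.dot(a, will) / (np.linalg.norm(a) * np.linalg.norm(will))
        if align >= threshold:
            keep[name] = a
    return keep

print(filter_actions(np.array([1.0, 0.2]), actions))  # only 'go-right' survives
```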
## 4 New Associations

Inspired by semantic nets, some general connections are proposed, such as inheritance (the is-a relation), instance membership (the is-instance-of relation), the part-whole property (the part-of relation), an attribute of an object (the has relation), assigning a value to an attribute (the value relation)14, synonyms, antonyms/opposites, and more; see [26]. All these connections are static and can represent stories as separate items, as in _AKREM_. But to connect these details as a sequence of applied actions, operationability is introduced.

## 5 Operational modeling

The two principles added to _AKREM_ are operationability and modeling. Both are formalized via frames, or classes in _OOP_ (Object-Oriented Programming) [21]. Operationability represents the methods that can operate on classes and change their state, while modeling, in our terms, is the abstraction capability15.

Footnote 15: In philosophy, abstraction is like idealization [13], i.e. "treeness" is the perfect idea of a tree, while all the trees we actually see are instances of it. Similar is the idea of perfect geometric shapes, like the circle or the cube.

Both are utilized to construct a general model that can implement the different cognitive functions humans have, and the different models representing interactions; see the _"Models in communication"_ section in [14] and Fig. 38. To reach this utilization, this model representation should be transformed into a learnable model such as a DNN.

### Operationability

It is hypothesized that thinking is operational. Meaning, it is a process generating new facts along the way, via some set of actions. Hence, the first principle added to _AKREM_ is operationability, which turns it from a static into a dynamic knowledge representation. That is, unlike the static connections in _AKREM_ and in knowledge/scene-graphs (representing facts or a scene shot), action connections in _MOM_ can also be productive (produce new _elements_). In contrast, other knowledge representations, including _AKREM_, have pre-defined (or independently learned) actions and objects constructing their knowledge base; similarly, in the rule engine of an expert system there are rules and the data they operate on.

Operationability also allows maximum flexibility in reasoning, problem-solving, or finding the proper response in a given situation. It does so by associating each concept with the maximum number of related operations and attributes, thus creating a highly connected network of possible actions/paths for a huge number of scenarios. Consequently, operationability adds degrees of freedom to the current cognitive model, to move in new directions along the hierarchy, i.e. to create new hierarchies on the fly, or to update old ones, via admissible actions. This can be seen in Fig. 13(a): the previous model, _AKREM_, could only move over static memories/hierarchies (instances of associations). The current model, however, is not limited to the options in a given instance; it can always apply any of its available operations and thus create new associations, see Fig. 13(b). This also demonstrates the effect of past memories changing over time.

Figure 13: The current model allows more degrees of freedom

To achieve operationability, a minimal set of primitive operations is proposed to function as basic operations, which can be the building blocks of more complex and composite operations/actions: logical relations (AND, OR, NOT, the "for all" universal quantifier, the "there is" existential quantifier, (in)equalities, exists, count), flow operations16 like loop operations (while, for) and if-else conditionals, mathematical operations (+, -, *, /, min, max, norm, log), and other relations.

Footnote 16: Note that loop operators and if operators can be formalized as classes, with general methods and properties. See, for example, the bottom of the page at [https://www.w3schools.com/python/python_iterators.asp](https://www.w3schools.com/python/python_iterators.asp), where an iterative class with _iter_() and _next_() methods is defined.
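A minimal sketch of how such primitives might be composed into a composite action is shown below; the chosen primitives and the recipe are illustrative, not a prescribed _MOM_ implementation.

```
# A sketch of primitive operations as building blocks of a composite,
# recipe-like action. The primitives and the recipe are illustrative.

primitives = {"add": lambda x, y: x + y,
              "mul": lambda x, y: x * y}

def compose(*steps):
    """Chain unary steps into one composite action."""
    def composite(x):
        for step in steps:
            x = step(x)
        return x
    return composite

# composite action: (x + 1) * 2, capped at 10 (an if-else flow element)
act = compose(lambda x: primitives["add"](x, 1),
              lambda x: primitives["mul"](x, 2),
              lambda x: x if x <= 10 else 10)

print(act(3), act(7))   # 8 10
```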
Note that loop operations are temporal ones, which should enable using both the past (current/past variable calculations) and the future (current/future variable calculations) for planning/simulating goals. Future calculations are performed, for example, by halting perception from the senses, or ignoring it, and continuing in an imaginative state of mind, though this raises the problem of also predicting the future sensory input of this artificial reasoning process. Neuroscience claims a different mechanism for determining real versus not-real (imagined/planned): the strength of the input signals, i.e. whether they exceed some threshold [11], based on the assumption that imagery is generally weaker, or less vivid, than perception.

Note that "for all" and "there is" are the binary quantifiers of first-order logic. These operations could become continuous features of classes, defined on some continuum: between "all" and "none", "there is" covers a range that includes "most", "some", "few", and "one", thus turning the logic into fuzzy logic. Using De Morgan's laws we can deduce that "all" and "there is" complement each other, while "none" is the opposite of "all". See Fig 14.

Figure 14: Quantifiers as features in a continuum

Similar continua exist in space and time, such as "always", "often"/"usually", "sometimes", "seldom"/"rarely", "never"; in belief: "necessarily", "possibly" (which complement each other like "all" and "there is"); and in the NOT operation: from "yes" to a total "no". Eventually, all these examples, and the different measures (of consolidation, certainty, relevance, etc.), demonstrate continuous features. This fact may drive us to rethink a _DL_-only representation.
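A minimal Python sketch of this continuum, under the common fuzzy-logic min/max semantics (an assumption here) and with illustrative degree values:

```
# A sketch of the quantifier continuum of Fig 14 as fuzzy logic.

degrees = [0.9, 0.7, 1.0]     # how strongly each item satisfies P

def forall(ds):               # fuzzy "all"
    return min(ds)

def exists(ds):               # fuzzy "there is"
    return max(ds)

# De Morgan: "there is P" equals not("all are not-P")
assert exists(degrees) == 1.0 - forall([1.0 - d for d in degrees])

def proportion(ds, threshold=0.5):
    """A graded quantifier between "none" (0) and "all" (1), e.g. "most"."""
    return sum(d >= threshold for d in ds) / len(ds)

print(proportion(degrees))    # 1.0
```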
The set of tools described above can replace DNN units and the DNN's fixed structure in a program-search process. This can be implemented, for example, via DL (Chollet, 2019) or via other AI tools; for example, via a Reservoir network: a random mix of basic rule components/blocks, yielding the algorithm that best describes an operation, within models or between them. Another way to perform program search is Genetic Programming (GP). We could also use the AlphaDev framework (Mankowitz et al., 2023). Another option is Probabilistic Program Induction, which defines a Domain-Specific Language (DSL); the DSL consists of primitives and a grammar for programs that learn to generate data, similar to GP. Another option is Neural Module Networks (Andreas et al., 2016), where small modules perform specific tasks and are rearranged on the fly depending on the current problem. Another option is DreamCoder (Ellis et al., 2021). We could also consider Liquid Neural Networks (Hasani et al., 2022) or Neural Ordinary Differential Equations (NODE) (Chen et al., 2018) as alternatives for modeling temporal operations: these NNs learn a spatio-temporal space at their core, unlike regular DNNs or RNNs. Other possibilities are described in Subsection 8.4.1.

One work tried Bayesian program search in the Omniglot challenge (Lake et al., 2015). However, a few limitations were discovered: it needs many built-in primitives, a lot of data to learn the prior and likelihood distributions, and a lot of search (it explodes combinatorially), especially as the DSL becomes more expressive. One proposed solution is neuro-symbolic (Feinman and Lake, 2020). Another is CopyCat (Hofstadter et al., 1995), which uses current knowledge and top-down and bottom-up processes to answer an analogy task: the top is the concepts in the knowledge, while the bottom is the perceived information.

At the same time, prior knowledge needs to be inserted, within operations and other _elements_, including: number of visits, uncertainty, rate of update, measure of dependence, and measure of consolidation. These variables are needed for meta-cognition (Ackerman and Thompson, 2017). The measure of consolidation prioritizes the different options associated with some _element_, separating the relevant from the irrelevant, e.g. among admissible actions; it can also be used inversely, in creativity mode, by picking the less expected directions to follow. Also, the evaluation of actions (basic or composite) in planning can be embedded in such relevancy measures. A similar idea is presented in neuroscience, e.g. in (Anderson and Barbey, 2022), where the researchers state: _"Strong connections involve highly connected hubs of information-processing that are established when we learn about the world and become adept at solving familiar problems, while Weak connections have fewer neural linkages but enable flexibility and adaptive problem-solving"_.

Note the difference between the level of uncertainty of a class (feature or object) and unknowing. The former is the dilemma when several options are possible (like multiple versions) and you cannot decide between them. The latter is similar, only the number of options differs: we can either regard it as the case of no options, or as the case where all possible options are allowed. This notion is important for the formulation of goal states. For example, "x=?" means that x is an object with an equality feature (=) whose other side is unknown; see the math example in Fig. 27. Similar is the riddle problem: _"What am I? I don't talk, but I reply when spoken to"_. In this case there is a general class with an assignment feature pointing to some unknown specific class, e.g. a watermelon or a cat. This idea extends further to any inference through exploration of the knowledge map. That is, in the filling of missing information there are small problem-solving tasks, in which the missing value of some feature is unknown and is in the process of becoming known. Consolidation, see Section 7.1, can be regarded as the level of (un)certainty in the class itself.

Moreover, an action's admissibility is needed for several reasons. Firstly, due to the elimination of the entry conditions necessary for an action to be performed, e.g. on which types (integer, string, etc.) it may act; this is common in languages like C#, where every variable's type must be pre-declared. Secondly, due to the ability to use Higher-Order Logic (HOL) (Miller and Nadathur, 2012), as in \(\lambda\)-calculus, which removes any restrictions on an object's slot or an action's argument. For example, there could be actions on actions, e.g. a function as an argument to a function, or class decorators in Python, which are classes that can modify functions. Hence, relevancy is needed to constrain an action's admissible space. Additional constraints can be embedded in relevancy, such as different rewards/costs for applying an action, time/resource constraints, and more. Note that the problem specification determines the constraints, while the will (which can also be extracted from the problem state) determines the action's features, such as relevancy and reward. And thirdly, admissibility limits the search space in a given problem/request/situation by restricting the number of possible sequences of actions; here previous experience comes to help. Other action features include previous evaluation measures. Thus, perhaps, this is the trade-off between System 1 and System 2: the automatic response is based solely on previous experiences, while System 2 comes into play when you want to rely less on these experiences and create new ones.
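A minimal Python sketch of an action carrying such admissibility and relevancy features; the field names, scores, and example actions are illustrative assumptions.

```
from dataclasses import dataclass
from typing import Callable

# A sketch of actions with admissibility and relevancy features.

@dataclass
class Action:
    name: str
    apply: Callable
    admissible: Callable        # entry condition on the state
    relevance: float = 1.0      # will-dependent priority
    cost: float = 0.0           # constraints embedded in relevancy

def candidate_actions(state, actions):
    """Admissibility prunes, relevance minus cost ranks: this is what
    shrinks the search space before planning starts."""
    ok = [a for a in actions if a.admissible(state)]
    return sorted(ok, key=lambda a: a.relevance - a.cost, reverse=True)

acts = [Action("double", lambda s: s * 2, lambda s: isinstance(s, int), 0.9, 0.1),
        Action("upper", str.upper, lambda s: isinstance(s, str), 0.5)]
print([a.name for a in candidate_actions(7, acts)])    # ['double']
```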
### Operationability in Deep Learning

In this section we present an alternative approach that exists in the literature for learning objects and actions: embedding learning. In _DL_, objects or classes are usually learned via Supervised Learning (SL), and actions are learned via RL; see Figure 15(a). However, they should be learned together, since they are intertwined (unlike scene graphs with pre-defined actions). Moreover, DNN models should be learned/updated at both the micro and the macro levels, i.e. as models of simple objects/actions or of their complex combinations; see Figure 15(b). If they are learned jointly, then the next challenge is how to represent operational data, i.e. objects, actions, attributes, etc. See e.g. Fig. 16, where the representation is embedded in some feature space consisting of objects as points or other shapes, and actions as vectors.

Figure 15: Object and Action learning

Figure 16: Operational features space, e.g. in word embeddings

One known theory [10] is that the neocortex evolved from, or on the basis of, the old brain, i.e. the basic/instinct/primitive/reptile brain. Similarly, later in [10], Jeff Hawkins extends the physical senso-motor system to abstract thinking. In our opinion, an infant's first training follows the same principles in a physical environment; this is what prepares him for the next level of abstracting similar principles beyond the physical realm (as in Piaget's development theory [11]).

Interestingly, in (Bordes et al., 2013) embedding learning is performed over a dataset of triples, each consisting of two objects \((h,t)\) and a relation \(l\) between them, where \(h,t,l\) are high-dimensional vectors, or tensors. This embedding tries to learn a high-dimensional space in which each triplet follows the distance relation \(h+l\approx t\).

Note that what was discussed above is a fixed embedding (such as word2vec): a representation of knowledge (e.g. words, sentences) in vector form, which can make tasks like text retrieval much more efficient than regular keyword search. However, a contextual embedding is preferred, as in BERT, though it is difficult to encode such a space as a dynamic one. Nevertheless, this is encoding, i.e. an implicit representation of knowledge, of sub-symbolic features, not interpretable to us. Hence, _MOM_ proposes a more explicit representation, of symbolic features. It may be better, since perhaps not all actions can be mapped into one feature space, if some actions have no connection between them (although perhaps all actions could be encoded in one high-dimensional space providing a huge number of features, such as SDR and VSA; see the explanation below).
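A minimal sketch of the \(h+l\approx t\) scoring idea from (Bordes et al., 2013); the entities, relation, and random (untrained) embeddings here are purely illustrative.

```
import numpy as np

# A sketch of the translation-based triple scoring of Bordes et al.
# (2013): plausible (h, l, t) triples should satisfy h + l ≈ t.

rng = np.random.default_rng(0)
dim = 8
entity = {name: rng.normal(size=dim) for name in ["cat", "animal"]}
relation = {"is-a": rng.normal(size=dim)}

def score(h, l, t):
    """Smaller distance means the triple (h, l, t) is more plausible."""
    return np.linalg.norm(entity[h] + relation[l] - entity[t])

# After training, score("cat", "is-a", "animal") should be small; here
# the embeddings are untrained, so the value is arbitrary.
print(score("cat", "is-a", "animal"))
```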
_MOM_ represents a similar hypothetical space, where individual actions change the system's state. It supports different and unrelated types of actions, such as temporal-only, spatial-only, spatio-temporal, logical, etc. Moreover, this type of state-space representation is convenient for transforming into other representations via a simple mapping, which can be beneficial, for example, when an appropriate representation is required to solve a particular problem.

More generally, this is just one option for representing state space, actions, and will; other options are also possible. The vector-space representation of the state space above, with vector operations within it, is one option. Another representation attaches some function to every state, thus representing the state space as a high-dimensional manifold or surface, where problem-solving occurs not by going from state to state, but by optimizing this function and moving along its gradient to reach some optimum. If we include the will field, this function becomes a vector-field function, where each state returns a vector, and the movement follows the field rather than the gradient. Finally, this function, as a measure of the state, is dynamic: it changes as a function of will, constraints, settings, etc., eventually yielding a manifold that contains all the information of the current problem (will, constraints, settings) in its shape.

This resembles the idea of connectionism versus awareness in neuroscience [13], which claims that at the macro level the brain operates more via propagating waves than via message-passing among neurons. A similar idea exists in general relativity, where the (gravitational) force field is the result of the mass distribution in the universe.

Note that although the gradient-descent (GD) method in the function representation reaches the goal state more easily than classic search algorithms, one enormous problem with it is local optima and saddle points. Hence, one way to deal with this is the ability to see the function not only locally, but also at a higher resolution, by climbing to higher levels. All the representations proposed so far are summarized in Fig. 17.

Figure 17: Different representations of state space, actions and will.

### Modeling

If operationability is considered as adding the freedom to move within a 2D knowledge representation, then modeling is the addition of a new dimension, i.e. converting it to 3D. It is about extending the "is-a" operation into programming abstraction, as in _OOP_ (Object-Oriented Programming), or as in abstract mathematics, such as algebra or category theory17. Meaning, while semantic networks usually represent this operation in a 2D graph, here the instances are totally separated from the classes, and from the classes of classes, and so on, resulting in multiple levels of abstraction, of which, for simplicity, two types of levels can be distinguished in the final LTM; see Fig. 40.

We can also see these classes and instances in problem-solving or planning. If we disregard for a second the will level in Fig 3(a), which directs all the models below it, we can see that the models are also composed of levels, separated by the amount of abstraction or grouping. For example, we can see an abstraction or inheritance of classes, with an instance in the reality below them: the cat image in reality, then the "cat" class above it, then the "animal" class above that, then the "object" class above that. See this example in Fig. 18.

Figure 18: Example of how the models are ordered in a hierarchy by abstraction or grouping.

In summary, at first all the different associations, including operationability, describe objects; then abstraction extends objects, as instances, into classes, which represent models. This is especially relevant for complex cases like learning-to-learn, i.e. meta-learning, where abstraction is over similar tasks, or in transfer learning among tasks.
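A minimal Python sketch of the abstraction levels of Fig 18 in _OOP_ terms; the attributes are illustrative assumptions.

```
# A sketch of the Fig 18 hierarchy: instance -> cat -> animal -> object.

class Object:                      # highest abstraction level
    exists = True

class Animal(Object):              # "is-a" Object
    alive = True

class Cat(Animal):                 # "is-a" Animal
    sound = "meow"

tom = Cat()                        # an instance: the cat in reality
print(isinstance(tom, Animal), isinstance(tom, Object))   # True True
```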
Consequently, unlike _AKREM_, these full models are important for natural communication, e.g. for context-based conversations, where undelivered missing information (common sense) needs to be filled in. For example, the sentence _"The car was uncomfortable. The seat was too low."_ indicates that "the seat" refers to the seat of the car. Or the sentence _"The kid put his ball in a basket. Then the basket was moved to another room. Where is the ball?"_. Or the sentence _"Nick is in the kitchen. Jim is in the garden. Later Jim went to the kitchen. Who is in the kitchen?"_18. The latter two cases involve inference to fill in the missing information, which can be regarded as mini problem-solving tasks, as mentioned in the closing summary of Section 3.3. Note that common sense is also necessary for intention comprehension, i.e. when we request something from an AI agent in the usual minimalistic form, it should infer all the relevant constraints for accomplishing its goal; e.g., when asked to minimize SPAM, it should not try to kill all people to that end19.

Footnote 19: A similar idea was developed by Shannon, defining information as a measure of surprise. Meaning that while communicating, and perhaps while remembering, we tend to retain and emphasize what is new.

Moreover, to establish context, we must define "settings", i.e. the context describing the overall situation. Settings can also have sub-settings, and so on. For example, we can catch a fast-moving ball in a game because we have defined the settings we are in. When driving a car, we react fast in controlling the vehicle and attending to the environment, because we are first aware of our settings. This idea also aligns with the temporal hierarchy in (Komarovsky, 2022c), where in parallel there is the lower layer of high-frequency (fast) perception-response, then above it, and based on it, the mid-frequency perception-response layer, and so on. It means that the fast temporal scale works efficiently if the settings perceived in the layer above are correct, or fully "ready" for any incoming data; similarly for the settings above the mid layer, and so on. Settings also include the production/reception of language. The settings are highly important for interpreting the situation correctly and responding appropriately. In other words, the settings, or the context, dictate which features are relevant for interpreting objects, and relate or use only the relevant actions for these objects.

However, we could replace "settings" with general planning for a broad class of situations. Meaning, the driving or the sports game above can represent examples of abstract classes and actions. For example, in driving: _"if I see a stop sign I must stop; if I see a crosswalk I must watch for pedestrians and slow down"_, and so on.

This extension has several implications.

* First, a broader perception from the senses: from the basic recognition of instances in _AKREM_, to a multi-level recognition of instances and classes.
* Next, every _element_ is learned and can be abstracted (into a class), i.e. objects, actions, relations, and attributes. For example: an action with attributes (such as the bi- or uni-directionality of an action); a relation with attributes such as strength; a numeric attribute/value as a class, with its different types as sub-classes (integer, real, complex) in different bases; group types (strings, sets, tuples, lists, arrays, dictionaries) with their group operations (slicing, concatenation, union, intersection, difference, indexing, searching, sorting, replacement, etc.); and more. Different types also have their own specific attributes (e.g. a set is a collection which is unordered, unchangeable, and unindexed) and their specialized operations, such as lower/upper case for strings, ordering operations for non-set types, key-value operations and nesting in dictionaries, etc. Moreover, there could also be non-deterministic classes, such as fuzzy sets and stochastic variables (characterized by some distribution, such as a uniform/Gaussian distribution).
* Next, regarding the attributes of objects or classes, treated as variables: we assume that there is no garbage collector in the WM. Hence, legitimate variables can only be attributes of classes, and no temporary variables are allowed (unless they are part of temporal classes/methods).
Subsequently, our system's state is defined as the state of all the active classes in the WM, which leaves the methods purely functional/declarative (not imperative); see more about programming paradigms in (Chen, 2019; Van Roy et al., 2009). Moreover, as stated in [SCALFANI], Functional Programming (FP) is much better for maintaining and updating existing code, which in our case means the ability to learn with as few modifications as possible to previously learned "code", since such modifications naturally require too much effort.

* Next, this extension enables answering the question of how will is implemented in _MOM_ (see the question at the end of Section 3). As an alternative to _AKREM_'s hierarchy by will, for which an implementation is unclear, the hierarchy could be generated by abstraction, while the will/intention serves as an additional, independent variable in the models that construct the hierarchy. The assumption here is that humans have a will independent of their situation, whereas if will is deprived from the _AGI_ agent, then this variable is a function of its sensory input, since it is embedded in the request from the human user. Additionally, feeling measures could be included as influences on will. Our six basic emotions (Snoek et al., 2023) are: happiness, surprise, fear, disgust, anger, and sadness. Alternatively, emotions can be represented as some high-dimensional vector, e.g. via the 2D valence-arousal model (Cui et al., 2023), although one paper (Coppini et al., 2023) claims that the six universal basic emotions are too simplistic to describe the wealth of the realistic range of emotions, and hence proposes a special ontology for them. Moreover, since there are several levels of will, there are correspondingly a main variable and secondary variables representing these wills, perhaps with different intensities, depending on the abstraction level.

An example of different abstractions can be demonstrated with the word "bridge" (Mitchell, 2023). The simplest form is learned from physical examples of bridges, but a higher one can extend to function and purpose. Hence a bridge can also be detected in a log connecting two river banks, or in ants that form a bridge to fill a gap, or in more abstract notions such as bridging a social gap.

Note that both modeling (abstracting) and instantiating are localized, energy-efficient memory structures. Meaning, we do not separate episodic memory from the base or knowledge memories, nor do we need a separate memory for abstractions: all of these reside together. This may explain why it is not so simple to recall events (since they are all embedded in the classes, and only small additional unique cues distinguish specific memories from them). A similar idea is stated in [13]: _"To conserve resources, a brain must therefore be able to distinguish when it's worth the cost to form a memory and when it's not."_. This may mean we face a trade-off between relying on current knowledge and updating it.

Finally, the novelty here is that unlike _DL_, which performs program search in an uninterpretable way, here an additional inductive bias is introduced: separating models (Section 8.1) and performing program search over the relevant actions (Section 8.4.1), in consistency with other models and actions. This makes _MOM_ both usable and interpretable. Moreover, _DL_ copes with data complexity by simple memorization; hence it requires a huge amount of data to represent all the different scenarios in the world.
Humans, on the other hand, try to learn the basic ingredients that are common to all the knowledge they receive, in order to construct any new information, e.g. through inference and aggregation (grouping and abstraction). This is what separating models means, along with compositionality to construct complex models.

Side note: meta-cognition is a phenomenon similar to abstracting, since it also comes in levels; only here they are levels of thoughts. See also Marvin Minsky's _"The Emotion Machine"_ [14] about the levels of thinking20. First-order thinking is the simple experience of sensory input, while second-order thinking is analyzing/contemplating a memory in which such an experience occurred. Eventually, higher-order thinking is just piling up memories of memories (or hierarchies), watching one over the other. Hence, it is like extending the hierarchy representing a memory into a hierarchy of hierarchies. This also occurs in stories and other messages, e.g. _"He said that he was thinking about sailing"_. Note that meta-cognition, like probably any hierarchical process (such as abstracting), evolves gradually from childhood.

Footnote 20: This yet again reinforces the idea of humans as being merely observers, not only of physical reality through the senses, but also of the thoughts themselves, which is why thoughts can become hierarchical: we can always view any thought from an upper layer.

Alternatively, higher-order thinking can be regarded as any hierarchy, i.e. connected to different parts, not separated from them. For example, the sentence _"Bill claims his cat chases mice"_ attaches Bill to a group object consisting of the details in "his cat chases mice". See this example illustrated in Fig. 26. More generally, for simplicity, there are only two cognitive operations with memories: construction and retrieval. In this way, the examples above can be explained as follows:

* 1st-order thinking (observing) \(\rightarrow\) construction, from basic LTM concepts.
* 2nd-order thinking (recalling a memory) \(\rightarrow\) retrieval, or traveling the different hierarchies/memories.
* 3rd-order thinking (meta-cognition) \(\rightarrow\) construction, attached to past memories.

Finally, all abstract thinking, e.g. "what if" counterfactual thinking, higher-order thinking (meta-cognition), knowing what others know, etc., is based on the same mechanism of constructing in height (hierarchy). These operations are used in many cases, e.g. in comparing old knowledge to new incoming knowledge and updating it accordingly, or in any problem-solving scenario, such as imposing time constraints on solving a problem via high-order hierarchies.

### Summary of operational modeling

In _MOM_, we represent each object or class along with its features and actions, as in the equivalent graph view on the right; see Fig. 19(a). We can also see how each element of an object has its own measure of relevance, or, more generally, other measures, all treated as features of the class in question. Also, as in 3.8, we present some additional definitions:

* State = a group of relevant object classes and their feature classes.
* Action = a learnable algorithmic class.
* Will = derived implicitly from the state; it dictates the relevant actions.

Figure 19: State, object and action definitions in _MOM_.

Given the definitions of the basic knowledge elements described in 3.8, we can represent a state as a group of objects, and actions as those that operate on one or more objects in a state.
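A minimal Python sketch of these definitions, with a state as a group of objects holding features and an action transforming it; the objects and features here are illustrative.

```
# A sketch of the Fig 19 definitions: state = group of object classes
# with features; an action operates on some of them.

state = {
    "dog": {"place": "under the bed", "awake": False},
    "ball": {"place": "basket"},
}

def move(state, obj, new_place):
    """An action: transforms the given state into a new state."""
    new_state = {k: dict(v) for k, v in state.items()}   # keep old state intact
    new_state[obj]["place"] = new_place
    return new_state

print(move(state, "ball", "another room")["ball"])   # {'place': 'another room'}
```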
The actions are learned, and the will is usually specified in the first states. How will is derived from the state is elaborated in 5.4.1. As shown previously, the will and the knowledge models are aligned, so that the will guides the state towards selecting the most relevant actions. Additionally, actions operate in a continuous spatio-temporal space. Meaning, just as an action can act in any direction and region of space, so it can in time: it can be either immediate, or involve the past/future at any scale. This also enables causality modeling.

Finally, the inference process described in 3.6 occurs not only in understanding, but also in planning, specifically in comprehending the problem and the goal. Therefore, we can view understanding/planning as the macro processes, with an external will supplied to the agent from the outside, either in the form of some problem to solve or in the form of a hidden will that needs to be extracted. Inference, on the other hand, is a micro process, derived from an internal will (to make sense), and it occurs during the construction of the macro process described above. The micro process evolves within states, while the macro process evolves between states. See the summary in Fig 20.

In conclusion, every change, whether in features or due to actions, is always (or as frequently as possible) guided by will. Also, note that hierarchy is what turns any partial model into a full one. For example, we can describe a robot/person as: _got out of the apartment \(\rightarrow\) went down an elevator \(\rightarrow\) walked 100m straight \(\rightarrow\) turned 90 degrees left \(\rightarrow\) walked 200m straight \(\rightarrow\) entered a door \(\rightarrow\) walked... \(\rightarrow\) picked onions \(\rightarrow\)..._ This is how any machine can grasp a sequence of actions, as seen in the lower-right part of Fig 20. But for a human it clearly does not tell much: we want to understand what the purpose of this sequence was. Hence, telling him _"The robot went to buy onions in a nearby store"_ is what makes this model full; see the upper-right part of Fig 20.

Figure 20: Every change (small or big) is accompanied with a will.

#### 5.4.1 Will derivation from state

As mentioned above, will is derived implicitly from the state, i.e. from perception: from the most direct approach, as in the examples in 6.2.2, where the will is derived from the first constructed state(s), to the most indirect one, where it is very difficult to reconstruct, and it is usually derived from a long process of aggregation21.

Footnote 21: Due to the complexity of the state-to-will derivation, it is also suggested to use a flexible tool such as DL.

Hence, we can derive the following equation:

\[\text{state}\Longrightarrow\text{will}\Longrightarrow\text{action} \tag{2}\]

Our puzzle here is to figure out how the will is derived from the state(s). As stated previously, will is learned (given only the states) in the evolution phase of _MOM_. However, this claim is too general, while at the same time in many examples the will is specified explicitly in the state, i.e. the learning is very straightforward; see the examples of the math problem, the robot request, or the simple problems in 6.2.2. This holds mostly for inanimate objects, see Fig 10. But there are many cases where the will is implicit and hidden from us, such as for animals and humans. See the "state \(\rightarrow\) will" row in Fig 21 (see Footnote 22), where animals are driven by hidden inner emotions.

Figure 21: Will derivation from state.
Though humans can also act upon emotions23, some people acquire a second nature that overrides emotional responses. This is similar to the hierarchical mesh theory of free will [12]. It is usually derived from some set of values or beliefs. This set is represented by rules, similar to language rules (see Fig. 24), acting as an interface between two entities. Also, just like language, this interface may be a tool set external to the knowledge map, like any additional constraints (time and other resources, and values for admissible actions) in problem-solving tasks. Hence, all of these may be placed in the top-down region described in Fig. 41.

Footnote 22: Fig 21 is an extension of Fig 10.

Footnote 23: While Descartes's philosophy of humans states that there are two modes of thinking, understanding and will, Hume's theory makes rationality the slave of emotions, instead of the opposite, common view in which rationality is in control. This is our assumption also: the will dictates everything. It then uses cognition merely as a tool to fulfill or justify itself, no matter how selfish or horrible it may be.

However, any set of rules is also deterministic, and so is the derivation of will from emotions in animals; see the "to it/him-self" row in Fig 21. So how are animal and human models still stochastic? Indeed, these three mechanisms are deterministic by definition. However, this is not the case for an outside observer or listener; see the "to an outsider" row in Fig 21. For the animal the will is usually hidden, while for the human there is also a second factor, free will, struggling between these mechanisms, choosing whether or not to behave according to values. Finally, all these mechanisms, as perceived by an outsider, can be modeled in three forms of states and their admissible actions, ranging from totally deterministic, as an "if-then" rule system, up to totally stochastic; see the "inside state" row in Fig 21. This represents the admissible actions as either multiple or single. Note that a rule-based system deals only with deterministic rules, ignoring fuzzy rules, where more than one outcome exists. For some examples of these three forms, see sub-section 6.2.4.

Note also that studies show that including emotions in an event assists in memorizing it. This can be seen when a monotonic style of talking reduces the motivation to remember or even to listen (it may even cause drowsiness). This effect might simply support the importance of will in any message.

## 6 Practical MOM in Mature state

In this section we start with a thorough examination of the proposed knowledge structure in _MOM_'s mature intelligence state, especially grouping (in space, in time, or in both), followed by different examples.

### Knowledge structure

Note that causality is a special case of modeling, a spatio-temporal one, where re-occurrence is consolidated. More generally, re-occurrence helps in learning both static objects and dynamic basic/composite events (equivalent to scenarios/scripts in _OOP_). Usually only the basic events are learned, i.e. at the micro scale, because their compositions are too many, and it becomes a combinatorial problem to remember all the encountered compositions; only a few very common composite events remain. Note that events can also represent common scripts, recipes, and algorithms (which can be used in problem-solving, for example).
This is why, in our opinion, basic algorithms may use _FP_, while time-sequence, event-related algorithms, which are similar to planning, are of the _OOP_ type. See more in Fig 22. The last sentence means that the aggregation in Fig 22 is not only temporal for actions, as in the macro of Fig 20, but can also be compositional, as shown in Fig. 36, via _FP_ for example. We can see an example of action compositionality in Fig 23: a given plan, or composite action, is illustrated in Fig 23(a) as a directed graph, with the Python code of the algorithm in the red rectangle; the composition of actions is illustrated in Fig 23(b).

Figure 22: Aggregation of basic elements to composite ones.

More generally, as seen in Fig 23(a), actions as algorithms are represented via flow charts, where many scenarios are possible. Such an abstract algorithm is also seen in the following demos, e.g. in the math problem. Also, though we have seen flow only at the lower, implementation level of actions, it is not limited to these levels and can appear in higher ones as well.

Figure 23: Hierarchy of composition in actions.

Note that the temporal compositionality of actions is similar to simple Chain-of-Thought (CoT) in LLMs. But as we know, this is only one branch of the divergent thinking performed while planning a solution, see Fig. 4 and Fig 7(b); that is analogous to Tree-of-Thoughts (ToT). Yet this too is limited to a temporal or sequential type of thoughts/actions/algorithms. More generally, flows can go in any direction besides the forward one, which is what a more general scheme, called Graph-of-Thoughts (GoT) (Besta et al., 2023), introduces. GoT models different thought processes, such as divergent-convergent thinking (generating ideas and then converging on one final idea), or looping over a specific thought to enhance it. Note that all these processes in LLMs describe cognitive flow operations, while we mostly concentrate on the final trajectories of thoughts, which can be memorized for later retrieval. We regard cognitive processes as generally learned, to develop some intuition. Finally, this complex process of generating multiple thoughts and evaluating them on the fly is the macro process over entire thoughts; a similar process exists in the micro in LLMs, e.g. beam search and the like, in producing single words.

Moreover, additional prior knowledge can be inserted when recording events: date/place markers for later recollection24. This is especially needed for temporal relativity between events, i.e. for the after/before relation.

Footnote 24: This could be done, for example, by attaching the relevant spatial and temporal attributes to the particular event class.

Some clarification so far: objects are regarded as static, while their actions can involve time; e.g. a dog object has a "running" action which, when applied, transforms its current state into a new state in the future25. Scripts are classes that contain a sequence of causally coherent event classes, i.e. each event causes the next one26. But in order to be fully informative, a script also includes all the background of these events: entry and exit conditions, the objects involved in the events, and tracks, i.e. variations of the script, different options of event sequences. In other words, it is a flow program of "if-else"s made out of different events. An event can be either basic, i.e. made out of primitive actions (e.g. via the simple sentence _"The dog ran home"_), or composite (a scene), made out of basic events.
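A minimal Python sketch of such compositionality, in the spirit of Fig 23: basic actions composed into a recipe-like composite action with a simple if-else flow node. The actions and the state layout are illustrative assumptions, reusing the onion-buying example above.

```
# A sketch of action compositionality (Fig 23): basic actions are
# composed into a composite action with an if-else flow node.

def go_to(state, place):    return {**state, "at": place}
def pick(state, item):      return {**state, "holding": item}

def buy_onions(state):
    """A composite, recipe-like action built from basic actions."""
    state = go_to(state, "store")
    if state["at"] == "store":          # an if-else node in the flow chart
        state = pick(state, "onions")
    return go_to(state, "home")

print(buy_onions({"at": "home", "holding": None}))
# {'at': 'home', 'holding': 'onions'}
```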
A script can be part of some object; e.g., an Eat-at-Restaurant script serves as the Event-Sequence slot in a Restaurant class.

Footnote 25: This running action is attached to the dog object as a feature, for as long as the action is operating.

Additionally, since scripts have many possible tracks to follow, a script can usually be modeled via a method representing a nested if-then (tree-like) structure, to capture all the common tracks. Actually, even the default sequence of events in a script is already a method, and it belongs/applies to the script object. Hence, knowledge should not be represented merely by nodes and edges as a simple graph, but as a hyper-graph (Bretto, 2013), or as a nested graph (Angles, 2009), where entities can also be groups. So we can extend our original meaning of grouping (part-whole) not only to static information but also to dynamic information, i.e. to groups of events. This grouping introduces yet another type of hierarchy in our system, besides abstraction. Alternatively, we can consider grouping as matching some general class to the given details. Hence, there are two independent types of hierarchies that can evolve in the system. Both can produce temporary classes/instances, which do not need a special memory, just the usual one, where their usefulness decays rapidly if not reused.

Moreover, extracting an essence out of a grouping (as it is in _AKREM_) is a difficult challenge, but also an important one, for summarization tasks and for retrieval from memory. One way it could be done is by generating the meaning of the group, e.g. by formulating it as a problem-solving task, where we describe the problem/goal/trajectory that this group represents.

However, in _MOM_, these formal definitions play out differently. First, there are no conditions, but rather a simple associative triggering of related objects (either static or dynamic). Also note that beyond inner associations (attributes of an object) or the specific types of relations discussed in Section 4, there can also be associations among concepts, either basic or complex/grouped, mainly derived from past memories (hierarchies). Second, it is all based on the abstraction principle. At the lowest level there is an accurate sequence of object and action instances, which is stored as an episodic event. An upper level can abstract different parts of this and/or similar events, e.g. the same sequence of actions but with different objects participating in them. A higher level of abstraction can also abstract the specific actions/events involved in the previous episodic events. An even higher abstraction can involve only the major events shared among many episodic events, perhaps in the form of a group, and not necessarily with a specified order.

Note that this generalizes recognition in the system to be not only spatial (of static objects) but also temporal (of events); hence, for example, an action or a set/sequence of actions can trigger some event object/class. An event class, like any class, can have its own actions and properties; for example, an action can contain some admissible actions that follow or precede it.
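A minimal Python sketch of a script as a class with entry/exit conditions, a default event sequence, and an alternative track as an if-else flow; the class name, events, and conditions are illustrative assumptions.

```
# A sketch of a script class with entry/exit conditions and tracks.

class EatAtRestaurant:
    entry = ["customer is hungry", "restaurant is open"]
    exit = ["bill is paid"]
    default_track = ["enter", "order", "eat", "pay", "leave"]

    def run(self, fast_food=False):
        """Tracks: variations of the script as an if-else flow."""
        if fast_food:
            return ["enter", "order", "pay", "eat", "leave"]
        return self.default_track

print(EatAtRestaurant().run(fast_food=True))
```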
Also, since we can have a hierarchical structure not only for classes and groups but also for actions, the will field affects this structure too: it diffuses down the hierarchy and also learns the directionality of the inner actions. That is, there is alignment not only of the actions at the high levels, but also at the sub-levels, down to the lowest level. Hence, we end up with some type of dependence/alignment between the directions of the upper- and lower-level actions. This hierarchy is what is illustrated in Fig 3(a), where coarse models are classes that can represent a group of classes in some structure (e.g. a group). As stated there, it is easier (less mental load) to plan in such a hierarchy, since you can plan via a small number of large chunks (of sub-classes), then descend and plan again over a small number of classes, only now smaller chunks, and so on.

### Examples of MOM in Mature state

#### 6.2.1 Language as separated from knowledge

Before presenting the following examples, we should note that language itself is a set of models, and likewise the translations between languages. These models are most probably symbolic, i.e. included in the classes of our knowledge models, although they might be partially sub-symbolic (and perhaps part of the cognitive models, see 8.1 and Fig. 43). Either way, these models are utilized in every input/output language operation, such as reading or listening for input, or talking/writing for output. This means that knowledge itself is not represented in the form of language, since syntax and grammar are specialized for each language, while knowledge itself should be invariant to the language we use to express it. Language is like our interface to the outside world. Though thoughts themselves can be accompanied by language, language is still not rooted in the knowledge itself, which is far more flexible. "Talking thoughts" is simply our way of simulating interaction with the outside world, similarly to any planning or imagination activity.

In summary, language is separated from knowledge, i.e. from the actual cognitive operations, since these operations are more fundamental processes than language's specific structure. Language is merely a tool to communicate thoughts (to myself or to others). This interaction is visualized in Fig. 24, where the external processes are e.g. senso-motoric activities and the internal ones are e.g. planning, imagining, simulating, etc. The purpose of this side note, with respect to the following examples, is to emphasize that we ignore the language conversions to/from the knowledge map.

Figure 24: Language as separated from knowledge.

Another note: the knowledge represented in _MOM_'s mature state (objects, features, actions) and the wills are all learned, in the evolution phase. Hence, the knowledge in the following examples is merely a hypothetical demonstration, while in reality it could be different. Finally, naming, i.e. the inclusion of language to describe objects/actions/features, is not necessary for model learning. Learning still occurs even before a name is introduced, or if it is forgotten for some reason.

#### 6.2.2 Simple examples

The simple examples are presented in Fig. 25.

Figure 25: Simple examples.

In the Fig. 25 examples, the cartoon of a head emphasizes that attention is split: one part is perceiving incoming information, while the other part is trying to comprehend the total information accumulated so far (by applying inference simultaneously with perceiving). Usually this part of attention is in the background (sub-conscious). In the Fig. 25(a) example, the input first triggers the fish object; then, guided by a "what" question, the relevant feature(s) are extracted. Next, the system is ready to use the language models to phrase the goal state in a correct grammatical form. This step is skipped, as mentioned previously.
In the example of Fig. 25(b), the dog class is triggered, and then an instance of it, assumed to be the last discussed dog, is accessed. Again, we use will to extract the relevant feature. This time the "where" question directs us to a specific feature: Place, which is a class by itself, with the feature "under the bed".

Figure 25: Simple examples.

Next, we demonstrate high-order thinking, which is discussed in 5.3. Fig. 26 is an example of this case. To simplify the image we present only the final state of the given message. In the example we show that high-order thinking can be expressed as a hierarchy via grouping. Actually, the group can also be expressed at the same level. Also, this hierarchy is similar to part-of-speech (POS) parsing.

Figure 26: High-order thinking via grouping.

Next, an example of a problem-solving task is presented in Fig. 27. This example is a math problem, described as follows: _"Math problem: given that 2 helicopter toys and 3 car toys cost 600, and that 3 helicopter toys and 1 car toy cost 550, how much do the helicopter toy and the car toy cost each?"_. In Fig. 27 we see several things. First, we see a group object (of the Helicopter and Car objects, colored in brown) in the problem state, and its cost feature. Next, we see the will induced for this specific problem. Next, although not shown in the figure, there is inheritance: the Helicopter and Car objects derive from the Helicopter and Car classes, respectively, and these classes derive from the Toy class. Finally, we see a short visualization of one proposed solution, displaying only the main elements in each intermediate state.

Figure 27: Math problem example.

Actually, this example is similar to analogy, i.e. it works both at the instance and at the class level. It first transforms the instances described in the problem into abstract numerical classes, then it uses actions/algorithms to solve the problem in its abstract form, and after solving it, it returns to the instance level to reply with the answer. See this example illustrated in Fig. 28.

Figure 28: Math problem solution via the hierarchy of models.

Just a small note. As we mentioned in Section 1, there are unidirectional features/actions, meaning that not every action admits an easy inverse action to retrieve the original concept. We can see in Fig. 28, for example, the abstraction and specialization processes as such. Another example is differentiation and integration. Differentiation takes an abstract function and calculates its instantaneous tangent at a specific point, which is simple to do. This is an abstract-to-special process. On the other hand, there is integration, the opposite process of differentiation. It takes an abstract function and produces an abstract function that calculates over an unspecified range, unlike the neighboring range of differentiation. This is in general hard or impossible to do.

Another example of such abstract planning is presented in Fig. 29. Here we show that not all state transitions are deterministic. Many of them are stochastic even for the agent's own actions, and many others are stochastic due to dependence on factors beside the agent, as in any game, for example chess (other agents). Stochasticity can be expressed, for example, via abstract states having un-assigned class variables, i.e. unknowns. For example, the \(S_{1}\) state in Fig. 29 contains a _Post-Office_ class which has an un-assigned feature of being either open or closed, while \(S_{2}\) contains a _package_ class which has an un-assigned feature of being either found or not. See the similar discussion about pre-assigned classes in 5.1. In order not to make this planning too stochastic, we try to consider only the most probable possibilities, and disregard planning the response to every one of the possibilities.

Figure 29: Uncertainty of actions.
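Before moving on, the instance-to-abstract-to-instance round trip of Figs. 27-28 can be made concrete with a minimal sketch (plain Python; the function name is hypothetical): the toy instances are abstracted into a 2×2 linear system, solved at the abstract level by elimination, and the numeric answer is specialized back to the instances.

```python
# Minimal sketch (hypothetical names) of the abstract solution path of Fig. 28:
# abstract the toy instances into a linear system, solve it, then specialize back.
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination."""
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Abstraction step: 2 helicopters + 3 cars = 600, 3 helicopters + 1 car = 550.
helicopter, car = solve_2x2(2, 3, 600, 3, 1, 550)

# Specialization step: attach the abstract answer back to the instances.
print(f"helicopter toy costs {helicopter:.0f}, car toy costs {car:.0f}")
# helicopter toy costs 150, car toy costs 100
```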
#### 6.2.3 Complex example

The next example is a complex one, hence it will be described at length. It is a mixed example, because it involves both the part of story comprehension, or understanding the problem (initial and goal states), and the planning part to solve the given problem. Fig. 30 is the result of the following description, revealed as an incoming stream to the robot: _"Jim speaking to his robot: I'm in a shower. Forgot to take a towel. Can you bring me one?"_. Next, we develop our discussion of the example in the following sequence: a description of the prior background knowledge and settings preceding the event, then the process of perception and inference (Fig. 30(a)), and finally the process of planning (Fig. 30(b)).

There are naturally some pre-settings for this request: there is a serving robot, Jim is a human and he is its supervisor. When Jim talks to the robot, it needs to perform what Jim asks. Currently they are both in Jim's apartment. Some background common sense is assumed to have been learned by the robot, such as facing uncertainty while receiving a request. Other constraints include: ethics - do not break stuff (e.g. a wall) to get something, nor ride over people or animals on the way; actions/states that are forbidden/required (e.g. embedded in the cost feature); and general cognitive memory and time (resource) constraints, e.g. a total number of steps or a total time constraint (or the order of it).

We can see the simultaneous inference process in the states. For example, in the initial state, the face-washing denial and the towel's feature are not given in the input stream, but inferred. Also, note that the actual representation of this example is larger; due to space limits, many details were omitted. For example, we do not show that the "face washing" activity is a wrong branch, even though it also requires a towel. Since Jim did not specify this, the robot can either ask for confirmation/clarification, or assume which of the tasks is in more demand. Body washing means Jim is nude, and retrieving the towel himself would require him to get dressed, while face washing is less of a problem. Hence, body washing is the activity in which retrieving the towel himself is more challenging, and therefore the one in which Jim needs more assistance.

As can be seen, the states presented in the figure are the act of understanding the problem, or the story so far, until the robot realizes the request embedded in the story. This request is what turns the story into an incomplete one, which requires some resolution/completion to fill in the missing steps of the story in the state space. This description is typical of most of our communication, since we regard incoming messages as some mystery to be figured out, either by inner inference or by outer inference, all for the purpose of constructing a legitimate full story. Next we describe these two phases.

Figure 30: Mixed Example: Understanding and Planning.

The process of state generation is as follows (Fig. 30(a)). It first tries to comprehend the situation: shower, towel. Next it tries to construct the goal state: by=robot, where=to Jim's hand, when=as soon as possible (since a shower is a matter of minutes).
We can also see that state generation is performed not only toward the future: in some cases it can also go into the past, and not only that, it can also proceed not in consecutive steps but in skips, such as the skip from the initial state to the provided goal state. Hence, more generally, a story can be completed via different time skips, for example forward/backward construction, or some random skips to different time lines or space-time settings.

Next is the planning process (Fig. 30(b)), or solving the problem, or the "how". It should start with the robot stopping its current activity, or putting it on hold, or giving it a few more moments if it is more important (these considerations matter as well). Next is to start the new activity of reaching the new goal. When: it should start now. Where to go: to where the towels that Jim wants are (these are constraints in the problem). How to get there: walk to where they are, take one and go to Jim. This is the high-level model. The low level is the walking procedure, with velocity set according to the time constraint, while safety is assumed during walking. Note that we described problem-solving as the "how" question. Relating to the inference-guiding basic wills, see 3.6, we can claim that a guiding will can also appear at the macro level, for example as the type of will connecting the initial state to the goal state.

The planning process starts from general actions or wills, and descends to levels with more details, in order to make it an actual feasible plan, or to reach as detailed a solution as possible; see more about it in Fig 3(a). During the descent, the actions/wills are decomposed into smaller units for that purpose. For example, see the "take a towel" action in the 2nd level from the bottom decomposed into the "open closet"→"pick a towel"→"close the closet" sequence of actions in Fig. 30(b). Additionally, as can be seen, actions and wills are interchangeable: sometimes we call something a will and sometimes it is an action.

As mentioned previously, in Fig. 12, planning is a top-down process, while understanding is the opposite. We can demonstrate this using the same Fig. 30(b), only instead of looking at it as constructed top to bottom, we imagine it as constructed bottom-up. It can be done simply by telling what the robot did retrospectively: _"Jim and his robot were home. Jim was taking a shower. The robot stopped what it was doing. It walked to a closet. It took a towel from the closet. It then went to Jim. And then it gave Jim the towel."_ In this case the process involves abstraction/grouping to aggregate the detailed actions into more general ones. For example, see the "open closet"→"pick a towel"→"close the closet" sequence of actions at the lowest level aggregated into the "take a towel" action at the level above, in Fig. 30(b). Finally we (or any listener) reconstruct the original will from the detailed actions: _"Bring a towel to Jim."_

Also note that at each state generation, whether it is planning or understanding, there are also local inference processes, which can be regarded as internal will, as opposed to the external will, which we try to reconstruct from the story. See more in Fig 20. Also important to note is the surprising convenience with which actions can be composed or decomposed, since the grouping is represented as a simple sequence of actions, while in general the grouping of actions could have any form. This is not some lucky accident; it appears to be a basic principle in programming (see the beginning of section 8.4.1 and its footnote).
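As an illustration of this top-down descent, here is a minimal hierarchical-decomposition sketch (hypothetical task names; this is not the paper's actual planning mechanism): abstract actions are expanded via a method table until only primitive actions remain, reproducing the "take a towel" expansion of Fig. 30(b).

```python
# Minimal sketch (hypothetical names): top-down plan decomposition, where each
# abstract action/will expands into a sequence of finer actions until only
# primitives remain, as in the "take a towel" branch of Fig. 30(b).
METHODS = {
    "bring towel to Jim": ["go to closet", "take a towel", "go to Jim", "give towel"],
    "take a towel": ["open closet", "pick a towel", "close the closet"],
}

def decompose(action):
    """Recursively expand an abstract action into primitive actions."""
    if action not in METHODS:       # primitive: no further decomposition
        return [action]
    plan = []
    for sub_action in METHODS[action]:
        plan.extend(decompose(sub_action))
    return plan

print(decompose("bring towel to Jim"))
# ['go to closet', 'open closet', 'pick a towel', 'close the closet',
#  'go to Jim', 'give towel']
```

Reading the same table bottom-up (aggregating the primitive sequence back into "take a towel") gives the understanding direction described above.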
Finally, this and other examples demonstrate that any successful solution is memorized for future use, in its full hierarchy, since any actions and sub-actions can be used as tools in future problems.

#### 6.2.4 More special examples

Here we discuss some interesting special cases. First we show how actions, similar to functions, can operate on several arguments, or in our case on several objects. See the example of developing equations in Fig. 31.

Figure 31: Developing equations, an example of actions applied to more than one object.

In the next example, shown in Fig. 32, we can see various effects in a story. The story is: _"David was running, while Jane had been doing the dishes for over an hour. On his route he met Jim and greeted him. In the meantime, Jane finished washing the dishes and put a turkey in the oven. ... David came back home from work to Jane."_ First, we see a process (changing an object's state after a while), in contrast to an instant type of action, in red color: the dish washing, which disappears in the presented next state (also in red). Next, we see how objects are added over the states (in green) or removed due to the passage of time or forgetfulness (in blue). And again we can see actions that are applied to several objects (in purple).

The next example demonstrates the principles described in 5.4.1. Specifically, in the following are some examples of the three forms of single/multiple admissible actions; see Fig. 33. Fig. 33(a) is about an inanimate object. Fig. 33(b) is about sensory perception transformed into our knowledge map. Fig. 33(c) is about sensory perception, specifically from different modalities. Fig. 33(d) illustrates a human's different options to act or react. Fig. 33(e) presents a psychological experiment that tried to teach a dog to associate hearing the ring of a bell with food being served; hence it is a reinforcement type of learning. We see that before the experiment the dog has no special reaction to a ring (stochastic reaction), but after enough training it has developed a dominant response in the form of drooling (deterministic reaction), which can be directly translated to its will of expecting food to be delivered.

Figure 33: Examples of single/multiple admissible actions.

Finally, we see here different types of interactions (sensory inputs and physical acts) with the environment. And we see that most of the interactions in the figure can be aggregated into a group object, e.g. an event object, with the appropriate actions/features. For example, a "hit ball" object with the action/feature of being in the air, an "(audio) horse or (visual) horse" object with the feature of the "horse" class in the knowledge map, an "a ball thrown at me" object (and its admissible actions), and a "ring-hearing dog" object with the drooling action associated to it.
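Before the next example, the "action on several objects" idea of Fig. 31 can be caricatured in code (all names hypothetical): one action, like a function with several arguments, mutates both sides of an equation object at once.

```python
# Minimal sketch (hypothetical names): an action, like a function, operating on
# several objects at once - here both sides of an equation, as in Fig. 31.
class Expr:
    def __init__(self, coeff, const):   # represents coeff*x + const
        self.coeff, self.const = coeff, const
    def __repr__(self):
        return f"{self.coeff}x + {self.const}"

def subtract_const(lhs, rhs, value):
    """One action, two object arguments: subtract `value` from both sides."""
    lhs.const -= value
    rhs.const -= value

lhs, rhs = Expr(2, 5), Expr(0, 11)      # 2x + 5 = 11
subtract_const(lhs, rhs, 5)             # develop the equation: 2x = 6
print(lhs, "=", rhs)                    # 2x + 0 = 0x + 6
```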
#### 6.2.5 Another complex example

In this example we will see how multiple wills can co-exist in a story. In Fig. 34, for simplicity, actions are denoted as arrows; they usually perform changes to different attributes of several objects (as mentioned in previous examples). Here high admissibility is expressed via a salient color (while directionality, as usual, relates to will). Fig. 34 is a _MOM_ representation of an evolving state, representing a story. A state consists of several objects (joining/leaving as the story evolves), including their attributes and the actions that define the state. **The story**: _"David entered his room. He searched for something on the floor. Then he searched in the basket. Then he searched under his bed, and was thrilled to find the ball there. In the meanwhile, his mother entered the home. She put her keys on the desk. Then she removed her shoes and put her sunglasses on the desk. Then she searched for David, found him, and they sat to eat lunch together."_

Figure 34: A part of a story.

Note how the hierarchy is mostly about abstraction, and only slightly about will. For example, in the story we could switch the focus in the middle of a non-completed will, e.g. "while searching for something, mom entered the house". This example also demonstrates different levels of will. From the first sentence we understand the lowest form of will - he "wants" to move in the room. Later a higher will is introduced: searching for something. Only at the end a higher will is presented: he wanted his ball. There could still be an even higher will serving these sub-wills, e.g. he wanted a ball to play with his best friend, whom he enjoys so much. So at first it can be a sequence of actions connected by time, and as the story evolves, will is revealed, and these actions are mapped into a clear field of wills, at different levels, similar to a problem-solving scenario: problem state and goal state. The same story is represented via _AKREM_, without the evolution of details; see the video link in [Komarovsky, 2022b]. Note that, if repeated often, the sequence of David or anybody searching in some place can be grouped/abstracted as an event class, also referred to as a _Trans-Frame_ [Minsky, 1988].

Additionally, it is seen that this is a hyper-graph, since the actions, represented as edges in it, connect multiple sources to multiple targets. That is, actions, though connected to all objects by relevancy, can actually act mutually on several objects at once. This is also supported in neuroscience, e.g. in a modified Hopfield network [Burns and Fukai, 2023], where set-wise connections instead of pairwise connections between neurons are proposed. Altogether, the edges and the nodes (which, as mentioned previously, can be grouped to represent a new group node) imply a representation of a hyper-graph.
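A minimal data-structure sketch of this hyper-graph view (hypothetical names): each action is a hyper-edge connecting a set of source objects to a set of target objects, rather than a single source/target pair.

```python
# Minimal sketch (hypothetical names): actions as hyper-edges that connect sets
# of objects, rather than single node pairs as in a plain graph.
from dataclasses import dataclass, field

@dataclass
class HyperEdge:
    action: str
    sources: frozenset  # objects acting / affected as inputs
    targets: frozenset  # objects affected as outputs

@dataclass
class HyperGraph:
    nodes: set = field(default_factory=set)
    edges: list = field(default_factory=list)

    def add_action(self, action, sources, targets):
        self.nodes |= set(sources) | set(targets)
        self.edges.append(HyperEdge(action, frozenset(sources), frozenset(targets)))

g = HyperGraph()
# One action acting mutually on several objects at once:
g.add_action("put_on", sources={"mother", "keys", "sunglasses"}, targets={"desk"})
g.add_action("eat_lunch", sources={"David", "mother"}, targets={"lunch"})
print(len(g.nodes), "objects,", len(g.edges), "hyper-edges")
```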
Note that _AKREM_'s examples, and the presented example, imply a dynamic knowledge representation, i.e. watching the full knowledge state as it evolves in time, while a story is perceived. It is like frames in a movie, stacked one over the other. See Fig. 35, which separates the instantaneous state from its evolution (a state-by-state static representation), to emphasize the will's main effect on how the state evolves. Nevertheless, it spurs the discussion of which of the above representations is more appropriate. On the one hand, _MOM_'s state-evolution representation is supposedly virtual, and should not be grasped/accessed by the agent. On the other hand, this representation is present in the agent's mind while planning, imagining, simulating, etc. In memory, we suppose that there are two separate but interconnected memories: the usual LTM, and some items uploaded from it into the WM. This may explain the capacity limit in WM: if plans are sequences of changed state copies, then there is a limit to how much can be held in terms of states and sequences of them. For example, imagine the size of the memory needed for a sequence of states in an evolving story, which includes multiple objects and agents. Thus, to make the memory more efficient, we can use the dynamic representation, and at each "frame" store only the changes from the previous state. It also supports our efficient representation of memories uploaded into WM, as being generated directly from triggered objects in LTM. Meaning, we maintain a layered representation which is founded upon the LTM knowledge map. Likewise, grouping, as discussed in _AKREM_, might be just for the purpose of reducing the number of elements attended to at each time instance.

Figure 35: Dynamic knowledge representation of the story in Fig. 32.

## 7 _MOM_ evolution

This section deals with how _MOM_ evolves and develops through time, from the "birth" of a human/agent till it reaches its adult phase, or mature intelligence state - a state we consider stable. We do not consider this state the goal state of cognition, since obviously the agent/human can still evolve and learn new things. However, we sometimes refer to it as the goal intelligence state, merely to emphasize that this is the desired outcome of a thinking machine or an agent: a stage of its evolution at which it is fully capable of assisting and driving progress in different scientific faculties. It can do so by solving problems or designing new and creative artifacts. Finally, this section concentrates mainly on the basic principle of consolidation in the cognitive evolution, and then it elaborates on the learning approaches that use this consolidation tool in their process.

### 7.1 Consolidation

So far, the _MOM_ cognitive model has been presented in its mature state. Now, the discussion is about how to reach it. This is a process in time, which is mainly based on consolidation. A more general view of cognitive evolution in _AKREM_ is discussed in Appendix A.7.

Consolidation is about transforming from chaos to some stable order of patterns, or from a continuous realm to a discrete one, as in quantum mechanics. An infinite amount of detail is hard to handle (i.e. to understand and then to control), therefore consolidation into fewer patterns is required. Consolidation also allows for fuzzy logic and categories [Wyler, 1995]. It also allows for effective and minimal communication, with short codes, where there are few shared symbols. Consolidation can be expressed in many forms, such as:

* in the conversion of sub-symbolic to symbolic, for any type of _element_
* in cognitive evolution: from flexible (at infancy) to less flexible (at adulthood)
* in problem solving: going back and forth between divergent thinking (opening up to multiple solutions) and convergent thinking (consolidating into one or fewer potential solutions)
* in modeling, at program search: from a huge hypothesis space of possible programs to a small set of hypotheses (as in _DL_). This operates both in the micro (within models) and in the macro (between models)
* in testing multiple versions of an unknown model, and finally converging into fewer/one version(s) that are/is consistent with the evidence
* and in grouping/abstraction, where some separate elements become connected

Note that causality is a special case of modeling, a spatio-temporal one, where re-occurrence is consolidated. More generally, re-occurrence helps in learning both static objects and dynamic basic/composite events (equivalent to scenarios/scripts in _OOP_). See more about this in Section 6.1.
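A toy sketch of that last point, consolidation of re-occurrence into a composite event (all names and the window length are hypothetical): a sub-sequence of events that re-occurs across episodes is promoted to a single script.

```python
# Toy sketch (hypothetical names): consolidating a re-occurring sub-sequence of
# events into a single composite event (a script).
from collections import Counter

episodes = [
    ["enter", "order", "eat", "pay", "leave"],
    ["enter", "order", "eat", "pay", "chat", "leave"],
    ["enter", "order", "eat", "pay", "leave"],
]

# Count every length-4 window across episodes.
windows = Counter(
    tuple(ep[i:i + 4]) for ep in episodes for i in range(len(ep) - 3)
)
script, count = windows.most_common(1)[0]
if count >= len(episodes):  # re-occurs in (at least) every episode
    print("consolidated script:", script)
# consolidated script: ('enter', 'order', 'eat', 'pay')
```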
Additionally, _MOM_ enables multiple parallel versions of the same thing, since any specific topic or subject can have multiple theories/models, sometimes in conflict. Hence, as in the realm of quantum superposition, multiple-version combinations can be tried out, and consolidation can help in collapsing them into fewer versions. Those versions should make the most sense, i.e. be consistent on different occasions or support the majority of the evidence. Thus, it may just be that at infancy, a highly uncertain period, many versions are created, and with time only the most consistent ones survive (consolidate).27

Footnote 27: Note that this has nothing to do with having multiple versions representing different perspectives or opinions over a specific controversial model/topic. In that case all versions are necessary, and are not reduced with time. Similarly, there can be multiple versions of a concept (with the same name), having multiple interpretations depending on context, due to the fact that meaning is derived from will. And since there are different wills for the same objects, different meanings/interpretations emerge.

Interestingly, consolidation is about the prioritization of models, i.e., it is about seniority. In other words, the more "proven" models are those that "call the shots", e.g. new untested models are required to be consistent with these old models. This prioritization is what allows for symmetry-breaking, i.e., the ability to start from totally symmetric/equal model proposals and converge slowly into more confident and stable models.

Moreover, the multi-version principle also aids and fits the will field. As we discussed previously, it is a dynamic resetting of the knowledge representation to fit the current will. In other words, the agent learns how, according to a specific will, it should produce the relevant objects and actions to accomplish this will. It is similar to the idea of problem representation, where a problem is half-solved if we succeed in representing it in its most natural and efficient way. This fits perfectly with having multiple versions of "world representations", where the will acts as a switch to select among them. This is yet another multi-versionality, hidden in an additional degree of freedom: the will. Similarly, settings or context, see 5.3, can determine the representation of knowledge. A similar idea is advocated in (Schaffner et al., 2023), where perception is preferred to have a goal-oriented representation over "seen as it is" modeling. This is also embraced by the attention mechanism, where the sensory information is always overwhelming, hence must be filtered and handled efficiently.

Lastly, two operations help in producing consolidation. On the one hand, to deal with a stochastic environment and ambiguous signals, **repetition** provides memory prioritized by relevancy, hence the need for associative recall, i.e. cues. Repetition is never exactly over the same thing, but rather over many different examples of a thing. Hence, it is useful also for generalization or abstraction out of diverse and noisy/blurred/corrupted/partial examples. Therefore, the strict/rigid approach (e.g. symbolic or logic) is not suitable for learning in a real-world or world-simulated environment, but only in an idealized, well-constructed one. Repetition is also needed in the guided tutoring of an _AGI_ agent. Conversely, **sparsification** is about reducing irrelevant signals. It also supports the Occam's Razor principle, i.e. searching for the simplest algorithm among all possibilities. In summary, these two operations also act as dualities.
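A toy sketch of this collapse of parallel versions (the candidate models and the scoring rule here are hypothetical): several versions of the same model are kept, each is scored for consistency against accumulating evidence, and the inconsistent ones are pruned.

```python
# Toy sketch (hypothetical names): parallel model versions consolidated by
# pruning those least consistent with accumulating evidence.
candidates = {
    "v1": lambda x: x * 2,        # competing versions of the same "model"
    "v2": lambda x: x + 2,
    "v3": lambda x: x * x,
}
evidence = [(1, 2), (2, 4), (3, 6), (4, 8)]  # observed (input, output) pairs

scores = {
    name: sum(model(x) == y for x, y in evidence)
    for name, model in candidates.items()
}
# Keep only versions consistent with the majority of the evidence.
threshold = len(evidence) // 2 + 1
survivors = {name for name, s in scores.items() if s >= threshold}
print(scores, "->", survivors)   # {'v1': 4, 'v2': 1, 'v3': 1} -> {'v1'}
```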
#### 7.1.1 Reusability

An additional form of consolidation is reusability (Chollet, 2019), since the more learning progresses, the fewer new models are proposed in favor of using existing ones. Hence, reusability is expressed via exploration (mostly at early stages) versus exploitation (in stable or mature stages), as in _RL_. In the beginning, many possible codes are generated for actions/models, but as time goes by, the process becomes less exploratory and more exploitative, i.e., there is more emphasis on retrieving known codes, while testing fewer new codes in parallel. A similar effect was seen in the learning abilities of children versus adults (Frank et al., 2022): children can gain new memories easily, while adults exhibit interference with old memories in the process of gaining new ones, hence the inhibition process is stronger in adults.

In addition, reusability aligns perfectly with abstraction/grouping, in a constrained environment with limited resources. They are both needed to hold control over as much as possible with minimum effort (Zipf, 2016), i.e. without generating many models of each thing. In practice, reusability is about using fewer of the initially available tools as the learning evolves. Meaning, while regular _DL_ tools (if, sum, activation function) or the primitive tools, see Section 5.1, can be used for program search of basic action methods or relation methods, the newer methods apply reusability. In such methods, fewer primitive tools are used while the current methods are used more, thus encouraging more connectivity in the network: the more the existing methods are used, the more associations are connected to them, thus avoiding the isolation of new methods produced by inner program search. It also encourages logical connectivity in the model, since _DL_'s non-interpretable connectivity is replaced with more logical functions, as in algorithms.

Moreover, Functional Programming can be applied to assist reusability. On the one hand, the general/outer structure is _OOP_, i.e. _elements_ are grouped in an _OOP_ fashion. On the other hand, methods are kept in a pure, operational, immutable form (Chen, 2019; Van Roy et al., 2009). Meaning, having small and simple methods, which maximally reuse other functions, and without inner variables, due to the objects-memory-only assumption (see Section 5.3). That is, we strive for methods that are composed of other methods, as much as possible. This is compositionality/grouping applied to actions (similar to the hierarchy of visual features in CNNs). See a summary of reusability in Fig. 36.

Figure 36: Actions, objects, compositionality, and reusability.

A final remark for this chapter: we encourage curriculum incremental learning over batch learning or even meta-learning, since our belief is that learning is gradual, in stages. Meaning, we cannot take a shortcut to learning by squeezing in everything for an agent to learn at once. It must be a gradual process, most probably with failures and regressions. Additionally, we advocate slower learning scales (days, months), with gradual improvements, in contrast to fast learning (hours, days) with a final model as in _DL_. Similarly, we are for a larger and more dominant stage of exploration versus a smaller stage of exploitation. This is because we humans use unguided or self-learning, as supported by many important AI researchers; similarly we should allow the AGI agent a long period of experimentation and exploration before it converges into a solid/stiff world/knowledge model. See more motivation in Appendix A.7.2.
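Returning to the method-reuse point of 7.1.1, here is a small sketch (hypothetical primitives and names): new methods are defined preferentially as compositions of already-learned methods rather than of raw primitives, so learned code accumulates connectivity over time.

```python
# Small sketch (hypothetical names): composing new methods out of existing ones
# instead of raw primitives, i.e. reusability in program search.
PRIMITIVES = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2}
LEARNED = {}

def compose(name, *steps):
    """Define a new method as a pipeline of already-known methods."""
    fns = [LEARNED.get(s) or PRIMITIVES[s] for s in steps]
    def method(x):
        for fn in fns:
            x = fn(x)
        return x
    LEARNED[name] = method
    return method

compose("inc2", "inc", "inc")          # built from primitives
compose("quad", "dbl", "dbl")          # built from primitives
compose("quad_plus2", "quad", "inc2")  # reuses learned methods, not primitives
print(LEARNED["quad_plus2"](3))        # (3*2*2) + 1 + 1 = 14
```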
Finally, _DL_ is very problematic with regard to data. Because of the garbage-in garbage-out phenomenon, the data scientist must carefully, tediously and exhaustively clean the data from outliers, duplicates, ambiguity and other factors. This both narrows the task and does not let the model learn exceptional cases, and it reduces the agent's autonomy, since the scientist is the one preparing the data for it. Instead, it should be able to independently explore the environment and to cope with the data it perceives, along with the ability to choose/control the data it attends to or filters. This should also be part of the AGI goal of organizing its own gathered data in a consistent, comprehensive and usable way. It also serves effective learning, e.g. knowing when to disregard information taught to you that is far beyond your understanding, for example 12th-grade material while you are at a 2nd-grade level.

### 7.2 Consolidation and language

If we think over the idea in Fig. 41, we can arrive at a broader theory about language as a unique tool of humans. Think about all the animals, or even pre-historic human races, that use very simple and small sets of symbols to communicate. We can imagine this as the perception module in Fig. 41, while the modeling module is very primitive and the top-down processing serves merely the purposes of avoiding dangers or gathering food. This makes the models at the top very simple and almost non-symbolic, i.e. like a "jungle" of neurons. Such humans or animals have a very limited symbolic capacity both to communicate and to think, or in other words to express themselves. It means that attaching more words, in our opinion, makes this "jungle" of neurons more and more ordered into clusters of concepts, via consolidation.

### 7.3 Learning the modeling

Here, a model learning mechanism is proposed, where two contrary but complementary learning approaches in AI are combined [14]: empirical, i.e. from examples (induction), and expertise (rule-based). This is a learnable symbolic manipulation, and can also be referred to as a hybrid approach or Neuro-Symbolic, see [13]. Empirical learning is bottom-up (from examples to rules), e.g. via observation or passive interaction. Rule-based learning is straight from/in the top, via rules in abstract language, e.g. via conversation or observation. It can then descend to examples of these rules (deduction). Examples applying these approaches can be seen in Appendix A.8.1. Similar ideas are presented in neuroscience, as statistical learning and rule-based learning [13].

These approaches can explain the development process of an infant. We can assume that at infancy, the infant starts with observation (visual), then fuses visual objects with simultaneous audio stimuli, also via observation. This is the bottom-up learning process. Physical interaction could be exercised too, but we will assume it is insignificant. After infancy, he can use language to update and refine his preliminary and raw models, as is done in childhood and at school. This is the top(-down) learning process. So we could say that just as there is a gradual involvement of the senses, so is there a gradual involvement of interactions. The idea of how models are developed is illustrated in Figure 37. This explains the first step of an infant's learning, i.e. observing or imitating, since he is yet to know the proper response to his perception.
The first bottom-up step enables him to learn models, and also to learn the implicit and explicit intentions embedded in the observed behaviors, which can be the basis for his own intentions in the future, especially in conversation and other interactions. I.e., it may very well be that the different types of will or intention are learned first.

Figure 37: Bottom-up and top-down methods to learn models.

For example, in [102] they talk about _"the ultimate goal of this language is to align AI and our values"_, and _"the essence of the alignment problem is the disconnect between intention (the purpose that we desire) and results (the purpose that we put into the machine)"_. I.e., we have no interpretability in current _DL_, since it produces results without any intention behind them. Hence there is the need to supply or teach the AI with intentions, and not just any intentions, but those understandable to humans (hence the "alignment").

The interactions we have can be passive (observing/mimicking) or active (e.g. RL), see Fig. 38. But this is not only for modeling external processes; more generally it could model inner ones (meta-knowledge), such as introspection, or how to update models or create new ones, etc. For example, there are general problem-solving approaches, such as forward/backward chaining; however, these are also considered to be models, and perhaps there should be different models (just as there are different heuristics) for different subjects. These models may be external like any other learned model, and can be recognized or associated when necessary. These interactions, inner or outer, represent different models of the world.

Next, we can wonder why will needs to be part of these models. Most classic AI studies assume that cognition is mainly about logic, which results in models represented merely by logic. But in our opinion, all the interactions presented above are not represented merely by logic; they also include a very fundamental and crucial component - will. We assume that will is in every interaction an agent has with the world/environment. Either it poses its own will (animals and humans) or it has some function/role in someone else's will, e.g. an inanimate object has will embedded in its meaning or interpretation for the human (or AI agent), i.e. in his perspective or perception of the world.

Figure 38: Humans create models from interaction; taken from [102].

For example, as seen in Fig. 39, when we encounter an object, we try to evaluate its function or its potential use for some purpose. When encountering an animal, we try to figure out the animal's inner state so that we can anticipate and understand its behavior. Similarly, when encountering humans (a single one or a group), we try to extract their will from their decisions, actions or behavior, since we obviously assume that these outputs are derived from their inner will. Finally, we externalize will to other realms, such as philosophical queries, where we wonder about the purpose of the world, our purpose in it, and so on. From these examples and the discussion above, we can conclude that will is rooted deeply within us; perhaps it is our most basic prior knowledge. This brings us to the belief that a full model of any entity consists both of knowledge about it and of some will associated with it. Or as a formula:

\[\text{Full model of anything = Knowledge + will.} \tag{3}\]

This assumption can also demonstrate what a partial model is.
One good example of a partial model can be seen in teacher-student interaction, for example in math classes, where the student does not comprehend the material he learns deeply, though he can certainly absorb a shallow view of it. Either it is simple memorization, or a technical understanding of the material, i.e. not understanding the meaning of things but only their utilization. In other words, the student can end up just knowing how a method or some formula can be used, without a deep and full understanding of the concepts, such as the determinant in algebra or Green's theorem in calculus.28 These are examples of a partial model.

Footnote 28: These examples of misunderstandings in learning were the drive for the SciSoft idea: to make one detailed system of all theories and models, with all the necessary information presented to the student. More about it is discussed in A.3.3.

Back to the bottom-up, top-down learning approaches. These top-down and bottom-up approaches might contain models of concepts that do not belong to both of them, but only to one of them. For example, concepts that are hard to define, like love, God, beauty, and tacit/unconscious knowledge like walking and breathing - all can be modeled simply by examples. Similarly for the sub-symbolic features, like audio and visual inputs: they do not have a logical/linguistic/symbolic meaning, hence they should be modeled by examples, as is done in _DL_ nowadays.29 Hence, these non-symbolic concepts can be learned via the usual non-interpretable _DL_. On the other hand, abstract concepts, like those in math and the sciences, which appear less in physical reality, can be learned solely in the top levels of LTM.

Figure 39: Will is significant in modeling.

Footnote 29: Also, as said by John Von Neumann: _"In mathematics you don't understand things. You just get used to them"_; it may imply that children and adults perceive mathematics also by examples, without real comprehension of the concepts, i.e. they perceive only their utilization and understand their consistency.

Moreover, in this hybrid approach, _DL_ is used twice: as an extension of _DL_ itself, and as the base level on top of which symbolic processing develops. On the one hand, _DL_ is extended from its too-constrained program search to something much more flexible, if more operations are added as building blocks, see 5.1. Hence, symbolism is learned and adaptive just like in _DL_, differently from most expert/rule-based AI. On the other hand, different input sensors are fused (e.g. visual stimuli with textual ones) to represent specific symbols/concepts, i.e., the uninterpreted features in _DL_ become symbolic tokens (Fig. 40).

This utilization of _DL_ is hybrid because, on the one hand, classic AI is about enforcing our exact view of how the _AGI_ system should be (symbolic knowledge representation), while on the contrary, _DL_ is about letting the _AGI_ system find its own way with minimum bias from us. It is also the tension between heritage versus environmental influence. The same holds here: what we do not yet know how to put implicitly as an inductive bias, we put explicitly in the form of a direct program ("heritage"). More about the reasons for advocating a hybrid approach can be found in Appendix A.1.

Figure 40: Proposed cognitive model basic diagram.

Additionally, the topmost level in Fig. 40 is actually temporal, and is used for creativity and problem-solving.
At this level, temporal new abstractions are created by stripping off attributes/actions/relations, thus connecting distant or different abstractions to perform, for example, analogy or transfer learning between different domains [Gentner, 1983]. For example, the abstraction is via the number of edges in the polygon classes (Fig. 40). Or, by abstracting the branching property of trees, we could recognize "treeness" in a picture of a human lung. More generally, differently from structural similarity, i.e. similarity in the relations, analogy could be based upon pragmatic similarity, such as in the goals/purposes. Note that abstraction can be applied to any element of knowledge, beyond objects, e.g. actions, and even to complex objects/actions, i.e. any group of classes. An example of using analogy to transfer one problem to another problem from a different domain, abstract its solution, then specialize it for the target problem and eventually apply it, can be seen in [Bassok, 2003].

As can be seen, it is about ascending models and descending them. It is actually about searching for a solution in the immediate neighborhood, through the available associations and admissible actions. Then, if we get stuck and do not find a solution, we can rise to a more abstract level, to create some temporal (or existing) class(es), from which we can search for similar problems/solutions much farther from our current location.

Additionally, we see in the main AI fields that there is a single continuous aggregated hierarchy of knowledge, either in vision, as in CNNs, or in the classical processing levels in NLP. Although this is still a viable representation, we propose another possibility: to separate this hierarchy into two parts. This can be inspired by the diagram in Fig. 40. The first part is when we communicate with the world, an external type of processing, while the second part is when we process things internally. It also matches our philosophy on the mind-body approach, where the external processes, like perception (input) or actuation (output), are merely the interfaces to communicate with the world which is not us, the agent itself, whereas the internal part represents our self, our interpretations, and so on. Thus, it is hypothesized that there could be two independent processes: top-down and bottom-up.30

Footnote 30: This also reflects the fact that we have almost separate approaches in AI, symbolic and connectionism, that still can barely be fused. Similarly to the two major theories in physics: general relativity and quantum mechanics.

As in physics, where we could have a top-down large-to-small separate model and a bottom-up one, we can propose similarly here. Actually, everything has a prior. Bottom-up processing includes priors about external phenomena. Top-down modeling also has priors, even many more of them, such as will, causality, physical assumptions, and more. Moreover, in the well-known supervised learning setting, symbols are given as the end values of a NN, in order to learn all the intermediate features between them and the subsymbolic input. The above idea can replace these symbols as true labels with the top-down model prior knowledge. One way to achieve this is via the known pre-training in DL. That is, the subsymbolic processing can be done in an unsupervised learning fashion, e.g. via a Deep Belief Network (DBN), while the top-down processing performs the small tuning of the subsymbolic output to the appropriate classes (like the small supervised-learning training phase). This is illustrated in Fig. 41.
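To ground the attribute-stripping abstraction described above (the "treeness" of a lung), here is a toy sketch (hypothetical feature names): domain-specific attributes are dropped until two concepts from different domains share a description, which is the hook for analogy.

```python
# Toy sketch (hypothetical feature names): analogy by stripping domain-specific
# attributes until concepts from different domains coincide, e.g. "treeness".
tree = {"branching": True, "leaves": True, "domain": "botany"}
lung = {"branching": True, "alveoli": True, "domain": "anatomy"}

def abstract(concept, keep):
    """Temporal abstraction: keep only the selected attributes."""
    return {k: v for k, v in concept.items() if k in keep}

shared = {"branching"}  # attributes surviving the stripping
if abstract(tree, shared) == abstract(lung, shared):
    print("analogy found: a lung is tree-like (both branch)")
```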
In Fig. 41 we see the NLP hierarchy, as one legitimate model, which allegedly captures both the subsymbolic and the symbolic processes in one hierarchy. We propose that semantics, the meaning of things perceived from outside and acted upon, does not propagate further, but rather stops at some level, letting top-down processes dictate the meanings of the things perceived from below. It means that we consider the automatic hierarchical feature generation by a CNN to be merely a tool for semantic recognition: a recognition of anything we consider to be meaningful to us, e.g. objects, their semantic features (not the sub-symbolic ones) and actions. This learning in CNNs produces visual features through the tension from (input, output) = (subsymbolic data, symbolic data) at the boundaries of the CNN, when trained via supervised learning. However, the features can also be produced via unsupervised learning. And even though we propose that the higher semantic hierarchy is driven by aggregation, it could be driven by something else, while still being separated from the subsymbolic processes. Another option could be, for example, making the whole hierarchy learnable.

Figure 41: MOM hierarchy of external processing and internal processing.

Additionally, learning models is not done in isolation. The learned models should be coordinated or consistent with each other, to avoid failures and confusion. Consistency-checking is about learning model(s) in consistency with other existing models. It goes beyond any particular type of transfer learning (Schunk, 2012):

* Zero, positive and negative transfer measure how learning a new task influences previously acquired tasks. Though positive transfer is always preferred, negative transfer is sometimes a consequence of previous incomplete or erroneous knowledge that should be revised. Still, if negative learning is not regulated it may even result in forgetfulness, so the discussed consistency should be guaranteed within some range.
* Forward and backward transfer (Wang et al., 2019) is about transfer from old tasks to new ones (forward) or from new ones to old ones (backward).
* Near and far transfer is about the distance of the new task from previous ones.

All these different transfer processes require active online cognitive effort (also referred to as "high road transfer") or offline reorganization of knowledge (also referred to as "low road transfer"), to reveal new connections and analogies.

Finally, the operational modeling idea, the outcome of the learning approaches described so far, is not connectionistic like _DL_, but is built like classic AI, i.e. logic-based. It is our goal intelligence. But in order to reach it, we propose _DL_ alongside the consolidation described earlier. In neuroscience we can see evidence that higher cognitive animals use this kind of system, e.g. in [Piza et al., 2023], where marmosets (a type of monkey) use "gaze-based" spatial navigation, contrary to the "place-based" navigation observed in rats. That is, to navigate they use more cognitive skills, like understanding the functioning and the model of the objects they recognize, instead of mere physical orientation in the surroundings. Similarly, while we recognize objects by techniques similar to _DL_, i.e. CNNs, our models are still logic-based, e.g. recognizing digits (by the V1→V2→V3→V4 brain perception) versus understanding their generation rules, e.g. 9 = counter-clockwise small circle + clockwise half an arc.
### 7.4 Evolution prior

The main issue with learning is finding the right guiding prior for the top-down processing. It is the core intelligence problem, in which the top-down guiding prior determines how to deal with the bottom-up perceived sensory data. Up until now we have a lot of priors to embed in our AGI system: priors like making sense, full modeling which also includes the will behind the actions, basic prediction, spatio-temporal grouping and abstraction, and more. However, it is difficult to imagine how to apply all these priors at once. We must go back to psychological theories of development in stages, like Jean Piaget's and Erik Erikson's theories. In our opinion, the development revolves around expressing or perceiving different types of will. In brief, it starts from "what" question inquiries, to "how" questions, to "why" questions, each of those with regard to our immediate social surroundings.

More precisely, it starts from detecting single objects and entities in visual and auditory inputs. This includes predictively tracking these objects and their transformations. These are the "what" inquiries, trying to align the will "what" with the perceived data. Next is the "how" will. It is about the behavior of these objects, i.e. how they transform, how they act and react, and so on. Then we also include the agent itself, not only external observation, i.e. interaction: the "how" also includes "how" I can act in my environment. This means moving, throwing, breaking, and so on. As stated in Section 3.7, these objects do not possess their own will, hence it is easier and makes more sense to start with them. Meaning, the priors evolve from simple to complex, in stages, so will is not needed in these stages. Next, after gaining enough practice with objects and their behavior, and with how the agent and other agents affect objects, themselves and others, we can turn to the "why" question. But the answers are basic: simple types of wills (originating from basic desires). This is the lowest level of social interaction the agent has. The higher "why" questions that follow concern deeper sources of wills, first in the agent itself, and hence also searched for in other agents as well.

All the above means that the prior is applied gradually or conditionally, i.e. only when the "what" is satisfied enough (how much is enough?) are we prepared for the next stage of "how", and then "why", and then we extend it further, as a function of our affect. However, it could be that these stages are not roughly separated, but all exist at the same time, only with different dominance at different times, just like brain waves.

Eventually, we come back to the main issue of learning described above. Or in other words: how can human will or human intention be materialized, for example in a software or algorithm? It is a very difficult question, since this concept is too abstract and amorphous. We have no direct connection to it, only an implicit one, through perception and our own will realization. But we have no idea what it is and how it is materialized.31

Footnote 31: This is why the top-down prior is learned, as anything in AI agents. Whereas in humans, in our opinion, it is simply some sort of soul guiding the learning, the growth, and the adaptation to the environment.

However, we can overcome this issue. For example, as proposed for _AKREM_ in Appendix A.7. In _MOM_, on the other hand, it can be realized in the following steps.
The "what" questions are simply noticing many different features of different entities and objects, and assigning them to these objects. All this done without language or logic, in an unsupervised learning fashion. Then, the "how" is attaching actions to these objects. Obviously this step requires the previous step: of having objects that these actions applied on. Finally, the "why" is handled by adding directionality to these actions. Also this requires the previous step: having actions. This directionality is a feature that connects all different actions, so that they are not merely independent entities, but entities that relative to each other. These actions are also context-dependent. See these evolutionary steps in Fig. 42, where the small blue circles represent audio semantic features, the small orange circles represent visual semantic features, yellow big circles represent objects, and the orange arrows represent actions with directionality. Note that due to redundant repetition, we omitted the "how" step, as the intermediate step between Fig. 42(a) and Fig. 42(b), which is exactly like Fig. 42(b) only without the arrows at the end of the orange lines. In summary, this overall process is cyclic, i.e. whenever new features learned, new associations to objects can occur. New actions can always be added, and directions can be updated. Also, note that basic objects and their features are already abstraction. It means that even the most basic instances we know from _OOP_ are already abstraction or generalization of different features that describe it. In other words, the most basic semantic objects are already generalizations. ## 8 Implementation directions ### Model separation Another issue that precedes learning, is how to obtain separate models at all. One way, is like the _DENN_ (Dynamic and evolving NN) idea [14], i.e. always learning the "model of everything" while refining it more and more with every new experience, such that new sub-models are produced. The idea here, is like Jeff's hierarchy [13], where the top model is always reached, at perception from senses, and it decides which lower model will handle the situation. E.g. model for problem-solving, for learning, and for story message (where it connects separate events sequentially). And it can go on further. For example, problem-solving model chooses the most appropriate model for solving a particular problem. Or, a learning model selects the most appropriate model for assimilating new knowledge. Or, conversational model being about taking turns, waiting till me/other side is finished, recognizing self models32, deciding whether the sender's message imply the updating of some models due to conflicts for example, or its simply was not interpreted correctly or something wrong with the message hence should be clarified or repeated33, etc. Or in perceiving: perceiving fictional information is treated Figure 42: Evolutionary steps, neglecting the how-step differently than factual information, and so on. Or in "act like" missions: act like some of the known models of other agents/people, or act more creatively (creative model). Footnote 33: The _same_ model is the same as the model of recent model updates. The idea from [15] can present similar duality here also. The above refinement can be regarded as top-down model evolving, while there could be also bottom-up one, where existing models can be grouped into a new or existing model, similar to the opposite operations in [15]: merging verse splitting. 
Furthermore, there is the common idea that an infant already holds about 100 billion neurons, and that what changes along development is only the connections between neurons. This reinforces our view of separation, since we could have a static NN, where learning only changes the separation into models via different connections.

In conclusion, this option was produced since problem-solving and the like are very complex models, which is why we propose to separate them from the knowledge models. But it extends further: perhaps there is a separation of model representation. Maybe some models can be represented as operational classes, but others cannot.34 These other models could be neither interpretable nor explainable by the agent, e.g. models that are placed in the background of thinking itself, thus being "hidden" or implicit. This can also be represented as the declarative type of memory (knowledge models) versus implicit memory (other models). See Fig. 43.

Footnote 34: See a similar idea in Gödel's work, where truths more generally are not algorithmic. Or in [https://medium.com/paul-austin-murphys-essays-on-philosophy/roger-penrose-on-kurt-g%C3%B6del-and-g%C3%B6delian-truth-7cc6e8f79069](https://medium.com/paul-austin-murphys-essays-on-philosophy/roger-penrose-on-kurt-g%C3%B6del-and-g%C3%B6delian-truth-7cc6e8f79069), where it is stated that there is a separation of mathematical truth versus sensing, and that we have more access to mathematical objects than to perception, of a rose for example.

One way to implement model separation is proposed in [https://matt-rickard.com/mixture-of-experts-is-gpt-4-just-eight-smaller-models](https://matt-rickard.com/mixture-of-experts-is-gpt-4-just-eight-smaller-models), where they propose switch transformers, to generate a sparse distribution among experts. Another way to implement model separation is by nested NNs (or a NN of NNs) [1, 14], see Fig. 44(a), where consolidation occurs at multiple scales. This can be encountered in many phenomena in nature, e.g. in the universe (consolidation into stars/solar-systems and galaxies), in fractals (such as snow-flakes), and in other recursive structures. It can also be seen in the transition from some initial network of unlearned models, Fig. 44(b), to Fig. 44(c), as consolidation at multiple levels, both in the micro (within models) and in the macro (between models). The modeling, or the reorganization of inner elements, occurs at many levels of models, i.e. from the basic models to the most complex ones.

Recently, a neuroscience article [14] discovered evidence in the brain that supports our system twice:

1. First, it extends the normal binary operation of all-or-none message passage between neurons, i.e. firing if the signal overcomes some threshold. They found additional logical functions, quite basic and necessary in our system: AND, OR, and even XOR.
2. Second, it admits that these complex action functions were so far considered to require a multi-layer NN to be implemented: _"These action potentials allow single neurons to solve two long-standing computational problems in neuroscience that were considered to require multi-layer neural networks."_

Moreover, as is well known [30], DNNs can implement logic gates (AND, OR) with 1 layer, and more complex gates like XOR with several layers.
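For completeness, a minimal numeric sketch of that last claim: a single threshold unit suffices for AND/OR, while XOR needs one hidden layer; the weights below are one standard hand-picked solution, not learned ones.

```python
# Minimal sketch: AND/OR as single threshold units, XOR via one hidden layer.
import numpy as np

step = lambda z: (z > 0).astype(int)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

AND = step(X @ np.array([1, 1]) - 1.5)          # single unit
OR  = step(X @ np.array([1, 1]) - 0.5)          # single unit

# XOR = OR AND NOT AND, computed with a hidden layer of two units.
hidden = step(X @ np.array([[1, 1], [1, 1]]) - np.array([0.5, 1.5]))  # [OR, AND]
XOR = step(hidden @ np.array([1, -1]) - 0.5)

print(AND, OR, XOR)   # [0 0 0 1] [0 1 1 1] [0 1 1 0]
```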
Hence, combining this with the article above [11], it strengthens our hypothesis that brain neurons are DNNs themselves.

### 8.2 Attention

After assuming a set of unlearned models, Fig. 44(b), it can further be assumed that will, or more probably consciousness, acting like a flashlight or a beacon, produces consistent attention (over time) to learn/attend each model (or several of them) individually. This concept explains why an infant is usually very focused on his toys (e.g. a ball), and tracking them is essential for this process. This effect sticks till adulthood, also in the process of using the cognitive model (i.e. after the learning stage). It is the need to be attentive to only a limited set of models (\(7\pm 2\) items in WM) [31]. See examples in Fig. 45. Fig. 45(c) demonstrates split attention concentrating on two specific models: a football and a truck toy.

Figure 43: Separating cognitive and knowledge models.

In other words, will comes from the top (or from the side of a hierarchy, if it is interested not in top-down focusing but in specific models at any level), shining like a projector, focusing on one/few models, while tracking it/them in the real world. The utilization of attention can therefore be illustrated by an example of an infant learning a "ball" model. During waking hours, an infant gathers instances of its current models, e.g. a ball, and while sleeping, he uses these instances to train his models, for the purpose of making sense. First, he tries to figure out the different models, then he tries to model them also in time; thus he is eventually able to track them, which is a validation of the correctness of his "ball" model [12]. The final test of his model is prediction, hence temporal modeling is what enables prediction, or more specifically forecasting (prediction in time) [12]. Also, attention is needed to construct a coherent picture of visual perception, see [13], which explains it by attending to each object in a given scene separately.

Figure 44: Nested DNNs for model learning.

Note that attention to a few models also implies that, just like humans, an _AGI_ agent need not understand and model everything, but only what it is focused on or interested in. Also, there is the idea of bidirectional attention, which is bottom-up (external) versus top-down (its own will), and describes the competition between having a (strong) will and being highly influenced by the outside. In _AGI_'s case, it should be mostly navigated by external guidance, if will is not engineered into it.

In addition, attention can have different "focal lengths", like the theory of vision, having small pinhole perception at lower levels (e.g. V1) and a bigger one at higher levels (e.g. V4). Meaning, the ability to sometimes see small details and sometimes see the big picture. In model attention it is the same: we can have low-level, more detailed attention on smaller models, up to high-level attention for more general or composite models. In comparison to classical object detection in computer vision, high-level classes/concepts use only the higher-level features for the classification task, but more generally there is no reason not to be attentive to low-level features whenever needed (hence the use of residual connections as in ResNet and the Transformer).
Moreover, regarding focus (or continuous attention), see [https://medium.com/publishous/lack-of-focus-heres-what-you-need-to-know-from-a-neuroscience-point-of-view](https://medium.com/publishous/lack-of-focus-heres-what-you-need-to-know-from-a-neuroscience-point-of-view), which describes a hidden process working in the background, where suddenly an idea can pop up, in what are called "aha" moments [Sadler-Smith, 2015]. This phenomenon supports our idea of splitting attention between the main consciousness and a hidden one, which works like a detective, trying to fill in missing information, deduce new conclusions, and sometimes even solve larger problems. See more about attention splitting in Appendix A.8.2. These and other examples of attention splitting show how attention is limited and flexible, and how it is distributed, from total concentration to even but low attention. It resembles the attenuation theory [Treisman, 1964], which states that while perceiving input from the senses, e.g. audio, we simply lower the volume of the background and turn it up for some desired inputs, i.e. tune the attention. This explains the ability to hear our name called, or a call for help, in a crowd while talking with someone. Additionally, there is a battle between senses influencing models (bottom-up attention) versus models influencing how senses are interpreted (top-down attention), or whether they are being distorted to fit the models better. See more in [Auguste et al., 2022] and in [Franke et al., 2022] about how our inner state influences sensory attention, i.e. toward a specific mode of perception. Similarly, in an optical-illusion study [Laeng et al., 2022], it was shown that the anticipation of darkness can affect pupil dilation. Finally, attention in our perspective is very similar to the attention in _DL_, only without regulation - meaning, without considering the ideal state of consolidating into symbolic reasoning of models and operations, and most importantly, without allowing for dynamic abstraction. However, _DL_'s attention is similar in also allowing for multiple implicit functions in a given learning NN, since it reacts differently depending on the input. In other words, the DNN can be regarded as a group of undeclared models/functions, generated by attention units, thus implicitly implementing compositionality and reusability. Moreover, _DL_ has an issue with unspecified prior knowledge, which can perhaps be resolved by attention.
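A minimal sketch of that attenuation view (ours; the channel names and numbers are made up): attention as a softmax gate that turns the "volume" of some input channels up and others down, depending on the current goal.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical sensory channels with toy feature vectors.
channels = {"speech_partner": np.array([0.9, 0.1]),
            "crowd_babble":   np.array([0.5, 0.5]),
            "own_name_cue":   np.array([0.1, 0.9])}

goal = np.array([0.2, 0.8])  # current "will", e.g. monitoring for one's name

scores = np.array([v @ goal for v in channels.values()])
gains = softmax(scores)      # background lowered, salient cue turned up

for name, g in zip(channels, gains):
    print(f"{name}: gain {g:.2f}")
```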
One of the difficulties in early _DL_ studies is deciding upon the inner connections of a NN, i.e. its structure. It could be, for example, the decision about sharing or grouping features [Schlichtkrull et al., 2018], or instead separating features, e.g. when traffic data is set apart from weather data [Koesdiwady et al., 2016], or roads from stations [Huang et al., 2014], and they are fused only later (how much later is also prior knowledge to be decided upon); or when tasks are separated into groups [Huang et al., 2014]; or when the NN structure is sparsified/pruned [Ioannou, 2018]. All this prior knowledge is difficult to recognize a priori for different tasks, hence we could use _NAS_ or attention, which deal with this dynamically. Also, there is the paradigm that the brain works on the causality of chemical and electrical processes. On the other hand, contrary to this deterministic view, there is a non-materialistic paradigm [Bentovish, 2022], which allows for free will, where the mind is not causal but instead governed by a soul. If we dig further, in [https://bigthink.com/the-well/eastern-philosophy-neuroscience-no-self/](https://bigthink.com/the-well/eastern-philosophy-neuroscience-no-self/), eastern philosophy and neuroscience research advocate the idea that there is no "self", and that it is only an illusion in our mind. It is actually similar to relativity theory, where there is no space without time. In analogy, it means that the brain is materialistic and includes the "self" as part of its models. Therefore, it may mean that the soul, separated from this physical brain, is merely a passive, non-participating observer, while thoughts appear externally from some unknown source. That is, the observing soul simply observes an external stream of thoughts and falsely attributes some of them to a "self". They argue this since the thoughts are not controlled by any "self", although they are attributed to a fictitious "self" - one of the many models we have of different objects and humans in the world. Hence, our idea of a self-model is similar: it is treating ourselves as another "thing" to model. This observing soul is one possible source of attention; similarly, the source can be awareness or consciousness. Lastly, the attention mechanism also suggests that memories not under attention are still there, accessible, ready to be addressed. Moreover, examples like attending to a movie/story while simultaneously trying to comprehend it and fill in missing information imply subconscious processing in the background (see attention beams in the inference in simple examples, Section 6.2.2).

### Implementation using Model Separation and Attention

If we use the two principles of separating models via an attention mechanism, we can try to construct a more specific implementation of learning. First, we assume the nested NN structure, as seen in Fig. 44. Hence, each neuron is some NN by itself, and can represent all relevant models of some specific concept. Second, we can estimate how many fundamental concepts there are, to evaluate how many such complex neurons we need, assuming one concept per neuron. If we look at an average person, or at a dictionary, a glossary, or even an encyclopedia, for the index of used terms, there are not so many of them. In the most exaggerated case, it is on the order of thousands. An average person holds far fewer concepts in his daily life. Even if we include the specific jargon of experts in different fields, it still does not exceed thousands of concepts.
Hence, this is remarkably small in comparison to human or artificial NNs. Third, after deciding to use nested NNs and agreeing upon the scale of concepts to be learned, we can continue to what will actually be learned. We represent each concept (object, action, or feature), i.e. a class in _OOP_, along with its instances. Next, we can assume that if the lowest level in our nested NN holds basic concepts, then the upper layers represent different aggregations of these basic concepts, e.g. different groupings or abstractions. The next level is similarly an aggregation of these aggregations.

### Multi-version model

Side note: this section is still under construction, hence it should be skipped.

It is much like the Pathways Language Model (PaLM) idea [Chowdhery et al., 2022], encouraging sparsity, multi-modality, and continual learning of tasks (unlike multi-tasking, where the tasks are pre-defined). It also resembles the multi-head attention in Transformers [Vaswani et al., 2017], where several versions of attention are learned. One suggestion to implement multi-versioning is by using a 3D NN instead of the usual 2D one, where the paths are not limited to the same 2D network, but also have the freedom to move in other directions in 3D space. One simple option of this idea is a stack of 2D NNs, say \(N\) such NNs, where the paths can go from any NN to another. This way we can implement cyclic learning, where at each interaction with the environment the current version of the NN is updated. See Fig. 46.

#### 8.4.1 Functional programming search

It is well known that the _DL_ representation of DNNs is functional, i.e. in order to allow for back-propagation (BP), or differentiation from target to source, it must be a cascade of functions35, i.e.: Footnote 35: This idea - or how an assembly programming language works, or any simple machine language, or a Turing machine - might suggest that any algorithm, even if it includes (condition/loop) flows, is a sequence of actions; hence _MOM_ proposes that any such algorithm representing a method in _OOP_ is an aggregation of other actions in a sequence.

\[\begin{split} Y&=g\circ f_{K}\circ f_{K-1}\circ\dots\circ f_{2}\circ f_{1}(X)=\\ &=g\big(a(W_{K}\cdot a(\dots a(W_{2}\cdot a(W_{1}\cdot X+B_{1})+B_{2})\dots)+B_{K})\big)\end{split} \tag{4}\]

Where \(X,Y\) are the input and output, respectively; \(W_{i}\) and \(B_{i}\) are the weight and bias matrices of layer \(i\) in the DNN, respectively; \(a\) is some activation function (e.g. sigmoid or ReLU); \(g\) is a special function for the specific task in the DNN's last layer; and finally \(f_{i}(X)=a(W_{i}\cdot X+B_{i}),\quad\forall i=1..K\). Similarly to the DNN described above, the learnable symbolic manipulation should be a cascade of functions. Hence, here we transform common programming-language tools, the basic building blocks of any function, into the form of operators or functions.

Figure 46: Single-version versus multi-version implementation suggestion.

First we introduce some common symbolic/logical tools that are used in programming:

* Arithmetic operators (+, -, *, /, modulus, power/square-root),
* Mathematical operators (min/max, absolute value, rounding, norm, factorial, sinusoidal/exponential/logarithmic/polynomial functions) and their constants (\(e,\pi,\tau\), ...),
* Assignment operators,
* Comparison operators (equality and non-equality),
* Logical operators (and, or, not, all, each, (in)equalities, exists, count),
* Flow operations (while, for, if, if-else, if-elif, ...),
* Identity operators,
* Membership operators (whether a sequence is present in an object),
* and Bitwise operators (shifting, AND, OR, NOT, XOR, etc.).

Then we present some examples of transferring them into functions, in the form \(\text{tool}\Longrightarrow\text{function}\):

\[\begin{split}\text{if x then y}&\Longrightarrow if(x,y)\\ \text{if x then y else z}&\Longrightarrow ife(x,y,z)\end{split} \tag{5}\]

So for example a composition could be:

\[\begin{split}\text{if x then y elif x2 then y2 elif x3 then y3 else y4}\\ \Longrightarrow ife(x,y,ife(x2,y2,ife(x3,y3,y4)))\end{split} \tag{6}\]

\[\begin{split}\text{for x do y}&\Longrightarrow for(x,y)\\ \text{while x do y}&\Longrightarrow while(x,y)\end{split} \tag{7}\]

Also expressions:

\[\begin{split}\text{x and y}&\Longrightarrow and(x,y)\\ \text{x and y and z}&\Longrightarrow and(x,y,z)\\ \text{x or y}&\Longrightarrow or(x,y)\\ \text{not x}&\Longrightarrow not(x)\\ \text{x + y}&\Longrightarrow plus(x,y)\\ \text{min(x,y,z)}&\Longrightarrow min(x,y,z)\\ \text{x}\leftarrow\text{y}&\Longrightarrow assign(x,y)\end{split} \tag{8}\]

As seen, functions can have specific arguments, or an unknown list/dictionary of them, as in Python for example. Some of them can be set with a default value at the function entrance. They also allow recursion. Also, the lambda function, taken from HOL, makes it easy to produce functions on-the-fly. It is an anonymous function, hence it does not need to be pre-defined. This idea can be utilized similarly to temporal abstractions, i.e., the addition of temporal actions when required. All these temporal items can be stored in some storage unit, for cases when some of them reoccur. In such cases, they are transferred to LTM. See more about temporal abstractions in Section 7.3 and consolidation in Section 7.1. A minimal executable sketch of these function encodings is given below, after the related-work overview. Finally, how different NN structures are equivalent to rule-based algorithms (or usual programming) is explained in Appendix A.6. After the proposal of the basic building blocks, the next issue is which learning approach should be utilized to consolidate the learning process into the correct program. Two main approaches are proposed, along with a possible combination:

1. **Generative**: Program search is the opposite of _DL_. In _DL_ we start from a large number of hypotheses, and as more examples are used (either more epochs or more examples), the number of possible hypotheses is reduced. But this is due to the fixed/static architecture required to apply BP. Here, on the other hand, it is the opposite: we start from no hypotheses and generate multiple hypotheses dynamically, meaning we propose each time a new composition of basic functions. Since we do not have a fixed structure, we cannot use BP. However, this method is computationally heavy, since there is never a guarantee that a good hypothesis will be found, especially one consistent with other models.
2. **Eliminating**: similarly to _DL_, we transfer the set of functions into a DNN. It will be described further in the following.
3. Additionally, perhaps there is some combination of these methods. For example, _DL_ is applied first to narrow the number of hypotheses, then the generative approach is applied somehow.

**Related Work**

In the literature, there are many code representation methods, for tasks like classification (e.g. naming the code or its purpose, deciding if it is malware, etc.) or producing a sequence, similarly to translation, e.g. code completion or code captioning [1].
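Before the survey, here is the promised minimal Python sketch (ours; all names are illustrative) of the function encodings in Eqs. (5)-(8), where flow constructs become ordinary composable functions:

```python
# Minimal sketch of Eqs. (5)-(8): programming constructs as functions.
# Branches/conditions are thunks (zero-arg callables) so that only the
# taken branch is evaluated, as in a real if/while.

def ife(cond, then_fn, else_fn):
    return then_fn() if cond() else else_fn()

def while_(cond, body, state):
    while cond(state):
        state = body(state)
    return state

def and_(*xs): return all(x() for x in xs)
def plus(x, y): return x + y

# ife(x>0, y, ife(x2>0, y2, y3)) with x=-1, x2=5  ->  "y2"
print(ife(lambda: -1 > 0,
          lambda: "y",
          lambda: ife(lambda: 5 > 0, lambda: "y2", lambda: "y3")))

print(and_(lambda: True, lambda: 1 > 0))                 # -> True
print(while_(lambda s: s < 5, lambda s: plus(s, 1), 0))  # -> 5
```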
Here we present some code representations and the models used for accomplishing some task:

* Syntax-based methods [1] represent a program as a bag of words/tokens. It can be processed for some task via an RNN/LSTM, for example, or a Transformer, e.g. via BERT [14].
* The code2vec method [1] represents a program as a bag of all possible paths between any pair of nodes in a tree representing the program. It uses a Transformer model in order to discriminate different paths with different importance; thus it uses the attention mechanism.
* Another, more natural method to represent a program tree is via a graph neural network (GNN) (Fernandes et al., 2018).

The methods above concentrate on code representation, to accomplish some task given the code. This is discriminative AI, while our problem is generative, i.e. we need to generate code for the specific task of model predictiveness, for example as any LM does: via forecasting the next sensory inputs. One approach handling this is Genetic Programming (GP) (Koza et al., 1994; Lehman et al., 2022), based on an evolutionary approach. It works similarly to GA. For example, the population is a vector of numbers, where each number is interpreted as a code to select from a list of production rules for constructing a function. These production rules represent correct syntax, or the feasible expansions of a code tree. Here is an example:

1. \(\mathrm{S}\rightarrow\mathrm{bool}\)
2. \(\mathrm{S}\rightarrow\mathrm{exp}\)
3. \(\mathrm{bool}\rightarrow\mathrm{and\;bool}\)
4. \(\mathrm{bool}\rightarrow\mathrm{not\;bool}\)
5. \(\mathrm{exp}\rightarrow\mathrm{ife\;bool\;exp\;exp}\)
6. \(\mathrm{arg}\rightarrow\mathrm{while\;bool\;exp}\)
7. \(\mathrm{exp}\rightarrow\mathrm{mul\;exp\;exp}\)
8. \(\mathrm{arg}\rightarrow\mathrm{input}\)

where \(\mathrm{arg}\) represents any expression or argument, starting at the bottom as the input to the program. The right side of each rule (the terminal) is the function and the number of its arguments. We can produce an initial population of vectors step-by-step, until the program is complete. For example, assuming a maximal tree depth of 3 and a depth-first-search expansion, a vector of rule choices unfolds a program step by step:

1. and(,), depth=1
2. and(while(,),), depth=2
3. and(while(not(,),),), depth=3
4. and(while(not(input),),), depth=3

## 9 Related/Prior Work

### Comparison to other cognitive architectures

NOTE: This sub-section is also under construction, hence can be skipped.

The first important difference between _MOM_ and other CAs is that _MOM_ does not have a strict block diagram separating different modules (modularity) of different functionalities. Instead, we embrace a holistic approach, placing all the different functionalities in one place. For example, the knowledge models include different memories and cognitive operations, such as abstraction. One example is that we do not separate semantic and episodic memories; we include both events and the concepts they are built upon in one knowledge map36. The only separation for now is that of knowledge models, supervising models, and I/O models. It can be regarded as a layered structure, where the I/O layer is the most external, then the knowledge layer, and finally the most internal layer is the supervising one. It can be viewed in Fig. 47. We can see that, potentially, any model can be either of a symbolic type (interpretable) or of a sub-symbolic type (uninterpretable). However, each layer has some dominant type.
Footnote 36: Note that implicit memory is also partially represented in the knowledge map. For example, tying shoes or riding a bicycle is a physical activity whose meaning is stored symbolically, while the physical memory of its implementation is stored in the nervous system and in muscle memory.

Also, note the contrast between general, logic-based explainability and the narrow, specific (structured) type of explainability tools in current _DL_: SHAP values, LIME, gradient heat-maps, etc., most of which simply represent relative feature significance. This is more of an analysis tool than real explainability. Some say the same about the attention mechanism in Transformers [Jain and Wallace, 2019].

Figure 47: Overview of _MOM_ in different aspects

Real explainability is logical: as in step-by-step reasoning, there is a model-based narrative guiding the listener through the relevant elements, which eventually constructs a foolproof model to explain the action/decision of an AI agent, as it is in humans. Hence, _MOM_ strives to deal with this issue generally. Will is expressed internally, to handle a given situation, such as how to react, e.g. how to solve some given situation/problem - either immediately, e.g. via instinct, or with a delay, as a plan. This reactive mind state holds both for a passive will and for an active one, i.e. either way it is either reacting or simply acting. And will is expressed externally, to understand a given situation, such as comprehending a message/story/phenomenon. In both cases (my will / others' will), which is about control, what precedes it is the making-sense/understanding will, i.e., internally organizing models from past experiences, and externally understanding a given state using these models. Traditional behaviorists speak of controlling variables (i.e., variables that control the animal's behavior), meaning input\(\rightarrow\)behavior\(\rightarrow\)output, or stimulus\(\rightarrow\)response, whereas control theorists speak of controlled variables (i.e., variables that the animal attempts to control). That is, the animal's behavioral output functions to reduce the discrepancy between the perceived actual state and the reference desired state. Thus, the function of the behavior is the control of perception, rather than the stimulus causing the behavior. In the traditional view, humans are mechanistic, like billiard balls; in the control view, they are purposeful, agentic creatures. I partly disagree: traditional = open-loop, control = closed-loop, or feedback. Still, in both we learn models, and in both we have will. It is simply a matter of direct or indirect interaction with the environment. In the diagram you see the opposite causality of behavior. Including will can implement both cases, since we deal with all sorts of interaction, hence also various learning regimes, as we showed in the RL case. The left diagram is more general, while the right one applies only in the problem-solving case. As we have seen, there is also designing, etc. There is also emotion as an indicator of alignment with will, e.g. in learning, as in RL: after a good/bad result/reward which comes after an action; and in perception: when the state is recognized as good/bad, the optimal/appropriate action is selected. Example: "when a child sees a lollipop and has a positive emotional response based on past experiences eating a lollipop, the anticipated pleasure is serving an incentive function.
When the child is struggling to open the wrapper and finally rips it with his teeth, shifting his experience from frustration to pleasure, then the emotional response has reinforced the behavior and he is more likely to rip it with his teeth in the future." See Fig. 48. CAs fall into four top-level categories: symbolic, emergentist, hybrid, and universalist. For example, symbolic CAs include ACT-R, Cyc, EPIC, ICARUS, SNePS, and SOAR. Emergentist CAs include Hierarchical Temporal Memory (HTM) (Hawkins, 2007) and DeSTIN (Arel, 2009), a hierarchical temporal pattern recognition architecture with some similarities to HTM but featuring more complex learning mechanisms. Cyc is a symbolic CA that represents knowledge via formal logic, specifically HOL. It stands in contrast to knowledge graphs such as the semantic web, since those represent simple binary relations, given as SOV (subject-object-verb) triples. It cannot represent uncertainty or reason over distributions, though we can represent these within additional continuous features. It uses simple forward and backward inference, given a set of rules, to perform reasoning. This is highly inefficient and causes computational explosion, hence they can apply it only to very simple problems. On the other hand, Cyc is especially concerned with common sense, or filling in missing information, as is _MOM_. It understood that consistency is not a global strict rule of knowledge; rather, human knowledge can sometimes contain inconsistencies, especially when moving from one context to another. Meaning, they advocate local consistency, for a specific context. This is what we refer to as multiple versions of the same concepts. One important observation in Cyc is that common-sense knowledge is often not found in knowledge sources and databases, such as Wikipedia, since it is obvious - for example, that water flows downhill, that humans are mortal and cannot be dead and alive at the same time, and so on. NARS also stores rules for everything. It also has a forgetting system, due to limited resources. _MOM_ uses both forgetting due to low use and other measures, and encourages abstraction to remember more general rules/objects and fewer specific ones. NARS is mostly concerned with inference from classes to new classes, i.e. performing logical learning (induction, deduction, abduction) - for example, induction: learning class relationships from given data/evidence. These and other types of learning should be considered in our evolution phase. Also, as in our _MOM_, the inference is at runtime. In NARS the objective of learning is to organize knowledge, which is similar to our _MOM_. ACT-R uses declarative memory to store objects and their features, while it uses production rules, i.e. if-then rules, to store actions. We do not confine ourselves merely to straightforward deterministic rules, but allow for various actions, while consolidation allows the flexibility to range over more or fewer options. However, we do not use probabilistic representation/programming, which assigns a probability to each admissible action, although it is a viable option. For now it is left to the process of creativity, which dismisses the most admissible option in favor of others. Still, it is certainly a possibility, and a very simple change to the current model. Also, ACT-R is limited to a single declarative unit of knowledge while reasoning, e.g. a single perceived object. This complies with our concentrated attention.
Meaning, each object is perceived on its own, and afterwards they can proceed to inference. LIDA is a hybrid CA. Like _MOM_, it also emphasizes the goal of understanding and making sense. It then uses GWT to compete over attention. SOAR, for example, has an action selection unit that prioritizes among triggered production rules, based on the knowledge accumulated so far. This is a fast-response system, to make sure it reacts on a human time scale. However, as in any CA, this selection does not depend on the will, only on some fraction of it, perhaps the goal, but mostly on the current state. SOAR, like _MOM_, also strongly emphasizes the limitation of resources, especially processing time over optimality of the response. One issue with a rule-based system like this is retrieval time as a function of knowledge size. Meaning, as the knowledge increases, the time it takes to find something in it grows, which is highly inefficient. Hence, we propose fast class recognition via usual _DL_, although the next step of inference is based on classes, which, like rule-based systems, can also be inefficient. Additionally, SOAR, like _MOM_, uses emotions as rewards when applying RL while interacting with the environment, i.e. without a specified reward. SOAR, like CRAM (Flanagan et al., 2006) and other CAs, also uses the principle of breaking down problems into sub-problems. SPAUN is an example of a neuro-symbolic platform that utilizes VSAs, where the sub-symbolic data is transferred into a distributed vector representation, which can represent symbolic data and can be manipulated via cognitive operations. OpenCogPrime (OCP) integrates multiple learning algorithms associated with different memory types, using a weighted labeled hyper-graph knowledge representation and making heavy use of probabilistic semantics. It is based, among other things, on Cognitive Synergy Theory (CST), which includes different types of memory: declarative, procedural, sensory, episodic, attentional, and intentional. In our case, the attentional is perhaps separated from the intentional, though they both supposedly originate from will. Also, as knowledge is represented as a hyper-graph constructed from nodes and edges, these are weighted with truth values such as probability and confidence. This is similar to NARS (frequency and confidence) and to our measures, which can be additional features of simple or complex objects. However, like previous CAs, it uses production rules. CLARION has a similar triggering structure, i.e. the neurons triggered in the sub-symbolic-to-symbolic section correspond to the object/class they detect, while CLARION also triggers in the opposite direction: from a triggered symbolic class to its corresponding sub-symbolic triggered features. They also have bottom-up and top-down learning; however, _MOM_ implements both learning and recognition only unidirectionally, i.e. bottom-up; similarly in learning - the top stays top, it does not affect the bottom. Unlike _MOM_, CLARION implements self-will, and is hence attached to meta-cognition, to monitor and regulate these motives (wills). Hence, it also explains uniquely human psychological phenomena. We, on the other hand, dismiss self-will, and are not interested in simulating a human-like agent psychologically, but rather only rationally, by including will as an important component for effective communication. Subsequently, we include will as an external component, to be included in modeling other things outside the agent. Action selection is according to the goal at both levels, which is similar to _MOM_.
However, we do not separate the motivational representation into explicit goals at the top and drive strengths/activations at the bottom. We strive to make things continuous and gradual. This is, again, the same motive in most CAs: modularizing or separating functions, which in our opinion makes the system more rigid. CLARION uses label nodes, a node for each concept (at the top), as _MOM_ does, but their features are represented via a distributed representation (at the bottom), to be retrieved only by relevance to a given scenario. Again, this representation might be problematic if we are to also include abstractions and assignment/specialization/instantiation. In summary,...

### Knowledge Graph Reasoning (KGR)

There are different types of reasoning approaches in knowledge graphs [122]: mainly logic-rules-based, representation-based, and NN-based methods. KG reasoning is the task of completing missing elements in a KG by inference. However, the original KG has other components as well: KR learning (KRL), knowledge storage (KS), KG construction (KGC), knowledge updating (KU), and knowledge reasoning. KGC includes knowledge extraction (KE), knowledge fusion (KF), and knowledge processing (KP). KS proposes storing mostly via relational or graph data structures. KE is about extracting objects, relations, or both at the same time, mostly by utilizing pre-trained DNNs. KF usually discovers entities that represent the same semantics in different KGs, and then eliminates the ambiguity of entities from different sources. KP mostly concentrates on relations between objects. KU is about how to update objects, relations, triples of these, and more generally how to combine new knowledge with old. However, _MOM_ is a hybrid neuro-symbolic model that learns mostly as NNs do, plus some other principles, such as consolidation and top-down learning. Moreover, most of the KG construction processes in _MOM_ are neuronal and not via logical or symbolic processing - especially not from a pre-designed KB, but directly from raw data. Also, although most KE is processed via DNNs, it is based on supervised learning from given datasets. See the comparison of these approaches in Fig. 49. Logic-rules-based KGR uses rules and KG features to infer new facts, e.g. via pre-defined first-order logic (FOL) rules, via ML-extracted rules, or via paths between entities in the graph structure. Representation-based KGR methods compute semantic relationships among entities by projecting the semantic information (such as triples) into a dense, low-dimensional vector space. There are three main approaches within this method. The tensor-decomposition approach decomposes a relation tensor of the KG into multiple matrices, which are used to construct a low-dimensional embedding of the KG. The distance approach learns distributed representations of all entities and relation types by vector addition over all relation triples; see TransE (Bordes et al., 2013). And the semantic matching model measures scores of relation triples by comparing them with other entities and relation types in the low-dimensional space. These comparisons evaluate the similarity among the relations in the KG. NN-based KGR methods use different DNNs (CNN, RNN, graph neural network (GNN), and deep reinforcement learning (DRL)) to produce new facts by processing the graph structure of the KG.

### Comparison to previous versions of _MOM_

Here all the models developed so far are compared, in Fig. 50.

Figure 49: Comparison of KG approaches, taken from (Tian et al., 2022), and our _MOM_.
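As a concrete illustration of the distance-based approach mentioned above, here is a minimal TransE-style scoring sketch (ours; toy embeddings, not the trained model of Bordes et al., 2013): a triple (h, r, t) is plausible when h + r is close to t.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Toy entity/relation embeddings. In TransE these are learned with a
# margin loss; here they are random except for one planted fact.
E = {name: rng.normal(size=dim) for name in ["paris", "france", "berlin"]}
R = {"capital_of": E["france"] - E["paris"]}   # plant: paris + r == france

def score(h, r, t):
    """TransE score: smaller distance means a more plausible triple."""
    return np.linalg.norm(E[h] + R[r] - E[t])

print(score("paris", "capital_of", "france"))   # ~0: plausible
print(score("berlin", "capital_of", "france"))  # large: implausible
```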
In summary, _DENN_ stores each new combination of events, while _AKREM_ dynamically stores in episodic memory any newly encountered combination of basic events. Basic events are stored separately in two types of memory: concepts and actions. _MOM_, on the other hand, unites those separate memories into one dynamic operational memory, consisting of concepts, actions, relations, and any instances of those.

Figure 50: Comparison table of models discussed so far

## 10 Conclusion

The following is a summary of the paper, including some key takeaways. First, a new cognitive model is introduced to join the existing CAs, in the form of a dynamic operational memory of models and instances. In it, a holistic approach is embraced, assuming that intelligence should be highly versatile and diverse - as opposed to "picking one side", which makes it not fully representative and unbalanced, since it is inclined toward one or a few directions only. Next, our model includes will as an essential part of the modeling process; thus, in the case of its absence, most of the learned models turn out to be very partial. In the _OOP_ formulation, the will is an additional variable/attribute, and it is most significant at the top level while least significant at the lowest one. Moreover, our assumption is that will is derived from states, or the representation of the world, via different channels (emotions, rule systems), and it affects the actions of our accumulated knowledge, by learning to align with them. Next, operationability turns a static knowledge representation into a dynamic one, thus enabling cognitive processes. The actions are learned via regular program search, via either _DL_ or other tools, in a self-supervised manner. Next, one way to ensure that continual learning is consistent is by implementing local learning, i.e., concentrating on updating only one or a few models, while confirming compatibility with the other models. A model can be learned either from examples or directly by using existing operations (logically). Another way to ensure that learning is consistent is by a slow process of consolidation. It is ensured by maintaining a high level of flexibility over a long period, while pursuing more and more consistency within and between models. Next, reusability is utilized to enhance connectivity between models, instead of learning them as separate entities. Finally, the cognitive model is designed via reverse engineering - meaning, starting from our highly aware and mature cognitive state of mind, and then tracking back in time to study its evolution.

### Future Work

The main problem is how to implement a cognitive system that produces the appropriate models, i.e. how grouping/clustering occurs to generate the right models. Also, how these models produce new ones via the correct compositions. In addition, there is the issue of how the sub-symbolic becomes symbolic. Perhaps the models produce objects and actions directly upon sub-symbolic data. Furthermore, continuing this line of thought, the models may be removed altogether, which converts this problem into a purely _DL_-based approach. Additionally, relevant to the last issue is model evolution. Is it model refinement of some main model into sub-models, which controls how the knowledge models are used? Or are all the models separate? And if it is by refinement, is it one large _DL_-based model, with all the rest being knowledge models?
Or, if we continue this line of thought, we again end up with one huge, purely _DL_-based model, implicitly containing all the different models, their actions, and attributes. But then how are _elements_ and abstraction implemented in such a _DL_ model? For more about the suggestion above, see Section 8.1. These two examples ending up with a pure _DL_ approach (without logic representation) may imply that this theory can be utilized as prior knowledge in _DL_ models, without the inclusion of explicit logic in the final representation. This is certainly a viable/legitimate idea to pursue. Also, although hierarchy by abstraction can be implemented, how can it be implemented by will, i.e. how to decide when and what to group in such a hierarchy? Other challenges of _MOM_ include: 1. How to learn concepts and their features, or more generally classes. 2. How to learn different wills, to be utilized for action alignment. The second issue is also about how to extract will from the perceived state; it is better known as the Symbol Grounding Problem: how the symbolic representation acquires meaning, or more generally, how to attach will to it. These are all open questions to deal with. P.S.: Neuro-symbolic AI for us is merely taking inspiration from the class-based structure, to act as the final stage of learning, while _DL_ is the main tool to reach it. So it is all about the flexibility of DNNs, only that we use consolidation to finally reach symbols. Another implication of this is memory: first, since it is vague and mostly reconstructed during replay (recalling), it means it is not recorded accurately; and second, since it is probably used during sleeping periods similarly, in a vague/foggy form.

## Appendix A Appendix

### Hybrid approach

It is our opinion that a DNN represents a very basic computational model, comprised either of a sequence of (non-linear) operations or of a sequence of "if" operations. This limitation might be the reason for the counter-intuitive relations found when analyzing DNNs' interpretability. I.e., the effect of a DNN finding wrong rules is probably due to: * some of the neurons having to be not triggered (inhibited); * no relation to other relevant models, to assist the DNN in choosing the most appropriate rules consistent with the other models.37 Footnote 37: Actually, this is not entirely correct, since DNNs use shared parameters, which can be considered as "shared functions" utilized by higher layers. However, on the other hand, these are local functions, temporary ones relevant only to the trained task(s). One advantage of DNNs is their layered structure, which is equivalent to chain-of-thought (CoT) [20] or dynamic programming (DP) [17]. I.e., it is a step-by-step process, which may explain why the more layers (the deeper) the DNN has, the better it is, since it allows for more steps of computation. But this is also its disadvantage: if it is not dynamic in structure, it always has the same number of cognitive steps, while in reality each model should have different steps; more precisely, each model can have a different number of steps depending on the input (just like in an algorithm). Subsequently, we advocate what researchers like Gary Marcus [14] have claimed for many years - the adoption of a neuro-symbolic hybrid approach. The modeling we propose is exactly this: learnable symbolic manipulation. As the caricature in Fig. 52 shows, we must have an explicit symbolic (and not implicit, via _DL_) representation of knowledge, especially due to communicative aspects, as discussed in [11].
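A minimal sketch of that fixed-depth critique (ours, purely illustrative): a plain DNN always spends the same number of computation steps, while an algorithm halts when it is done, so its step count depends on the input.

```python
import numpy as np

def fixed_depth_net(x, weights):
    """A plain DNN: always exactly len(weights) computation steps."""
    for W in weights:                 # same depth for every input
        x = np.maximum(0, W @ x)
    return x

def adaptive_steps(x, step, done, max_steps=100):
    """Algorithm-like computation: step count depends on the input."""
    n = 0
    while not done(x) and n < max_steps:
        x, n = step(x), n + 1
    return x, n

W = [np.eye(2) * 0.5] * 4
print(fixed_depth_net(np.ones(2), W))   # always 4 steps: [0.0625 0.0625]

# Halving until below 1: easy inputs need fewer steps.
print(adaptive_steps(3.0,  lambda v: v / 2, lambda v: v < 1))   # (0.75, 2)
print(adaptive_steps(40.0, lambda v: v / 2, lambda v: v < 1))   # (0.625, 6)
```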
Another shortcoming of DNNs is that the features comprising the model are unrelated to any concept, hence they cannot be communicated as actions over concepts. Numerous studies have tried to deal with this problem, such as converting features to interpretable concepts [Blazek and Lin, 2021], training on objects instead of pixels, and the dynamically evolving NN [Komarovsky, 2022d], which constructs the NN neurons step-by-step by relevancy. Another argument that _DL_ alone is not enough is that we need abstraction for multiple reasons, such as analogy and other functions. One way to realize this is via symbolic manipulation, as in _OOP_. It is mentioned in many places, such as in the [Dickson, 2022] article: _"Despite the impressive achievements of deep learning, some of the field's problems remain unsolved. Among them are causality, compositionality, common sense, reasoning, planning, intuitive physics, and abstraction and analogy-making."_ Though there are studies in _DL_ that try to implement modularity, e.g. the Modular Neural Network [Azam, 2000]; see also modular NNs in the Neuro-Symbolic section. We can also view it from a prior-knowledge point of view: it is the trade-off between how much external prior/bias we should induce versus how much should be learned from the data, i.e. how much of our proposed model is a prior and how much of it is flexibility and freedom to adapt. Hence, whatever _AGI_ architecture we choose, we must be prepared to insert artificially, e.g. externally by learning, some of the priors embedded in the human brain, perhaps inherited right from birth - for example, physical-world priors38 [Khatib et al., 2022, Piloto et al., 2022], or the self and others models, and more. Additionally, some things will have to be guided explicitly, since the _AGI_ does not possess a will of its own, e.g. telling it what to transfer from WM to LTM (what is important). Moreover, some basic functionalities perhaps need to be rule-based, i.e. written algorithmically, in the _AGI_. These functionalities probably come from the human genome, mainly regulating the main systems or memory, similarly to [Marcus, 2003]. Footnote 38: It may just be that the physical conservation principle (objects do not appear/disappear) is learned, as stated by Piaget, since it is not merely programming, but also descriptive (behavioral). Nowadays, Machine Learning (ML) theory assumes the best performance is in _DL_, see Fig. 51(a), especially for large datasets [Chalapathy and Chawla, 2019], see Fig. 51(b), where the main effect is for big data, since only then is there a difference between the models; with small data, NNs of different sizes all behave approximately the same. However, in our humble opinion, unlike the place where _AGI_ is supposed to be - the rightmost part of the scale, where we are almost totally dependent on data (minimal prior knowledge) - it is actually the opposite: we need full knowledge for designing _AGI_ (lots of assumptions), which does not depend on data at all. Then the _AGI_ is flexible and adaptive, and is able to tune itself gradually and gently according to the data. What is more important here? It is not the data, but rather the assumptions and a lot of planning of the AI. This is like planning an adaptive (or any) control system, which is deployed only after knowing a priori all the possible data/cases/scenarios that could be encountered by the agent. The same idea is in [Ioannou, 2018]: _"learning ability does not come for free, and is far from automatic.
It relies on very specific assumptions that are mostly encoded in the design of the NN architecture itself - here denoted as structural priors."_. Moreover, DNNs are not that far from symbolic manipulation, since the input is perhaps sub-symbolic, but the actions on it are pure logic: if, sum, activation function, etc. We see in the figure that the DNN has a very strict and rigid structure, which of course allows it to learn efficiently, but on the other hand denies it the freedom to model the actual relations and interactions in the data, to represent it in its correct form/model. Handing the learning module more freedom (higher variance) might yield a larger search space for the best model, but eventually, after enough time, it will settle on the most appropriate one. This is the exact argument against the _DL_ immediate-results perspective: first, because there is no gradual learning in the complexity of the data, as a school student experiences throughout his school years; and second, it takes years for an infant to acquire even simple skills, while in _DL_ it is expected to do the same and more (the performance is compared to adults), on a scale of days.

Figure 51: Comparison between AI methods to comprehend the environment.

Figure 52: Caricature of interpretation in communication [Ball, De Bono, 2016].

Note that Fig. 52 is not about the issue of explainability of DNNs; rather, it is about the inappropriate structure for representing the correct model, to enable correct reasoning and communication. Although, when learning the correct model, it is also explainable in its inner structure; more importantly, it is explainable by communicating with it. Also, due to the gap between the set of hypotheses, or models, that a DNN can produce and the ideal model representing some trained data, you can notice the shortcuts that the DNN produces to learn the data, such as predicting a cow in an image if it spots a sky - i.e., in this case it has nothing to do with a cow or its features. A similar effect of failing to generalize shows up in LLMs performing multi-step composite tasks, again via shortcuts, or more specifically by pattern matching and subgraph matching, instead of comprehending the tasks [Dziri et al., 2023]. This effect is reasonable, since there are no constraints or bias on how the DNN should arrive at truly representing the approximated function. This is because there is a huge number of suitable functions that map a limited number of inputs to their corresponding outputs; see also [Guest and Martin, 2023]. This huge number of potential functions explains over-fitting in _DL_, since many possible functions can represent the training data without any respect to the general data and knowledge in the world. It can be solved in a system where different models are not learned in isolation, and a model is learned with alignment and consistency checks against the other models learned so far, which _MOM_ supplies (continual learning). That is, the learning is performed with interaction and feedback from other models, in order for each of them to be maximally correct. I.e., every model should be learned without conflicts with other models, but rather in cooperation with them, which forces each model to be aligned with its true functions. Moreover, any ML system has to be provided with a well-defined objective, which means that the designer needs to be very punctilious about his objective and the constraints of the problem.
All this rigorous process is redundant if we use natural language, since we assume a minimalistic form of communication, where very little is said, and most of it is completed by previous knowledge and context.

#### a.1.1 Neuro-Symbolic AI in literature

Neuro-symbolic AI is widely studied in the ML community. For example, the Neuro-Symbolic Concept Learner for Visual-Question-Answering tasks [Mao et al., 2019] uses an object-based representation for a scene, runs a semantic parsing module to translate a question into an executable, symbolic program, given a domain-specific language (DSL), and executes it on object features to get the answer. They use (image, question, answer) tuples for supervised training. The DSL contains a set of operations over objects' features or concepts in the question. These concepts are neural operations, learned in training. Hence, the program executor is quasi-symbolic, and the program is a nested sequence of operations, e.g. _Query(Shape, Filter(Red, Relate(Left, Filter(Sphere))))_. The training is end-to-end, excluding the visual component that produces some embedding to be operated upon. Differently, [20] presents a purely symbolic executor: instead of using sub-symbolic features, it uses a scene parser that de-renders the image to obtain a structural scene representation, i.e. it generates symbolic features of the objects, e.g. their size, shape, material, and color categories, and these are used as parameters in program execution. Here the training is partitioned into object detection with the categories above, and then the symbolic part is trained via supervised learning on question-program tuples, followed by the RL REINFORCE algorithm on question-answer tuples, where the answer is the reward. This raises the question of whether we should convert the sub-symbolic to symbolic [20], or leave it as sub-symbolic and operate symbolically on it, as in [18]. In [21], they use deep RL (DRL) to learn a neural policy network first, and then do a local search on programmatic policies to minimize the distance between the policy generated by DRL and the programmatic policy, thus generating an interpretable policy. They claim to perform this conversion since a program cannot be learned directly, as it is not differentiable. But we propose to find a differentiable type of program learning; see Section 8.4.1.

### How will is learned at the beginning

Inverse reinforcement learning (IRL) [14], or imitation learning, is an RL variation focusing on learning a reward function from human feedback. We argue that, just as with the mirror neurons discovered by neuroscientists, which mimic other people's behavior, IRL and perceiving a story or a message from a sender (as described in _AKREM_) are all about constructing the actual intention from the observed behavior. Hence infants, due to the lack of a developed memory, must use visual (besides audio) input as a "story-telling" channel, to learn the intentions of objects, either physical or living (e.g. animals and humans). Mimicking is quite central in early life, as in pretending games [22], where other perspectives are learned for social interaction - though in early childhood it may also be an unrealistic, or more imaginary, perspective. However, the more fundamental question is whether will can be learned externally, or is embedded internally, inherently, or maybe both.

### Knowledge representations

Here we survey some common knowledge representations and inspect them in comparison to _MOM_.
#### a.3.1 Scene Graphs and Knowledge graphs

In AI, knowledge is usually expressed by objects, relations, and attributes, via _Knowledge Graphs_ (KG), and in _DL_ it is often realized via _Scene Graphs_ (SG) [14, 15]. SGs uniquely describe a given image, while KGs express general knowledge. One of the SGs' limitations is that they produce a very large network with too many details; they cannot distinguish between the important and the minor, between the essence and the details. Sometimes KGs and SGs are combined together, in VQA tasks for example. In any case, we assume that memory data should not be in the form of merely a KG or SG, since they are fixed and rigid, while we need more flexibility. In thinking, we simulate different and new scenarios by applying new actions on the data. So, unlike static representations of knowledge, i.e. static maps, an operational representation enables the dynamic structure of this knowledge, by applying admissible actions within the knowledge structure.

#### a.3.2 OPM

Another approach, known as Object-Process-Methodology (OPM) [16], is a conceptual modeling of systems. It contains two types of elements, objects and processes, and links between them. Processes represent activity, action, and procedure. One of OPM's limitations is that it is locally designed, with arbitrary names for objects/processes, while it should be universal (like John Ball's Super KG, [https://medium.com/pat-inc/how-semantics-enables-super-knowledge-graphs-part-1-bda3c4a4386c](https://medium.com/pat-inc/how-semantics-enables-super-knowledge-graphs-part-1-bda3c4a4386c)). Also, many conceptual discriminations/categorizations should be learned and not pre-defined, e.g. physical versus logical (temporal versus non-temporal), environmental versus systematic - which should actually lie on a continuum, because there are several degrees of relevance to the discussed object. The same holds for types of relations: structural versus procedural (should be learned), or the separation of processes and relations. We consider all of them as actions, though we should have primitive actions to start with. Unlike OPM, we do not separate into different categories, since we cannot predict whether other categories will not also appear along the way. Only the basic atoms of knowledge should remain - objects, attributes, and actions - and all else should be learned. Additionally, OPM is based on physical reality, while _AGI_ is more general: it also allows abstract concepts. Hence objects could be non-spatial, and "processes" can be a special case of actions, which generally are not necessarily temporal, but could sometimes be. Moreover, OPM is largely based on previously presented representations (e.g. semantic networks). Furthermore, its state changing is a simple assignment operation. It also distinguishes the main object/process/function from secondary ones (like systemic versus environmental). We have this as an additional attribute of an action/object, describing how relevant one is to the other, i.e. action admissibility. It is very important as prioritization among different options in the process of association. It can be the result of consolidation, where some are consolidated strongly, while others weakly. OPM can also model spatial data, e.g. the architecture of an apartment or a room. Conversely, we use semantic-like operators (above/below, left/right, up/down, next-to, in-front-of/behind) with relative strength to model spatial data, although _DL_ treats semantics as represented in a continuous space, via embeddings.
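A minimal sketch of such weighted spatial operators (ours; the object names and strengths are made up):

```python
# Spatial relations as semantic-like operators with relative strength,
# instead of raw coordinates. Strengths in [0, 1] are illustrative.
relations = [
    ("lamp",  "above",       "table", 0.9),   # strongly above
    ("chair", "next_to",     "table", 0.7),
    ("rug",   "below",       "table", 1.0),
    ("door",  "in_front_of", "chair", 0.3),   # weakly in front
]

def query(subject, relation):
    """Return objects standing in `relation` to `subject`, by strength."""
    hits = [(o, s) for o, r, t, s in relations
            if t == subject and r == relation]
    return sorted(hits, key=lambda h: -h[1])

print(query("table", "above"))    # [('lamp', 0.9)]
print(query("table", "next_to"))  # [('chair', 0.7)]
```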
Finally, OPM's zooming-in diagrams support the limited-WM hypothesis (or limited cognitive load) [Dori, 2011], as in _AKREM_ and _MOM_.

#### a.3.3 SciSoft model

This is an operational representation of knowledge. It is about representing scientific theories and models, such as classical mechanics or electromagnetism in physics. It was designed specifically to present the development of equations in these theories. The main motivation is to make the data highly connected, as in Wikipedia and other web-based platforms, to enable maximal understanding of any topic. This is done specifically by connecting a topic to all its relevant concepts, so that nothing would be missing for the user to understand the topic to its fullest. The final product of this idea is a knowledge base, represented by a graph, containing various types of objects: animations, images, text, mathematical expressions, and visualizations (such as graphs and shapes). All objects are connected by relevancy, but also have attributes, such that no piece of data, no matter how small, will be missing. For example, each equation should have access to all its components (e.g. variables, parameters, and operations), with each component, as an object, having its own attributes and connections. The necessity of such a detailed system comes from failures in teaching scientific subjects (on the teacher's side) or difficulties on the student's side. The root of all these problems is our language. It is highly context-based and minimalistic, i.e. like efficient code, we use a minimal set of words to describe something - we say much less than we should, and expect everything else to be completed in the recipient's head. But unfortunately, most of the time students lack some "obvious" details, which the teacher assumes they have in their model. This generates misunderstanding, which generates the need for Q&A. All this can be avoided if we have all the details in one system, for the student to observe and bounce from one concept to another, filling any gap he has. Note that SciSoft obviously follows the entropy-of-information principle, i.e. when I perform operations on equations, the result cannot restore its origins, because some operations are irreversible. Hence, SciSoft is a directed graph, representing the development of equations from some fundamental assumptions or axioms. Meaning, like any operation, as in _MOM_, if it is irreversible, then information is lost in the process. For example, the operation in Fig. 31(a) is irreversible, since \(2(y-3)+4=10\) becomes \(2y-2=10\), which cannot reproduce its origins: \(2x+4=10\) and \(x=y-3\). Since any specific topic or subject can have multiple theories/models, sometimes in conflict, SciSoft enables multiple parallel versions of the same thing; similarly, this is enabled in OPM and in _MOM_. Finally, it is noticeable that most rule-based systems, like OPM and SciSoft, construct their knowledge manually. However, _AGI_, or in our case _MOM_, should learn it by itself.

### Bias

Bias is required in our reality. We need it for survival and fast response. It is also due to consolidation, or similarly described as the wave-function collapse in quantum theory. See more in Section 7.1. We always pick patterns in a reality that is unpatterned at its core. Hence, it is related to the right- and left-brain theory, where the left hemisphere might act as an unbiasing or resetting function, taking us back to all the possible options, before we selected one specific option, in every topic and model.
The left-brain creative part, unlike the right-brain pattern-recognition system, is where we break free from patterns. It is an unlimited space for imagination. Hence, activating it allows using non-learned actions while thinking; for example, imagining an elephant flying with his ears is not learned from experience. A similar idea should be applied in art, e.g. painting. It should not be a random and weird mash of things, as so often occurs in current _DL_ generating tools such as DALL-E (Reddy et al., 2021). Classical art should be created via structured imagination, i.e. having some sense/logic, though it may be non-physical or infeasible. Back to bias: it is related to our subjectivity. We are born unbiased (or slightly biased due to our genes), and as we learn models, we gain our character and mold our identity, or our uniqueness. The sum of all our models is what defines us, differentiates us from others, and generates our unique perspective and our subjective interpretation. You can see it as if every model of something is a superposition of all the possible models it could be, while we simply collapsed into one of them. Hence, unbiasing is the act of collective thinking, where I behave less as a unique subject, or individual, and more as a part of some collective. We cannot unbias ourselves; rather, we can either deepen our models by incorporating more details, options, and conditions that reduce a simple prejudiced view of some topic (e.g. of groups of people, i.e. racism); or we can add more versions of the same model (which is like including the models of other people in me). But eventually, these two options are equivalent; it is simply a question of how we update our models. Simply put, we like to have hierarchical models, to abstract details away and simplify as much as possible, since we dislike remembering lots of details. We hence strive to construct hierarchical associative models. This idea can be seen, for example, in [Friesen, 2021], claiming: _"It feels good when many items come together in a simple manner. It feels bad when there is an exception to some general rule."_

### Linguistic aspect

Here an advantage of _AKREM_ in language is proposed. Figure 53 describes some issues in Natural Language Processing (NLP). Figure 53(a) shows that syntax is not enough to also contain meaning (semantics), while Figure 53(b) shows that local semantics, or solving coreference resolution (finding what refers to what), is not enough to ensure logic and coherence within and between sentences. So Figure 53(b) shows that semantics is also not enough. We argue that both John Ball [Van Valin Jr, 2015, Ball, 2017] and Chomsky [Chomsky, 1956] tried to do the right thing: to construct a model from a sequential input. Syntax parsing [Abney, 1997] only evaluates the entities themselves, while John Ball includes the relations between the entities, or their roles - thus introducing the meaning of a full sentence. Though meaning reduces the number of illogical phrases and sentences, it is only local meaning. We propose to continue checking relations within and among sentences, and to continue in the same manner to higher aggregation levels. All levels in this hierarchy have purpose, just like in a military: imagine a general who wants to occupy some country to get its natural resources. He then commands his officer to invade some city that is strategically blocking access to these resources. He gives the officer the reason for the command: most of the enemy is in this city.
Consequently, the officer gives an order to his soldiers to attack in a specific formation, while giving them the reason: given the acquired intel, it is the most effective way to occupy the city with the least casualties. We see that every level acts with some purpose; hence the associative hierarchy is a hierarchy of will39. Figure 53: Language processing issues. It is a top-down modeling that is constructed from the bottom; therefore, while a person is hearing a story/message, he always checks for its consistency (even though the highest intention is usually revealed at the end, such as in thrillers/mysteries). John Ball claims that syntax is combinatorial, i.e. not restricted enough, so his meaning-based approach argues for a more restrictive model. But as it seems, this approach is not restrictive enough either. So we filter along the way, to end up with a more refined model to represent correct human language. ### DNNs' equivalence to Programming Languages Here we will show the equivalence of different NN structures to rule-based algorithms (or usual programming). For example, we can see that NNs (multiple if's), decision trees (nested if-else), RNNs (for loops), recursive NNs (recursion) and other ML methods are all actually program-like. Decision trees are highly unstable (hence ensemble models or random forests are usually preferred), since they produce multiple solutions. In our opinion, this may be due to an insufficient set of tools to express any function: they use only "if-else" blocks, as opposed to universally approximating DNNs. There is also the difference of structure: decision trees have a dynamic structure (non-parametric model), while DNNs usually have a fixed structure (parametric model40), which constrains the solution space, for better or for worse. Footnote 40: Parametric models have a fixed number of adaptable parameters, independent of the amount of data, while non-parametric models have a variable number of parameters, i.e. the parameters change to adapt to the amount of data. It is very hard to obtain the equivalence of a DNN to programming syntax. A general DNN encapsulates many of the programming operations mentioned above, in a declarative manner: "if-else" rules by using an activation function (see (Aytekin, 2022)), "or" rules by using a sum of weighted neurons, and layers enabling multiple steps of a program; see the sketch below. RNNs also add iterative operators. Attention is equivalent to element-wise operations, see (Weiss et al., 2021), i.e. matrix/vector types of manipulation, e.g. the element-wise matching of a matrix/vector to a matrix/vector, to perform some specific operation. Attention can also be compared to other NN forms: unlike fully-connected layers, it changes the attended parts of the input instead of accounting for the whole input all the time. This mechanism converts the fixed architecture of a NN into a dynamic one, depending on the input, similar to a hypernetwork (Galanti and Wolf, 2020), where the weights are determined by the input. It is also about finding relations: whereas convolution layers (Li et al., 2021) take into account only nearby relations, and hence need many layers to also locate distant relations, attention and fully-connected layers capture these distant relations more directly.
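As a minimal illustration of the "if-else via activation" and "or via a weighted sum of neurons" equivalences described above, here is a hedged sketch (our own, not from any cited work; the thresholds and slopes are arbitrary):

```python
import numpy as np

def relu_rule(x, threshold, slope=1.0):
    # A single ReLU "neuron" emulating an if-else rule:
    # if x > threshold: output slope * (x - threshold); else: output 0.
    return slope * np.maximum(0.0, x - threshold)

def or_of_rules(x):
    # "Or" as a weighted sum of neurons: fires if either sub-rule fires.
    return relu_rule(x, threshold=0.2) + relu_rule(x, threshold=0.8)

print(or_of_rules(np.array([0.1, 0.5, 0.9])))  # [0.  0.3 0.8]
```

Stacking such layers then corresponds to the multiple steps of a program, in line with the declarative reading above.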
Moreover, transformers might be so successful in vision and language nowadays due to this layer-after-layer self-attention, which may be equivalent to a regular algorithm that selects only specific variables from the full input set of variables to be manipulated at each phase of the calculation. ### Evolution in _AKREM_ In this section, the general learning method in _AKREM_ via episodic memory is presented. It starts with defining episodic memory. Then episodic memory's interaction with LTM through waking and sleeping periods is discussed. Then, a broader view of human development is discussed. Finally, model learning is appended to this development process. #### a.7.1 Episodic memory in _AKREM_ Here some trained DNN is assumed, containing basic concepts and actions, to be treated as a base memory, upon which an episodic memory can be constructed; see Fig. 54(a) and a similar discussion in the _"Issues with the proposed DLM"_ section in [Komarovsky, 2022c]. This episodic memory can be based on _AKREM_, storing all the generated hierarchies. The episodic memory's function is not only to store data. In our opinion, it can also be used for rearranging the concept memory to deal with multi-tasking. In other words, any new task requires the rearrangement of the whole concept memory, to cope with all previous tasks and the new one (consistency checking). This episodic memory cannot hold an unlimited amount of memories. Therefore, a forgetting mechanism is essential. It might act like a FIFO (first-in-first-out) buffer: new memories enter daily, and old, mostly un-accessed ones decay or are deleted, to keep the memory fresh and filled with relevant information, and to respect the limited memory capacity. See Fig. 54(b). Forgetting is also necessary since, in humans, it helps reasoning and understanding by forcing them to generalize and abstract [Kuhl et al., 2007]. It improves associations, relevancy, and the extraction of the right patterns in a given situation, since it differentiates data, scaled from most common to least common. Moreover, forgetting is necessary due to the no-free-lunch principle [Adam et al., 2019]: no model can do best in all tasks, i.e. there is always some trade-off due to finite constraints, where the model is better at some tasks and worse at others. Hence, adaptivity through forgetfulness allows for learning new tasks on-the-fly, thus putting all the weight of our current focus on the newly learned task. This explains why the material right after learning a course is so fresh and highly effective in preparing for exams, while after some period, due to lack of use, this material is partially forgotten and its utilization performance is reduced. Figure 54: Evolution of cognition. Finally, the base memory can consist not only of a concept memory, but also of a procedural one, because both are like tool memories, containing the basic elements from which not only events can be constructed, but also skills, recipes, or any operational memory of how things are done. #### a.7.2 Wake-Sleep Periods In such a system, learning occurs via two modes: waking and sleeping periods. During the waking period, the episodic daily memory acts like a capacitor, i.e., it is charged with events. It can also perform online learning, where a quick response is required; see Fig. 55(a). At sleep, these events are discharged into the LTM and a reorganization of the perceiving system is performed, i.e. of what is referred to as the concept and procedural memories, which also function as the LTM.
This complex and heavy learning can be regarded as offline learning; see Fig. 55(b). Figure 55: Wake-Sleep Cycles. This idea is supported by _DL_ (Liu et al., 2019) and by neuroscience, e.g. (Golden et al., 2022), where it is stated that _"Sleep helps reorganize memories and presents them in the most efficient way"_. In our opinion, in infancy most of the cognitive effort is directed to the reorganization of the base memories, and less, or not at all, to storing memories in the LTM. This is because it is well known that infancy years are not remembered in adulthood. Our hypothesis is that this is because the concepts, or the language, by which we store these memories are not yet formalized, or have not converged to be stable enough to be the same concepts as those used in adulthood. Another aspect can be deduced from the wake-sleep cycles: different regimes of learning. In the daytime we cannot process data for too long; processing must be fast and real-time, such as online learning, for survival. Whereas in a relaxed or sleep mode, the brain is free to process memorized data. The processing is of a sequence of data, which can infer cause-and-effect relationships, for example. Therefore, in survival mode we probably have access to only a few states representing perceived information in a sequence: a few previous states (\(n\)) and a few for prediction (\(m\)), due to limited resources. In relaxed mode we can use more memory and processing power, hence we may assume access to \(k>n\) past states and \(l>m\) predicted states, which can produce more complex learning and abstractions. In other words, in relaxed mode we can learn from memory reaching farther into the past (e.g. using many associations of past episodic events), and we can predict much farther into the future. See Fig. 56. Figure 56: Different modes. In summary, learning can be divided into online and offline modes. In waking periods, i.e. when sensors are active, the learning is online and minimal, since most resources are dedicated to fast response (e.g. fast optimization into local optima). However, during sleeping periods, sensors are inactive, and previous memories can be used for improving models to make overall sense, e.g. a larger time scale is used to generate causal relations in models. In such a case, it is slow processing that enables fast optimization or inference during waking periods. It could be implemented e.g. as a Neural Architecture Search (_NAS_) or a Genetic Algorithm (GA) [Binitha et al., 2012], to get out of local minima and search for a global one. #### a.7.3 Human Development As we have seen in Section A.7.1, the episodic and the base memories are interconnected; therefore the way I remember events is a function of the current concepts I have, or a function of the words constructing the event's description. That is why I do not remember my infancy years: my concept world, or the language used to describe events, has changed drastically compared to the language I use as an adult. So both the base memories and the episodic memory change frequently in infancy. The overall evolution is described as follows. The infant has no explicit guidance, only some limited feedback. Hence, most of his learning occurs in an unsupervised or even self-supervised mode. In childhood he has some basic concepts that last until adulthood, therefore he can only vaguely recall childhood events. But the concept world, or the language, still evolves in school through the various subjects that extend his concept world and change it significantly, such as Math, the Sciences, and also social interactions.
Only in adulthood can he recall exact memories, with all the details, barring natural forgetfulness. From the assumption above we can propose that sleep functions as testing the best model under continual learning, by testing it on all or recent memories in the episodic memory, to make sure it is consistent with all or most of them; see [https://medium.com/liecatcher/a-noise-model-for-why-we-sleep-and-lie-too-69bf723b3ba9](https://medium.com/liecatcher/a-noise-model-for-why-we-sleep-and-lie-too-69bf723b3ba9). This can be accomplished, for example, by sampling some of the recent memories and performing self-supervised learning. In parallel to the evolution of memories, there is also an evolution of logic. Logic is about how consistent and comprehensible the world we experience is. It is where we understand accurately how everything operates and functions in the world. More precisely, it is symbolic reasoning. This logic is underdeveloped in infancy, actually starts to evolve in the school years, and stabilizes when we reach adulthood. It may even be that in infancy the episodic memory's function is to organize data (e.g. at sleep), i.e. to act as a prediction tester, while in adulthood it is just a memory (aggregating specific events). See this development process in Fig. 57. Figure 57: Development plot. Finally, we assume that people perform unsupervised learning by default, while supervised learning is experienced only under guidance or tutoring. Also, external supervised learning is performed initially via communication, and later it can also be done via observation (such as learning from videos, books, ...). #### a.7.4 Consolidation in model learning In model learning in the _DL_ style, we start with random, uninterpretable features, and with time these features become more symbolic, perhaps representing concepts and actions. So with time the NN becomes more symbolic. This idea is presented in Figure 58. Figure 58: Evolution of cognition. The transition from an uninterpretable representation of knowledge to a symbolic one is depicted in Figure 58(a). In it we see that the combination of gradual interaction from infancy, both from the sensory and the actuator modules, leads to more consolidated and less flexible knowledge. So, we complete the developmental process of memories and logic from Fig. 57 with the inclusion of modeling development from Figure 37, resulting in a gradual evolution of interaction, starting from infancy, involving the sensory and the actuator modules. We can see a gradual extension of the infant's tools to allow more types of interaction, which mold and upgrade his models. Interestingly, we can see in Figure 58(a) that, qualitatively, most of the change in modeling occurs in infancy, a bit less in childhood, and the least in adulthood, while quantitatively (in the amount of knowledge) it is the opposite. In Figure 58(b), on the right, there is a DNN with consolidation through sparsity, either over paths (in red) and/or over classes/objects in a layer (in yellow). On the left, there is the consolidation of an algorithm or a hypothesis out of many possible ones. Sparsification in model searching is actually a manifestation of the Occam's razor principle, or the minimal description length, i.e., we look for the simplest algorithm possible; see the sketch below. This constraint could be _AGI_'s main objective.
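As a hedged illustration of the consolidation-through-sparsity idea above, here is a minimal sketch of magnitude pruning (a standard technique used here purely as an example; the sizes and threshold are arbitrary choices):

```python
import numpy as np

# Magnitude pruning: keep only "strong" connections, zeroing the rest,
# so that a dense hypothesis space collapses onto a sparse, consolidated one.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))        # weights of a dense layer
threshold = 0.8
mask = np.abs(W) >= threshold      # protect strong connections...
W_pruned = W * mask                # ...and prune the weak ones

print("fraction of connections kept:", mask.mean())
```

In a real system the pruning would be applied gradually during training, but the mask already shows the Occam's-razor effect: few strong paths survive out of many possible ones.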
As we proposed, this objective should be the search for the fastest/smallest models, in order to have a fast response in different situations, such as survival, or to use fewer steps of thought. Consequently, in the problem of combining _DL_ with a rule-based approach, we can distinguish between a soft combination, where _DL_ consolidates into a rule-based form, and a hard combination, with a learnable model structure right from the start. Interestingly, a similar consolidation mechanism was recently discovered in neuroscience [23], in what is referred to as pruning. They discovered a protection/punishment machinery that would strengthen certain connections and kick off the pruning of others, thus resulting in branches that get pruned to leave a single strong connection. ### Other implications of MOM #### a.8.1 How are logic operations such as in/de-duction possible in this framework? As stated previously, in Section 7.3, induction and deduction may be part of the learning mechanism itself. Hence, either we treat them as prior knowledge applied in the _AGI_ system, or, as a less preferred option, we treat them as just other operators; then they can belong to a "logic" class, containing all relevant admissible operations of logic classes. Hence any logic class can naturally inherit these operations by being associated with the "logic" class. However, for now, we assume these are not part of the learnable system, and consider them cognitive operators, separate from the learned knowledge operators. To implement these operations we use HOL, or \(\lambda\)-calculus, as used in semantics in NLP/NLU. It is simply the implementation of compositionality in language, i.e. where you can assign any object where there is a slot for it. For example, _"I love you"_ is simply \(\lambda x,y.love(x,y)(I,you)\). But we could make it composed: _"I love to create tools"_\(=\lambda x,y.love(x,y)(I,\lambda x.create(x)(tools))\). In our case, it is simple _OOP_, where we assign the "create" class to the \(y\) variable of the "love" class, which is actually an action class. Similarly, we could have a compositional sentence: _"I stand on a chair, that was built in a factory near my home" = stand(I, chair(origin(factory(near(home)))))._ Another example of logic is via classes and instances, e.g. _"All men are mortal. Peter Parker is a man. Therefore Peter Parker is mortal"_ is an example of temporarily updating (in WM) the "men" class to have the mortal property, then creating an instance of "men" called Peter Parker, then following his object and finding that he is mortal due to inheritance. This is deductive reasoning: from general (class) to specifics (instance); see the sketch below. Nevertheless, even though these examples demonstrate obvious inference processes, they still require a guiding will, see 3.6. There is also the opposite reasoning: inductive, from specifics to general, e.g. data: _"All firetrucks I've seen are red"_; hence the generalization: _"Most firetrucks are red"_. This is induction directly from logic, and not from raw examples. Another example is analogy tests, e.g. _"Smile : mouth :: wink : eye"_, which, as described earlier, is accomplished by active thinking: recognizing the problem - an analogy between two objects - then recognizing the source object (mouth) and smile as its representing action, then searching for the same relation in the target object (eye), thus resulting in winking.
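As a hedged sketch (our own illustration) of the class/instance deduction and the compositional assignment described above, in plain _OOP_ style:

```python
class Man:
    mortal = True            # "All men are mortal" as a class property

peter = Man()                # "Peter Parker is a man" as instantiation
print(peter.mortal)          # True: "Peter Parker is mortal" by inheritance

# Compositionality: assigning one action object into a slot of another,
# as in "I love to create tools" = love(I, create(tools)).
love = lambda x, y: f"{x} love {y}"
create = lambda x: f"to create {x}"
print(love("I", create("tools")))  # I love to create tools
```

The deduction here is exactly "from general (class) to specifics (instance)", while the lambda assignments mirror the \(\lambda\)-calculus slots of the examples above.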
More broadly, the analogy test above is an example of case-based reasoning [Fuchs et al., 2020], where a source problem is used to solve a target one, by mapping and adapting source functionalities or actions to the target problem.41 Footnote 41: Based on category theory. The logic inference rules are discussed in [Marquis et al., 2020]: * Contraposition: if A \(\rightarrow\) B then NOT B \(\rightarrow\) NOT A; * Transitivity: if A \(\rightarrow\) B and B \(\rightarrow\) C then A \(\rightarrow\) C; * Simplification of disjunctive antecedents: if (A OR A') \(\rightarrow\) B then A \(\rightarrow\) B and A' \(\rightarrow\) B. Simplification is simply the function of the "OR" action, while Transitivity is expressed via the state-space as defined previously, where actions, as vectors, can be cascaded. That is, Transitivity is simply one of the features of the vector state space (wills/actions). Contraposition is expressed when problem-solving is applied backwards. This is actually how designing works: finding the problem behind a goal, or the cause of symptoms in a health inspection. It is also referred to as abductive reasoning, which is about searching for explanations (causes) from incomplete observations (effects) and some background knowledge (such as rules). In summary, we consider any-order logic a stiff formalism and a poor imitation of actual flexible thinking. In our opinion it is a special framework, an abstract and mathematical one, that is learned in people's later years. It is not as natural and fundamental as it might have been considered so far. For example, De Morgan's laws and the conversion from first-order logic to conjunctive normal form (CNF) [Russell, 2010] are not intuitive at all. Similar are the Resolution process and other constructs in logic. In our opinion, it should be a simple mechanism, such as propagating in a network of states. #### a.8.2 Creativity _MOM_ enables creativity in two dimensions: either via applying non-admissible actions or via temporal abstractions. The well-known System 1 & 2 [Daniel, 2017], combined with our attention mechanism as the projector of will, can explain how creativity sets into action. It can be viewed through at least two processes. On the one hand, when we are contemplating about stuff while performing a simple chore like walking or driving, our attention is on the thinking process; hence it is less creative, since we use our consolidated regular logic. The less attended chore moves to the background. On the other hand, in the opposite case, when we divert our usual thinking and concentrate on a simple or usual chore, creativity pops in, when sudden ideas come to mind, like during a shower42. In our framework, this is explained again with attention. Since the simple chore is the one being focused on, the higher models are freed, this time to be handled by the subconscious. This realm is not under our logical, consolidated control; it is free to "play" with the models as it wishes, without our constrained, reality-based or physically-grounded thinking. In neuroscience, there is the default mode network (DMN) [Buckner, 2022], which is activated when we do not do anything, or during mind-wandering; it shows that some brain regions, instead of becoming less active, become more active, which may indicate that self-organization or creativity pops in, producing new connections and more. Footnote 42: It can also be explained with quantum brain theory, which claims that thinking involves quantum effects.
For example, it might just be that when we consciously search for a solution, we search classically, i.e. sequentially through an exponentially growing search space, due to wave collapse. However, when we sleep or are distracted, the search is via quantum superposition, hence an exponentially parallel search, which may therefore yield much better solutions. This could also be explained in other words. As is well known, creativity resides in the right hemisphere, where thinking is more divergent. Creativity comes into play mostly when we are looking less for patterns and categorization [Beaty et al., 2015]. That is, we can be biased, or stuck in past patterns when we recognize them, while trying to find a solution to a problem. This can happen both in System 1 & 2, where we look for the best model to resolve the current situation, as in a recognition process. But sometimes we should dismiss the familiar in order to find novel ways to solve problems [https://www.thinkwrongbook.com/](https://www.thinkwrongbook.com/). In our case, it might be about bypassing the automatic recognition mode and searching for a more appropriate solution, via a more thorough search over different models, ending up generating a new model for the given problem. Overall, we can unite will with attention by stating that will projects via attention "beacons". Hence, just as we have a main will and sub-wills, or wills with different intensities, our attention is split into parallel beams, with a main one and smaller ones. For example, in hearing a message/story, we are mostly attentive to the incoming message in sequence, but a smaller attention is applied to stitching the different parts of the story together, as if we were constructing a puzzle. Similarly, in the examples above, there is a main attention (e.g. the thinking while walking or showering) and a secondary attention (e.g. the walking itself). This splitting of attention into parallel processing is important for survival, because sometimes we have no choice but to handle several things simultaneously. For example, to handle distractions; or as selective attention between different sensors (such as hearing and seeing an object), or also within each sensor: in vision we first grasp multiple objects, then concentrate on only a few of them; or as alternating attention, switching from one thing to another, see [Abdalaziz et al., 2023] about parallel multi-tasking; or as divided attention, focusing on several things at once, such as driving and talking at the same time. However, splitting attention favors parallel non-interfering splittings over conflicting ones, such as splitting within the same modality. More generally, attention splitting can be used in any problem solving, as described above in the example of message/story comprehension, by concentrating "processing units" on the incoming objects to perform inference, filling in missing information but also making overall sense (connecting different objects). This split of attention is also supported by neuroscience: e.g. [1] reports memorizing several elements simultaneously, and [14] shows an example of attention splitting under mental load, e.g. between visual and audio sensory perception. Finally, we can argue philosophically that attention, as a top-down process revealing our will, is actually an entity separate from the physical realm that interacts with the physical brain constantly, as long as we are alive.
This idea may even find evidence in neuroscience, e.g. here: [https://iai.tv/articles/brain-scans-tell-us-nothing-about-conciousness-aud-2514](https://iai.tv/articles/brain-scans-tell-us-nothing-about-conciousness-aud-2514), describing a battle between physicalism (consciousness is in the physical brain) and idealism (the experience is separate from its partial expression in the physical mind), similar to how a person looking at me sees only my front side, but not my back. Another paper [10] can only unintentionally hint at this idea. It describes _"Cytoelectric Coupling"_, arguing that the brain's electrical fields, created by neural network activity, can influence the physical configuration of neurons' sub-cellular components to optimize network stability and efficiency. From another perspective, we can treat electrical brain waves as the closest side of the soul, and thus view the process described above as the soul influencing the brain's physical activity. #### a.8.3 Information Our _MOM_ has some basic axioms regarding how information is interpreted. ### _AGI_ additional characteristics Here are a few small comments about the _AGI_ system to be constructed. First, in our opinion, research on explainable AI (XAI) is not totally correct at its core. We believe that a DNN, by its nature, should not be explainable at all. **It is merely the vessel of intelligence, while logic and explainability are the products of intelligence.** There is no rule requiring these human outcomes to reside in the cognitive system in the first place. We advocate that these two concepts be accurately defined and separated. Moreover, as mentioned in Section 3.4, many generative models that are used for explanation, e.g. Principal Component Analysis (PCA) [1] or sparse/denoising auto-encoders [15], are themselves black-boxes. Also, most _DL_ explainability methods offer a very narrow type of explainability, e.g. merely relative feature significance. Next, we do not see the necessity for _AGI_ or any AI to be conscious. We do not comprehend why there is so much talk around this (in our opinion) insignificant topic. _AGI_ should not emulate humans at all, nor be autonomous. It should only be a data-processing machine, like a regular PC43, that comprehends natural language, learns, and accomplishes tasks. The actual meaning of things, from the human perspective, should be left to humans. Footnote 43: Personal Computer Next, _AGI_ should not be about strong computational power as it is nowadays. We see evidence for this in our own intelligence: we use paper to store and elaborate our ideas and thoughts, as well as presentations, drawings, 2D/3D (interactive) visualizations, simulations, etc. It is similar in current generative AI: the various action-GPT tools like AutoGPT, AgentGPT, BabyAGI, MetaGPT, LETI, ToolLLM, WebGLM and more, which use external applications and sources. All these are tools extending our limited capabilities, without the need to extend ourselves. Just as we use calculators to perform hard calculations, or planes to fly, _AGI_ should function similarly. It should not be fast at arithmetic calculations; rather, it should be good at thinking and solving problems, with various tools at its disposal, as it is for humans. And, similar to humans, it can create the necessary tools/technology for solving problems.
See for example [https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon](https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon). Also see [https://clivethompson.medium.com/the-weird-power-of-transactive-memory-7c7e324c9425](https://clivethompson.medium.com/the-weird-power-of-transactive-memory-7c7e324c9425) about how we extend our memory by socially distributing skills and knowledge among members of a close community. Next, as we embrace the holistic perspective for designing _AGI_, we similarly encourage implementing both self-supervised learning and RL in the same system. We argue here that model learning should encapsulate both of these regimes, since part of the modeling is learning within an interactive setup. Hence, there is no need for a reward gathered from the environment, since an internal "reward", or intrinsic motivation, is sufficient and represents the current will. Meanwhile, the feedback is gathered simply from the sensors, i.e. the system always stays open-loop, but during interactions it becomes (or can be considered) closed-loop, similar to RL. We can see these two regimes in Fig. 33. Next, we think that current meta-learning in _DL_/RL is not fit to serve as a tool for _AGI_ if it relies on shared parameters learned across tasks, because most of the skills we learn are not related to previous tasks at all. Hence, learning should be progressive. Next, our purpose in AI is to reach the most natural language model (LM) to perform tasks for us. However, reaching this goal requires a very slow and gradual process. Thus, the evolution of AI research so far can be described as follows. It started with low-level machine programming code (assembly), then moved on to high-level programming languages (e.g. C#, Python, Java), then moved to NLP LLMs such as GPT-3 and DALL-E 2 and others that use prompts, which are less program-like, but still require practice and expertise in writing the most appropriate prompts to get the desired response. Hence, our model suggests that an _AGI_ agent must be trained via human interaction, to learn the alignment of our wills from the way we express them linguistically, e.g. like a teacher demonstrating or behaving in front of his student. In other words, unlike the usual batch-learning policy customary in _DL_, where the AI agent is "shown"/bombarded with a huge amount of data, we encourage a more humane treatment - gradual and progressive learning (via online and offline modes, but with incremental learning). Additionally, for constructing the _AGI_ framework we practice both the bottom-up approach (neuroscience, mainly based on Jeff Hawkins) and the top-down approach (psychology, mainly based on Daniel Kahneman). In general, this should be the duality principle encouraged in _AGI_ research: embracing the combination of different disciplines to capture intelligence's many facets and features. Finally, as computational models (automata) [15] have shown, in order to achieve a Turing machine model [14], we must have unlimited memory and unlimited access to it. Hence the _AGI_ model should also have these capabilities. Our proposed cognitive model expresses these capabilities via the use of LTM (past knowledge) and WM (current memory).
2303.16176
The fiber of persistent homology for trees
Consider the space of continuous functions on a geometric tree $X$ whose persistent homology gives rise to a finite generic barcode $D$. We show that there are exactly as many path connected components in this space as there are merge trees whose barcode is $D$. We find that each component is homotopy equivalent to a configuration space on $X$ with specialized constraints encoded by the merge tree. For barcodes $D$ with either one or two intervals, our method also allows us to compute the homotopy type of this space of functions.
David Beers, Jacob Leygonie
2023-03-28T17:43:02Z
http://arxiv.org/abs/2303.16176v1
# The fiber of persistent homology for trees ###### Abstract Consider the space of continuous functions on a geometric tree \(X\) whose persistent homology gives rise to a finite generic barcode \(D\). We show that there are exactly as many path connected components in this space as there are merge trees whose barcode is \(D\). We find that each component is homotopy equivalent to a configuration space on \(X\) with specialized constraints encoded by the merge tree. For barcodes \(D\) with either one or two intervals, our method also allows us to compute the homotopy type of this space of functions. ## 1 Introduction ### Motivation Persistent Homology (PH) is a computable descriptor from Topological Data Analysis (TDA) which summarises complex geometric data. More precisely, the persistence map, denoted PH, takes as input a topological space \(X\) equipped with a real valued function \(f:X\to\mathbb{R}\) and returns a multiset of intervals in the real line called a _barcode_, which encodes the topological variations across the sublevel sets of \(f\). In a wide range of situations, persistent homology is robust to perturbations of the input data [4], which is one of the key reasons for its successful application to problems in data science, e.g. in neuroscience [1], material sciences [11], shape recognition [18], and machine learning [3]. Complementarily, it is natural to ask how decisively PH distinguishes distinct input functions \(f\). Equivalently, we may ask which functions give rise to the same barcode \(D=\operatorname{PH}(f)\). This inverse problem formally translates into studying the fiber \(\operatorname{PH}^{-1}(D)\) over a target barcode \(D\). The topological and geometric properties of \(\operatorname{PH}^{-1}(D)\) strongly depend on the underlying space \(X\) and on the space \(\mathcal{F}\) of functions \(f:X\to\mathbb{R}\) on which persistent homology is defined, which can be for instance the space of _filter functions_ (or one of its subspaces) when \(X\) is a simplicial or CW complex, the space of Morse functions when \(X\) is a smooth manifold, or simply of continuous functions when \(X\) is merely a topological space. For filter functions on a simplicial complex, it was observed in [17] that \(\operatorname{PH}^{-1}(D)\) has the structure of a finite polyhedral complex. This polyhedral structure was exploited in [16] to design an algorithm for computing the homology groups of \(\operatorname{PH}^{-1}(D)\), and this algorithm was demonstrated on a menagerie of small examples. When \(\mathcal{F}\) is the subspace of filter functions determined by their values on vertices, it was shown that every connected component of \(\operatorname{PH}^{-1}(D)\) is contractible when \(X\) is a simplicial decomposition of the unit interval [8], and homotopy equivalent to a circle when \(X\) is instead a simplicial decomposition of the circle [20]. Cases where \(X\) is non-discrete have also been investigated. For _Morse-like_ continuous functions on the unit interval, the number of path components of \(\operatorname{PH}^{-1}(D)\) was computed for generic barcodes [6]. For Morse functions on the 2-sphere \(\mathbb{S}^{2}\) obtained by composing an embedding of \(\mathbb{S}^{2}\) in \(\mathbb{R}^{3}\) with the vertical projection, the tools developed in [2] motivated conjectures on the number of connected components of \(\operatorname{PH}^{-1}(D)\). 
For general Morse functions on an arbitrary smooth compact manifold, it was proven in [17] that the group of diffeomorphisms of \(X\) isotopic to the identity defines an action on \(\operatorname{PH}^{-1}(D)\) which is transitive on each connected component. This allowed computing the homotopy type of path components of \(\operatorname{PH}^{-1}(D)\) for Morse functions on 1-dimensional and 2-dimensional oriented manifolds. However, the tools developed in the above literature do not adapt easily to continuous functions on a topological space \(X\) that is not a manifold. In [20], it was observed that when \(X\) is a star-like tree and \(D\) is the specific barcode that has only one finite interval, then the path connected components of \(\mathrm{PH}^{-1}(D)\) are wedges of circles. In this work, we analyze \(\mathrm{PH}^{-1}(D)\) in the case of an arbitrary generic barcode \(D\), for continuous functions on an arbitrary geometric tree. The case of a tree is of particular interest as it is frequently encountered in applications of persistent homology to neuroscience, e.g. for analyzing neuronal morphologies [13, 15] and brain functionalities [1]. In fact, a few other related inverse problems for topological descriptors on a tree have already been studied. For instance, statistical and algorithmic inverses of the Topological Morphology Descriptor (TMD) have been described in [7, 14]. Another example is the study of the realization problem for barcodes of functions on a tree, which has been investigated in [12, 19]. ### Contributions and outline of contents In this work we study the case when \(X\) is the geometric realization of a tree (geometric tree for short), \(\mathcal{F}\) is the space of continuous functions on \(X\), and \(D\) is a finite generic barcode. For this reason, \(X\) denotes any geometric tree for the remainder of the introduction. Our analysis relies upon the fact that for functions \(f\) on \(X\), the persistence map factors in the following way: \[f\;\longmapsto\;T=\operatorname{MT}(f)\;\longmapsto\;D=\operatorname{PH}(f).\] Here, the intermediate object \(T\) is a topological space called the _merge tree_ of \(f\), which describes how the connected components of the sublevel sets \(f^{-1}(-\infty,t]\) appear and join together as \(t\) varies. Hence, to characterize the fiber of persistent homology in this setting, we can instead characterize the space of functions with a given merge tree and the space of merge trees that map to \(D\). The main contributions of this work are: * In Theorem 8, we provide sufficient conditions for a merge tree derived from a function \(f\) on a topological space to have a cellular structure. * In Theorem 17, we show that \(\mathrm{MT}^{-1}(T)\) is homotopy equivalent to a constrained version of the configuration space of \(n\) points on \(X\), denoted \(\mathrm{Conf}(X,T)\), where the points must satisfy additional constraints determined by \(T\). * In Theorem 22, we show that \(\mathrm{Conf}(X,T)\), and hence \(\mathrm{MT}^{-1}(T)\), is path connected when \(X\) has a branch point. We deduce a 1-1 correspondence between path connected components in the fiber \(\mathrm{PH}^{-1}(D)\) and non-isomorphic merge trees with barcode \(D\). * We derive two important consequences of the above results for when \(X\) has at least one branch point: (i) in Corollary 32, we find a lower bound on the distance between the path connected components in \(\mathrm{PH}^{-1}(D)\), and (ii) in Corollary 33, we count the number of such components using existing work on merge trees [6, 14]. The paper is organised as follows.
In Section 2 we formally define the notions of trees, geometric trees, and merge trees. Additionally, we define the notion of a cellular merge tree, a merge tree equipped with a suitable cellular structure. We also formally define persistent homology, describe the relationship between the local minima of a function and its zero dimensional barcode, and detail how the persistence map factors for functions on geometric trees. In Section 3, we show that a function on a compact connected space has a cellular merge tree if and only if it has finitely many local minima. Then we define the interleaving distance between merge trees and show that it is a true metric on the subspace of cellular merge trees. Section 4 is devoted to providing necessary and sufficient conditions for when a function on a geometric tree \(X\) has a given cellular merge tree. In Section 5 we define the space \(\mathrm{Conf}(X,T)\) and a few other intermediary configuration spaces constrained by rules determined by \(T\). By a series of consecutive homotopy equivalences between these configuration spaces, the section culminates in a proof of Theorem 17, showing that \(\mathrm{MT}^{-1}(T)\), the space of continuous functions on \(X\) with merge tree \(T\), is homotopy equivalent to \(\mathrm{Conf}(X,T)\). Section 6 exploits this homotopy equivalence to deduce topological properties of \(\mathrm{MT}^{-1}(T)\) and \(\mathrm{PH}^{-1}(D)\) for generic barcodes \(D\). The main result of this section, Theorem 22, says that \(\mathrm{Conf}(X,T)\) and hence \(\mathrm{MT}^{-1}(T)\) are connected when \(X\) has at least one branch point, i.e. \(X\) is not homeomorphic to an interval. This then allows us to provide a lower bound on the distance between any two path connected components in \(\mathrm{PH}^{-1}(D)\) (Corollary 32), which depends only on the barcode \(D\). In addition, combining Theorem 22 with existing work enumerating the number of merge trees with a given barcode [6, 14], we deduce in Corollary 33 that \[\#\pi_{0}(\mathrm{PH}^{-1}(D))=\prod_{[b,d)\in D}\#\big\{[b^{\prime},d^{\prime})\in D\mid[b,d)\subset[b^{\prime},d^{\prime})\big\}.\] We conclude by computing the homotopy type of \(\mathrm{PH}^{-1}(D)\) via \(\mathrm{Conf}(X,T)\) when \(D\) has either one or two intervals. When \(D\) has one interval, we deduce that \(\mathrm{PH}^{-1}(D)\) is contractible (Corollary 34). When \(D\) has two intervals, Corollary 35 shows that \(\mathrm{PH}^{-1}(D)\) is homotopic to a wedge of \[-1+\sum_{v\in N(X)}(\eta(v)-1)(\eta(v)-2)\] circles, where \(N(X)\) is the set of vertices in any triangulation of \(X\), and \(\eta(v)\) is the degree of vertex \(v\). ## 2 Background ### Trees, merge trees and cellular merge trees A _tree_ is a finite connected acyclic graph. It is _binary_ if each vertex is the endpoint of at most 3 edges. The _geometric realization_ of a tree \(T\) is a topological space given by a copy of the interval \([0,1]\) for each edge in \(T\) with pairs of endpoints quotiented whenever they correspond to the same vertex of \(T\). A _geometric tree_ is the geometric realization of a tree. It is well known that between any two points on a tree there is a unique non-self-intersecting path. We refer to this path as the _shortest path_ between the two points. Indeed, any other path connecting two given points contains the shortest path in its image.
When \(T\) is geometric, the discussion extends to disjoint closed connected nonempty subsets \(A,B\subseteq T\): there is a unique shortest path \(\mathrm{ShortPath}(A,B)\) connecting them. It follows that a subset \(S\subseteq T\) of a geometric tree is path-connected if and only if it is connected. Namely, if \(S\) is not path connected, then there are points \(a\) and \(b\) in \(S\) such that the shortest path between them is not contained in \(S\). Removing from \(T\) a point of this shortest path that is not in \(S\), and letting \(U_{1}\) be the resulting connected component containing \(a\) and \(U_{2}\) the union of the remaining components, we induce a disjoint open cover of \(S\). We define the _convex hull_ of a collection \(\mathcal{C}\) of closed subsets, denoted \(\mathrm{Conv}(\mathcal{C})\), as the union of points on shortest paths between elements of sets in \(\mathcal{C}\). Clearly, convex hulls are connected. A rooted tree \((T,r)\) is a tree \(T\) with a distinguished vertex \(r\). A _leaf_ in a rooted tree is a vertex not equal to \(r\) adjacent to exactly one other vertex. A _branch point_ is a vertex adjacent to three or more vertices. If the root \(r\) is adjacent to two or more vertices, then we say that \(r\) is a branch point as well. A choice of root induces an orientation on the edges of any tree \(T\) by the following procedure. We start by directing edges of \(T\) adjacent to \(r\) away from \(r\). Inductively, if an edge between \(v\) and \(v^{\prime}\) has not yet been oriented but an edge incident to \(v\) has been oriented, we orient the edge between \(v\) and \(v^{\prime}\) from \(v\) to \(v^{\prime}\). Whenever there is a directed edge from \(v\) to \(v^{\prime}\) we say that \(v^{\prime}\) is a _child_ of \(v\). In a rooted tree, we say that a vertex \(v^{\prime}\) is a _descendant_ of \(v\) if there is a directed path from \(v\) to \(v^{\prime}\), where potentially \(v=v^{\prime}\). For rooted trees, we denote by \(\mathrm{LCA}(v,v^{\prime})\) the least common ancestor of \(v\) and \(v^{\prime}\). Next, we introduce merge trees. We will use two distinct definitions of merge trees from the literature, which are both instances of _gauged spaces_: **Definition 1**.: _A **gauged space** is a topological space \(X\) equipped with a continuous map \(\pi:X\to\mathbb{R}\)._ _A **morphism** between two gauged spaces \((X_{1},\pi_{1})\) and \((X_{2},\pi_{2})\) is a continuous map \(\phi:X_{1}\to X_{2}\) satisfying \(\pi_{1}=\pi_{2}\circ\phi\). An **isomorphism** of gauged spaces is a morphism that is also a homeomorphism._ A continuous function \(f:X\to\mathbb{R}\) yields a merge tree as defined in [21], which is an instance of a gauged space: **Definition 2**.: _For a topological space \(X\) with a continuous function \(f\), the associated **merge tree**\(\mathrm{MT}(f)\) is the quotient of the space_ \[\mathrm{epi}(f):=\{(x,t)\in X\times\mathbb{R}:t\geq f(x)\}\] _by the relation \((x,t)\sim(y,t)\) whenever \(x\) and \(y\) are in the same connected component of \(f^{-1}(-\infty,t]\)._ Since merge trees inherit a map \(\pi_{f}\) from the second coordinate projection map on \(\mathrm{epi}(f)\), they are naturally viewed as gauged spaces. We illustrate the construction of a merge tree in Figure 1. The other definition of merge trees that we will use appears for example in [6]. We provide an analogous definition here, along with the notion of a labeling from [10].
**Definition 3**.: _A **cellular merge tree**\((T,\pi)\) is the quotient space of a geometric rooted tree \((T^{\prime},r)\) and a half open interval:_ \[T^{\prime}\sqcup[0,1)/(r\sim 0),\] _equipped with a real-valued map \(\pi\) satisfying_ * \(\pi\) _is strictly decreasing along edges in_ \(T^{\prime}\) _oriented from the root_ \(r\)_._ * \(\pi\) _is strictly increasing to infinity along the half open interval_ \([0,1)\)_._ The nodes of a merge tree \(T\) are endowed with a partial order \(\preceq\) where \(v\preceq v^{\prime}\) whenever \(v\) is a descendant of \(v^{\prime}\). Given a cellular merge tree \((T,f)\), the map from \(x\in f^{-1}(t)\) to the connected component of \(f^{-1}(-\infty,t]\) containing \(x\) is a bijection. Further, if there is a path from \(x\) to \(y\) in \(T\) along which \(f\) is increasing, then for \(t\leq f(y)\), the connected component containing \(x\) in \(f^{-1}(-\infty,t]\) is contained in the connected component containing \(y\) in \(f^{-1}(-\infty,f(y)]\). From this it follows that \(T=\mathrm{MT}(f)\) and \(f=\pi_{f}\). Therefore, through the continuous injection \((T,f)\mapsto(\mathrm{MT}(f),\pi_{f})\), cellular merge trees form a subspace of regular merge trees. For notational convenience we will sometimes refer to the subset \((0,1)\) of the half open interval of a cellular merge tree as \(e_{\infty}\). For indexing convenience, we will often work with _labelled_ cellular merge trees, for which an arbitrary ordering of the leaves \(l_{1},\cdots,l_{n}\) and of the nodes \(v_{1},\cdots,v_{m}\) has been fixed. Although by default we assume that these orderings do not allow repetitions of the leaves and of the nodes, we will sometimes explicitly allow repetitions to make use of more general definitions and results from [10]. For instance, the following definition allows labels with repetitions: **Definition 4**.: _The **induced matrix** of a labelled cellular merge tree \((T,\pi)\) is given by_ \[\mathcal{M}(T)_{ij}:=\pi(\mathrm{LCA}(l_{i},l_{j})).\] To simplify notations, when the context is clear, we will write \(\mathrm{MT}(f)\) given a function \(f\) to designate the gauged space \((\mathrm{MT}(f),\pi_{f})\), and similarly \(T\) to designate the cellular merge tree \((T,\pi)\). Figure 1: The construction of a merge tree from a function. (a) The graph of a function \(f\) on an interval. (b) Shaded is the content of \(\mathrm{epi}(f)\) strictly above the graph of \(f\). (c) By sending connected components of horizontal slices to points we obtain \(\mathrm{MT}(f)\). It happens that this merge tree has a cellular structure, although in general this may not be the case. ### Persistent homology Fix a topological space \(X\) and a continuous function \(f:X\to\mathbb{R}\). The function \(f\) gives rise to a sequence of topological spaces \(f^{-1}(-\infty,t]\), nested by inclusion maps. Applying \(i^{\text{th}}\) homology over a field \(\mathbb{F}\) to the sequence of spaces induces a sequence of vector spaces indexed by \(\mathbb{R}\). This sequence \(\mathbb{V}_{i}(f)\) of vector spaces is called the \(i^{\text{th}}\) _persistence module_ of \(f\). The persistence modules of \(f\) can also be thought of as functors from \((\mathbb{R},\leq)\) to the category of vector spaces. If \(\mathbb{V}_{i}(f)\) is _pointwise finite dimensional_ (_pfd_ for short), i.e.
\(\dim H_{i}(f^{-1}(-\infty,t])<\infty\) for all \(t\), then the \(i^{\text{th}}\) persistence module decomposes into a direct sum of modules [5] indexed by a multiset \(D\), \[\mathbb{V}_{i}(f)\cong\bigoplus_{I\in D}M_{I}, \tag{1}\] where each \(I\subseteq\mathbb{R}\) is an interval of the real line, and \(M_{I}\) is defined to be the sequence of vector spaces \[M_{I}(t)=\begin{cases}\mathbb{F}&t\in I\\ 0&\text{else},\end{cases}\] with associated maps \[M_{I}(s,t)=\begin{cases}id&s,t\in I\\ 0&\text{else}.\end{cases}\] The sequences of vector spaces \(M_{I}\) are called interval modules. The multiset of intervals \(D\) is called the _barcode_ in dimension \(i\) of \(f\). We say that \(D\) is _finite_ if it is a finite collection of intervals. We say that a function \(f\) is _pfd_ if all its persistence modules \(\mathbb{V}_{i}(f)\) are pfd themselves. In this case, the collection of barcodes \(\{\text{PH}_{i}(f)\}_{i\geq 0}\) associated to a function \(f\), abbreviated \(\text{PH}(f)\), is well-defined and referred to as its _persistent homology_. ### Persistent homology and local minima In this section we show some relations between the zero dimensional barcode \(\text{PH}_{0}(f)\) and the number of local minima of a function \(f\). **Definition 5**.: _Given a topological space \(X\) and a map \(f:X\to\mathbb{R}\), a subset \(M\subseteq X\) is a **local minimum of \(f\)** if \(M\) is connected, \(f\) is constant on \(M\), and any connected \(M^{\prime}\) strictly containing \(M\) also contains a point \(x\) satisfying \(f(x)>f(M)\)._ Note that if \(f\) is continuous then its local minima are each closed. Hence, if \(X\) is also compact, then the local minima of \(f\) are compact. In particular the local minima are compact when \(f\) is continuous and \(X\) is a geometric tree. **Lemma 6**.: _Let \(X\) be any topological space, \(D\) be a finite barcode, and \(f:X\to\mathbb{R}\) with \(\text{PH}_{0}(f)=D\). Then \(f\) has finitely many local minima._ Proof.: Let \(M\) be a local minimum of \(f\), and \(m=f(M)\). Assume, seeking contradiction, that no interval of \(D\) starts at \(m\). Then we can find a range \([m-\epsilon,m]\) where no interval of \(D\) starts. Using the decomposition (1) we see that the internal morphism \(\mathbb{V}_{0}(f)(m-\epsilon)\to\mathbb{V}_{0}(f)(m)\) is surjective. Note that \(M\subseteq f^{-1}(-\infty,m]\) is connected in \(X\) and hence is a disjoint union of path connected subspaces. Pick one of these subspaces \(M_{0}\). Therefore there is a path-connected component \(\Omega_{m-\epsilon}\) of \(f^{-1}(-\infty,m-\epsilon]\) which lies in the same path-connected component as \(M_{0}\) in \(f^{-1}(-\infty,m]\). Given a path \(\gamma\) from \(\Omega_{m-\epsilon}\) to \(M_{0}\) in \(f^{-1}(-\infty,m]\), the set \(M^{\prime}=M\cup\operatorname{im}\gamma\) contradicts that \(M\) is a local minimum. The same reasoning, working locally around local minima, shows that there are at least as many intervals in \(D\) starting at \(m\) as there are local minima with value \(m\). Since \(D\) is finite, this implies that \(f\) has finitely many local minima. The following result also ensures that the barcode of a continuous function on a tree with finitely many local minima is well-defined. **Lemma 7**.: _Let \(X\) be a tree and \(f:X\to\mathbb{R}\) be a continuous function with finitely many local minima. Then \(f\) is pfd._ Proof.: Let \(t\in\mathbb{R}\). Let \(\Omega\) be a path-connected component of \(f^{-1}(-\infty,t]\). Consider the following alternative: 1.
Either \(f\) has constant value \(t\) over \(\Omega\). Then \(M:=\Omega\) is a local minimum of \(f\), because any connected strict superset \(M^{\prime}\) will be included in some \(f^{-1}(-\infty,t^{\prime}]\) with \(t^{\prime}>t\), but not in \(f^{-1}(-\infty,t]\). 2. Or \(f\) attains its minimum \(t^{\prime}<t\) over \(\Omega\) on some maximal connected subset \(M\subseteq\Omega\). Then \(M\) is also a local minimum of \(f\) in the whole \(X\). In both cases, we can find a local minimum of \(f\) inside \(\Omega\), and since \(f\) has finitely many local minima, we deduce that \(f^{-1}(-\infty,t]\) has finitely many path-connected components, i.e. \(\dim H_{0}(f^{-1}(-\infty,t])<+\infty\). This is because a subset of a geometric tree is connected if and only if it is path connected. Finally, since \(X\) is a tree, its subsets are component-wise contractible, hence \(\dim H_{i}(f^{-1}(-\infty,t])=0\) for all \(i>0\) and the result follows. ### The fiber of persistent homology on a tree In this section we assume that \(X\) is a tree. Then all of its subsets are component-wise contractible, so its \(i^{\text{th}}\) persistence modules are trivial for all \(i>0\). Hence, we refer to the zero dimensional barcode of \(f\) simply as \(\operatorname{PH}(f)\), the persistent homology of \(f\). In this paper, we study the space of pfd continuous functions \(f:X\to\mathbb{R}\) giving rise to a fixed barcode \(D\): \[\operatorname{PH}^{-1}(D):=\bigg\{f:X\to\mathbb{R}\text{ pfd continuous}\mid \operatorname{PH}(f)=D\bigg\}.\] We consider the topology on \(\operatorname{PH}^{-1}(D)\) induced by the supremum norm on continuous functions. **Remark 1**.: _We can also consider this inverse problem more generally in the space of all continuous functions by working directly at the level of persistence modules: studying the space of continuous functions \(f:X\to\mathbb{R}\) satisfying \(\mathbb{V}_{0}(f)\cong\bigoplus_{I\in D}M_{I}\). Our analysis could be conducted in this setting without substantial modifications, in particular because we will assume \(D\) to be finite. But to keep the exposition simple, this work assumes functions are pfd so that their barcodes are always defined._ As observed in [21], the zero dimensional persistent homology of \((X,f)\), for \(f:X\to\mathbb{R}\) a pfd function, is also the zero dimensional persistent homology of \((\operatorname{MT}(f),\pi_{f})\). Indeed, the dimension of \(H_{0}(f^{-1}(-\infty,t])\) is exactly the number of path components of \(f^{-1}(-\infty,t]\), which, being a subset of a tree, is the number of connected components of the same set. This is exactly \(|\pi_{f}^{-1}(t)|\). Moreover, the set \(\pi_{f}^{-1}(-\infty,t]\) retracts onto \(\pi_{f}^{-1}(t)\) via the homotopy \[h_{u}:(x,s)\longmapsto(x,s(1-u)+tu).\] Hence for each \(t\) the map \(x\mapsto(x,t)\) induces pointwise isomorphisms between \(H_{0}(f^{-1}(-\infty,t])\) and \(H_{0}(\pi_{f}^{-1}(-\infty,t])\). Further, these isomorphisms commute with inclusions arising from inequalities \(s\leq t\). Hence the persistence modules of \((X,f)\) are completely determined by \(\operatorname{MT}(f)\). In other words the map \(\operatorname{PH}\) factors as a composite of maps \[f\;\longmapsto\;\operatorname{MT}(f)\;\longmapsto\;\operatorname{PH}(f).\] We thus refer to the _persistent homology_ on merge trees as the second of these maps, and its value on a given merge tree \(\operatorname{MT}(f)\) as the _barcode_ of \(\operatorname{MT}(f)\).
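As a concrete illustration of this composite map, here is a hedged sketch (our own, not an algorithm from this paper) computing the zero dimensional barcode of a piecewise-linear function on a geometric tree, discretized by vertex values and edges, via the standard union-find/elder-rule procedure:

```python
def barcode_0(values, edges):
    # values[i]: function value at vertex i; edges: pairs of adjacent vertices.
    n = len(values)
    parent = list(range(n))
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    def find(i):                       # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    order = sorted(range(n), key=lambda i: values[i])
    rank = {v: k for k, v in enumerate(order)}
    birth, bars = {}, []
    for v in order:                    # sweep the sublevel sets bottom-up
        roots = {find(u) for u in adj[v] if rank[u] < rank[v]}
        if not roots:
            birth[v] = values[v]       # v starts a new component (a local min)
        else:
            oldest = min(roots, key=lambda r: birth[r])
            for r in roots - {oldest}:
                bars.append((birth[r], values[v]))  # elder rule: younger dies
                parent[r] = oldest
            parent[v] = oldest
    bars.append((min(values), float("inf")))        # the essential bar
    return bars

# A path (the simplest geometric tree) with two local minima:
print(barcode_0([0.0, 2.0, 1.0], [(0, 1), (1, 2)]))  # [(1.0, 2.0), (0.0, inf)]
```

The sweep implicitly builds the merge tree (components and their merges) and reads the barcode off it, mirroring the factorization above.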
These observations naturally organise the problem of computing the fiber \(\operatorname{PH}^{-1}(D)\) into two consecutive steps: we will first study which functions have a given merge tree, and second, which merge trees have a given barcode. ## 3 Tree structure and metric for merge trees ### When merge trees are trees It is tempting to assume that \((\operatorname{MT}(f),\pi_{f})\) is always a cellular merge tree; however, this is not the case, even when \(X\) is very simple. **Example**.: _If \(X=(-\infty,0]\) and \(f(x)=e^{x}\), \(\mathrm{MT}(f)\) is an interval with two open endpoints. This cannot be a cellular merge tree since cellular merge trees have at most one open endpoint._ The following result gives conditions under which \(\mathrm{MT}(f)\) is indeed a cellular merge tree. **Theorem 8**.: _Let \(X\) be a compact connected space and \(f:X\to\mathbb{R}\) be a continuous function. Then \((\mathrm{MT}(f),\pi_{f})\) is a cellular merge tree if and only if \(f\) has finitely many local minima. More precisely, \((\mathrm{MT}(f),\pi_{f})\) is isomorphic to the labelled cellular merge tree \((T,\pi)\) with leaves \(l_{1},\ldots,l_{n}\) if and only if the following conditions on \(f\) are satisfied:_ 1. _The function_ \(f\) _has finitely many local minima_ \(X_{1},\cdots,X_{n}\)_, with values_ \(\pi(l_{1}),\cdots,\pi(l_{n})\)_._ 2. _For any_ \(1\leq i<j\leq n\)_, let_ \(t_{ij}\) _denote the infimum of values_ \(t\) _where_ \(X_{i}\) _and_ \(X_{j}\) _are in the same connected component of_ \(f^{-1}(-\infty,t]\)_, i.e._ \(t_{ij}=\inf\{t\mid(X_{i},t)\sim(X_{j},t)\}\)_. Then_ \[t_{ij}=\mathcal{M}(T)_{ij}.\] We fix the function \(f:X\to\mathbb{R}\) and the cellular merge tree \((T,\pi)\). We will use two lemmas. **Lemma 9**.: _Let \((y,t)\in\mathrm{MT}(f)\). There exists a local minimum \(M\) of \(f\) such that \((x,t)\) is also a representative of \((y,t)\) for any \(x\) in \(M\). In particular, \(M\) can be chosen such that \(f(M)\leq f(y)\). As a result, for any \(t\), there are at most as many connected components of \(f^{-1}(-\infty,t]\) as local minima of \(f\)._ Proof.: Fix \(y\) and \(t\). Let \(\Omega_{y}\) denote the connected component of \(y\) in \(f^{-1}(-\infty,t]\). The set \(\Omega_{y}\) is a closed subset of a compact set, and therefore is compact. Let \(m:=\min f_{|\Omega_{y}}\), and let \(M\subseteq\Omega_{y}\) be a connected component of \(f^{-1}(m)\cap\Omega_{y}\). We claim \(M\) is a local minimum of \(f\) in \(X\). Suppose \(M^{\prime}\subseteq X\) is a connected set containing \(M\) on which \(f\) is never greater than \(m\). Since \(M^{\prime}\subseteq f^{-1}(-\infty,m]\subseteq f^{-1}(-\infty,t]\), we have \(M^{\prime}\subseteq\Omega_{y}\), and therefore \(M^{\prime}\subseteq f^{-1}(m)\) because \(m=\min f_{|\Omega_{y}}\). Thus \(M=M^{\prime}\). It follows that \(M\) is a local minimum of \(f\) in \(X\). Meanwhile, since \(M\) minimizes \(f\) on \(\Omega_{y}\), it must be the case that \(f(M)\leq f(y)\). To each connected component \(\Omega_{y}\subseteq f^{-1}(-\infty,t]\), we associate one of its local minima \(M\subseteq\Omega_{y}\), and the last part of the lemma follows. **Lemma 10**.: _For any \(1\leq i<j\leq n\), local minima \(X_{i}\) and \(X_{j}\) are connected in \(f^{-1}(-\infty,t_{ij}]\)._ Proof.: Suppose the opposite. Let \(\Omega_{i}\) be the connected component of \(X_{i}\) in \(f^{-1}(-\infty,t_{ij}]\). By Lemma 9 there are finitely many connected components in \(f^{-1}(-\infty,t_{ij}]\), so \(\Omega_{i}\) is both open and closed.
Thus \(\Omega_{i}\) and \(\Omega_{j}:=f^{-1}(-\infty,t_{ij}]-\Omega_{i}\) are disjoint sets in \(X\) that are both open and closed. In particular, \(\Omega_{j}\) is nonempty as it contains \(X_{j}\). Since \(\Omega_{i}\) and \(\Omega_{j}\) are each open, \(X-\Omega_{i}\cup\Omega_{j}\) is closed and thus compact in \(X\). If \(X-\Omega_{i}\cup\Omega_{j}\) is empty, this would imply \(X=\Omega_{i}\cup\Omega_{j}\) is disconnected, contradicting the hypotheses of the theorem. Hence \(X-\Omega_{i}\cup\Omega_{j}\) is nonempty and we may let \(m:=\min f_{|X-\Omega_{i}\cup\Omega_{j}}\). The value \(m\) is greater than \(t_{ij}\) since \(f^{-1}(-\infty,t_{ij}]\) is contained in \(\Omega_{i}\cup\Omega_{j}\). Therefore \(X=f^{-1}(-\infty,t_{ij}]\cup f^{-1}[m,+\infty)\) is disconnected, contradicting the hypotheses of the theorem. We next turn to the heart of the proof and find the actual tree structure of the merge tree \(\mathrm{MT}(f)\). Proof of Theorem 8.: Let \(x_{1},\ldots,x_{n}\) be points in each of the local minima of \(f\), achieving the values \(m_{1}:=\pi(l_{1}),\cdots,m_{n}:=\pi(l_{n})\) under \(f\), and let: \[T:=\bigg{(}\bigcup_{i=1}^{n}\{x_{i}\}\times[m_{i},+\infty)\bigg{)}/\bigg{\{}(x_{i},t)\sim(x_{j},t)\mid t\geq t_{ij}\bigg{\}}.\] So \(T\) is a disjoint union of the \(n\) right-open intervals \(\{x_{i}\}\times[m_{i},+\infty)\), and the \(i\)-th interval is identified with the \(j\)-th one at and beyond the threshold \(t_{ij}\). In particular, \(T\) is a tree. We have a continuous map \((x_{i},t)\in T\mapsto(x_{i},t)\in\mathrm{MT}(f)\) which is well-defined by Lemma 10. We also have a continuous map in the other direction, namely \((x,t)\mapsto(x_{i},t)\) where \(x_{i}\) is provided by Lemma 9 to ensure \((x,t)\sim(x_{i},t)\) in \(\mathrm{epi}(f)\). Therefore \(T\) and \(\mathrm{MT}(f)\) are isomorphic. **Remark 2**.: _If we defined merge trees using the equivalence relation \((x,t)\sim(y,t)\) whenever \(x\) and \(y\) are in the same path component of \(f^{-1}(-\infty,t]\) instead of the same connected component, Theorem 8 would not hold. For a counterexample, consider the so-called topologist's sine curve_ \[S=\{(0,t)\in\mathbb{R}^{2}:t\in[-1,1]\}\cup\{(x,\sin\frac{1}{x})\in\mathbb{R}^{2}:x\in(0,1]\}.\] _It is well known that \(S\) is closed and connected but has two path components. Taking \(B\) to be a closed disk covering \(S\) and \(f:B\to\mathbb{R}\) the Euclidean distance from \(S\), we see that \(f^{-1}(-\infty,0]=S\) is not path-connected. Thus \(\mathrm{MT}(f)\) cannot be a cellular merge tree as it is not Hausdorff: any neighborhood of either point \(p\) with \(\pi_{f}(p)=0\) contains both such points._ **Remark 3**.: _Under the conditions of Theorem 8, which imply that \(\mathrm{MT}(f)\) is a cellular merge tree, a node \(v\) is a descendant of a node \(v^{\prime}\) if and only if, when viewed as connected components of \(f^{-1}(-\infty,\pi(v)]\) and \(f^{-1}(-\infty,\pi(v^{\prime})]\) respectively, \(v\) is a subset of \(v^{\prime}\)._ ### Interleaving distance on cellular merge trees In this section we analyze the interleaving distance, a pseudo-distance on merge trees [21, Lemma 1]. We show that it is in fact a genuine distance on the subspace of cellular merge trees. Aside from the map \(\pi_{f}\), merge trees also come equipped with \(\epsilon\)-_shift_ maps, for \(\epsilon\geq 0\): \[i^{\epsilon}_{f}:(x,t)\in\mathrm{MT}(f)\longmapsto(x,t+\epsilon)\in\mathrm{MT}(f).\] We will often omit subscripts, writing \(i^{\epsilon}_{f}\) as \(i^{\epsilon}\).
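The shift maps are well defined because sublevel sets are nested; explicitly (a one-line check, ours): \[(x,t)\sim(y,t)\;\Longrightarrow\;x,y\text{ lie in one component of }f^{-1}(-\infty,t]\subseteq f^{-1}(-\infty,t+\epsilon]\;\Longrightarrow\;(x,t+\epsilon)\sim(y,t+\epsilon).\]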
**Definition 11**.: _Let \(f\) and \(g\) be two continuous functions on \(X\). An \(\epsilon\)**-interleaving** between \(\mathrm{MT}(f)\) and \(\mathrm{MT}(g)\) is a pair of continuous functions \(\alpha^{\epsilon}:\mathrm{MT}(f)\to\mathrm{MT}(g)\), \(\beta^{\epsilon}:\mathrm{MT}(g)\to\mathrm{MT}(f)\) satisfying the following equations_ \[\beta^{\epsilon}\circ\alpha^{\epsilon}=i^{2\epsilon},\qquad\pi_{g}(\alpha^{\epsilon}(x))=\pi_{f}(x)+\epsilon,\] \[\alpha^{\epsilon}\circ\beta^{\epsilon}=i^{2\epsilon},\qquad\pi_{f}(\beta^{\epsilon}(y))=\pi_{g}(y)+\epsilon.\] _The **interleaving distance**\(d_{I}(\mathrm{MT}(f),\mathrm{MT}(g))\) is defined as the infimum of values \(\epsilon\) such that \(\mathrm{MT}(f)\) and \(\mathrm{MT}(g)\) are \(\epsilon\)-interleaved._ We illustrate an interleaving between two merge trees in Figure 2. **Proposition 12**.: _The interleaving distance is a metric on the subspace of cellular merge trees (up to isomorphism)._ Figure 2: An interleaving between two merge trees. Proof.: By [21, Lemma 1] the interleaving distance is an extended pseudometric on cellular merge trees, so it remains to show that the interleaving distance is real-valued on cellular merge trees and that two cellular merge trees \((T_{1},\pi_{1})\) and \((T_{2},\pi_{2})\) are isomorphic if they have interleaving distance zero. By Corollary 4.4 of [10] (which in turn depends upon [22, Theorem 1]), there exist two labellings (possibly with repetitions) of \(T_{1}\) and \(T_{2}\), with the same number \(N\) of leaf indices, such that: \[d_{I}(T_{1},T_{2})=\max_{1\leq i,j\leq N}|\mathcal{M}(T_{1})_{ij}-\mathcal{M}(T_{2})_{ij}|.\] Therefore \(d_{I}\) is real-valued. In addition, by Lemma 2.9 of [10], if \(\mathcal{M}(T_{1})=\mathcal{M}(T_{2})\) then \(T_{1}\cong T_{2}\), and so \(d_{I}\) is a metric on cellular merge trees up to isomorphism. **Proposition 13**.: _Let \(X\) be a compact connected topological space, and let \(f,g:X\to\mathbb{R}\) be continuous pfd functions with the same finite barcode \(D\) in dimension zero. Let \(\delta_{L}\) be the minimum distance between pairs of non-equal interval left endpoints in \(D\) and \(\delta_{R}\) be the minimum distance between pairs of non-equal right endpoints. If_ \[\|f-g\|_{\infty}<\min(\delta_{L},\delta_{R}),\] _then \(\mathrm{MT}(f)\) and \(\mathrm{MT}(g)\) are isomorphic._ Proof.: Suppose \((T,\pi)\) is a cellular merge tree with barcode \(D\). Then for any branch point \(v\) in \(T\), the map \(H_{0}(\pi^{-1}(-\infty,\pi(v)-\epsilon])\to H_{0}(\pi^{-1}(-\infty,\pi(v)])\) must have a nontrivial kernel for all sufficiently small \(\epsilon\). Therefore, \(\pi(v)\) must be the right endpoint of an interval of \(D\). Similarly, for any leaf \(l\) in \(T\), the map \(H_{0}(\pi^{-1}(-\infty,\pi(l)-\epsilon])\to H_{0}(\pi^{-1}(-\infty,\pi(l)])\) does not have full image for all sufficiently small \(\epsilon\), and so \(\pi(l)\) must be the left endpoint of an interval of \(D\). By Lemma 6 and Theorem 8, \(\mathrm{MT}(f)\) and \(\mathrm{MT}(g)\) are both cellular.
By Corollary 4.4 of [10] (which in turn depends upon [22, Theorem 1]), there exist two labellings (possibly with repetitions) of the cellular merge trees \(\mathrm{MT}(f)\) and \(\mathrm{MT}(g)\), with the same number \(N\) of leaf indices, such that: \[d_{I}(\mathrm{MT}(f),\mathrm{MT}(g))=\max_{1\leq i,j\leq N}|\mathcal{M}(\mathrm{MT}(f))_{ij}-\mathcal{M}(\mathrm{MT}(g))_{ij}|.\] The diagonal entries of \(\mathcal{M}(\mathrm{MT}(f))\) are projected values of leaves of \(\mathrm{MT}(f)\), and non-diagonal entries are projected values of branch points. Similarly for \(\mathcal{M}(\mathrm{MT}(g))\). However, from the first paragraph, the projected values of leaves and branch points of \(\mathrm{MT}(f)\) and \(\mathrm{MT}(g)\) are the values of left and right interval endpoints of \(D\) respectively. Thus if we assume \(d_{I}(\mathrm{MT}(f),\mathrm{MT}(g))\) is positive, then in fact, by our above equation, \[d_{I}(\mathrm{MT}(f),\mathrm{MT}(g))\geq\min(\delta_{L},\delta_{R}).\] Hence the stability theorem for the interleaving distance [21, Theorem 2] gives us that \[\min(\delta_{L},\delta_{R})\leq d_{I}(\mathrm{MT}(f),\mathrm{MT}(g))\leq\|f-g\|_{\infty}<\min(\delta_{L},\delta_{R}),\] a contradiction. So \(d_{I}(\mathrm{MT}(f),\mathrm{MT}(g))=0\). Since \(\mathrm{MT}(f)\) and \(\mathrm{MT}(g)\) are both cellular, Proposition 12 implies that \(\mathrm{MT}(f)\) and \(\mathrm{MT}(g)\) are isomorphic. ## 4 Functions on a tree with a given merge tree In this section we work with cellular merge trees whose branch points are non-degenerate: **Definition 14**.: _A cellular merge tree \((T,\pi)\) is generic if it is a binary tree (each internal node has two children), and all leaves have distinct projection values._ In particular if \((T,\pi)\) has \(n\) leaves then it has \((n-1)\) internal nodes. For the rest of the section we fix a generic labelled cellular merge tree \(T\), and for each internal node \(v\in T\), we fix an arbitrary labelling \(Lv\) and \(Rv\) of its two children. In particular \(Lv\preceq v\), where as a reminder this notation means that \(Lv\) is a descendant of \(v\), and likewise \(Rv\preceq v\). For \(1\leq i\leq n\) we denote by \(m_{i}:=\pi(l_{i})\) the value of a leaf. Let \(X\) be a geometric tree. **Proposition 15**.: _Let \(f:X\to\mathbb{R}\) be a continuous function with finitely many local minima. Then \(\mathrm{MT}(f)\) is isomorphic to \(T\) if and only if both the following conditions are satisfied:_ 1. _The function_ \(f\) _has_ \(n\) _local minima_ \(X_{1},\cdots,X_{n}\) _with values_ \(m_{1},\cdots,m_{n}\)_._ 2. _For any_ \(i\neq j\)_, the maximum of_ \(f\) _restricted on_ \(\operatorname{ShortPath}(X_{i},X_{j})\) _equals_ \(\mathcal{M}(T)_{ij}\)_._ Proof.: By Theorem 8, \(\operatorname{MT}(f)\) is a cellular merge tree, and it is isomorphic to \(T\) if and only if: 1. The function \(f\) has \(n\) local minima \(X_{1},\cdots,X_{n}\) with value \(m_{1},\cdots,m_{n}\). 2. For any \(1\leq i<j\leq n\), denoting \(t_{ij}=\inf\{t\mid(X_{i},t)\sim(X_{j},t)\}\), we have \(t_{ij}=\mathcal{M}(T)_{ij}\). Furthermore by Lemma 10 we can replace the infimum by a minimum in the definition of \(t_{ij}\). Since \(X\) is a geometric tree it is then clear that \[\min\{t\mid X_{i}\text{ and }X_{j}\text{ are connected in }f^{-1}(-\infty,t]\}=\max f_{| \operatorname{ShortPath}(X_{i},X_{j})}\qed\] We arrive at our most useful characterisation of functions with a given merge tree. **Proposition 16**.: _Let \(f:X\to\mathbb{R}\) be a continuous function with finitely many local minima.
Then \(\operatorname{MT}(f)\) is isomorphic to \(T\) if and only if both the following conditions are satisfied:_ 1. _The function_ \(f\) _has_ \(n\) _local minima_ \(X_{1},\cdots,X_{n}\) _with values_ \(m_{1},\cdots,m_{n}\)_._ 2. _Given a node_ \(v\) _(possibly a leaf), let_ \(X_{f}=(X_{1},\ldots,X_{n})\) _and_ \[\operatorname{Conv}_{X_{f}}(v):=\operatorname{Conv}\biggl{\{}X_{i}\mid\text{ leaf }l_{i}\text{ is a descendent of }v\text{ in }\operatorname{MT}(f)\biggr{\}}\subseteq X.\] _Then, for any_ \(1\leq k\leq n-1\)_, we have:_ \[\max\big{\{}f(x)\mid x\in\operatorname{ShortPath}(\operatorname{Conv}_{X_{f}}(Lv_{k}),\operatorname{Conv}_{X_{f}}(Rv_{k}))\big{\}}=\pi(v_{k}),\] (2) _and the maximum is attained at a unique connected subset_ \(Y_{k}\) _of the shortest path._ Proof.: Let \(f\) satisfy the conditions of the statement. The first is condition 1 of Proposition 15, and an induction on nodes of \(T\) in increasing order of \(\pi\)-value immediately yields condition 2 as well, so \(\operatorname{MT}(f)\cong T\). Conversely, condition 2 in Proposition 15 is equivalent to \[\forall v_{k}\in T,\forall l_{i}\preceq Lv_{k}\text{ and }l_{j}\preceq Rv_{k}\text{, }\max f_{|\operatorname{ShortPath}(X_{i},X_{j})}=\pi(v_{k}).\] From this, an immediate induction yields that, for each node \(v_{k}\in T\), the value \(\max f_{|\operatorname{Conv}_{X_{f}}(v_{k})}\) equals \(\pi(v_{k})\) and is attained on \(\operatorname{ShortPath}(\operatorname{Conv}_{X_{f}}(Lv_{k}),\operatorname{Conv}_{X_{f}}(Rv_{k}))\). Assume, seeking contradiction, that the maximum \(\pi(v_{k})\) is attained at two distinct connected subsets \(Y_{1}\) and \(Y_{2}\) of the shortest path. Then, inside \(f^{-1}(-\infty,t]\) for \(t<\pi(v_{k})\), the connected component of elements between \(Y_{1}\) and \(Y_{2}\) is distinct from that of elements of \(\operatorname{Conv}_{X_{f}}(Lv_{k})\) and \(\operatorname{Conv}_{X_{f}}(Rv_{k})\), and at \(t=\pi(v_{k})\) we thus have three or more connected components of the sublevel-sets of \(f\) which are identified, contradicting that \(T\) is generic. ## 5 Retraction of the fiber to configuration space on a tree Let \(X\) be a geometric tree. We metrize spaces of functions on \(X\) via the supremum norm. In this section we fix a generic labelled cellular merge tree \(T\) and we analyze the subspace of functions \(f\) in the fiber: \[\operatorname{MT}^{-1}(T)=\biggl{\{}f:X\to\mathbb{R}\text{, }\operatorname{MT}(f)=T\biggr{\}}.\] We assume without loss of generality that \(T\) has only branch points and leaves as nodes. We will simplify \(\operatorname{MT}^{-1}(T)\) by means of a series of homotopy equivalences \[\operatorname{MT}^{-1}(T)\xrightarrow{\sim}\operatorname{Conf}_{\operatorname{Crit}}(X,T)\xrightarrow{\sim}\operatorname{Conf}_{\operatorname{Min}}(X,T)\xrightarrow{\sim}\operatorname{Conf}(X,T),\] where the spaces \(\operatorname{Conf}_{\operatorname{Crit}}(X,T)\), \(\operatorname{Conf}_{\operatorname{Min}}(X,T)\), and \(\operatorname{Conf}(X,T)\) are configuration spaces tracking the local minima and saddles of a function \(f\in\operatorname{MT}^{-1}(T)\), detailed hereafter. Consider \(\tilde{X}=(X_{1},\ldots,X_{n})\subseteq X^{n}\). Motivated by the definition we made in Proposition 16, for a node \(v\) of \(T\), possibly a leaf, we define: \[\text{Conv}_{\tilde{X}}(v):=\text{Conv}\Big{\{}X_{i}\mid\text{ leaf }l_{i}\text{ is a descendent of }v\Big{\}}\subseteq X.\] We define \(\text{Conv}_{x}(v)\) for \(x=(x_{1},\ldots,x_{n})\in X^{n}\) similarly.
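For intuition, on a finite combinatorial model of a geometric tree the convex hull of a set of vertices is the union of the pairwise paths between them (equivalently, the smallest subtree containing them). The following minimal Python sketch computes it; the adjacency-list encoding and all names are our own, not from the source.

```python
from itertools import combinations

def path(adj, a, b):
    """Unique simple path between vertices a and b in a tree, via DFS."""
    stack, parent = [a], {a: None}
    while stack:
        u = stack.pop()
        if u == b:
            break
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                stack.append(w)
    out, u = [], b
    while u is not None:
        out.append(u)
        u = parent[u]
    return out[::-1]

def conv(adj, pts):
    """Convex hull of a vertex set in a tree: union of pairwise paths."""
    hull = set(pts)
    for a, b in combinations(pts, 2):
        hull.update(path(adj, a, b))
    return hull

# A star with center 0 and three legs of length 2.
adj = {0: [1, 3, 5], 1: [0, 2], 2: [1], 3: [0, 4], 4: [3], 5: [0, 6], 6: [5]}
print(conv(adj, {2, 4}))   # {0, 1, 2, 3, 4}: the path through the center
```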
We illustrate this construction in Figure 3. Denote the usual ordered configuration space on \(n\) points by \(\text{Conf}_{n}(X)\). The space \(\text{Conf}(X,T)\) is given by \[\text{Conf}(X,T):=\Big{\{}x=(x_{1},\cdots,x_{n})\in\text{Conf}_{n}(X)\mid\text{Conv}_{x}(v)\cap\text{Conv}_{x}(v^{\prime})\neq\emptyset\Rightarrow v\preceq v^{\prime}\text{ or }v^{\prime}\preceq v\Big{\}}.\] A configuration \((x_{1},\cdots,x_{n})\in\text{Conf}(X,T)\) should be thought of as the points where a function \(f\) with merge tree \(T\) achieves its local minima \(m_{1},\cdots,m_{n}\). Because in general a function \(f\) with merge tree \(T\) could achieve its minima on arbitrary closed sets rather than points, it is natural to extend \(\text{Conf}(X,T)\) to the following configuration space of closed sets: \[\text{Conf}_{\text{Min}}(X,T):= \bigg{\{}\tilde{X}=(X_{1},\cdots,X_{n})\subseteq X^{n}\text{ disjoint connected closed sets }|\] \[\text{Conv}_{\tilde{X}}(v)\cap\text{Conv}_{\tilde{X}}(v^{\prime})\neq\emptyset\Rightarrow v\preceq v^{\prime}\text{ or }v^{\prime}\preceq v\bigg{\}}.\] Note that a set configuration \(\tilde{X}=(X_{1},\cdots,X_{n})\in\text{Conf}_{\text{Min}}(X,T)\) induces convex hulls \(\text{Conv}_{\tilde{X}}(v)\) for any node \(v\in T\). For \(v\) an internal node with children \(Lv\) and \(Rv\), we will also consider the following subset of \(X\): \[\text{ShortPath}_{\tilde{X}}(v):=\text{ShortPath}\big{(}\text{Conv}_{\tilde{X}}(Lv),\text{Conv}_{\tilde{X}}(Rv)\big{)}.\] We are now ready to introduce our last configuration space \(\text{Conf}_{\text{Crit}}(X,T)\) of closed sets where we also record saddles of a function \(g\in\text{MT}^{-1}(T)\): \[\text{Conf}_{\text{Crit}}(X,T):= \bigg{\{}(\tilde{X},\tilde{Y})=(X_{1},\cdots,X_{n},Y_{1},\cdots,Y_{n-1})\subseteq X^{2n-1}\text{ disjoint connected closed sets }|\] \[(X_{1},\cdots,X_{n})\in\text{Conf}_{\text{Min}}(X,T),\] \[\forall j,\,Y_{j}\cong[0,1],\,Y_{j}\text{ subset of the interior of }\text{ShortPath}_{\tilde{X}}(v_{j})\bigg{\}}.\] Figure 3: (Left) A cellular merge tree \(T\) with four leaves. (Right) An element \(x=(x_{1},x_{2},x_{3},x_{4})\) of \(\text{Conf}_{4}(X)\), for \(X\) a tree. Here, the set \(\text{Conv}_{x}(v)\) associated to a node \(v\in T\) is highlighted. Note that the Hausdorff distance inherited from the ground metric on \(X\) induces topologies on configuration spaces of closed sets, hence subset topologies for our spaces \(\operatorname{Conf}_{\operatorname{Crit}}(X,T),\operatorname{Conf}_{\operatorname{Min}}(X,T)\) and \(\operatorname{Conf}(X,T)\). **Theorem 17**.: _The spaces \(\operatorname{MT}^{-1}(T),\operatorname{Conf}_{\operatorname{Crit}}(X,T),\operatorname{Conf}_{\operatorname{Min}}(X,T)\), and \(\operatorname{Conf}(X,T)\) are homotopy equivalent._ The proof of Theorem 17 decomposes as the construction of three consecutive homotopy equivalences: \[\operatorname{MT}^{-1}(T)\xrightarrow{\sim}\operatorname{Conf}_{\operatorname{Crit}}(X,T)\xrightarrow{\sim}\operatorname{Conf}_{\operatorname{Min}}(X,T)\xrightarrow{\sim}\operatorname{Conf}(X,T).\] Proof.: For the proof we will fix a metric \(d\) on \(X\). **Step 1:**\(\operatorname{Conf}_{\operatorname{Min}}(X,T)\xrightarrow{\sim}\operatorname{Conf}(X,T)\) We choose an arbitrary leaf \(\tau\in X\) as the root of the tree, hence for any connected closed subset \(A\subseteq X\), the point \(\tau(A)\) which is closest to the root is uniquely defined.
We can continuously contract \(A\) to \(\tau(A)\) with a family \((A_{t})_{0\leq t\leq 1}\) of nested closed subsets: \[A_{t}:=\big{\{}x\in A\mid d(x,\tau(A))\leq(1-t)\mathrm{diam}(A)\big{\}}.\] Given a configuration of connected closed sets \((X_{1},\cdots,X_{n})\), the continuous contraction of each \(X_{i}\) to \(\tau(X_{i})\) defines a deformation retract of \(\operatorname{Conf}_{\operatorname{Min}}(X,T)\) to \(\operatorname{Conf}(X,T)\): \[\operatorname{H}:\big{(}t,(X_{1},\cdots,X_{n})\big{)}\in[0,1]\times\operatorname{Conf}_{\operatorname{Min}}(X,T)\longmapsto((X_{1})_{t},\cdots,(X_{n})_{t})\in\operatorname{Conf}_{\operatorname{Min}}(X,T)\] Indeed, under this map, any \(\tilde{X}\in\operatorname{Conf}_{\operatorname{Min}}(X,T)\) is mapped to an element \(\operatorname{H}(t,\tilde{X})\) that satisfies the condition for being in \(\operatorname{Conf}_{\operatorname{Min}}(X,T)\): for any nodes \(v,v^{\prime}\in T\), if \(\operatorname{Conv}_{\operatorname{H}(t,\tilde{X})}(v)\cap\operatorname{Conv}_{\operatorname{H}(t,\tilde{X})}(v^{\prime})\neq\emptyset\), since \(\operatorname{Conv}_{\operatorname{H}(t,\tilde{X})}(v)\subseteq\operatorname{Conv}_{\tilde{X}}(v)\) and \(\operatorname{Conv}_{\operatorname{H}(t,\tilde{X})}(v^{\prime})\subseteq\operatorname{Conv}_{\tilde{X}}(v^{\prime})\), then we also have \(\operatorname{Conv}_{\tilde{X}}(v)\cap\operatorname{Conv}_{\tilde{X}}(v^{\prime})\neq\emptyset\), and so \(v\preceq v^{\prime}\) or \(v^{\prime}\preceq v\), as desired. **Step 2:**\(\operatorname{Conf}_{\operatorname{Crit}}(X,T)\xrightarrow{\sim}\operatorname{Conf}_{\operatorname{Min}}(X,T)\) We continue to use the root \(\tau\in X\). Let \(\tilde{X}=(X_{1},\cdots,X_{n})\in\operatorname{Conf}_{\operatorname{Min}}(X,T)\) be a configuration of closed sets. Given \(1\leq j\leq(n-1)\), we have that \(\operatorname{ShortPath}_{\tilde{X}}(v_{j})\) is a closed segment \([a_{j},b_{j}]\) in \(X\), where by convention the endpoint \(a_{j}\) is the closest to the root \(\tau\). Let \((s,t)\) be an element of the open standard simplex \(\Delta_{2}:=\{(s,t)\mid 0<s\leq t<1\}\). Define \[Y_{j}^{s,t}:=\bigg{\{}x\in[a_{j},b_{j}]\mid s\leq\frac{d(a_{j},x)}{d(a_{j},b_{j})}\leq t\bigg{\}}.\] Note that, for varying \(1\leq j\leq(n-1)\), the shortest paths \(\operatorname{ShortPath}_{\tilde{X}}(v_{j})\) are disjoint from each other. Indeed, let's assume, seeking contradiction, that \(\operatorname{ShortPath}_{\tilde{X}}(v)\) intersects \(\operatorname{ShortPath}_{\tilde{X}}(v^{\prime})\) for some distinct nodes \(v,v^{\prime}\in T\). We then have \(\operatorname{Conv}_{\tilde{X}}(v)\cap\operatorname{Conv}_{\tilde{X}}(v^{\prime})\neq\emptyset\), and since \(\tilde{X}\in\operatorname{Conf}_{\operatorname{Min}}(X,T)\), we can assume without loss of generality that \(v\) is a descendant of \(v^{\prime}\). But then \(\operatorname{Conv}_{\tilde{X}}(v)\) is disjoint from \(\operatorname{ShortPath}_{\tilde{X}}(v^{\prime})\), and since \(\operatorname{ShortPath}_{\tilde{X}}(v)\subseteq\operatorname{Conv}_{\tilde{X}}(v)\), we reach the contradiction \(\operatorname{ShortPath}_{\tilde{X}}(v)\cap\operatorname{ShortPath}_{\tilde{X}}(v^{\prime})=\emptyset\).
Therefore the sets \(Y_{j}^{s_{j},t_{j}}\) are disjoint from each other and from the sets \(X_{i}\), and we have the homeomorphism: \[[(X_{i})_{i=1}^{n},(s_{j},t_{j})_{j=1}^{n-1}]\in\operatorname{Conf}_{\operatorname{Min}}(X,T)\times(\Delta_{2})^{n-1}\longmapsto[(X_{i})_{i=1}^{n},(Y_{j}^{s_{j},t_{j}})_{j=1}^{n-1}]\in\operatorname{Conf}_{\operatorname{Crit}}(X,T).\] The deformation retract of each copy of \(\Delta_{2}\) to a point gives us the homotopy equivalence from \(\operatorname{Conf}_{\operatorname{Crit}}(X,T)\) to \(\operatorname{Conf}_{\operatorname{Min}}(X,T)\). **Step 3:**\(\operatorname{MT}^{-1}(T)\xrightarrow{\sim}\operatorname{Conf}_{\operatorname{Crit}}(X,T)\) To show that there is a homotopy equivalence between these two spaces, we will need to define a map which sends a function \(f\) to its local minima \(X_{i}(f)\) and saddles \(Y_{j}(f)\), see Figure 4. Let \(f\in\operatorname{MT}^{-1}(T)\). For \(1\leq i\leq n\), let \(X_{i}(f)\) denote the connected subset of \(X\) where \(f\) achieves the minimum \(m_{i}\) corresponding to leaf \(l_{i}\) of \(T\). The map \(f\mapsto X_{i}(f)\) is continuous. Let \(v,v^{\prime}\in T\) be two nodes such that \(A:=\operatorname{Conv}_{\tilde{X}}(v)\cap\operatorname{Conv}_{\tilde{X}}(v^{\prime})\neq\emptyset\). We assume without loss of generality that \(\pi(v)\leq\pi(v^{\prime})\) and show that \(v\) is a descendant of \(v^{\prime}\), i.e. \(v\preceq v^{\prime}\). The node \(v\), viewed as a connected component in \(f^{-1}(-\infty,\pi(v)]\), contains \(A\). Since the connected component of \(f^{-1}(-\infty,\pi(v^{\prime})]\) represented by \(v^{\prime}\) also contains \(A\), viewing nodes once again as connected components of sublevel sets, \(v\) is a subset of \(v^{\prime}\). So \(v\preceq v^{\prime}\) (see Remark 3). Therefore \(\tilde{X}:=(X_{1}(f),\dots,X_{n}(f))\in\operatorname{Conf}_{\operatorname{Min}}(X,T)\). By Proposition 16, for \(1\leq j\leq n-1\), the restriction of \(f\) to \(\operatorname{ShortPath}_{\tilde{X}}(v_{j})\) attains its maximum \(\pi(v_{j})\) on a unique connected closed set \(Y_{j}(f)\). Since local minima \(X_{i}(f)\) vary continuously with \(f\), so do the convex hulls \(\operatorname{Conv}_{\tilde{X}}(v_{j})\) and the shortest paths \(\operatorname{ShortPath}_{\tilde{X}}(v_{j})\) between them, and therefore the maps \(f\mapsto Y_{j}(f)\) are continuous, and so we have defined a continuous map: \[F:f\in\operatorname{MT}^{-1}(T)\longmapsto\big{(}X_{1}(f),\cdots,X_{n}(f),Y_{1}(f),\cdots,Y_{n-1}(f)\big{)}\in\operatorname{Conf}_{\operatorname{Crit}}(X,T).\] To show that \(F\) is a homotopy equivalence, we define a map in the other direction. Let \(Z=(\tilde{X},\tilde{Y})=(X_{1},\cdots,X_{n},Y_{1},\cdots,Y_{n-1})\in\operatorname{Conf}_{\operatorname{Crit}}(X,T)\). We construct a function \(f_{Z}\) by induction on the nodes of \(T\). To begin with, we define \(f_{Z}(X_{i}):=m_{i}\). Next, let \(v_{j}\in T\) be a node such that \(f_{Z}\) is already defined on \(\operatorname{Conv}_{\tilde{X}}(Lv_{j})\) and \(\operatorname{Conv}_{\tilde{X}}(Rv_{j})\). We extend \(f_{Z}\) on \[\operatorname{Conv}_{\tilde{X}}(v_{j})=\operatorname{Conv}_{\tilde{X}}(Lv_{j})\cup\operatorname{Conv}_{\tilde{X}}(Rv_{j})\cup\operatorname{ShortPath}_{\tilde{X}}(v_{j}),\] by letting \(f_{Z}(Y_{j}):=\pi(v_{j})\) and by linear interpolation on the rest of \(\operatorname{ShortPath}_{\tilde{X}}(v_{j})\).
At the end of this process, \(f_{Z}\) is defined on the convex hull of all the \(X_{i}\), outside of which we let \(f_{Z}\) increase in all directions: \[\forall x\in X\setminus\operatorname{Conv}(X_{1},\cdots,X_{n}),\qquad f_{Z}(x):=f_{Z}(\operatorname{proj}_{\operatorname{Conv}(X_{1},\cdots,X_{n})}(x))+d(x,\operatorname{Conv}(X_{1},\cdots,X_{n})).\] By Proposition 16, \(\operatorname{MT}(f_{Z})=T\) as desired. This gives us a continuous map: \[G:Z\in\operatorname{Conf}_{\operatorname{Crit}}(X,T)\longmapsto f_{Z}\in\operatorname{MT}^{-1}(T),\] and clearly \(F\circ G=\operatorname{Id}\). Conversely, by Proposition 16, the straight-line interpolation \[(t,f)\in[0,1]\times\operatorname{MT}^{-1}(T)\longmapsto(1-t)f+tG\circ F(f)\] is valued in \(\operatorname{MT}^{-1}(T)\), hence it defines a homotopy equivalence \(G\circ F\sim\operatorname{Id}\). Figure 4: The local minima and saddles of a function on a tree. Directions of arrows on the depicted tree \(X\) indicate where the function is increasing. The three red regions indicate the locations of local minima \(X_{i}(f)\) while the two blue regions indicate the locations of saddles \(Y_{j}(f)\). Saddles do not need to be local maxima: in this example, one saddle is a local maximum while the other is not. ## 6 The topology of functions on a tree with a given barcode Let \(X\) be a geometric tree. In this section we fix a barcode \(D\) and analyze the space \(\operatorname{PH}^{-1}(D)\) of continuous pfd functions with barcode \(D\). For \(f\in\operatorname{PH}^{-1}(D)\) we let \(\operatorname{PH}_{f}^{-1}(D)\subseteq\operatorname{PH}^{-1}(D)\) be the connected component of the fiber that contains \(f\). In the rest of the section we assume that \(D\) is generic in the following sense: **Definition 18**.: _A barcode \(D\) is generic if it is finite and all its interval endpoints are distinct._ Computing the fiber \(\operatorname{PH}^{-1}(D)\) reduces to computing the fibers of the two maps in the composite \[\big{\{}\text{continuous pfd functions on }X\big{\}}\xrightarrow{\ \operatorname{MT}\ }\big{\{}\text{merge trees}\big{\}}\longrightarrow\big{\{}\text{barcodes}\big{\}}.\] The fiber of the second map is known from [6]: the number of merge trees giving rise to a generic barcode \(D\) is finite and computed in [6, Theorem 4.8]. Therefore it remains to analyze functions with a given merge tree \(T\). We first identify \(\operatorname{MT}^{-1}(T)\) with a connected component of \(\operatorname{PH}^{-1}(D)\) in subsection 6.1, and then, in subsection 6.2, we derive some topological properties of these components. Before we begin we first record the following two useful properties. **Proposition 19**.: _If \(D\) is a generic barcode and \(f\in\operatorname{PH}^{-1}(D)\), then the merge tree \(\operatorname{MT}(f)\) is a generic cellular merge tree._ Proof.: Since \(D\) is finite, by Lemma 6, \(f\) has finitely many local minima, and in turn \((\operatorname{MT}(f),\pi_{f})\) is cellular by Theorem 8. Since \(\operatorname{PH}(\pi_{f})=\operatorname{PH}(f)\) has distinct interval endpoints, no more than two connected subsets can merge at a time in the sublevel-sets of \(\pi_{f}\), hence \(\operatorname{MT}(f)\) is a binary tree. Similarly, no two leaves of \(\operatorname{MT}(f)\) can have the same value of \(\pi_{f}\), as this would force \(\operatorname{PH}(f)\) to have a repeated left endpoint.
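Before moving on, here is a small illustration of the count from [6, Theorem 4.8] quoted above (the barcode is our own example, restated as Corollary 33 below). Consider the generic barcode \[D=\big\{[0,\infty),\,[1,5),\,[2,4)\big\}.\] The bar \([1,5)\) is strictly contained only in \([0,\infty)\), while \([2,4)\) is strictly contained in both \([0,\infty)\) and \([1,5)\); the two bounded bars therefore contribute factors \(1\) and \(2\), so there are \(1\cdot 2=2\) merge trees with barcode \(D\), corresponding to the two choices of older branch into which the component born at \(2\) can merge at time \(4\).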
**Proposition 20**.: _Given a geometric tree \(X\neq\emptyset\) and a cellular merge tree \(T\), the fiber \(\operatorname{MT}^{-1}(T)\) and the configuration space \(\operatorname{Conf}(X,T)\) are non-empty._ Proof.: A function on the unit interval (hence more generally on any non-empty tree \(X\)) with merge tree \(T\) can be constructed e.g. as in [6, Proposition 6.8]. Therefore \(\operatorname{MT}^{-1}(T)\neq\emptyset\), and by Theorem 17, \(\operatorname{Conf}(X,T)\neq\emptyset\) as well. ### Counting connected components in the fiber over binary trees Recall that the barcode \(D\) is generic. To avoid trivial cases where \(\operatorname{PH}^{-1}(D)=\emptyset\), we further assume that \(D\) has no interval in degree greater than \(0\), and only one unbounded interval \([b,\infty)\) in degree \(0\) that contains all other intervals. **Proposition 21**.: _Given \(T\) a generic cellular merge tree with barcode \(D\), \(\operatorname{MT}^{-1}(T)\) is a non-empty union of connected components in \(\operatorname{PH}^{-1}(D)\)._ Proof.: \(\operatorname{MT}^{-1}(T)\neq\emptyset\) (Proposition 20) and \(\operatorname{MT}\) is locally constant on \(\operatorname{PH}^{-1}(D)\) (Proposition 13). **Theorem 22**.: _Let \(X\) be a tree not homeomorphic to the unit interval. Given a generic cellular merge tree \(T\), the fiber \(\operatorname{MT}^{-1}(T)\) is nonempty and path-connected. In particular, if \(f\in\operatorname{MT}^{-1}(T)\) has barcode \(D\), then \(\operatorname{MT}^{-1}(T)\) equals \(\operatorname{PH}_{f}^{-1}(D)\), the path connected component of the fiber \(\operatorname{PH}^{-1}(D)\) containing \(f\)._ The proof mainly relies on the following result. **Proposition 23**.: _Let \(X\) be a tree not homeomorphic to the unit interval. Let \(T\) be a generic cellular merge tree. Then \(\operatorname{Conf}(X,T)\) is path-connected._ Before proving the proposition, let us see how it leads to the theorem. Proof of Theorem 22.: From Proposition 20, \(\operatorname{MT}^{-1}(T)\neq\emptyset\), and from Theorem 17, it is homotopy equivalent to \(\operatorname{Conf}(X,T)\), which is path-connected by Proposition 23. If \(T\) has barcode \(D\), by Proposition 21, \(\operatorname{MT}^{-1}(T)\) is a non-empty union of connected components of \(\operatorname{PH}^{-1}(D)\). Therefore it equals exactly one such connected component. To prove that \(\operatorname{Conf}(X,T)\) is path-connected, we proceed in two steps, each relying on a key lemma. First, we show how to deform a configuration of points into one whose points all lie on a common edge of \(X\). The main result to achieve this step is Lemma 26. Then, given two configurations whose points lie on an edge, we show how to connect them using a branch point of \(X\). We achieve this step with Lemma 31. Throughout, we fix a labelling of the cellular merge tree \(T\). The following two results, Lemma 24 and Lemma 25, will be used repeatedly during our argument. **Lemma 24**.: _Let \(x=(x_{1},\ldots,x_{n})\in\operatorname{Conf}(X,T)\), and \(x^{\prime}=(x_{1},\ldots,x_{i-1},y,x_{i+1},\ldots,x_{n})\in\operatorname{Conf}_ {n}(X)\). Fix \(P\), the image of some path from \(x_{i}\) to \(y\). If \(\operatorname{Conv}_{x}(v)\) does not intersect \(P\) for any \(v\) that is not an ancestor of \(l_{i}\), then \(x^{\prime}\in\operatorname{Conf}(X,T)\) and there is a path from \(x\) to \(x^{\prime}\) in \(\operatorname{Conf}(X,T)\)._ Proof.: For \(p\in P\), let \(x(p)=(x_{1},\ldots,x_{i-1},p,x_{i+1},\ldots,x_{n})\). 
Let \(v,v^{\prime}\) be nodes of \(T\) with neither \(v\preceq v^{\prime}\) nor \(v^{\prime}\preceq v\). Therefore it cannot be the case that both \(v\) and \(v^{\prime}\) are ancestors of \(l_{i}\) as ancestors of \(l_{i}\) are totally ordered. If neither \(v\) nor \(v^{\prime}\) are ancestors of \(l_{i}\) then \[\operatorname{Conv}_{x(p)}(v)\cap\operatorname{Conv}_{x(p)}(v^{\prime})=\operatorname{Conv}_{x}(v)\cap\operatorname{Conv}_{x}(v^{\prime})=\emptyset,\] with the final equality following from the fact that \(x\in\operatorname{Conf}(X,T)\). Lastly, consider the case where either \(v\) or \(v^{\prime}\) is an ancestor of \(l_{i}\), but not both. Without loss of generality assume \(v^{\prime}\) is an ancestor of \(l_{i}\) and \(v\) is not. Note that \[\operatorname{Conv}_{x(p)}(v^{\prime})\subseteq\operatorname{Conv}_{x}(v^{\prime})\cup\operatorname{ShortPath}(x_{i},p)\subseteq\operatorname{Conv}_{x}(v^{\prime})\cup P.\] Thus, since \(\operatorname{Conv}_{x}(v)\cap\operatorname{Conv}_{x}(v^{\prime})=\emptyset\), we have \[\operatorname{Conv}_{x(p)}(v)\cap\operatorname{Conv}_{x(p)}(v^{\prime})\subseteq\operatorname{Conv}_{x}(v)\cap\left(\operatorname{Conv}_{x}(v^{\prime})\cup P\right)=\operatorname{Conv}_{x}(v)\cap P=\emptyset.\] Thus the criteria for \(x(p)\) to be an element of \(\operatorname{Conf}(X,T)\) are satisfied for all \(p\in P\). The result follows. **Lemma 25**.: _Let \(b\) be a branch point or leaf in \(X\) with incident edge \(e\), and \(x=(x_{1},\ldots,x_{n})\in\operatorname{Conf}(X,T)\). If \(x_{i}=b\) for some \(i\), then sufficiently short paths from \(x_{i}\) to points \(y\) in the interior of \(e\) define paths from \(x\) to \((x_{1},\ldots,x_{i-1},y,x_{i+1},\ldots,x_{n})\) in \(\operatorname{Conf}(X,T)\)._ Proof.: Let \(y\) be sufficiently close to \(x_{i}=b\) that there is no \(x_{j}\) with \(j\neq i\), and no additional branch point, in the shortest path from \(y\) to \(x_{i}\). Denote this path by \(P\) and let \(x^{\prime}=(x_{1},\ldots,x_{i-1},y,x_{i+1},\ldots,x_{n})\). For \(v\in T\) a node not ancestral to \(l_{i}\), the set \(\operatorname{Conv}_{x}(v)\) does not intersect \(P\), as doing so would mean it would contain \(x_{i}\). The result follows by Lemma 24. **Lemma 26**.: _Fix any \(x=(x_{1},\ldots,x_{n})\in\operatorname{Conf}(X,T)\). There is an edge \(e\) in \(X\) such that there is a path from \(x\) to \(x^{\prime}=(x^{\prime}_{1},\ldots,x^{\prime}_{n})\) in \(\operatorname{Conf}(X,T)\) where every \(x^{\prime}_{i}\) is in the interior of \(e\)._ Proof.: Using Lemma 25, we assume without loss of generality that no point of \(x\) is on a branch point or leaf of \(X\). Let \(e\in X\) be an edge with some but not all entries of \(x\) in its interior. We will build a path to an \(x^{\prime}\) with one more entry in \(e\), giving us the lemma by induction. The construction of the path is illustrated in Figure 5. By hypothesis, there is an entry of \(x\) in at least one of the two path components of \(X\) minus the interior of \(e\). Let \(b\) denote the endpoint of \(e\) in this path component, which we will call \(C\). Let \(x_{j}\) denote the entry of \(x\) in the interior of \(e\) that is closest to \(b\). We will construct a path from \(b\) to some \(x_{i}\in C\) and show that this path lifts to a path in \(\operatorname{Conf}(X,T)\), using Lemma 24. Let \(v_{0}\in T\) be the least ancestor of \(l_{j}\) such that \(\operatorname{Conv}_{x}(v_{0})\) contains \(b\).
Thus \(v_{0}\) is not \(l_{j}\) itself, so \(v_{0}\) has two children, \(v_{1}\) and \(v^{\prime}_{1}\), exactly one of which is an ancestor of \(l_{j}\). Without loss of generality, suppose \(v^{\prime}_{1}\) is an ancestor of \(l_{j}\) and \(v_{1}\) is not. Let \(P\) denote the shortest path from \(b\) to \(\operatorname{Conv}_{x}(v_{1})\), and let \(z_{1}\) denote the endpoint of \(P\) contained in \(\operatorname{Conv}_{x}(v_{1})\). Since both \(b\) and \(z_{1}\) are in \(\operatorname{Conv}_{x}(v_{0})\), so is \(P\). If \(b\) is in \(\operatorname{Conv}_{x}(v_{1})\), then \(P\) consists of the singleton \(b\) and does not intersect \(\operatorname{Conv}_{x}(v^{\prime}_{1})\). Otherwise, \(b\) is in neither \(\operatorname{Conv}_{x}(v_{1})\) nor \(\operatorname{Conv}_{x}(v^{\prime}_{1})\). Thus \(b\) lies on the shortest path between these two sets. Hence \(P\) is a subset of this shortest path, and therefore does not intersect \(\operatorname{Conv}_{x}(v^{\prime}_{1})\). So \(P\) does not intersect \(\operatorname{Conv}_{x}(v^{\prime}_{1})\) in either case. We will continue to augment \(P\) until it has reached some \(x_{i}\). Inductively, assume we have already constructed a path \(P\) from \(b\) to a point \(z_{m-1}\) in \(\operatorname{Conv}_{x}(v_{m-1})\). If \(v_{m-1}\) has no children, then \(\operatorname{Conv}_{x}(v_{m-1})\) is a singleton containing \(x_{i}\) for some \(i\), and we are done. Otherwise, let \(v_{m}\) and \(v^{\prime}_{m}\) denote the children of \(v_{m-1}\). Either the shortest path from \(z_{m-1}\) to \(\operatorname{Conv}_{x}(v_{m})\) does not intersect \(\operatorname{Conv}_{x}(v^{\prime}_{m})\) or the shortest path from \(z_{m-1}\) to \(\operatorname{Conv}_{x}(v^{\prime}_{m})\) does not intersect \(\operatorname{Conv}_{x}(v_{m})\). Without loss of generality, assume we are in the first case. Since \(z_{m-1}\) is in \(\operatorname{Conv}_{x}(v_{m-1})\), so is the shortest path from \(z_{m-1}\) to \(\operatorname{Conv}_{x}(v_{m})\). We augment \(P\) by this path and refer to its augmented endpoint as \(z_{m}\). Since \(v_{0}\) has finitely many descendants, this process is guaranteed to terminate eventually, and we will obtain a path \(P\) from \(b\) to some \(x_{i}\). Further, since every augmented portion of \(P\) lies in \(\operatorname{Conv}_{x}(v_{m})\) for some \(m\), all of \(P\) lies in \(\operatorname{Conv}_{x}(v_{0})\). Let \(v\) be a node of \(T\) not ancestral to \(l_{i}\). If \(\operatorname{Conv}_{x}(v)\) intersects \(P\) then it intersects \(\operatorname{Conv}_{x}(v_{0})\), which contains \(P\). Therefore either \(v\preceq v_{0}\) or \(v_{0}\preceq v\). It cannot be that \(v_{0}\preceq v\) since \(l_{i}\) is not a descendant of \(v\), so \(v\preceq v_{0}\). Again since \(l_{i}\) is not a descendant of \(v\), \(v\) cannot be \(v_{m}\) for any \(m\), so \(v\) is a descendant of some \(v^{\prime}_{m}\). However, by construction, \(P\) does not intersect \(\operatorname{Conv}_{x}(v^{\prime}_{m})\) for any \(m\), so it cannot intersect \(\operatorname{Conv}_{x}(v)\) either. Applying Lemma 24 allows us to move \(x_{i}\) onto \(b\). Then applying Lemma 25, we can further move \(x_{i}\) into the interior of \(e\). Now that we know we can get all points onto one edge, we want to be able to move groups of points along curves.
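Before introducing the ordering machinery, note that the defining condition of \(\operatorname{Conf}(X,T)\) is mechanically checkable. The following small Python sketch is ours: it reuses the `conv` helper from the earlier sketch, encodes \(T\) by a parent map, and assumes the \(x_{i}\) are pairwise distinct.

```python
def ancestors(parent, v):
    """Weak ancestors of node v in T, following parent pointers to the root."""
    out = {v}
    while parent[v] is not None:
        v = parent[v]
        out.add(v)
    return out

def comparable(parent, v, w):
    """True if v is a descendant of w or vice versa."""
    return v in ancestors(parent, w) or w in ancestors(parent, v)

def in_conf(adj, parent, leaves_under, x):
    """Defining condition of Conf(X, T): hulls of incomparable nodes of T
    are disjoint. leaves_under maps each node v of T to the set of indices
    i with leaf l_i below v; x[i] is the vertex of X carrying l_i."""
    hulls = {v: conv(adj, {x[i] for i in ids}) for v, ids in leaves_under.items()}
    for v in hulls:
        for w in hulls:
            if v != w and hulls[v] & hulls[w] and not comparable(parent, v, w):
                return False
    return True
```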
Figure 5: A merge tree \(T\) (left) and an example geometric tree \(X\) (right) illustrating the main construction of Lemma 26. Highlighted in red is the path constructed from a point \(x_{i}\) in \(X\) to \(b\), which lifts to a path in \(\operatorname{Conf}(X,T)\). Here, \(x_{1}\) plays the role of \(x_{j}\) in the proof. Note that in this example we could have also constructed a path from \(b\) to \(x_{3}\), but not to \(x_{4}\), because \(\operatorname{Conv}_{x}(v_{2})\) interrupts the path from \(b\) to \(\operatorname{Conv}_{x}(v^{\prime}_{2})\). Let \(x=(x_{1},\ldots,x_{n})\in\operatorname{Conf}_{n}(X)\), where \(\operatorname{Conf}_{n}(X)\) is the usual ordered configuration space of \(n\) points on \(X\), and suppose that there is a subset \(Y\subseteq X\) homeomorphic via a map \(h\) to the unit interval \([0,1]\), such that every \(x_{i}\) lies in \(Y\). The coordinates of \(x\) thus inherit a total order via the total order of their images under \(h\). Thus for some permutation \(\sigma\) of \(\{1,\ldots,n\}\), the inherited total order has form \(x_{\sigma(1)}\leq\ldots\leq x_{\sigma(n)}\). We refer to \(\sigma\) as the \(h\)_-permutation of \(x\)_. Notice that \(\sigma\) only depends on the orientation determined by \(h\). **Lemma 27**.: _Let \(x\in\operatorname{Conf}(X,T)\), \(y\in\operatorname{Conf}_{n}(X)\) and suppose that there is a subset \(Y\subseteq X\) homeomorphic via a map \(h\) to the unit interval \([0,1]\), such that every coordinate of \(x\) and \(y\) lies in \(Y\). Then there is a path from \(x\) to \(y\) in \(\operatorname{Conf}(X,T)\) if \(x\) and \(y\) have the same \(h\)-permutation._ Proof.: Let \(h\) and \(h^{-1}\) induce maps on \(\operatorname{Conf}_{n}(Y)\) and \(\operatorname{Conf}_{n}([0,1])\) by acting component-wise. Consider the path in \(\operatorname{Conf}_{n}(X)\) from \(x\) to \(y\) \[\gamma(t):=h^{-1}\big{[}(1-t)h(x)+th(y)\big{]}\] whose image under \(h\) linearly interpolates between \(h(x)\) and \(h(y)\). Thus the \(h\)-permutation of \(\gamma(t)\) is the same as that of \(x\) for all \(t\in[0,1]\). Suppose \(\operatorname{Conv}_{\gamma(t)}(v)\cap\operatorname{Conv}_{\gamma(t)}(v^{\prime})\) is nonempty. Then either there exist \(i\), \(j\), and \(k\) such that \(l_{i}\) and \(l_{k}\) are descendants of \(v\), \(l_{j}\) is a descendent of \(v^{\prime}\) and \(\gamma(t)_{i}\leq\gamma(t)_{j}\leq\gamma(t)_{k}\) or the same is true with the roles of \(v\) and \(v^{\prime}\) reversed. Without loss of generality suppose we are in the first case. Thus \(x_{i}\leq x_{j}\leq x_{k}\), since the \(h\)-permutation is constant along \(\gamma\). So \(\operatorname{Conv}_{x}(v)\cap\operatorname{Conv}_{x}(v^{\prime})\) is also nonempty and \(v\preceq v^{\prime}\) or \(v^{\prime}\preceq v\). Hence \(\gamma\) is a path in \(\operatorname{Conf}(X,T)\). **Lemma 28**.: _Suppose \(x\in\operatorname{Conf}(X,T)\) consists of points on a subset \(Y\) of \(X\) homeomorphic to the unit interval via a map \(h\). Then for each node \(v\) of \(T\) there exist \(1\leq i\leq k\leq n\) such that \(l_{\sigma(i)},\ldots,l_{\sigma(k)}\) are the descendants of \(v\)._ Proof.: It suffices to show that the set \(\{j\mid l_{\sigma(j)}\preceq v\}\) is convex. Consider two leaves \(l_{\sigma(i)}\) and \(l_{\sigma(k)}\), with \(i<k\), that have \(v\) as ancestor, and let \(j\in\{i,\cdots,k\}\). Since \(x_{\sigma(i)}<x_{\sigma(j)}<x_{\sigma(k)}\), we have \(x_{\sigma(j)}\in\operatorname{Conv}(x_{\sigma(i)},x_{\sigma(k)})\subseteq \operatorname{Conv}_{x}(v)\).
Since \(\operatorname{Conv}_{x}(l_{\sigma(j)})=\{x_{\sigma(j)}\}\), \(\operatorname{Conv}_{x}(l_{\sigma(j)})\) intersects \(\operatorname{Conv}_{x}(v)\). Therefore either \(l_{\sigma(j)}\preceq v\) or \(v\preceq l_{\sigma(j)}\). But \(l_{\sigma(j)}\) is a leaf so it must be the case that \(l_{\sigma(j)}\preceq v\). This lemma implies that the homeomorphism \(h\) of the edge \(e\) where a configuration \(x\in\operatorname{Conf}(X,T)\) lies determines, for each internal node \(v\in T\), its left child \(Lv\) and right child \(Rv\). Explicitly, we can choose \(Lv\) and \(Rv\) to be such that \(i<j\) whenever \(l_{\sigma(i)}\preceq Lv\) and \(l_{\sigma(j)}\preceq Rv\). Therefore, the configuration \(x\) induces the structure \(T_{x}\) of a _chiral merge tree_ on \(T\), as defined in [6, Definition 5.3]: **Definition 29**.: _A chiral merge tree is a binary cellular merge tree where the two children of any internal node are labelled as either the left or right child._ The next lemma will let us alter the chiral merge tree structure assigned to a configuration, when \(X\) is especially simple. **Lemma 30**.: _Let \(X\) be a geometric starlike tree of degree 3, i.e. \(X\) is homeomorphic to three copies of \([0,1]\) identified at \(0\). Let \(x\in\operatorname{Conf}(X,T)\) be a configuration lying on the interior of an edge \(e\) of \(X\). Fix a homeomorphism \(h\) of \(e\) with \([0,1]\). Let \(T_{c}\) be any chiral merge tree structure on \(T\). Then there exists a path from \(x\) to some \(y\in\operatorname{Conf}(X,T)\) lying on the interior of \(e\), such that \(T_{y}=T_{c}\)._ Proof.: By induction, it is sufficient to prove the case where \(T_{c}\) differs from \(T_{x}\) by only inverting the left child \(Lv\) and right child \(Rv\) of a given node \(v\). For simplicity, assume that the permutation \(\sigma\) induced by \(x\) and \(h\) is the identity, thus \(x_{1}\leq\ldots\leq x_{n}\). Denote by \(l_{i},\ldots,l_{k}\) the descendants of \(v\). There is some \(i\leq j<k\) such that \(l_{i},\ldots,l_{j}\) are the descendants of \(Lv\) while \(l_{j+1},\ldots,l_{k}\) are the descendants of \(Rv\). The goal is thus to build a path from \(x\) to some \(y=(y_{1},\ldots,y_{n})\) in \(\operatorname{Conf}(X,T)\), where each entry of \(y\) is interior to \(e\), satisfying \[y_{1}\leq\ldots\leq y_{i-1}\leq y_{j+1}\leq\ldots\leq y_{k}\leq y_{i}\leq\ldots\leq y_{j}\leq y_{k+1}\leq\ldots\leq y_{n}.\] Let \(e_{1}=e\), \(e_{2}\), and \(e_{3}\) be the edges of \(X\), and \(b\) be the branch point of \(X\). Without loss of generality assume that \(h\) is such that \(h(b)=0\). We will show that the following sequence of moves from \(x\) to \(y\) are allowable in \(\operatorname{Conf}(X,T)\): 1. Move the \(1^{\text{st}}\) through \(j^{\text{th}}\) coordinates from \(e_{1}\) into \(e_{2}\), in that order. 2. Move the \((j+1)^{\text{th}}\) through \(k^{\text{th}}\) coordinates from \(e_{1}\) into \(e_{3}\), in that order. 3. Move the \(i^{\text{th}}\) through \(j^{\text{th}}\) coordinates from \(e_{2}\) into \(e_{1}\), in reverse order. 4. Move the \((j+1)^{\text{th}}\) through \(k^{\text{th}}\) coordinates from \(e_{3}\) into \(e_{1}\), in reverse order. 5. Move the \(1^{\text{st}}\) through \((i-1)^{\text{th}}\) coordinates from \(e_{2}\) into \(e_{1}\), in reverse order. Figure 6 shows a visualization of this sequence of moves in an example where there are five coordinates. Moves 1 and 5 can be realized by a path in \(\operatorname{Conf}(X,T)\) via Lemma 27 with \(Y=e_{1}\cup e_{2}\).
For move 2, suppose we have already moved coordinates \(j+1\) through \(m-1\) into \(e_{3}\) to attain a configuration \(z=(z_{1},\ldots,z_{n})\). Let \(P\) denote the shortest path from \(z_{m}\) to \(b\) and \(z^{\prime}=(z_{1},\ldots,z_{m-1},b,z_{m+1},\ldots,z_{n})\). Let \(v_{0}\) be a node of \(T\) not ancestral to \(l_{m}\). Suppose that \(\operatorname{Conv}_{z}(v_{0})\) intersects \(P\). Since \(l_{m}\npreceq v_{0}\), \(\operatorname{Conv}_{z}(v_{0})\) can only intersect \(P\) at \(b\). This means that \(\operatorname{Conv}_{z^{\prime}}(v_{0})\) intersects the interiors of both \(e_{2}\) and \(e_{3}\), so there must be points on \(e_{3}\) in \(z\) already, i.e. \(m>j+1\). In particular, \(\operatorname{Conv}_{z}(v_{0})\) contains both \(z_{j}\) and \(z_{m-1}\). Since \(l_{j}\) and \(l_{m-1}\) are descendants of \(Lv\) and \(Rv\) respectively, \(v_{0}\) must be an ancestor of \(v\). Since \(l_{m}\) is a descendant of \(v\), \(v_{0}\) is an ancestor of \(l_{m}\), a contradiction. Hence \(\operatorname{Conv}_{z}(v_{0})\) does not intersect \(P\), and so by Lemma 24 we can move the \(m^{\text{th}}\) coordinate of \(z\) to \(b\). Then applying Lemma 25 we can move the \(m^{\text{th}}\) coordinate of \(z\) into the interior of \(e_{3}\). Induction on \(m\) then allows us to complete move 2. The cases of moves 3 and 4 are handled similarly. Figure 6: A visual representation of the proof of Lemma 30. In the depicted example, \(T\) has five leaves, three of which are descendants of \(v\). One of these nodes, highlighted in red, is moreover a descendant of \(Lv\). The other two descendants of \(v\), highlighted in blue, are descendants of \(Rv\). Panels (a) through (f) show the path used to reconfigure points in the proof. **Lemma 31**.: _Let \(X\) be a geometric starlike tree of degree 3. Then \(\operatorname{Conf}(X,T)\) is path-connected._ Proof.: Fix an edge \(e\in X\) with an orientation \(h\). By Proposition 20, \(\operatorname{Conf}(X,T)\neq\emptyset\). Let \(x,y\in\operatorname{Conf}(X,T)\). Up to applying Lemma 26 and Lemma 27, we can assume that \(x\) lies in the interior of \(e\). Similarly we may assume that \(y\) lies in the interior of \(e\). By Lemma 30, we can connect \(x\) to a configuration \(x^{\prime}\in\operatorname{Conf}(X,T)\) lying on \(e\) and such that \(T_{x^{\prime}}=T_{y}\). In particular \(x^{\prime}\) and \(y\) induce the same \(h\)-permutation, and therefore, thanks to Lemma 27, there is a path between them in \(\operatorname{Conf}(X,T)\). Finally we can prove the central proposition of the section, restated below for convenience. **Proposition 23**.: _Let \(X\) be a tree not homeomorphic to the unit interval. Let \(T\) be a generic cellular merge tree. Then \(\operatorname{Conf}(X,T)\) is path-connected._ Proof of Proposition 23.: Let \(l\) be a leaf of \(X\), and \(e\) be the edge incident to \(l\). Since \(X\) is connected and not homeomorphic to the unit interval, the other endpoint \(b\) of \(e\) must be a branch point. Hence, there is a subtree \(Y\subseteq X\) which is a geometric starlike tree of degree 3 containing \(e\). By Proposition 20, \(\operatorname{Conf}(Y,T)\neq\emptyset\). Let \(y\in\operatorname{Conf}(Y,T)\subseteq\operatorname{Conf}(X,T)\) be a fixed, target configuration on \(X\). Let \(x=(x_{1},\dots,x_{n})\in\operatorname{Conf}(X,T)\). Applying Lemma 26 and then Lemma 27, we find a path in \(\operatorname{Conf}(X,T)\) from \(x\) to a configuration \(x^{\prime}\) whose points lie in the interior of \(e\).
Viewing \(x^{\prime}\) as a configuration in \(\operatorname{Conf}(Y,T)\subseteq\operatorname{Conf}(X,T)\), by Lemma 31 it can be joined to \(y\) via a path in \(\operatorname{Conf}(Y,T)\), which also defines a path in \(\operatorname{Conf}(X,T)\). **Corollary 32**.: _Let \(X\) be a tree not homeomorphic to the unit interval. Let \(\delta_{L}\) be the minimum distance between pairs of non-equal interval left endpoints in \(D\) and \(\delta_{R}\) be the minimum distance between pairs of non-equal right endpoints. Then the path connected components of the fiber \(\operatorname{PH}^{-1}(D)\) are at distance at least \(\min(\delta_{L},\delta_{R})\) from each other. In particular, the path connected components of \(\operatorname{PH}^{-1}(D)\) are the connected components of \(\operatorname{PH}^{-1}(D)\)._ Proof.: Let \(f,g:X\to\mathbb{R}\) be functions in distinct path connected components of the fiber \(\operatorname{PH}^{-1}(D)\). Then by Theorem 22, their merge trees \(\operatorname{MT}(f)\) and \(\operatorname{MT}(g)\) are non-isomorphic, and therefore by Proposition 13, we have that \(\|f-g\|_{\infty}\geq\min(\delta_{L},\delta_{R})\). **Corollary 33**.: _Let \(X\) be a tree not homeomorphic to the unit interval. The fiber \(\operatorname{PH}^{-1}(D)\) has a finite number of connected components given by:_ \[\#\pi_{0}(\operatorname{PH}^{-1}(D))=\prod_{[b,d)\in D,\,d<\infty}\#\big{\{}[b^{\prime},d^{\prime})\in D\mid[b,d)\subset[b^{\prime},d^{\prime})\big{\}},\] _where the product runs over the bounded intervals of \(D\)._ Proof.: This is the number of distinct cellular merge trees with barcode \(D\), see [6, Theorem 4.8]. From Theorem 22, such merge trees are in bijection with path connected components of \(\operatorname{PH}^{-1}(D)\), which equal connected components of \(\operatorname{PH}^{-1}(D)\) by Corollary 32. ### Topology of connected components in the fiber When there are very few leaves in the cellular merge tree \(T\), and hence very few points in \(\operatorname{Conf}(X,T)\), we are able to deduce the homotopy type of the connected components of \(\operatorname{MT}^{-1}(T)\), and hence \(\operatorname{PH}^{-1}(D)\) for simple barcodes \(D\). The simplest case is when \(T\) has only one leaf. **Corollary 34**.: _Let \(X\) be the geometric realization of a tree, \(T\) be a merge tree with one leaf and \(D\) be the barcode associated to \(T\). Then \(\operatorname{MT}^{-1}(T)=\operatorname{PH}^{-1}(D)\) and both are contractible._ Proof.: The barcode \(D\) consists of one infinite interval \([a,\infty)\), where \(a\) is the value assigned to the one leaf of \(T\). The only merge tree that can give rise to \(D\) is \(T\), by [6, Theorem 4.8]. Hence \(\operatorname{MT}^{-1}(T)=\operatorname{PH}^{-1}(D)\). By Theorem 17, \[\operatorname{MT}^{-1}(T)\simeq\operatorname{Conf}(X,T)=\operatorname{Conf}_{1}(X)=X,\] and \(X\) is contractible, so \(\operatorname{MT}^{-1}(T)\) is contractible. If there are only two points in \(\operatorname{Conf}(X,T)\), its structure is still fairly simple, and has already been computed up to homotopy. We derive the following as an immediate consequence. **Corollary 35**.: _Let \(X\) be the geometric realization of a tree with at least one vertex of degree \(\geq 3\) and \(T\) be a cellular merge tree with exactly two leaves \(l_{1}\) and \(l_{2}\). Suppose \(\pi(l_{1})\neq\pi(l_{2})\)._
Then \(\operatorname{MT}^{-1}(T)\) is homotopy equivalent to the wedge sum of_ \[-1+\sum_{v\in N(X)}(\eta(v)-1)(\eta(v)-2)\] _circles, where \(N(X)\) denotes the nodes in any cellular decomposition of \(X\) and \(\eta(v)\) denotes the degree of node \(v\) in \(X\). Moreover, denoting by \(D\) the barcode arising from the merge tree \(T\), we have \(\operatorname{PH}^{-1}(D)=\operatorname{MT}^{-1}(T)\)._ Proof.: Theorem 17 tells us that \(\operatorname{MT}^{-1}(T)\) is homotopy equivalent to \(\operatorname{Conf}(X,T)\), where \(\operatorname{Conf}(X,T)=\operatorname{Conf}_{2}(X)\) is the configuration space of two points on \(X\), whose homotopy type is computed in [9, Theorem 11.1]. The last statement follows from [6, Theorem 4.8]: \(D\) is the barcode with two intervals, the finite one contained in the unbounded one, and therefore \(T\) is the only merge tree giving rise to the barcode \(D\). **Remark 4**.: _Let \(X\) be the star-like tree made of \(n\) edges joined at one vertex, and let \(D\) be a barcode as in Corollary 35. Then the corollary tells us that \(\operatorname{PH}^{-1}(D)\) is homotopy equivalent to a wedge of \(n^{2}-3n+1\) circles._ _In [20], the authors consider the same barcode \(D\) and the discrete tree \(Y\) obtained from \(X\) by inserting an additional vertex in the middle of each edge. Consider the space of functions \(f\) on the vertices and edges of \(Y\), where \(f(e)=\max(f(v),f(w))\) for any edge \(e=(v,w)\). In this scenario, the authors find that \(\operatorname{PH}^{-1}(D)\) is also homotopy equivalent to a wedge of \(n^{2}-3n+1\) circles [20, Theorem 8.1]. This suggests that there may be a general relationship between the fiber of persistent homology of geometric trees and discrete trees with sufficiently fine triangulations._ ### Acknowledgements DB is a member of the Centre for Topological Data Analysis, funded in part by EPSRC EP/R018472/1. JL was supported by an LMS Early Career Fellowship and in the last stage of the project by St John's College for his research assistantship role with Heather Harrington.
2307.04085
Vector Commitments with Efficient Updates
Dynamic vector commitments that enable local updates of opening proofs have applications ranging from verifiable databases with membership changes to stateless clients on blockchains. In these applications, each user maintains a relevant subset of the committed messages and the corresponding opening proofs with the goal of ensuring a succinct global state. When the messages are updated, users are given some global update information and update their opening proofs to match the new vector commitment. We investigate the relation between the size of the update information and the runtime complexity needed to update an individual opening proof. Existing vector commitment schemes require that either the information size or the runtime scale linearly in the number $k$ of updated state elements. We construct a vector commitment scheme that asymptotically achieves both length and runtime that is sublinear in $k$, namely $k^\nu$ and $k^{1-\nu}$ for any $\nu \in (0,1)$. We prove an information-theoretic lower bound on the relation between the update information size and runtime complexity that shows the asymptotic optimality of our scheme. For $\nu = 1/2$, our constructions outperform Verkle commitments by about a factor of $2$ in terms of both the update information size and runtime, but make use of larger public parameters.
Ertem Nusret Tas, Dan Boneh
2023-07-09T02:34:46Z
http://arxiv.org/abs/2307.04085v5
# Vector Commitments with Efficient Updates ###### Abstract Dynamic vector commitments that enable local updates of opening proofs have applications ranging from verifiable databases with membership changes to stateless clients on blockchains. In these applications, each user maintains a relevant subset of the committed messages and the corresponding opening proofs with the goal of ensuring a succinct global state. When the messages are updated, users are given some global update information and update their opening proofs to match the new vector commitment. We investigate the relation between the size of the update information and the runtime complexity needed to update an individual opening proof. Existing vector commitment schemes require that either the information size or the runtime scale _linearly_ in the number \(k\) of updated state elements. We construct a vector commitment scheme that asymptotically achieves both length and runtime that is _sublinear_ in \(k\), namely \(k^{\nu}\) and \(k^{1-\nu}\) for any \(\nu\in(0,1)\). We prove an information-theoretic lower bound on the relation between the update information size and runtime complexity that shows the asymptotic optimality of our scheme. While in practice, the construction is not yet competitive with Verkle commitments, our approach may point the way towards more performant vector commitments. ## 1 Introduction A Vector Commitment (VC) scheme [15, 25, 14] enables a committer to succinctly commit to a vector of elements. Later, the committer can generate an _opening proof_ to prove that a particular position in the committed vector is equal to a certain value. VCs have found many applications in databases and blockchains [27, 39] as they enable a storage system to only store a commitment to the vector instead of the entire vector. The data itself can be stored elsewhere along with opening proofs. In a multiuser system, every user might store only one position of the vector along with the opening proof for that position. Dynamic VCs [14] are vector commitments that support updates to the vector. Suppose the committed vector is of length \(N\) and some \(k<N\) positions in the vector are updated, so that a new vector commitment is published. Then, every user in the system will need to update their local opening proof to match the updated commitment, and this is done with the help of some global _update information_\(U\) that is broadcast to all users. This information is typically generated and published by a manager who maintains the entire vector. Applications of dynamic VCs include verifiable databases, zero-knowledge sets with frequent updates [14] and stateless clients for blockchains [10]. The challenge is to design a VC scheme that minimizes the size of the update information \(U\) as well as the computation work by each user to update their local opening proof. For example, consider stateless clients on a blockchain as an important application for dynamic VCs. The state of the chain can be represented as a vector of length \(N\), where position \(i\) corresponds to the state of account number \(i\). Every user will locally maintain its own state (corresponding to some position in the vector) along with an opening proof that enables the user to convince a third party as to its current state. Whenever a new block is published, the state of the chain changes. In particular, suppose \(k\) out of the \(N\) positions in the vector need to be updated. 
The block proposer will publish the _update information_ \(U\) along with the new block, and every user will update their opening proof to match the new committed state of the chain. Thus, users can ensure that their opening proofs are up to date with respect to the latest committed state of the chain. We stress that in this application, the data being updated, namely the updated positions and diffs, is published as part of the block. The update information \(U\) only contains additional information that is needed to update the opening proofs. When we refer to the size of \(U\), we refer to its size excluding the updated data (i.e., excluding the updated positions and diffs). In this paper, we investigate the trade-off between the length \(|U|\) of the update information and the time complexity of proof updates. Dynamic VCs can be grouped into two categories in terms of these parameters (Table 1). \begin{table} \begin{tabular}{|c|c|c|c|} \hline Vector Commitment & \(|U|\) & \(T\) & PP \\ \hline \hline Merkle tree [26] & \(\tilde{O}(k)\ |H|\) & \(\tilde{O}(1)\) & N \\ \hline Hyperproofs [34] & \(\tilde{O}(k)\ |G|\) & \(\tilde{O}(1)\) & N \\ \hline Verkle tree [12] & \(\tilde{O}(k)\ |G|\) & \(\tilde{O}(1)\ |H|\ T_{G}\) & Y \\ \hline \hline This work with \(\nu\in[0,1]\) & \(\tilde{\Theta}(k^{\nu})|H|\) & \(\tilde{\Theta}(k^{1-\nu})\ T_{f}\) & N \\ \hline \hline KZG commitments [20] & \(O(1)\) & \(\tilde{\Theta}(k)\ T_{G}\) & Y \\ \hline RSA accumulators and VCs [8, 9] & \(O(1)\) & \(\tilde{\Theta}(k)\ T_{G}\) & N \\ \hline Bilinear accumulators [28, 35] & \(O(1)\) & \(\tilde{\Theta}(k)\ T_{G}\) & N \\ \hline \end{tabular} \end{table} Table 1: Comparison of different VCs. \(|U|\) denotes the length of the update information. \(T\) denotes the runtime of a single proof update. \(|G|\) and \(|H|\) denote the size of a single group element and a single hash value, respectively. \(T_{G}\) and \(T_{f}\) denote the time complexity of a single group operation and a single function evaluation for the hash function used by the VC. The last column PP is ‘Y’ if the proof update requires pre-processing to generate a global and fixed table of auxiliary data needed for proof updates. Tree-based VCs [26, 34] enable users to update their proofs in time \(O(\mathrm{polylog}\,N)\). Each opening proof typically consists of \(\mathrm{polylog}\,(N)\) inner nodes, and the update information \(U\) contains the changes in the inner nodes affected by the message updates. Each user calculates its new opening proof by downloading the relevant inner nodes published as part of \(U\). When \(k\) positions are updated, a total of \(O(k\log{(N)})\) inner nodes in the tree are affected in the worst case. Thus, when each inner node has length \(\Theta(\lambda)\), proportional to the security parameter \(\lambda\), the update information consists of \(O(k\log{(N)}\lambda)\) bits. In contrast, algebraic VCs [20, 8, 9, 28, 35] enable users to update their opening proofs with only knowledge of the updated data. They do not require any additional update information \(U\) to be published beyond the indices and the ‘diffs’ of the updated data. Thus, the length of the update information needed to update the opening proofs is \(O(1)\). However, algebraic VCs typically require each user to read all of the changed messages and incorporate the effect of these changes on their proofs, resulting in \(\Theta(k)\) work per proof update.
To summarize, while tree-based VCs support efficient calculation of the new opening proofs by publishing a large amount of update information, linear in \(k\), algebraic VCs do not require any additional update information beyond the updated data, but suffer from a large runtime for proof updates, linear in \(k\). We formalize the dichotomy of VCs in Section 3. ### Our Results We propose a family of VCs that can support _sublinear update_, where both the length \(|U|\) of the update information and the complexity of proof updates are sublinear in \(k\). More specifically, our VCs can attain \(|U|=\Theta(k^{\nu}\lambda)\), \(\nu\in(0,1)\), with a proof update complexity of \(\Theta(k^{1-\nu})\) operations. Our candidate construction with sublinear update is a _homomorphic Merkle tree_, first developed by [30, 33], where each inner node can be expressed as a _sum_ of the _partial digests_ of the messages underneath (Section 4). The algebraic structure of these trees enables each user to calculate the _effect_ of a message update on any inner node without reading other inner nodes or messages. We identify homomorphic Merkle tree constructions based on lattices from the literature [31, 30, 33]. In Section 4, we provide the update algorithms (Alg. 1) for homomorphic Merkle trees, parameterized by \(\nu\in(0,1)\). Our algorithm identifies a special subset of size \(\tilde{\Theta}(k^{\nu})\) of the inner nodes affected by the message updates, and publishes their new values as \(U\), so that users need not calculate these values. These inner nodes are selected carefully to ensure that any inner node _outside_ of \(U\) is affected by at most \(\Theta(k^{1-\nu})\) updated messages. Thus, to modify its opening proof, each user has to calculate the partial digests of at most \(\Theta(k^{1-\nu})\) updated messages per inner node within its proof (which consists of \(\Theta(\log{(N)})\) inner nodes). Moreover, to calculate these partial digests, the user only needs the ‘diffs’ of the updated messages. This brings the asymptotic complexity of proof updates to \(\tilde{\Theta}(k^{1-\nu})\) operations, while achieving an update information size of \(\tilde{\Theta}(k^{\nu}\lambda)\) as opposed to \(\tilde{\Theta}(k\lambda)\) on Merkle trees using SHA256. In Section 6, we prove an information-theoretic lower bound on the size of the update information given an upper bound on the runtime complexity of proof updates. The bound implies the asymptotic optimality of our scheme with sublinear update. Its proof is based on the observation that if the runtime complexity is bounded by \(O(k^{1-\nu})\), a user that wants to update its proof cannot read beyond \(O(k^{1-\nu})\) updated messages. Then, to calculate the effect of the remaining \(k-O(k^{1-\nu})\) messages on its opening proof, the user has to download parts of the structured update information \(U\). Finally, to obtain the lower bound on \(|U|\), we use Shannon entropy and lower bound the number of bits, namely \(\Omega(k^{\nu}\lambda)\), required to capture the total information that will be downloaded by the users, while maintaining the security of the VC with parameter \(\lambda\). ### Applications We identify three main applications for VCs with sublinear update. #### 1.2.1 Stateless clients for Ethereum Ethereum is the largest decentralized general purpose computation platform by market cap. Ethereum state (_e.g._, user accounts) is currently stored in the form of a Merkle tree [5] and grows approximately by half every year [11].
Stateless clients [10, 11] were proposed to mitigate the problem of state bloat and prevent the state storage and maintenance from becoming a bottleneck for decentralization. Stateless clients maintain an opening proof to their account balances within the Ethereum state, and can thus effortlessly prove the inclusion of their accounts within the latest state. This enables the other Ethereum clients to verify the transactions that come with opening proofs without having to download the full state and check the validity of the claimed account balances. Since block verification now requires downloading the proofs for the relevant state elements, Verkle trees [21, 12, 17] were proposed as a replacement for Merkle trees due to their short proof size. Each new Ethereum block contains transactions that update the state elements and their opening proofs. Archival nodes and block producers still maintain the full state so that they can inform the stateless clients about their new opening proofs [11]. For this purpose, block producers must broadcast enough information to the clients over the peer-to-peer gossip network of Ethereum\({}^{1}\). Footnote 1: Block producers can enable the clients to succinctly verify the correctness of this information via SNARK proofs, thus still keeping the verification cost of blocks small. As minimizing the proof size was paramount to decentralizing verification for blocks, minimizing the update information size becomes necessary for decentralizing the role of the block producer who has to disseminate this information. However, reducing the length of the update information must not compromise the low overhead of stateless clients by requiring a larger number of operations per proof update. Therefore, the ideal VC scheme for stateless clients must strike a delicate balance between the size of the update information and the runtime complexity of proof updates. In Section 5, we provide the update algorithms for Verkle trees given their role in supporting stateless clients. We observe that Verkle trees do not support sublinear update, and fall under the same category as tree-based VCs with update information length \(\tilde{\Theta}(k\lambda)\). Despite this fact, Verkle trees are highly practical in terms of updates. In Section 5.5, we estimate that the update information size after a typical Ethereum block does not exceed \(|U|\approx 100\) kBytes (compared to the typical block size of \(<125\) kBytes). Moreover, each Verkle proof can be updated in less than a second on commodity hardware. In contrast, even the most efficient homomorphic Merkle tree construction [33] requires an update information size of \(110.88\) MBytes and an update time of \(32.6\) seconds when the trade-off parameter \(\nu\) is \(1/2\), despite its asymptotic optimality (_cf._ Section 4.4). The large update information size is due to the lattice-based construction of these VCs. Despite their advantage in terms of concrete performance, unlike these lattice-based constructions, Verkle trees are not secure against quantum computers. Designing dynamic VCs that are asymptotically optimal, practically efficient and post-quantum resilient remains an open problem. #### 1.2.2 Databases with frequent membership changes VCs with sublinear update can support databases with frequent membership changes. When a user first registers, a message is updated to record the membership of the user. The user receives this record and its opening proof, using which it can later anonymously prove its membership.
When the user leaves the system, the message is once again updated to delete the record. In all these steps, membership changes result in updates to the opening proofs of other members. When these changes are frequent, it becomes infeasible to distribute new proofs after each change. VCs with sublinear update offer an alternative and efficient way to update the opening proofs of the users in the event of such changes. ### Related Work There are many VC constructions, each with different guarantees regarding the proof, commitment and public parameter sizes, verification time, updatability and support for subvector openings [15, 25, 14, 37, 30, 24, 22, 19, 18, 6, 40, 9, 35, 34, 20, 12] (cf. [29] for an SoK of VCs). First formalized by [14], almost all VCs allow some degree of updatability. Whereas [30, 6, 9, 35] enable updating the commitment and the opening proofs with only the knowledge of the old and the new messages, most VCs require some structured update information beyond the messages when the users do not have access to the internal data structures. Among the lattice-based accumulators, vector commitments and functional commitments [19, 23, 32, 30, 33, 31, 38], constructions amenable to sublinear update are presented in [31, 30, 33, 32]. Homomorphic Merkle trees were formalized and instantiated by [31, 30, 33] in the context of streaming authenticated data structures and parallel online memory checking. The construction presented in [32, Section 3.4] offers an alternative VC with sublinear update: although it is not a Merkle tree, it has the property that each inner node can be expressed as a _sum_ of the partial digests of individual messages. An alternative design to support stateless clients is the aggregatable subvector commitment (aSVC) scheme [36], which is a VC that enables aggregating multiple opening proofs into a succinct subvector proof. It enables each user to update its opening proof with the knowledge of the transactions in the blocks, and block producers to prove the validity of these transactions succinctly by aggregating the proofs submitted by the transacting users. As the scheme is based on KZG commitments, no update information is needed; yet, the update time complexity is linear in the number of transactions per block. For dynamic accumulators that support additions, deletions and membership proofs, Camacho and Hevia proved that after \(k\) messages are deleted, \(\Omega(k)\) bits of data must be published to update the proofs of the messages in the initial accumulated set [13, Theorem 1]. Their lower bound is information-theoretic and follows from a compression argument. Christ and Bonneau subsequently used a similar method to prove a lower bound on the global state size of a _revocable proof system_ abstraction [16]. As revocable proof systems can be implemented by dynamic accumulators and vector commitments, their lower bound generalizes to these primitives, _i.e._, after \(k\) messages are updated in a dynamic VC, at least \(\Omega(k)\) bits of data must be published to update the opening proofs (see Appendix 0.A for the proof). They conclude that a stateless commitment scheme must either have a global state with linear size in the number of accounts, or require a near-linear rate of local proof updates. In our work, we already assume a linear rate of local proof updates, _i.e._, one batch of updates per Ethereum block, or per \(k\) messages in our parameterization, and we assume that the message updates themselves are publicized by the blockchain.
We instead focus on the trade-off between the global structured update information size (beyond the published messages) and the runtime complexity of proof updates. ## 2 Preliminaries ### Notation We denote the security parameter by \(\lambda\). An event is said to happen with _negligible probability_, if its probability, as a function of \(\lambda\), is \(o(1/\lambda^{d})\) for all \(d>0\). An event happens with _overwhelming probability_ if it happens except with negligible probability. We denote the set \(\{0,1,2,..,N-1\}\) by \([N]\). When \(y=O(h(x)\operatorname{polylog}{(x)})\), we use the shorthand \(y=\tilde{O}(h(x))\) (similarly for \(\Theta(.)\) and \(\tilde{\Theta}(.)\)). The function \(H(.)\colon\mathcal{M}\to\{0,1\}^{\lambda}\) represents a collision-resistant hash function. We denote the binary decomposition of an integer \(x\) by \(\operatorname{bin}(x)\), and for \(c>2\), its base \(c\) decomposition by \(\operatorname{bin}_{c}(x)\). A vector of \(N\) elements \((n_{0},..,n_{N-1})\) is shown as \((n_{i})_{i}\). The notation \(\mathbf{x}[i{:}j]\) denotes the substring starting at the \(i^{\text{th}}\) index and ending at the \(j^{\text{th}}\) index within the sequence \(\mathbf{x}\). The indicator function \(1_{P}\) is equal to one if the predicate \(P\) is true, otherwise, it is zero. In the subsequent sections, \(k\) will be used to denote the number of updated messages. For a prime \(p\), let \(\mathbb{F}_{p}\) denote a finite field of size \(p\). We use \(\mathbb{G}\) to denote a cyclic group of prime order \(p\) with generator \(g\). The Lagrange basis polynomial for a given \(x\in\mathbb{F}_{p}\) is denoted as \(L_{x}(X)\): \[L_{x}(X)=\prod_{\begin{subarray}{c}i\in\mathbb{F}_{p}\\ i\neq x\end{subarray}}\frac{X-i}{x-i}\] We will use \(|G|\) and \(|H|\) to denote the maximum size of the bit representation of a single group element and a single hash value respectively. We will use \(T_{G}\) and \(T_{f}\) to denote the time complexity of a single group operation and a single function evaluation for the hash functions in Section 4.1. ### Vector Commitments A vector commitment (VC) represents a sequence of messages such that each message can be proven to be the one at its index via an _opening proof_. A dynamic vector commitment allows updating the commitment and the opening proofs with the help of an _update information_ when the committed messages are changed. Definition 1 (from [14]): Dynamic (updateable) vector commitments can be described by the following algorithms: \(\textsc{KeyGen}(1^{\lambda},N)\to pp\): Given the security parameter \(\lambda\) and the size \(N=\operatorname{poly}(\lambda)\) of the committed vector, the key generation algorithm outputs public parameters \(pp\), which implicitly define the message space \(\mathcal{M}\). \(\textsc{Commit}_{pp}(m_{0},..,m_{N-1})\rightarrow(C,\mathsf{data})\): Given a sequence of \(N\) messages in \(\mathcal{M}\) and the public parameters \(pp\), the commitment algorithm outputs a commitment string \(C\) and the data \(\mathsf{data}\) required to produce the opening proofs for the messages. Here, \(\mathsf{data}\) contains enough information about the current state of the VC's data structure (i.e., the current list of committed messages) to help generate the opening proofs. \(\textsc{Open}_{pp}(m,i,\mathsf{data})\rightarrow\pi_{i}\): The opening algorithm is run by the committer to produce a proof \(\pi_{i}\) that \(m\) is the \(i^{\text{th}}\) committed message. 
\(\textsc{Verify}_{pp}(C,m,i,\pi_{i})\rightarrow\{0,1\}\): The verification algorithm accepts (i.e., outputs 1) or rejects a proof. The security definition will require that \(\pi_{i}\) is accepted only if \(C\) is a commitment to some \((m_{0},..,m_{N-1})\) such that \(m=m_{i}\). \(\textsc{Update}_{pp}(C,(i,m_{i})_{i\in[N]},(i,m^{\prime}_{i})_{i\in[N]},\mathsf{data})\rightarrow(C^{\prime},U,\mathsf{data}^{\prime})\): The algorithm is run by the committer to update the commitment \(C\) when the messages \((m_{i_{j}})_{j\in[k]}\) at indices \((i_{j})_{j\in[k]}\) are changed to \((m^{\prime}_{i_{j}})_{j\in[k]}\). The other messages in the vector are unchanged. It takes as input the old and the new messages, their indices and the data variable \(\mathsf{data}\). It outputs a new commitment \(C^{\prime}\), update information \(U\) and the new data variable \(\mathsf{data}^{\prime}\). \(\textsc{ProofUpdate}_{pp}(C,p((i,m_{i})_{i\in[N]},(i,m^{\prime}_{i})_{i\in[N]}),\pi_{j},m^{\prime},i,U)\rightarrow\pi^{\prime}_{j}\): The proof update algorithm can be run by any user who holds a proof \(\pi_{j}\) for some message at index \(j\) and a (possibly) new message \(m^{\prime}\) at that index. It allows the user to compute an updated proof \(\pi^{\prime}_{j}\) (and the updated commitment \(C^{\prime}\)) such that it is valid with respect to \(C^{\prime}\), which contains \(m^{\prime}_{i}\), \(i\in[N]\), as the new messages at the indices \(i\in[N]\) (and \(m^{\prime}\) as the new message at index \(i\)). Here, \(p(.)\) specifies what portion of the old and the new messages is sufficient to update the opening proof. For instance, the proof update algorithm often does not need the old and the new messages in the open, but can carry out the proof update using only their differences. In this case, \(p((i,m_{i})_{i\in[N]},(i,m^{\prime}_{i})_{i\in[N]})=(i,m^{\prime}_{i}-m_{i})_{i\in[N]}\)._ _Correctness_ of a VC requires that \(\forall N=\operatorname{poly}(\lambda)\), for all honestly generated parameters \(pp\leftarrow\textsc{KeyGen}(1^{\lambda},N)\), given a commitment \(C\) to a vector of messages \((m_{0},..,m_{N-1})\in\mathcal{M}^{N}\), generated by \(\textsc{Commit}_{pp}\) (and possibly followed by a sequence of updates), and an opening proof \(\pi_{i}\) for a message at index \(i\), generated by \(\textsc{Open}_{pp}\) or \(\textsc{ProofUpdate}_{pp}\), it holds that \(\textsc{Verify}_{pp}(C,m_{i},i,\pi_{i})=1\) with overwhelming probability. _Security_ of a VC is expressed by the position-binding property: Definition 2 (Definition 4 of [14]): A VC satisfies position-binding if \(\forall i\in[N]\) and for every PPT adversary \(\mathcal{A}\), the following probability is negligible in \(\lambda\): \[\Pr\left[\begin{array}{c}\textsc{Verify}_{pp}(C,m,i,\pi_{i})=1\ \wedge\\ \textsc{Verify}_{pp}(C,m^{\prime},i,\pi^{\prime}_{i})=1\ \wedge\ m\neq m^{\prime}\end{array}\ :\ \begin{array}{c}pp\leftarrow\textsc{KeyGen}(1^{\lambda},N)\\ (C,m,m^{\prime},\pi_{i},\pi^{\prime}_{i})\leftarrow\mathcal{A}(pp)\end{array}\right]\] We relax the _succinctness_ assumption of [14] and denote a value to be succinct in \(x\) if it is \(\operatorname{polylog}(x)\).
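To fix ideas, the algorithms of Definition 1 can be rendered as a minimal Python interface. This is a non-normative sketch: the method names and signatures are our own shorthand (in particular, we pass the message diffs \(p(\cdot)\) directly), and the concrete KZG, Merkle and Verkle constructions below instantiate these algorithms in very different ways.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Tuple

class DynamicVC(ABC):
    """Schematic rendering of the algorithms in Definition 1 (names are ours)."""

    @abstractmethod
    def keygen(self, sec_param: int, n: int) -> Any:
        """KeyGen: output public parameters pp for vectors of length n."""

    @abstractmethod
    def commit(self, msgs: List[Any]) -> Tuple[Any, Any]:
        """Commit: return (C, data), the commitment and the prover state."""

    @abstractmethod
    def open(self, m: Any, i: int, data: Any) -> Any:
        """Open: return a proof that m is the i-th committed message."""

    @abstractmethod
    def verify(self, C: Any, m: Any, i: int, proof: Any) -> bool:
        """Verify: accept (True) or reject (False) an opening proof."""

    @abstractmethod
    def update(self, C: Any, diffs: Dict[int, Any], data: Any) -> Tuple[Any, Any, Any]:
        """Update: apply message diffs; return (C', U, data')."""

    @abstractmethod
    def proof_update(self, C: Any, diffs: Dict[int, Any], proof: Any,
                     i: int, U: Any) -> Any:
        """ProofUpdate: refresh the proof at index i from the diffs and U."""
```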
Many VC constructions also satisfy the hiding property: informally, no PPT adversary \(\mathcal{A}\) should be able to distinguish whether the VC was calculated for a vector \((m_{0},..,m_{N-1})\) or a vector \((m^{\prime}_{0},..,m^{\prime}_{N-1})\neq(m_{0},..,m_{N-1})\). In this work, we do not consider the hiding property since it is not explicitly required by our applications, and VCs can be made hiding by combining them with a hiding commitment [14]. ### KZG Polynomial Commitments The KZG commitment scheme [20] commits to polynomials of degree bounded by \(\ell\) using the following algorithms: \(\textsc{KeyGen}(1^{\lambda},\ell)\to pp\): outputs \(pp=(g,g^{\tau},g^{(\tau^{2})},..,g^{(\tau^{\ell})})\) as the public parameters, where \(g\) is the generator of the cyclic group \(\mathbb{G}\) and \(\tau\) is a trapdoor \((pp[i]=g^{\tau^{i}})\). \(\textsc{Commit}\big{(}pp,\phi(X)\big{)}\rightarrow(C,\mathsf{data})\): The commitment to a polynomial \(\phi(X)=\sum_{i=0}^{\ell}a_{i}X^{i}\) is denoted by \([\phi(X)]\), and is computed as \([\phi(X)]=\prod_{i=0}^{\ell}(pp[i])^{a_{i}}\). The commitment algorithm outputs \(C=[\phi(X)]\) and \(\mathsf{data}=\phi(X)\). \(\textsc{Open}_{pp}(m,i,\mathsf{data})\rightarrow\pi:\) outputs the opening proof \(\pi_{i}\) that \(\phi(i)=m\), calculated as the commitment to the quotient polynomial \((\phi(X)-\phi(i))/(X-i)\). \(\textsc{Verify}(C,m,i,\pi)\) accepts if the pairing check \(e\left(C/g^{m},g\right)=e\left(\pi,pp[1]/g^{i}\right)\) holds. We refer to [20] for the security analysis of this scheme. ### Merkle Trees A Merkle tree is a vector commitment built from a collision-resistant hash function. In a Merkle tree, hashes of the committed messages constitute the leaves of a \(c\)-ary tree of height \(h=\log_{c}(N)\), where each inner node is found by hashing its children. The depth of the root is set to be \(0\) and the depth of the leaves is \(\lceil\log_{c}(N)\rceil\). The commitment function outputs the Merkle root as the commitment \(C\) and the Merkle tree as data. The opening proof for a message \(m_{x}\) at some index \(x\) is the sequence of \(h(c-1)\) hashes consisting of the siblings of the inner nodes on the path from the root to the hash of the message \(m_{x}\). We hereafter consider binary Merkle trees (\(c=2\)) and assume \(N=c^{h}=2^{h}\) unless stated otherwise. Let \(u_{b_{0},b_{1},..,b_{i-1}}\), \(b_{j}\in\{0,1\}\), \(j\in[i]\), denote an inner node at depth \(i-1\) that is reached from the root by choosing the left child at depth \(j\) if \(b_{j}=0\) and the right child at depth \(j\) if \(b_{j}=1\) (\(b_{0}=\bot\) and \(u_{\bot}\) is the root). By definition, for a message \(m_{x}\) at index \(x\), \(H(m_{x})=u_{\bot,\text{bin}(x)}\). ### Verkle Trees A Verkle tree [12, 17] is similar to a Merkle tree except that each inner node is calculated as the hash of the KZG polynomial commitment to its children. Let \(b_{j}\in[c]\), \(j=1,..,h\), denote the indices of the inner nodes on the path from the root to a leaf at index \(x\), \(\text{bin}_{c}(x)=(b_{1},..,b_{h})\), relative to their siblings. Define \(f_{b_{0},..,b_{j}}\), \(j\in[h]\), as the polynomials determined by the children of the inner nodes on the path from the root to the leaf, where \(f_{b_{0}}=f_{\bot}\) is the polynomial determined by the children of the root. Let \(C_{b_{0},..,b_{j}}=[f_{b_{0},..,b_{j}}]\), \(j\in[h]\), denote the KZG commitments to these polynomials. By definition, \(u_{b_{0},..,b_{j}}=H(C_{b_{0},..,b_{j}})\), and the value of the polynomial \(f_{b_{0},..,b_{j}}\) at index \(b_{j+1}\) is \(u_{b_{0},..,b_{j+1}}\) for each \(j\in[h]\). Here, \(u_{b_{0}}=H(C_{b_{0}})\) is the root of the tree, and \(u_{b_{0},..,b_{h}}\) equals the hash \(H(m_{x})\) of the message at index \(x\).
For consistency, we define \(C_{b_{0},..,b_{h}}\) as \(m_{x}\). For example, given \(h=3\) and \(c=4\), the inner nodes from the root to the message \(m_{14}\) have the indices \(b_{1}=0\), \(b_{2}=3\) and \(b_{3}=2\), and they are committed by the polynomials \(f_{\bot}\), \(f_{\bot,0}\) and \(f_{\bot,0,3}\) respectively. The commitment function \(\textsc{Commit}_{pp}(m_{0},..,m_{N-1})\) outputs the root \(u_{b_{0}}\) as the commitment \(C\) and the Verkle tree itself as data. The Verkle opening proof for the message \(m_{x}\), \(\text{bin}(x)=(b_{1},..,b_{h})\), consists of two parts: (i) the KZG commitments \((C_{b_{0},b_{1}},..,C_{b_{0},..,b_{h-1}})\) on the path from the root to the message, and (ii) a Verkle multiproof. The goal of the Verkle multiproof is to show that the following evaluations hold for the inner nodes from the root to the message: \(f_{b_{0},..,b_{j}}(b_{j+1})=u_{b_{0},..,b_{j+1}}=H(C_{b_{0},..,b_{j+1}})\), \(j\in[h]\). It has two components: (i) the commitment \([g(X)]\) and (ii) the opening proof \(\pi^{\prime}\) for the polynomial \(h(X)-g(X)\) at the point \(t=H(r,[g(X)])\), where \[g(X)=\sum_{j=0}^{h-1}r^{j}\frac{f_{b_{0},..,b_{j}}(X)-u_{b_{0},..,b_{j+1}}}{X-b_{j+1}},\ \ \ \ h(X)=\sum_{j=0}^{h-1}r^{j}\frac{f_{b_{0},..,b_{j}}(X)}{t-b_{j+1}},\] and \(r=H(C_{b_{0}},..,C_{b_{0},..,b_{h-1}},u_{b_{0},b_{1}},..,u_{b_{0},..,b_{h}},b_{1},..,b_{h})\). Thus, \(\textsc{Open}_{pp}(m,i,\textsf{data})\) outputs \(((C_{b_{0},b_{1}},..,C_{b_{0},..,b_{h-1}}),([g(X)],\pi^{\prime}))\). To verify a Verkle proof \(\pi=((C_{b_{0},b_{1}},..,C_{b_{0},..,b_{h}}),(D,\pi^{\prime}))\), the verification algorithm \(\textsc{Verify}_{pp}(C,m,x,\pi)\) first computes \(r\) and \(t\) using \(u_{b_{0},..,b_{j}}=H(C_{b_{0},..,b_{j}})\), \(j\in[h]\), and \(u_{b_{0},..,b_{h}}=H(m)\). Then, given the indices \(\mathrm{bin}(x)=(b_{1},..,b_{h})\) and the commitments \((C_{b_{0},b_{1}},..,C_{b_{0},..,b_{h}})\), it calculates \[y=\sum_{j=0}^{h-1}r^{j}\frac{u_{b_{0},..,b_{j+1}}}{t-b_{j+1}}\qquad\quad E=\sum_{j=0}^{h-1}\frac{r^{j}}{t-b_{j+1}}C_{b_{0},..,b_{j}}.\] Here, \(y=h(t)-g(t)\) is the claimed evaluation of \(h(X)-g(X)\) at \(t\). Finally, it returns true if the pairing check \(e(E-D-[y],[1])=e(\pi^{\prime},[X-t])\) is satisfied. As the degree \(c\) of a Verkle tree increases, the size of the opening proofs and the runtime of the verification function decrease in proportion to the height \(h=\log_{c}N\) of the tree. This enables Verkle trees to achieve a short opening proof size for a large number of messages (as in the case of the Ethereum state trie) by adopting a large degree (_e.g._, \(c=256\)). In comparison, each Merkle proof consists of \((c-1)\log_{c}N\) inner nodes, which grows linearly as \(c\) increases. ## 3 Formalizing the Dichotomy of VCs We first analyze the trade-off between the number of operations required by proof updates and the size of the update information \(U\) by inspecting different types of dynamic VCs. Recall that the number of updated messages is \(k\leq N\). ### Updating KZG Commitments and Opening Proofs In the subsequent sections, we assume that each user has access to a dictionary of KZG commitments to the Lagrange basis polynomials \(L_{i}(X)\), \(i\in\mathbb{F}_{p}\), and for each polynomial, its opening proofs at each point \(j\in\mathbb{F}_{p}\), \(j<N\). With the help of this table, one can instantiate a KZG-based VC to the messages \((m_{i})_{i\in[N]}\), by treating them as the values of a degree-\((N-1)\) polynomial \(\phi(X)\) at inputs \(i\in\mathbb{F}_{p}\), \(i<N\).
We next analyze the complexity of the update information and the proof updates in this VC. The update and proof update algorithms are described by Alg. 5 in Appendix 0.F. #### 3.1.1 Update Information Suppose the vector \((i,m_{i})_{i\in[N]}\) is updated at some index \(i\) such that \(m^{\prime}_{i}\gets m_{i}+\delta\) for some \(\delta\in\mathbb{F}_{p}\). Then, the polynomial \(\phi(X)\) representing the vector is replaced by \(\phi^{\prime}(X)\) such that \(\phi^{\prime}(X)=\phi(X)\) if \(X\neq i\), and \(\phi^{\prime}(i)=\phi(i)+\delta\) at \(X=i\). Thus, the new KZG commitment \(C^{\prime}\) to \(\phi^{\prime}(X)\) is constructed from the commitment \(C\) to \(\phi(X)\) as follows: \[[\phi^{\prime}(X)]=[\phi(X)+\delta L_{i}(X)]=[\phi(X)][L_{i}(X)]^{\delta}=C\cdot[L_{i}(X)]^{\delta}=C\cdot[L_{i}(X)]^{m^{\prime}_{i}-m_{i}}\] If the vector is modified at \(k\) different indices \(i_{1},..,i_{k}\) from message \(m_{i_{j}}\) to \(m^{\prime}_{i_{j}}\), \(j\in[k]\), then the new commitment \(C^{\prime}=[\phi^{\prime}(X)]\) becomes \[\left[\phi(X)+\sum_{j=1}^{k}(m^{\prime}_{i_{j}}-m_{i_{j}})L_{i_{j}}(X)\right]=[\phi(X)]\prod_{j=1}^{k}[L_{i_{j}}(X)]^{(m^{\prime}_{i_{j}}-m_{i_{j}})}=C\prod_{j=1}^{k}[L_{i_{j}}(X)]^{(m^{\prime}_{i_{j}}-m_{i_{j}})}.\] Thus, the commitment can be updated given only the old and the new messages at the updated indices, besides the table. #### 3.1.2 Proof Update Let \(\pi_{x}\) denote the opening proof of a polynomial \(\phi(X)\) at a point \((x,m_{x})\). When \(k\) messages are updated, the new opening proof \(\pi^{\prime}_{x}\) can be found as a function of the old proof \(\pi_{x}\) and the opening proofs \(\pi_{i_{j},x}\) of the Lagrange basis polynomials \(L_{i_{j}}(X)\), \(j\in[k]\), at the index \(x\). Writing \(\delta_{j}=m^{\prime}_{i_{j}}-m_{i_{j}}\), the value \(m^{\prime}_{x}=m_{x}+\sum_{j=1}^{k}\delta_{j}\cdot 1_{x=i_{j}}\) is the new value of \(m_{x}\) after the \(k\) updates, and \[\pi^{\prime}_{x}=\left[\frac{\phi^{\prime}(X)-m_{x}-\sum_{j=1}^{k}\delta_{j}\cdot 1_{x=i_{j}}}{X-x}\right]=\pi_{x}\prod_{j=1}^{k}\left[\frac{L_{i_{j}}(X)-L_{i_{j}}(x)}{X-x}\right]^{m^{\prime}_{i_{j}}-m_{i_{j}}}=\pi_{x}\prod_{j=1}^{k}\pi_{i_{j},x}^{m^{\prime}_{i_{j}}-m_{i_{j}}}\] Thus, the proof can be updated given only the old and the new messages at the updated indices, besides the table. The update information is set to be the empty set, _i.e._, \(U=\emptyset\). #### 3.1.3 Complexity The size of the update information is constant, _i.e._, \(\tilde{\Theta}(1)\). Each user can update its proof after \(k\) accesses to the dictionary, and in the worst case, \(\Theta(k\log|\mathcal{M}|)=\tilde{\Theta}(k)\) group operations, as \(\log\left(m^{\prime}_{i}-m_{i}\right)\leq\log|\mathcal{M}|\) for all \(i\in[N]\).
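The two update rules above can be exercised end to end in Python. The sketch below is deliberately insecure: it works in the order-11 subgroup of \(\mathbb{Z}_{23}^{*}\) and keeps the trapdoor \(\tau\) public so that the dictionary of Lagrange commitments and openings is directly computable. All constants and names are illustrative, not part of the scheme.

```python
# Toy KZG-style homomorphic updates (Section 3.1). INSECURE: the group is the
# order-11 subgroup of Z_23^*, and the trapdoor tau is public for simplicity.
q, P, g, tau, N = 11, 23, 2, 7, 4      # field Z_q, group Z_P^*, generator, trapdoor

inv = lambda a: pow(a, q - 2, q)       # inverse in Z_q (q prime)

def lagrange_at(i, x):
    """L_i(x) over the evaluation domain {0,..,N-1}, computed in Z_q."""
    num = den = 1
    for j in range(N):
        if j != i:
            num = num * ((x - j) % q) % q
            den = den * ((i - j) % q) % q
    return num * inv(den) % q

def commit(m):
    """C = g^{phi(tau)} where phi interpolates phi(i) = m[i]."""
    return pow(g, sum(m[i] * lagrange_at(i, tau) for i in range(N)) % q, P)

def open_proof(m, x):
    """pi_x = g^{(phi(tau) - phi(x)) / (tau - x)}."""
    phi_tau = sum(m[i] * lagrange_at(i, tau) for i in range(N)) % q
    return pow(g, (phi_tau - m[x]) * inv((tau - x) % q) % q, P)

# The dictionary assumed in Section 3.1: [L_i] and openings of L_i at every x.
L_com = [pow(g, lagrange_at(i, tau), P) for i in range(N)]
L_open = [[pow(g, (lagrange_at(i, tau) - lagrange_at(i, x)) * inv((tau - x) % q) % q, P)
           for x in range(N)] for i in range(N)]

m, x = [3, 1, 4, 1], 0
C, pi_x = commit(m), open_proof(m, x)

for i, mi_new in {1: 5, 2: 9}.items():             # k = 2 updated positions
    delta = (mi_new - m[i]) % q
    C = C * pow(L_com[i], delta, P) % P            # C' = C * [L_i]^delta
    pi_x = pi_x * pow(L_open[i][x], delta, P) % P  # pi' = pi * (pi_{i,x})^delta
    m[i] = mi_new

assert C == commit(m) and pi_x == open_proof(m, x) # matches full recomputation
```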
### Updating Merkle Trees and Opening Proofs We next consider a Merkle tree and analyze the complexity of the update information size and the runtime for proof updates. A simple update scheme would be recalculating the new Merkle tree given all of the old messages or the old inner nodes of the Merkle tree, and the message updates. However, this implies a large complexity for the runtime of the proof update algorithm that scales as \(\Omega(k)\) when users keep track of the inner nodes, and as \(\Omega(N)\) when the users recalculate the tree from scratch at each batch of updates. Moreover, in many applications, the users do not have access to any messages or inner nodes besides those that are part of the Merkle proof held by the user. Hence, we next describe update and proof update algorithms that reduce the runtime complexity of the proof updates at the expense of larger update information (Alg. 6 in Appendix 0.F). #### 3.2.1 Update Information Suppose the vector \((i,m_{i})_{i\in[N]}\) is updated at some index \(x\), \((b_{1},..,b_{h})=\operatorname{bin}(x)\), to \(m^{\prime}_{x}\). Then, the root \(C=u_{b_{0}}\) and the inner nodes \((u_{b_{0},b_{1}},..,u_{b_{0},b_{1},..,b_{h}})\) must be updated to reflect the change at that index. Given the old inner nodes, the new values for the root and these inner nodes, denoted by \(C^{\prime}=u^{\prime}_{b_{0}}\) and \((u^{\prime}_{b_{0},b_{1}},..,u^{\prime}_{b_{0},b_{1},..,b_{h}})\), are calculated recursively as follows: \[u^{\prime}_{b_{0},b_{1},..,b_{h}}\leftarrow H(m^{\prime}_{x}),\] \[u^{\prime}_{b_{0},b_{1},..,b_{j}}\leftarrow\begin{cases}H(u^{\prime}_{b_{0},b_{1},..,b_{j},0},u_{b_{0},b_{1},..,b_{j},1})&\text{if }b_{j+1}=0,\ j<h\\ H(u_{b_{0},b_{1},..,b_{j},0},u^{\prime}_{b_{0},b_{1},..,b_{j},1})&\text{if }b_{j+1}=1,\ j<h\end{cases}\] When the messages are modified at \(k\) different points \(i_{j}\), \(j\in[k]\), the calculation above is repeated for each of the \(k\) updates. As the updated inner nodes are parts of the Merkle proofs, the update information consists of the new values at the inner nodes listed from the smallest to the largest depth in the canonical left to right order. For instance, \(U=((\bot,u^{\prime}_{\bot}),(\bot 0,u^{\prime}_{0}),(\bot 1,u^{\prime}_{1}),(\bot 00,u^{\prime}_{00}),(\bot 10,u^{\prime}_{10}),..)\) implies that the root \(u_{\bot}\) and the inner nodes \(u_{\bot 0}\), \(u_{\bot 1}\), \(u_{\bot 00}\) and \(u_{\bot 10}\) were updated after \(k\) messages were modified at the leaves of the Merkle tree. We reference the updated inner nodes using their indices (_e.g._, \(U[b_{0}b_{1}..b_{j}]=v\) when \((b_{0}b_{1}..b_{j},v)\in U\)). #### 3.2.2 Proof Update The Merkle proof \(\pi_{x}\) for a message at index \(x\), \((b_{1},..,b_{h})=\operatorname{bin}(x)\), is the sequence \((u_{\overline{b}_{1}},u_{b_{1},\overline{b}_{2}},..,u_{b_{1},b_{2},..,\overline{b}_{h}})\). When \(k\) messages are updated, some of the inner nodes within the proof might have changed. A user holding the Merkle proof for index \(x\) can find the new values of these inner nodes by querying the update information with their indices. #### 3.2.3 Complexity Upon receiving the update information \(U\), each user can update its proof in \(\Theta(\log^{2}{(N)}+|H|\log{(N)})=\tilde{\Theta}(1)\) time by running a binary search algorithm to find the updated inner nodes within \(U\) that are part of its Merkle proof, and reading the new values at these nodes. Since modifying each new message results in \(h=\log{(N)}\) updates at the inner nodes and some of the updates overlap, \(|U|=\Theta(k\log{(N/k)}(\log{(N)}+|H|))=\tilde{\Theta}(k)|H|\), as each updated inner node is represented in \(U\) by its index of size \(\Theta(\log{(N)})\) and its new value of size \(|H|\).
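The following self-contained Python sketch walks through this flow on a toy binary tree of height 4: the committer publishes every changed inner node as \(U\), and the user refreshes its proof by lookups into \(U\) alone, without any hashing. The SHA-256 instantiation and all parameters are illustrative.

```python
import hashlib

h = 4                                    # tree height; N = 2^h leaves
H = lambda *xs: hashlib.sha256(b"|".join(xs)).digest()

def build(msgs):
    """Full tree as a dict keyed by (depth, index); (0, 0) is the root."""
    nodes = {(h, x): H(m) for x, m in enumerate(msgs)}
    for d in range(h - 1, -1, -1):
        for i in range(2 ** d):
            nodes[(d, i)] = H(nodes[(d + 1, 2 * i)], nodes[(d + 1, 2 * i + 1)])
    return nodes

def sibling_keys(x):
    """Keys of the proof nodes for leaf x, ordered from depth 1 down to depth h."""
    return [(d, (x >> (h - d)) ^ 1) for d in range(1, h + 1)]

def verify(root, m, x, proof):
    acc = H(m)
    for d in range(h, 0, -1):            # fold the proof from the leaf upward
        sib = proof[d - 1]
        acc = H(acc, sib) if (x >> (h - d)) % 2 == 0 else H(sib, acc)
    return acc == root

msgs = [bytes([i]) for i in range(2 ** h)]
nodes = build(msgs)
x = 5
proof = [nodes[key] for key in sibling_keys(x)]

# Committer: apply k = 2 updates and publish every changed node as U.
# (A real committer touches only the k*log(N) affected paths, not the full tree.)
for i, m in {3: b"a", 9: b"b"}.items():
    msgs[i] = m
new_nodes = build(msgs)
U = {key: v for key, v in new_nodes.items() if nodes[key] != v}

# User: refresh the proof with lookups into U only -- no hashing required.
proof = [U.get(key, old) for key, old in zip(sibling_keys(x), proof)]
assert verify(new_nodes[(0, 0)], msgs[x], x, proof)
```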
### Dichotomy of VCs In the case of KZG commitments, \(|U|=\tilde{\Theta}(1)\), and there is no information overhead on top of the message updates. For Merkle trees with an efficient proof update algorithm, \(|U|=\tilde{\Theta}(k)|H|\), thus there is an extra term scaling in \(\tilde{\Theta}(k)|H|=\tilde{\Theta}(k)\lambda\), since \(|H|=\Omega(\lambda)\) for collision-resistant hash functions. In contrast, for KZG commitments, each user has to do \(\tilde{\Theta}(k)\) group operations to update its opening proof; whereas in Merkle trees, each user can update its proof in \(\tilde{\Theta}(1)\) time, which does not depend on \(k\). Hence, KZG commitments outperform Merkle trees in terms of the update information size, whereas Merkle trees outperform KZG commitments in terms of the time complexity of proof updates. Table 1 generalizes this observation to a dichotomy between algebraic VC schemes, which favor small update information, and tree-based ones, which favor short runtimes for proof updates: each family outperforms the other in exactly one of the two measures. ## 4 Vector Commitments with Sublinear Update We would like to resolve the separation in Table 1 and obtain a vector commitment, where both the size of the update information and the complexity of proof updates have a sublinear dependence on \(k\). In particular, \(|U|=\tilde{\Theta}(g_{1}(k)\lambda)\) in the worst case, and the proof update algorithm requires at most \(\tilde{\Theta}(g_{2}(k))\) operations, where both \(g_{1}(k)\) and \(g_{2}(k)\) are \(o(k)\). We say that such a VC supports _sublinear update_. In this section, we describe a family of VCs with sublinear update, parameterized by the values \(\nu\in(0,1)\) and characterized by the functions \((g_{1},g_{2})=(k^{\nu},k^{1-\nu})\). ### Homomorphic Merkle Trees We first introduce homomorphic Merkle trees where messages placed in the leaves take values in a set \(\mathcal{M}\). We will use two collision-resistant hash functions \(\tilde{f}\colon\mathcal{D}\times\mathcal{D}\to\mathcal{R}\) and \(f\colon\mathcal{M}\to\mathcal{R}\), where both \(\mathcal{M}\) and \(\mathcal{D}\) are vector spaces over some field \(\mathbb{F}\), and \(\mathcal{R}\) is an arbitrary finite set. We will also need an injective mapping \(g:\mathcal{R}\to\mathcal{D}\), which need not be efficiently computable. We use \(g^{-1}:\mathcal{D}\to\mathcal{R}\) to denote the inverse of \(g\), meaning that \(g^{-1}(g(x))=x\) for all \(x\in\mathcal{R}\). We require that \(g^{-1}\) be efficiently computable. Now, for \(j\in[h]\), where \(h\) is the height of the tree, every node \(u_{b_{0},\ldots,b_{j}}\in\mathcal{D}\) of the homomorphic Merkle tree is characterized by the following expressions: \[\begin{array}{ll}\text{a leaf node:}&g^{-1}(u_{b_{0},\operatorname{bin}(i)})=f(m_{i})\\ \text{an internal node:}&g^{-1}(u_{b_{0},\ldots,b_{j}})\ \ =\tilde{f}(u_{b_{0},\ldots,b_{j},0},\ u_{b_{0},\ldots,b_{j},1})\text{ for }j<h\end{array}\] The homomorphic property of the Merkle tree refers to the fact that there are efficiently computable functions \[h_{i,j}:\mathcal{M}\to\mathcal{D}\qquad\text{for }i\in[N]\text{ and }j\in[h],\] such that every inner node \(u_{b_{0},\ldots,b_{j}}\in\mathcal{D}\) can be expressed as \[u_{b_{0}}=\sum_{i\in[N]}h_{i,0}(m_{i})\] \[u_{b_{0},\ldots,b_{j}}=\sum_{i\colon\operatorname{bin}(i)[0:j-1]=(b_{1},\ldots,b_{j})}h_{i,j}(m_{i}).\] We refer to the function \(h_{i,j}\) as a _partial digest function_ and refer to \(h_{i,j}(m_{i})\) as the _partial digest_ of \(m_{i}\). In a homomorphic Merkle tree, every internal node is the sum of the partial digests of the leaves under that node. We will show in Section 4.3 that each function \(h_{i,j}\) can be expressed as an iterated composition of the functions \(f\) and \(\tilde{f}\). Evaluating \(h_{i,j}\) requires evaluating the functions \(f\) and \(\tilde{f}\) exactly \(h-j\) times.
Opening proof for a message consists of _both_ children of the internal nodes on the path from the message to the root (as opposed to Merkle opening proofs that contain only the siblings of the internal nodes on the path). For instance, the opening proof for the message \(m_{i}\) at leaf index \(i\), with \(\operatorname{bin}(i)=(b_{1},..,b_{h})\), is \((i,(u_{b_{0},..,b_{j},0},u_{b_{0},..,b_{j},1})_{j=0,..,h-1})\). Opening proofs are verified using the functions \(f\) and \(\tilde{f}\) (not by using the functions \(h_{i,j}\)). To verify an opening proof \((i,(u_{b_{0},..,b_{j},0},u_{b_{0},..,b_{j},1})_{j=0,..,h-1})\) for a message \(m_{i}\) with respect to the root \(u_{b_{0}}\), the verifier checks if the following equalities hold: for the leaf: \[g^{-1}(u_{b_{0},\operatorname{bin}(i)})=f(m_{i})\] for the internal nodes: \[g^{-1}(u_{b_{0},..,b_{j}})=\tilde{f}(u_{b_{0},..,b_{j},0},\ u_{b_{0},..,b_{j},1})\text{ for }j=h-1,..,0.\] If so, it accepts the proof, and otherwise it outputs reject. As an example, consider a homomorphic Merkle tree that commits to four messages \(m_{0},m_{1},m_{2},m_{3}\). Then, its root \(u_{\perp}\) and inner nodes \(u_{\perp,0}\), \(u_{\perp,1}\), \(u_{\perp,0,0}\), \(u_{\perp,0,1}\), \(u_{\perp,1,0}\), \(u_{\perp,1,1}\) can be calculated as follows: \[u_{\perp}=h_{0,0}(m_{0})+h_{1,0}(m_{1})+h_{2,0}(m_{2})+h_{3,0}(m_{3})\ ;\qquad u_{\perp,0,0}=h_{0,2}(m_{0})\] \[u_{\perp,0}=h_{0,1}(m_{0})+h_{1,1}(m_{1})\ ;\qquad u_{\perp,0,1}=h_{1,2}(m_{1})\] \[u_{\perp,1}=h_{2,1}(m_{2})+h_{3,1}(m_{3})\ ;\qquad u_{\perp,1,0}=h_{2,2}(m_{2})\] \[u_{\perp,1,1}=h_{3,2}(m_{3})\] The opening proof for \(m_{3}\) is given by \((3,((u_{\perp,0},u_{\perp,1}),(u_{\perp,1,0},u_{\perp,1,1})))\), and verified by checking the following equations: for \(u_{\perp,1,1}\): \[g^{-1}(u_{\perp,1,1})=f(m_{3})\] for \(u_{\perp,1}\): \[g^{-1}(u_{\perp,1})=\tilde{f}(u_{\perp,1,0},\ u_{\perp,1,1})\] for \(u_{\perp}\): \[g^{-1}(u_{\perp})=\tilde{f}(u_{\perp,0},\ u_{\perp,1})\] It now follows that when a message \(m_{i}\) is updated to \(m^{\prime}_{i}\), each inner node on the path from the leaf to the root can be updated from \(u_{b_{0},..,b_{j}}\) to \(u^{\prime}_{b_{0},..,b_{j}}\) using the functions \(h_{i,j}\) as follows: \[u^{\prime}_{b_{0},..,b_{j}}=h_{i,j}(m^{\prime}_{i})+\sum_{\begin{subarray}{c}x\neq i:\\ \operatorname{bin}(x)[0:j-1]=(b_{1},..,b_{j})\end{subarray}}h_{x,j}(m_{x})\ =\ u_{b_{0},..,b_{j}}+h_{i,j}(m^{\prime}_{i})-h_{i,j}(m_{i})\] When the partial digest functions are linear in their input, the expression \(h_{i,j}(m^{\prime}_{i})-h_{i,j}(m_{i})\) can be written as \(h_{i,j}(m^{\prime}_{i})-h_{i,j}(m_{i})=\operatorname{sign}(m^{\prime}_{i}-m_{i})h_{i,j}(|m^{\prime}_{i}-m_{i}|)\). This lets us calculate the updated internal node using only the knowledge of the message diff \(m^{\prime}_{i}-m_{i}\). We provide examples of homomorphic Merkle tree constructions in Section 4.3 with linear partial digest functions \(h_{i,j}\). Homomorphic Merkle proofs in these constructions consist of the two siblings of the inner nodes on the path from the proven message to the root, and the vector commitment itself is given by \(g^{-1}(u_{\perp})\) (Section 4.3). Unlike in Section 3.2, homomorphic Merkle trees enable calculating the new inner nodes after message updates using _only_ the new and the old updated messages, in particular using only their difference.
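As a toy illustration of the algebra (not of the cryptography), the Python sketch below replaces the lattice-based hash functions of Section 4.3 with bare linear maps over \(\mathbb{Z}_{q}\), which are of course not collision-resistant: every inner node is the sum of per-leaf partial digests, and a node is updated from a message diff alone.

```python
# Toy "homomorphic Merkle tree" over Z_q with linear maps standing in for the
# lattice hash functions. INSECURE (linear maps are trivially not
# collision-resistant); it only illustrates the partial-digest algebra.
q, h = 10_007, 3                       # modulus and tree height (N = 2^h = 8)
A, L, R = 3, 5, 7                      # f(m) = A*m ; f~(x, y) = L*x + R*y (mod q)

def node(d, i, msgs):
    """Node at depth d, index i, computed by the usual recursion."""
    if d == h:
        return A * msgs[i] % q
    return (L * node(d + 1, 2 * i, msgs) + R * node(d + 1, 2 * i + 1, msgs)) % q

def partial_digest(d, x, m):
    """h_{x,d}(m): contribution of message m at leaf x to its depth-d ancestor."""
    coeff = A
    for depth in range(h, d, -1):      # multiply by L/R along the leaf-to-node path
        coeff = coeff * (L if (x >> (h - depth)) % 2 == 0 else R) % q
    return coeff * m % q

msgs = [4, 8, 15, 16, 23, 42, 0, 1]
# Every node equals the sum of the partial digests of the leaves below it:
for d in range(h + 1):
    for i in range(2 ** d):
        under = [x for x in range(2 ** h) if x >> (h - d) == i]
        assert node(d, i, msgs) == sum(partial_digest(d, x, msgs[x]) for x in under) % q

# A node update needs only the message diff, not the other messages:
x, new = 5, 999
root_new = (node(0, 0, msgs) + partial_digest(0, x, (new - msgs[x]) % q)) % q
msgs[x] = new
assert root_new == node(0, 0, msgs)
```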
Because the new inner nodes depend only on the message diffs, we can construct a tree that achieves the same complexity for the update information size as algebraic VCs, albeit at the expense of the proof update complexity, _without requiring the users to keep track of the old messages or to calculate the tree from scratch given all messages_ (see Appendix 0.C for further discussion). This is in contrast to Merkle trees based on SHA256. The update and proof update algorithms of such a homomorphic Merkle tree, with no structured update information and the same asymptotic complexity as algebraic VCs, are described in Appendix 0.B. Since homomorphic Merkle trees can achieve both extremes in terms of update information size and update runtime (Table 1), with a smart structuring of the update information, they can support sublinear update. We show how in the next subsection. ### Structuring the Update Information We now describe the new update and proof update algorithms that enable homomorphic Merkle trees to achieve sublinear complexity as a function of the parameter \(\nu\) (Alg. 1). #### 4.2.1 Update Information When the messages \((i_{j},m_{i_{j}})_{j\in[k]}\) change to \((i_{j},m^{\prime}_{i_{j}})_{j\in[k]}\), the update information \(U\) is generated recursively using the following algorithm: 1. Start at the root \(u_{b_{0}}\). Terminate the recursion at an inner node if there are \(k^{1-\nu}\) or fewer updated messages under that node. 2. If there are more than \(k^{1-\nu}\) updated messages with indices \(\geq N/2\), _i.e._, under the right child, then publish the new right child of the root as part of \(U\), and apply the same algorithm to the subtree rooted at the right child, with \(u_{b_{0}}\) and \(N\) replaced by \(u_{b_{0},1}\) and \(N/2\) respectively. 3. If there are more than \(k^{1-\nu}\) updated messages with indices less than \(N/2\), _i.e._, under the left child, then publish the new left child of the root as part of \(U\), and apply the same algorithm to the subtree rooted at the left child, with \(u_{b_{0}}\) and \(N\) replaced by \(u_{b_{0},0}\) and \(N/2\) respectively. The new values of the inner nodes included in \(U\) are again listed from the smallest to the largest depth in the canonical left to right order. #### 4.2.2 Proof Update When the messages \((i_{j},m_{i_{j}})_{j\in[k]}\) are updated to \((i_{j},m^{\prime}_{i_{j}})_{j\in[k]}\), a user first retrieves the inner nodes within its Merkle proof that are published as part of the update information. It then calculates the non-published inner nodes within the proof using the partial digests. For instance, consider a user with the proof \((u_{\overline{b}_{1}},u_{b_{1},\overline{b}_{2}},\cdots,u_{b_{1},b_{2},..,\overline{b}_{h}})\) for some message \(m_{x}\), \((b_{1},\ldots,b_{h})=\mathrm{bin}(x)\). To update the proof, the user first checks the update information \(U\) and replaces the inner nodes whose new values are provided by \(U\): \(u^{\prime}_{b_{1},\ldots,\overline{b}_{d}}\gets U[b_{1}\,..\,\overline{b}_{d}]\), \(d\in[h]\), if \(U[b_{1}\,..\,\overline{b}_{d}]\neq\bot\).
Otherwise, the user finds the new values at the nodes \(u_{b_{1},\ldots,\overline{b}_{d}}\), \(d\in[h]\), using the partial digest functions \(h_{i_{j},d}\) of the updated messages: \[u^{\prime}_{b_{1},..,b_{d-1},\overline{b}_{d}}=u_{b_{1},..,b_{d-1},\overline{b}_{d}}+\sum_{j\in[k]}1_{\operatorname{bin}(i_{j})[:d]=(b_{1},..,\overline{b}_{d})}\left(\operatorname{sign}(m^{\prime}_{i_{j}}-m_{i_{j}})h_{i_{j},d}(|m^{\prime}_{i_{j}}-m_{i_{j}}|)\right)\] #### 4.2.3 Complexity Finally, we prove bounds on the complexity given by these algorithms: Theorem 1: _Complexity of the update information size and the runtime of proof updates are as follows: \(g_{1}(k)=k^{\nu}\) and \(g_{2}(k)=k^{1-\nu}\)._ Proof: Let \(\mathcal{U}\) denote the subset of the inner nodes published by the algorithm as part of \(U\) such that no child of a node \(u\in\mathcal{U}\) is published. Then, there must be over \(k^{1-\nu}\) updated messages within the subtree rooted at each node \(u\in\mathcal{U}\). Since there are \(k\) updated messages, and by definition of \(\mathcal{U}\), the subtrees rooted at the nodes in \(\mathcal{U}\) do not intersect at any node, there must be fewer than \(k/k^{1-\nu}=k^{\nu}\) inner nodes in \(\mathcal{U}\). Since the total number of published inner nodes is given by \(\mathcal{U}\) and the nodes on the path from the root to each node \(u\in\mathcal{U}\), this number is bounded by \(k^{\nu}\log{(N)}=\tilde{\Theta}(k^{\nu})\). Hence, \(|U|=\Theta(k^{\nu}\log{(N)}(\log{(N)}+|H|))=\tilde{\Theta}(k^{\nu})|H|=\tilde{\Theta}(k^{\nu})\lambda\), which implies \(g_{1}(k)=k^{\nu}\). For each inner node in its Merkle proof, the user can check if a new value for the node was provided as part of \(U\), and replace the node if that is the case, in at most \(\Theta(\log{(N)}+|H|)\) time by running a binary search algorithm over \(U\). On the other hand, if the new value of a node in the proof is not given by \(U\), the user can calculate the new value after at most \(k^{1-\nu}\log{(N)}\) function evaluations. This is because there can be at most \(k^{1-\nu}\) updated messages within the subtree rooted at an inner node whose new value was not published as part of \(U\). This makes the total time complexity of a proof update at most \[\Theta(\log{(N)}(\log{(N)}+|H|+k^{1-\nu}\log{(N)}T_{f}))=\tilde{\Theta}(k^{1-\nu})T_{f},\] which implies \(g_{2}(k)=k^{1-\nu}\). To illustrate the proof above, consider the homomorphic Merkle tree in Figure 1, where \(k\) messages are updated. Suppose there are \(k^{1-\nu}/2\) updated messages among the first \(N/2k^{\nu}\) messages \(m_{0},..,m_{N/2k^{\nu}-1}\), another \(k^{1-\nu}/2\) updated messages among the second \(N/2k^{\nu}\) messages \(m_{N/2k^{\nu}},..,m_{2N/2k^{\nu}-1}\), and so on. In this case, the algorithm identifies the inner nodes within the subtree at the top of the tree (whose nodes are denoted in solid blue) and publishes their new values as part of the update information. This is because there are \(k^{1-\nu}\) updated messages under each inner node and leaf of this subtree, denoted by \(u^{\prime}_{i}\), \(i=1,..,k^{\nu}\), whereas under the children of these leaf nodes there are fewer than \(k^{1-\nu}\) updated messages. Thus, each user can update its opening proof by downloading the new values of the top \(\log{k^{\nu}}\) inner nodes within its proof from the update information. There are at most \(k^{1-\nu}/2\) updated messages under each of the remaining \(\log{(N/k^{\nu})}\) nodes in the proof; hence, the user can find their updated values in \(\Theta(k^{1-\nu}\log{N})\) time. Figure 1: Homomorphic Merkle tree example. The new values of the inner nodes shown in solid blue are published as part of the update information. Note that in this example, and in general when the updated messages are distributed uniformly among the leaves, the size of the update information becomes \(\Theta(k^{\nu})\lambda\) rather than \(\Theta(k^{\nu}\log{N})\lambda\).
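The recursive publication rule of Section 4.2.1 can be sketched in a few lines of Python. The helper below only returns the indices of the published nodes, whereas Alg. 1 also emits their new values; all names are illustrative.

```python
# Sketch of the recursive selection of published nodes (Section 4.2.1).
import math

def select_published(updated, lo, hi, threshold, prefix=""):
    published = []
    if hi - lo <= 1:                       # reached a leaf
        return published
    mid = (lo + hi) // 2
    for bit, part in (("0", [x for x in updated if x < mid]),
                      ("1", [x for x in updated if x >= mid])):
        if len(part) > threshold:          # child has > k^(1-nu) updates: publish it
            published.append(prefix + bit)
            published += select_published(part, lo if bit == "0" else mid,
                                          mid if bit == "0" else hi,
                                          threshold, prefix + bit)
    return published

N, k, nu = 2 ** 10, 64, 0.5
updated = list(range(0, N, N // k))        # k updates spread uniformly
U_nodes = select_published(updated, 0, N, k ** (1 - nu))
print(len(U_nodes), "published nodes;",
      "worst-case bound:", math.ceil(k ** nu) * int(math.log2(N)))
```

On this uniform example the recursion publishes 14 nodes, well below the worst-case bound of \(\lceil k^{\nu}\rceil\log{(N)}=80\), matching the remark above.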
### Constructions for Homomorphic Merkle Trees Homomorphic Merkle trees were proposed by [31, 30, 33]. They use lattice-based hash functions, and their collision-resistance is proven by reduction to the hardness of the gap version of the shortest vector problem (\(\mathsf{GAPSVP}_{\gamma}\)), which itself follows from the hardness of the small integer solution problem. We next describe the construction introduced by [31], which is similar to those proposed by later works [30, 33] (an alternative construction is provided in Appendix 0.D). Its correctness and security follow from [31, Theorem 4]. Let \(L(\mathbf{M})\) denote the lattice defined by the basis vectors \(\mathbf{M}\subset\mathbb{Z}_{q}^{k\times m}\) for appropriately selected parameters \(k,m,q\), where \(m=2k\log q\) (this \(k\) is a lattice parameter, not the number of updated messages). Consider vectors \(u\in\{0,..,t\}^{k\log q}\), where \(t\) is a small integer. The (homomorphic) hash functions \(f\colon\mathbb{Z}^{k\log q}\to L(\mathbf{M})\) and \(\tilde{f}\colon\mathbb{Z}^{k\log q}\times\mathbb{Z}^{k\log q}\to L(\mathbf{M})\) used by [31] are defined as \(f(x)=\mathbf{M}x\) and \(\tilde{f}(x,y)=\mathbf{M}\mathbf{U}x+\mathbf{M}\mathbf{D}y\) respectively. Here, \(\mathbf{U}\) and \(\mathbf{D}\) are special matrices that double the dimension of the multiplied vector and shift it up or down respectively. The remaining entries are set to zero. For convenience, we define \(\mathbf{L}=\mathbf{M}\mathbf{U}\) and \(\mathbf{R}=\mathbf{M}\mathbf{D}\). Since the domain and range of the hash functions are different, to ensure the Merkle tree’s homomorphism, the authors define a special mapping \(g\colon\mathbb{Z}_{q}^{k}\to\mathbb{Z}_{q}^{k\log q}\) from the range of the hash functions to their domain. Here, \(g(.)\) takes a vector \(\mathbf{v}\in\mathbb{Z}_{q}^{k}\) as input and outputs a radix-2 representation of \(\mathbf{v}\). However, as there can be many radix-2 representations of a vector, to help choose a representation that yields itself to homomorphism, the authors prove the following result: for any \(\mathbf{x}_{1},\mathbf{x}_{2},..,\mathbf{x}_{t}\in\mathbb{Z}_{q}^{k}\), there exists _a short_ radix-2 representation \(g(.)\) such that \(g(\mathbf{x}_{1}+\mathbf{x}_{2}+..+\mathbf{x}_{t}\mod q)=b(\mathbf{x}_{1})+b(\mathbf{x}_{2})+..+b(\mathbf{x}_{t})\mod q\in\{0,..,t\}^{k\log q}\), where the function \(b\colon\mathbb{Z}_{q}^{k}\to\{0,1\}^{k\log q}\) returns the binary representation of the input vector. This equality enables the mapping \(g(.)\) to _preserve_ the hash functions’ original homomorphic property. Then, given an inner node \(u_{b_{0},..,b_{j}}\) as input, the homomorphic Merkle tree uses the short radix-2 representation \(g(.)\) that enforces the following equality: \(u_{b_{0},..,b_{j}}=g(\mathbf{L}u_{b_{0},..,b_{j},0}+\mathbf{R}u_{b_{0},..,b_{j},1}\mod q)=b(\mathbf{L}u_{b_{0},..,b_{j},0})+b(\mathbf{R}u_{b_{0},..,b_{j},1})\mod q\).
Finally, this enables calculating the value of each inner node as a sum of the partial digests \(h_{i,j}(.)\) of the messages \(m_{i}\) under the node \(u_{b_{0},..,b_{j}}\) (_i.e._, \((m_{i})_{\operatorname{bin}(i)[0:j-1]=(b_{1},..,b_{j})}\)) as outlined in Section 4.1, _i.e._, \[u_{b_{0},..,b_{j}}=\sum_{i\colon\operatorname{bin}(i)[0:j-1]=(b_{1},..,b_{j})}h_{i,j}(m_{i}),\] where \(h_{i,j}(.)\) is expressed in terms of the bits \(\operatorname{bin}(i)[j\colon h-1]=(b_{1}^{\prime},..,b_{h-j}^{\prime})\): \[h_{i,j}(m_{i})=f_{b_{1}^{\prime}}(f_{b_{2}^{\prime}}(..f_{b_{h-j}^{\prime}}(b(f(m_{i})))))\] Here, \(f_{0}(.)\) and \(f_{1}(.)\) are defined as \(b(\mathbf{L}.)\) and \(b(\mathbf{R}.)\) respectively. Since \(b(.)\), binary expansion, is a linear operation and matrix multiplication is linear, \(h_{i,j}(.)\) is linear in its input. Opening proof of a message \(m\) consists of its index and \(\alpha_{i}\) and \(\beta_{i}\), \(i=1,..,h\), \(h=\log{(N)}\), where \(\alpha_{i}\) and \(\beta_{i}\) are the children of the inner nodes on the path from \(m\) to the root. The proof can be verified in \(\log{(N)}\) time by iteratively checking if \(f(m)=g^{-1}(\alpha_{h})\) (or \(=g^{-1}(\beta_{h})\)) and \(\tilde{f}(\alpha_{i},\beta_{i})=g^{-1}(\alpha_{i-1})\) (or \(=g^{-1}(\beta_{i-1})\) depending on the message index), where \(g^{-1}\) returns a number given its radix-2 representation [31]. Note that both \(f\) and \(\tilde{f}\) are homomorphic hash functions [7]. Other examples of homomorphic hash functions include Pedersen hashes and KZG commitments. However, the homomorphic property of the hash function is not sufficient for constructing a homomorphic Merkle tree when the function is combined with the output of other functions in a serial manner as in Merkle trees. For the lattice-based function, this was possible because of repeated linearity [31], which refers to the existence of a linear mapping \(g(.)\) from the range to the domain of the hash function. This mapping enabled the iterative hashing within the Merkle tree to preserve the linearity of the hash function. Such repeated linearity does not exist for Pedersen hashes and KZG commitments, as a linear mapping from the range to the domain would imply the violation of the discrete log assumption. That is why Verkle trees based on KZG commitments are not homomorphic and cannot support sublinear update. ### A Concrete Evaluation Suppose the Ethereum state is persisted using the homomorphic Merkle tree construction of [30, 33] with the trade-off parameter \(\nu=1/2\). We next estimate the size of the update information and the proof update time after observing an Ethereum block with ERC20 token transfers. Suppose the block has the target size of 15 million gas [4], and each token transfer updates the balance of two distinct accounts stored at separate leaves of the homomorphic Merkle tree. Since each ERC20 token transfer consumes approximately \(65,000\) gas, there are \(\sim 230\) such transactions in the block, and the block updates \(k=460\) accounts. Suppose the homomorphic Merkle tree has degree 2 and commits to \(N=256^{3}=2^{24}\) accounts. For comparison, \(256^{3}\approx 16.7\) million, while the total number of cumulative unique Ethereum addresses is roughly 200 million as of 2023 [3]. Each opening proof consists of \(2\log{(N)}=48\) inner nodes. When 460 accounts are updated, in the worst case, the update information consists of \(\lceil\sqrt{k}\rceil\log{(N)}=528\) inner nodes.
To evaluate its size, we use the parameters calculated by [33] for secure instantiations of the homomorphic Merkle trees from both their paper and [30]. Since the parameters for [30] result in a large inner node size on the order of hundreds of MBs, our evaluation takes the size of an inner node as that of [33], namely \(|H|=0.21\) MB (which is equal to the key size in [33]). This implies an update information size of \(|U|=110.88\) MBytes and an opening proof size of \(|\pi|=10.08\) MBytes. As for update time, in the worst case, each user has to calculate the partial digests of 44 updated messages at each height of the homomorphic Merkle tree, _i.e._, the effect of these updated messages on each inner node of its opening proof. Calculating the partial digest of a message at height \(h\) measured from the leaves requires \(h\) evaluations of the hash function. This implies a proof update complexity of \(2\sum_{i=0}^{\log{N}-1}i\min(\lceil\sqrt{k}\rceil,2^{i})=11{,}900\) hash evaluations. To find numerical upper bounds for the update time, we use the hash function evaluation times, namely \(T_{f}=26.84\) and \(T_{f}=2.74\) ms, published by [33] for both the hash function in [30] and their new and more performant function (these times are for commodity hardware; _cf._ [33] for the details). This gives an upper bound of 319.4 and 32.6 seconds for the update time using the hash functions in [30] and [33] respectively. Based on the benchmarks for the practical hash function introduced in [33], Table 2 compares the number of published inner nodes \(\lceil k^{\nu}\rceil\log{(N)}\), the total update information size \(\lceil k^{\nu}\rceil\log{(N)}|H|\) (assuming that the size of each inner node is upper bounded by \(|H|=0.21\) MBytes), the number of hash function evaluations per proof update \(2\sum_{i=0}^{\log{N}-1}i\min(\lceil k^{1-\nu}\rceil,2^{i})\) and the proof update time \(2\sum_{i=0}^{\log{N}-1}i\min(\lceil k^{1-\nu}\rceil,2^{i})T_{f}\) (assuming that each hash evaluation takes less than \(T_{f}=2.74\) ms) at \(\nu=0,1/4,1/2,3/4,1\). The degree of the homomorphic Merkle tree and the opening proof size are fixed at 2 and 48 inner nodes (\(|\pi|=10.08\) MBytes) respectively.
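The arithmetic behind Table 2 can be checked with the short Python sketch below, which reproduces the table's last four rows from the stated constants (we omit the degenerate endpoint \(\nu=0\)); differences in the last digit are rounding.

```python
# Sanity check of the Table 2 arithmetic, using the constants in the text.
import math

N, k = 2 ** 24, 460
H_MB, T_f = 0.21, 2.74e-3            # inner-node size (MBytes), hash time (s), per [33]
logN = int(math.log2(N))             # 24

for nu in (1/4, 1/2, 3/4, 1):
    published = math.ceil(k ** nu) * logN       # inner nodes published in U
    per_node = math.ceil(k ** (1 - nu))         # max updates under an unpublished node
    evals = 2 * sum(i * min(per_node, 2 ** i) for i in range(logN))
    print(f"nu={nu}: {published} nodes, |U| = {published * H_MB:.2f} MB, "
          f"{evals} hash evals, {evals * T_f:.2f} s")
```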
## 5 Updating Verkle Trees and Opening Proofs

We now describe the update and proof update functions for Verkle trees (Algs. 2 and 3 respectively). Since Verkle trees were proposed to support stateless clients, we describe an update scheme that minimizes the runtime complexity of proof updates and does not require the users to download the updated messages or have access to old inner nodes. As Verkle trees do not support sublinear update, we numerically estimate the size of the update information and the complexity of proof updates in Section 5.5.

### Update Information

Suppose the vector \((i,m_{i})_{i\in[N]}\) is modified at some index \(x\), \((b_{1},..,b_{h})=\operatorname{bin}(x)\), to be \(m_{x}^{\prime}\). Since each inner node is the hash of a KZG commitment, the new inner nodes \(u^{\prime}_{b_{0},..,b_{j}}=H(C^{\prime}_{b_{0},..,b_{j}})\), \(j\in[h]\), can be found as a function of the old commitments at the nodes and the powers of the Lagrange basis polynomials as described in Section 3.1:

\[C^{\prime}_{b_{0},..,b_{h}}\gets m^{\prime}_{x},\qquad C^{\prime}_{b_{0},..,b_{j}}\gets C_{b_{0},..,b_{j}}[L_{b_{j+1}}]^{(u^{\prime}_{b_{0},..,b_{j+1}}-u_{b_{0},..,b_{j+1}})}\]

When \(k\) messages are updated, the above calculation is repeated \(k\) times, once for each update. Update information \(U\) consists of the new values of the KZG commitments on the path from the updated messages to the Verkle root akin to the Merkle trees, ordered in the canonical top-to-bottom and left-to-right order.

### Verkle Proofs

Let \(\pi_{x}\) denote the Verkle proof of some message \(m_{x}\) at index \(x\), \((b_{1},..,b_{h})=\text{bin}(x)\): \(\pi_{x}=((C_{b_{0},b_{1}},..,C_{b_{0},..,b_{h-1}}),([g(X)],\pi))\). We define \(\pi^{f}_{x}\) as the opening proof for index \(x\) within polynomial \(f\). We observe that the commitment \([g(X)]\) and the proof \(\pi\) can be expressed as functions of the opening proofs of the inner nodes \(u_{b_{0},b_{1}},\ldots,u_{b_{0},\ldots,b_{h}}\) at the indices \(b_{1},..,b_{h}\) within the polynomials \(f_{b_{0}},..,f_{b_{0},..,b_{h-1}}\), respectively. Namely, \([g(X)]\) is

\[\left[\sum_{j=0}^{h-1}r^{j}\frac{f_{b_{0},..,b_{j}}(X)-u_{b_{0},..,b_{j+1}}}{X-b_{j+1}}\right]=\prod_{j=0}^{h-1}\left[\frac{f_{b_{0},..,b_{j}}(X)-u_{b_{0},..,b_{j+1}}}{X-b_{j+1}}\right]^{r^{j}}=\prod_{j=0}^{h-1}\left(\pi_{b_{j+1}}^{f_{b_{0},..,b_{j}}}\right)^{r^{j}}\]

Similarly, the opening proof \(\pi=\pi_{t}^{(h-g)}\) for index \(t\) within the polynomial \(h(X)-g(X)\) can be expressed as follows (see Appendix 0.E for details):

\[\left[\frac{h(X)-g(X)-(h(t)-g(t))}{X-t}\right]=\prod_{j=0}^{h-1}\left[\frac{f_{b_{0},..,b_{j}}(X)-u_{b_{0},..,b_{j+1}}}{X-b_{j+1}}\right]^{\frac{r^{j}}{t-b_{j+1}}}=\prod_{j=0}^{h-1}\left(\pi_{b_{j+1}}^{f_{b_{0},..,b_{j}}}\right)^{\frac{r^{j}}{t-b_{j+1}}}\]

We assume that each user holding the Verkle proof \(\pi_{x}\) for some index \(x\), \((b_{1},..,b_{h})=\text{bin}(x)\), also holds the opening proofs \(\pi_{b_{j+1}}^{f_{b_{0},..,b_{j}}}\), \(j\in[h]\), _in memory_.
As we will see in the next section, the user also holds the KZG commitments at the children of the inner nodes on the path from the root to the message \(m_{x}\), _i.e._\(C_{b_{0},..,b_{j},i}\) for all \(j\in[h]\) and \(i\in[c]\)_in memory_. These opening proofs and KZG commitments are not broadcast as part of any proof; however, they are needed for the user to locally update its Verkle proof after message updates. ### Proof Update When the messages \((i_{j},m_{i_{j}})_{j\in[k]}\) are updated to \((i_{j},m_{i_{j}}^{\prime})_{j\in[k]}\), to calculate the new Verkle proof \(\pi_{x}^{\prime}\), the user must obtain the new commitments \(C_{b_{0}}^{\prime},..,C_{b_{0},..,b_{h-1}}^{\prime}\) on the path from the root to message \(m_{x}\), the new commitment \([g^{\prime}(X)]\) and the new opening proof \(\pi^{\prime}\) for the polynomial \(h^{\prime}(X)-g^{\prime}(X)\) at index \(t^{\prime}=H(r^{\prime},[g^{\prime}(X)])\). Message updates change the commitments at the inner nodes, which in turn results in new polynomials \(f_{b_{0},..,b_{j}}\), \(j\in[h]\). Suppose each polynomial \(f_{b_{0},..,b_{j}}\), \(j\in[h]\), is updated so that \[f_{b_{0},..,b_{j}}^{\prime}(X)=f_{b_{0},..,b_{j}}(X)+\sum_{i=0}^{c-1}(f_{b_{0},..,b_{j}}^{\prime}(i)-f_{b_{0},..,b_{j}}(i))L_{i}(X),\] where, by definition, \(f_{b_{0},..,b_{j}}^{\prime}(i)-f_{b_{0},..,b_{j}}(i)=u_{b_{0},..,b_{j},i}^{ \prime}-u_{b_{0},..,b_{j},i}=H(C_{b_{0},..,b_{j},i}^{\prime})-H(C_{b_{0},..,b_ {j},i})\). Then, given the new and the old commitments \((C_{b_{0},..,b_{j},i},C_{b_{0},..,b_{j},i}^{\prime})\) for \(i\in[c]\) and \(j\in[h]\), the table of Lagrange basis polynomials, and using the technique in Section 3.1, the new opening proofs \(\tilde{\pi}_{b_{j+1}}^{f_{b_{0},..,b_{j}}}\) after the message updates can be computed as follows for \(j\in[h]\): \[\tilde{\pi}_{b_{j+1}}^{f_{b_{0},\ldots,b_{j}}}=\pi_{b_{j+1}}^{f_{b_{0},\ldots,b_{ j}}}\prod_{i=0}^{c-1}\left[\frac{L_{i}(X)-L_{i}(b_{j+1})}{X-b_{j+1}}\right]^{(H(C_{b_{0},\ldots,b_{j},i}^{\prime})-H(C_{b_{0},\ldots,b_{j},i}))},\] where \(\left[\frac{L_{i}(X)-L_{i}(b_{j+1})}{X-b_{j+1}}\right]\) is the opening proof of the Lagrange basis polynomial \(L_{i}(X)\) at index \(b_{j+1}\). Once the new opening proofs are found, the new commitment \([g^{\prime}(X)]\) and the new proof \(\pi^{\prime}\) become \[[g^{\prime}(X)]=\prod_{j=0}^{h-1}\left(\tilde{\pi}_{b_{j+1}}^{f_{b_{0},\ldots, b_{j}}}\right)^{r^{\prime}j},\qquad\pi^{\prime}=\prod_{j=0}^{h-1}\left(\tilde{ \pi}_{b_{j+1}}^{f_{b_{0},\ldots,b_{j}}}\right)^{\frac{r^{\prime}j}{r^{\prime} -b_{j+1}}}\] where \(r^{\prime}=H(C_{b_{0},b_{1}}^{\prime},..,C_{b_{0},..,b_{h-1}}^{\prime},u_{b_{ 0},b_{1}}^{\prime},..,u_{b_{0},..,b_{h}}^{\prime},b_{1},..,b_{h})\) and \(t^{\prime}=H(r^{\prime},[g^{\prime}(X)])\). Note that both \(r^{\prime}\) and \(t^{\prime}\) can be calculated by the user given the new KZG commitments \(C_{b_{0},..,b_{j},i}^{\prime}\) for all \(i\in[c]\) and \(j\in[h]\). Finally, to retrieve the new KZG commitments \(C_{b_{0},..,b_{j},i}^{\prime}\) for all \(i\in[c]\) and \(j\in[h]\), the user inspects the commitments published as part of the update information \(U\): \(C_{b_{0},b_{1},..,b_{j-1},i}^{\prime}\gets U[b_{0},b_{1},..,b_{j-1},i]\) if \(U[b_{0},b_{1},..,b_{j-1},i]\neq\bot\) and \(C_{b_{0},b_{1},..,b_{j-1},i}^{\prime}\gets C_{b_{0},b_{1},..,b_{j-1},i}\) otherwise, for all \(i\in[c]\) and \(j\in[h]\). 
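The proof update rule above is linear in the child differences, which the following toy sketch demonstrates. It is our own illustration, not the BLS12-381 implementation: it models a KZG commitment \([f(X)]\) insecurely as the plain field element \(f(s)\) for a public random point \(s\) (real KZG hides \(s\) in the exponent, turning the sums below into products of group elements), and all names are ours.

```python
import random

p = 2**61 - 1                      # prime field, toy stand-in for the group
c = 4                              # tree degree / evaluation domain {0,..,c-1}

def inv(a): return pow(a % p, p - 2, p)

def lagrange(i, x):                # L_i(x) over the domain {0,..,c-1}
    num = den = 1
    for j in range(c):
        if j != i:
            num = num * ((x - j) % p) % p
            den = den * ((i - j) % p) % p
    return num * inv(den) % p

s = random.randrange(c, p)         # "secret" point; real KZG never reveals it

def commit(vals):                  # [f(X)] modeled as f(s)
    return sum(v * lagrange(i, s) for i, v in enumerate(vals)) % p

def opening(vals, b):              # pi_b^f = [(f(X) - f(b)) / (X - b)]
    return (commit(vals) - vals[b]) * inv(s - b) % p

def lagrange_opening(i, b):        # [(L_i(X) - L_i(b)) / (X - b)], precomputable
    return (lagrange(i, s) - lagrange(i, b)) * inv(s - b) % p

old = [10, 20, 30, 40]             # old children hashes u_{..,i}
new = [10, 99, 30, 77]             # two children changed
b = 2                              # we hold the opening proof at index b
pi = opening(old, b)
for i in range(c):                 # pi~ = pi * prod_i [...]^(delta_i), additive here
    delta = (new[i] - old[i]) % p
    pi = (pi + delta * lagrange_opening(i, b)) % p
assert pi == opening(new, b)
print("updated opening proof matches a freshly recomputed one")
```

The loop performs exactly the update \(\tilde{\pi}_{b}^{f}=\pi_{b}^{f}\prod_{i}\left[\frac{L_{i}(X)-L_{i}(b)}{X-b}\right]^{\delta_{i}}\), rendered additively in the toy field.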
In Verkle trees, the user cannot calculate the effect of an updated message on an arbitrary inner node without the knowledge of the inner nodes on the path from the message to the target node. For instance, suppose \(U[b_{0},b_{1},..,b_{j-1},i]=\bot\) for some \(i\in[c]\) and \(j\in[h]\), and the user wants to calculate the effect of an update from \(m_{x}\) to \(m_{x}^{\prime}\) on \(C^{\prime}_{b_{0},..,b_{j-1},i,\tilde{b}_{j+1},..,\tilde{b}_{h}}\), \(\mathrm{bin}(x)=(b_{1},..,b_{j-1},i,\tilde{b}_{j+1},..,\tilde{b}_{h})\) and \(\tilde{b}_{j}=i\). Then, for each \(\ell\in\{j,..,h-1\}\), the user has to find

\[C^{\prime}_{b_{0},..,\tilde{b}_{j},..,\tilde{b}_{h}}\gets m_{x}^{\prime},\qquad C^{\prime}_{b_{0},..,\tilde{b}_{j},..,\tilde{b}_{\ell}}\gets C_{b_{0},..,\tilde{b}_{j},..,\tilde{b}_{\ell}}[L_{\tilde{b}_{\ell+1}}]^{(u^{\prime}_{b_{0},..,\tilde{b}_{j},..,\tilde{b}_{\ell+1}}-u_{b_{0},..,\tilde{b}_{j},..,\tilde{b}_{\ell+1}})},\]

where \(C^{\prime}_{b_{0},..,\tilde{b}_{j},..,\tilde{b}_{\ell}}\) are the commitments on the path from the target commitment \(C_{b_{0},b_{1},..,b_{j-1},i}\) to the message \(m_{x}\). Hence, the user has to know the original commitments on the path from the message to the target commitment, _i.e._, keep track of inner nodes, which contradicts the idea of stateless clients. This shows the necessity of publishing all of the updated inner nodes as part of the update information.

### Complexity

Suppose each KZG commitment is of size \(|G|\) and each hash \(H(C)\) of a KZG commitment, _i.e._, each inner node, has size \(|H|\). Then, updating a single message results in one update at each level of the Verkle tree and requires \(\Theta(h|H|)\) group operations. Thus, when \(k\) messages are updated, the new Verkle root can be found after \(\Theta(kh|H|)\) group operations. As \(U\) consists of the published KZG commitments at the inner nodes and their indices, \(|U|=\Theta(k\log_{c}{(N)}(\log{(N)}+|G|))=\tilde{\Theta}(k)|G|\), which implies \(g_{1}(k)=k\). The user can replace each KZG commitment at the children of the inner nodes from the root to its message in \(\Theta(\log{(N)}+|G|)\) time by running a binary search algorithm over \(U\). Since there are \(ch\) such commitments to be updated, _i.e._, \(C_{b_{0},..,b_{j},i}\), \(i\in[c]\) and \(j\in[h]\), updating these commitments takes \(\Theta(ch(\log{(N)}+|G|))=\tilde{\Theta}(1)\) time. Upon obtaining the new commitments \(C^{\prime}_{b_{0},..,b_{j-1},i}\), \(i\in[c]\), \(j\in[h]\), with access to the table of Lagrange basis polynomials, the user can update each opening proof \(\pi_{b_{j+1}}\) (for the function \(f_{b_{0},..,b_{j}}\)), \(j\in[h]\), with \(\Theta(c|H|)\) group operations. Since there are \(h\) such proofs, updating them all requires \(\Theta(ch|H|)\) group operations. Given the new proofs, computing the new commitment \([g^{\prime}(X)]\) and proof \(\pi^{\prime}\) requires \(\Theta(h|H|)\) group operations. This makes the total complexity of updating a Verkle proof \(\Theta((c+2)h)|H|T_{G}+\Theta(ch(\log{(N)}+|G|))\). For a constant \(c\) and \(h=\log_{c}{(N)}\), this implies a worst-case time complexity of \(\tilde{\Theta}(1)|H|T_{G}\) for Verkle proof updates, _i.e._, \(g_{2}(k)=1\).

### A Concrete Evaluation

We now estimate the size of the update information and the number of group operations to update an opening proof after observing an Ethereum block consisting of ERC20 token transfers.
As in Section 4.4, suppose the block has the target size of 15 million gas [4], and each token transfer updates the balance of two distinct accounts stored at separate leaves of the Verkle tree. Then, there are \(\sim 230\) such transactions in the block, and the block updates \(k=460\) accounts. We assume that the Verkle tree has degree 256 (_cf._ [12]) and commits to \(256^{3}\) accounts as in Section 4.4. Then, each proof consists of 2 KZG commitments, \(C_{\perp,b_{1}}\) and \(C_{\perp,b_{1},b_{2}}\), and a multiproof consisting of the commitment \([g(X)]\) and the proof \(\pi\). These components are elements of the pairing-friendly elliptic curve BLS12-381 and each consists of \(|G|=48\) bytes [12]. This implies a proof size of \((\log_{c}{(N)}+1)|G|=192\) bytes (excluding the message at the leaf and its hash value; adding those makes it 272 bytes). When 460 accounts are updated, in the worst case, the update information has to contain \(k\log_{c}{(N)}(\log{(N)}+|G|)=460\times 3\times(24+48)\) Bytes, _i.e._, 99.4 kBytes. This is comparable to the size of the Ethereum blocks, which are typically below 125 kBytes [2]. Hence, even though the update information of Verkle trees is linear in \(k\), it does not introduce a large overhead beyond the block data. Note that the runtime of the proof updates is constant and does not scale with the number of updated messages \(k\) or the Ethereum block size. On the other hand, in the worst case, an opening proof can be updated after \(c\log_{c}{(N)}|H|+2\log_{c}{(N)}|H|\) group operations. Then, with \(|H|=256\), the number of bits output by SHA256, as many as \(c\log_{c}{(N)}|H|+2\log_{c}{(N)}|H|=(c+2)\log_{c}{(N)}|H|=774\times 256=198{,}144\approx 200{,}000\) elliptic curve multiplications might have to be made. Following the benchmarks published in [1] for the specified curve, these operations can take up to \((c+2)\log_{c}{(N)}\times 0.000665471\text{ s}\approx 0.52\) seconds on commodity hardware, given a runtime of 665,471 nanoseconds per exponentiation of a group element with a message hash value. This is again comparable to the 12 second inter-arrival time of Ethereum blocks.

Table 3 compares the Verkle proof size \(|\pi|=(\log_{c}{(N)}+1)|G|\), the update information size \(|U|=k\log_{c}{(N)}(\log{(N)}+|G|)\), the upper bound \((c+2)\log_{c}{(N)}|H|\) on the number of group operations needed for a single proof update and the estimated time it takes to do these operations on commodity hardware for different values of the Verkle tree degree \(c\), while keeping the number of accounts and the updated accounts fixed at \(2^{24}\) and \(460\) respectively. The table shows the trade-off between the Verkle proof and update information size on one side and the update complexity on the other. Comparing Table 3 with Table 2 shows that the Verkle tree with any given degree \(c\), \(1<c\leq 256\), significantly outperforms the existing homomorphic Merkle trees of Section 4.4 in terms of almost all of the measures: proof size, update information size and proof update time.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(c\) & \(|\pi|\) (Bytes) & \(|U|\) (kBytes) & \# Group Operations & Time (s) \\ \hline \hline 2 & 1200 & 794.9 & 24,576 & 0.064 \\ \hline 4 & 624 & 397.4 & 18,432 & 0.048 \\ \hline 16 & 336 & 198.7 & 27,648 & 0.072 \\ \hline 64 & 240 & 132.5 & 67,584 & 0.18 \\ \hline 256 & 192 & 99.4 & 198,144 & 0.52 \\ \hline \end{tabular} \end{table} Table 3: For different values of the tree degree \(c\), the table shows the Verkle proof size \(|\pi|=(\log_{c}{(N)}+1)|G|\); the update information size \(|U|=k\log_{c}{(N)}(\log{(N)}+|G|)\); the number of group operations for a single proof update, \((c+2)\log_{c}{(N)}|H|\); and the estimated time for a single proof update. We use \(N=2^{24}\) accounts in total, \(k=460\) updates at the accounts, a group element size of \(|G|=48\) bytes, and a hash size of \(|H|=32\) bytes.
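Table 3's entries are again simple arithmetic; the following sketch (ours) recomputes them from the formulas above, with the per-exponentiation time taken from the benchmark cited from [1]:

```python
from math import log

k, N = 460, 2**24           # updated accounts, committed accounts
G, H = 48, 256              # group element size (bytes), hash size (bits)
T_exp = 665_471e-9          # seconds per group exponentiation, from [1]

for c in (2, 4, 16, 64, 256):
    h = round(log(N, c))                  # tree height log_c(N)
    proof = (h + 1) * G                   # |pi| in bytes
    U = k * h * (24 + G)                  # |U| in bytes, with log(N) = 24
    ops = (c + 2) * h * H                 # worst-case group multiplications
    t = (c + 2) * h * T_exp               # one exponentiation per proof factor
    print(f"c={c}: |pi|={proof} B, |U|={U/1000:.1f} kB, {ops} ops, {t:.3f} s")
```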
## 6 Lower Bound

Finally, we prove the optimality of our VC scheme with sublinear update by proving a lower bound on the size of the update information given an upper bound on the complexity of proof updates. The lower bound is shown for VCs that satisfy the following _proof-binding_ property. It formalizes the observation that for many dynamic VCs (_e.g._, Merkle trees [26], Verkle trees [12], KZG commitments [20], RSA based VCs [9]) including homomorphic Merkle trees, the opening proof for a message at some index can often act as a commitment to the vector of the remaining messages.

Definition 3: A VC scheme is said to be _proof-binding_ if the following probability is negligible in \(\lambda\) for all PPT adversaries \(\mathcal{A}\):

\[\Pr\left[\textsc{Verify}_{pp}(C,m_{i},i,\pi_{i})=1\;\wedge\;\textsc{Verify}_{pp}(C^{\prime},m_{i},i,\pi_{i})=1\;\wedge\;(m_{j})_{j\neq i}\neq(m^{\prime}_{j})_{j\neq i}\right],\]

where \(\mathcal{A}(pp)\) outputs an index \(i\), a message \(m_{i}\), an opening proof \(\pi_{i}\) and two sequences \((m_{j})_{j\in[N],j\neq i}\), \((m^{\prime}_{j})_{j\in[N],j\neq i}\), and \(C\) and \(C^{\prime}\) denote the commitments to the sequences \((m_{0},..,m_{i-1},m_{i},m_{i+1},..,m_{N-1})\) and \((m^{\prime}_{0},..,m^{\prime}_{i-1},m_{i},m^{\prime}_{i+1},..,m^{\prime}_{N-1})\) respectively.

Lemma 1: _Consider a proof-binding VC scheme, and for an index \(i\in[N]\), let \(\pi_{i}\) denote the opening proof for the message \(m_{i}\). Then the tuple \((i,m_{i},\pi_{i})\) is a binding commitment to the vector of messages \(m_{j}\), \(j\in[N]\), \(j\neq i\), with the following new commitment function:_

\[\textsc{NewCommit}_{pp}((m_{j})_{j\in[N],j\neq i})=(i,m_{i},\textsc{Open}_{pp}(m_{i},i,\textsf{data})),\]

_where \(\textsf{data}=\textsc{Commit}_{pp}(m_{0},..,m_{N-1})\).data._

The following lemma shows that all randomized VCs can be derandomized to obtain a deterministic and secure VC as we do not use hiding commitments in this work.

Lemma 2: _Consider a VC \(\Pi\), where the commitment is a random function of the public parameters \(pp\) and the committed messages. Let \(\Pi^{\prime}\) denote the VC that is the same as \(\Pi\), except that the randomness is fixed. Then, \(\Pi^{\prime}\) is a correct and secure VC with at most the same upper bound on the error probability._

Proof: Let \(R\) denote the sequence of bits sampled uniformly at random from the set \(\mathcal{R}\) to instantiate the VC \(\Pi\). Since \(\Pi\) is binding, no PPT adversary \(\mathcal{A}\) can find two different sequences of messages \(\mathbf{m}\) and \(\mathbf{m}^{\prime}\) such that \(\Pi(\mathbf{m},R)=\Pi(\mathbf{m}^{\prime},R^{\prime})\) for some \(R,R^{\prime}\in\mathcal{R}\), except with negligible probability. This implies that for any fixed \(R^{*}\in\mathcal{R}\), no PPT adversary \(\mathcal{A}\) can find two different sequences of messages \(\mathbf{m}\) and \(\mathbf{m}^{\prime}\) such that \(\Pi(\mathbf{m},R^{*})=\Pi(\mathbf{m}^{\prime},R^{*})\), except with negligible probability. Hence, the commitment scheme \(\Pi^{\prime}(.)=\Pi(.,R^{*})\) is position-binding, _i.e._, a secure VC. Its correctness follows from the correctness of \(\Pi\).

Finally, equipped with these lemmas, we can prove Theorem 2.2 for dynamic and proof-binding VCs.

Proof of Theorem 2.2: Suppose the messages \(m_{i_{j}}\), \(j\in[k]\), are updated to \(m^{\prime}_{i_{j}}\). Define \(\mathcal{S}\) as the sequence \((m^{\prime}_{i_{j}})_{j\in[k]}\), and let \(m^{\prime}_{i}=m_{i}\) for \(i\notin\{i_{j}\colon j\in[k]\}\). Let \(\mathcal{P}_{i}\), \(i\in[N]\), denote the user that holds the opening proof \(\pi_{i}\) for the message \(m_{i}\) at index \(i\), and aims to calculate the new proof \(\pi^{\prime}_{i}\) for the message \(m^{\prime}_{i}\) using \(\pi_{i}\), the update information \(U\) and the old and the new sequences of messages \(m_{i},m^{\prime}_{i}\), \(i\in[N]\). Suppose \(g_{2}(k)=O(k^{1-\nu})\). Then, there exists a constant \(\alpha\) such that each user can read at most \(\alpha k^{1-\nu}\) of the updated messages while updating its opening proof. Let \(\mathcal{S}_{i}\subseteq(m^{\prime}_{i_{j}})_{j\in[k]}\) denote the sequence of updated messages and their indices, which were _not_ observed by \(\mathcal{P}_{i}\), and \(\overline{\mathcal{S}}_{i}=\mathcal{S}\setminus\mathcal{S}_{i}\) denote the sequence read by \(\mathcal{P}_{i}\). Here, \(|\mathcal{S}|\) denotes the number of messages within the sequence \(\mathcal{S}\). Since \(\mathcal{P}_{i}\) is assumed to know \(m^{\prime}_{i}\), it must be that \(m^{\prime}_{i}\in\overline{\mathcal{S}}_{i}\). We next show that each user \(\mathcal{P}_{i}\) that successfully updates its opening proof must download enough bits of \(U\) to generate a binding, deterministic commitment to the set \(\mathcal{S}_{i}\).
By Lemma 1, the tuple \((i,m^{\prime}_{i},\pi^{\prime}_{i})\) is a binding commitment to the sequence of messages \((m^{\prime}_{j})_{j\in[N],j\neq i}\). This implies that the tuple \((i,\overline{\mathcal{S}}_{i},\pi^{\prime}_{i})\) is a binding commitment to the sequence \(\mathcal{S}_{i}\). By Lemma 2, the commitment \((i,\overline{\mathcal{S}}_{i},\pi^{\prime}_{i})\) can be de-randomized to obtain a deterministic commitment \(C_{i}\) to the sequence \(\mathcal{S}_{i}\) (with at most the same upper bound on the error probability). Let \(\Pi\) denote the deterministic VC scheme such that \(C_{i}=\Pi(\mathcal{S}_{i})\). Since \(\Pi\) is a deterministic function given the public parameters, and the updated messages are sampled independently and uniformly at random, we have \(I(\mathcal{S}_{i};\{m_{i}\}_{i\in[N]},\overline{\mathcal{S}}_{i}|pp)=0\), where \(I(.;.)\) is the mutual information. Moreover, as \(\pi_{i}\) is a function of the old messages \(\{m_{i}\}_{i\in[N]}\) and the randomness of the original VC, \(I(C_{i};\pi_{i}|pp)=0\). Hence, \(C_{i}=f(U,i,\{m_{i}\}_{i\in[N]},\pi_{i})\) is a deterministic function of the update information \(U\). For all \(i\in[k]\), it holds that \(|\mathcal{S}_{i}|\geq k-\alpha k^{1-\nu}\) and \(m^{\prime}_{i}\notin\mathcal{S}_{i}\). Given these constraints, the minimum number of distinct sequences \(\mathcal{S}_{i}\) is \(\frac{k}{\alpha k^{1-\nu}}=\frac{k^{\nu}}{\alpha}\). For an appropriately selected \(\beta\) that will be defined later, without loss of generality, let \(\mathcal{S}_{0},..,\mathcal{S}_{M-1}\) denote the first \[M=\min\left(\left\lfloor\frac{k^{\nu}}{\beta}-\frac{\alpha}{\beta}-\frac{\lambda}{\beta k^{1-\nu}}\right\rfloor,\frac{k^{\nu}}{\alpha}\right)\] distinct sequences. Since \(C_{i}\) is a deterministic function of \(U\) for all \(i\in[N]\), it holds that the Shannon entropy \(H(.)\) of \(U\) satisfies the following expression: \[H(U)\geq H(C_{0},..,C_{M-1})\geq H(C_{0})+\sum_{i=1}^{M-1}H(C_{i}|C_{0},..,C_{i-1})\] As \(g_{2}(k)=O(k^{1-\nu})\), there exists a constant \(\beta\) such that each user can download at most \(\beta k^{1-\nu}\) bits of data from \(U\). Then, for all \(i\in[k]\), it must be that \(H(C_{i})\leq\beta k^{1-\nu}\), since \(C_{i}\) is a deterministic function of the at most \(\beta k^{1-\nu}\) bits of \(U\) downloaded by \(\mathcal{P}_{i}\). Finally, we show that \(H(C_{0}),H(C_{i}|C_{0},..,C_{i-1})=\Omega(\lambda)\) for all \(i=1,..,M-1\). Towards contradiction, suppose \(\exists i^{*}\colon H(C_{i^{*}}|C_{0},..,C_{i^{*}-1})=o(\lambda)\). Note that \[H(C_{0},..,C_{i^{*}-1})\leq\sum_{i=0}^{M-1}H(C_{i})\leq\min\left(\frac{k^{\nu}}{\beta}-\frac{\alpha}{\beta}-\frac{\lambda}{\beta k^{1-\nu}},\frac{k^{\nu}}{\alpha}\right)\beta k^{1-\nu}\leq k-\alpha k^{1-\nu}-\lambda.\] Now, consider an adversary \(\mathcal{A}\) that tries to break the binding property of the VC scheme \(\Pi\).
Due to the upper bound on the entropy of \((C_{0},..,C_{i^{*}-1})\), it holds that \(H(\mathcal{S}_{i^{*}}|C_{0},..,C_{i^{*}-1})\geq\lambda\); since \(H(\mathcal{S}_{i^{*}})\geq k-\alpha k^{1-\nu}\), and \[H(\mathcal{S}_{i^{*}})-H(\mathcal{S}_{i^{*}}|C_{0},..,C_{i^{*}-1})=I(\mathcal{S}_{i^{*}};(C_{0},..,C_{i^{*}-1}))\leq H(C_{0},..,C_{i^{*}-1})\leq k-\alpha k^{1-\nu}-\lambda.\] However, when \(H(C_{i^{*}}|C_{0},..,C_{i^{*}-1})=o(\lambda)\), for sufficiently large \(\lambda\), given \((C_{0},..,C_{i^{*}-1})\), the adversary can find a collision such that \(\Pi(\mathcal{S}_{i^{*}})=\Pi(\mathcal{S}^{\prime}_{i^{*}})\) for two \(\mathcal{S}_{i^{*}}\neq\mathcal{S}^{\prime}_{i^{*}}\), with probability \(2^{-o(\lambda)}\). As this is a contradiction, it must be that \(H(C_{0})\) and \(H(C_{i}|C_{0},..,C_{i-1})=\Omega(\lambda)\) for all \(i<M\), and thus, \(H(U)=\Omega(k^{\nu}\lambda)\) and \(g_{1}(k)=\Omega(k^{\nu})\).

Remark 1: Theorem 2 shows that the update information length must scale as \(\Omega(k^{\nu}\lambda)\) when the runtime complexity for proof updates is \(\tilde{\mathcal{O}}(k^{1-\nu})\) and the error probability for the security of the VC is \(e^{-\Omega(\lambda)}\) for a PPT adversary. When the error probability is just stated to be negligible in \(\lambda\), then the same proof can be used to show that the update information length must scale as \(\Omega(k^{\nu}\operatorname{polylog}(\lambda))\) for any polynomial function of \(\log(\lambda)\).

To demonstrate the optimality of homomorphic Merkle trees, we show that, like many other VCs, they satisfy the proof-binding property:

Theorem 6.1.: _The proof-binding property is satisfied by secure homomorphic Merkle trees._

Proof.: Consider an adversary \(\mathcal{A}\) that finds an opening proof \(\pi\), an index \(i^{*}\), a message \(m_{i^{*}}\), and sequences of messages \((m_{0},..,m_{i^{*}-1},m_{i^{*}+1},..,m_{N-1})\neq(m^{\prime}_{0},..,m^{\prime}_{i^{*}-1},m^{\prime}_{i^{*}+1},..,m^{\prime}_{N-1})\) such that \(\textsc{Verify}_{pp}(C,m_{i^{*}},i^{*},\pi)=1\) as well as \(\textsc{Verify}_{pp}(C^{\prime},m_{i^{*}},i^{*},\pi)=1\) for the commitments \(C\) and \(C^{\prime}\) to the sequences \((m_{0},..,m_{i^{*}-1},m_{i^{*}},m_{i^{*}+1},..,m_{N-1})\) and \((m^{\prime}_{0},..,m^{\prime}_{i^{*}-1},m_{i^{*}},m^{\prime}_{i^{*}+1},..,m^{\prime}_{N-1})\). We show how it can break the position-binding property with this knowledge. The adversary first constructs the two homomorphic Merkle trees committing to these sequences. It then finds the first inner node \(u^{\prime}\) within the proof \(\pi\) on the path from the message \(m_{i^{*}}\) to the root such that the subtrees under this inner node contain different sequences of messages \((m_{a},..,m_{b})\neq(m^{\prime}_{a},..,m^{\prime}_{b})\) at the leaves. By definition, \(g^{-1}(u^{\prime})\) is the homomorphic Merkle tree commitment to the sequence of messages under the node \(u^{\prime}\) on both trees. Thus, the adversary has created two homomorphic Merkle trees committing to different sequences of messages but with the same homomorphic Merkle tree commitment \(C=g^{-1}(u^{\prime})\). This implies that the adversary can find a tuple \((C,m_{i}\in(m_{a},..,m_{b}),m^{\prime}_{i}\in(m^{\prime}_{a},..,m^{\prime}_{b}),\pi_{i},\pi^{\prime}_{i})\) such that \(\textsc{Verify}_{pp}(C,m_{i},i,\pi_{i})=1\) and \(\textsc{Verify}_{pp}(C,m^{\prime}_{i},i,\pi^{\prime}_{i})=1\) and \(m_{i}\neq m^{\prime}_{i}\).
Since for all adversaries \(\mathcal{A}\), the probability that \(\mathcal{A}\) finds such a tuple is negligible in the security parameter \(\lambda\), for all adversaries \(\mathcal{A}\), the probability that \(\mathcal{A}\) finds an opening proof \(\pi\), index \(i^{*}\), message \(m_{i^{*}}\) and sequences of messages \((m_{0},..,m_{i^{*}-1},m_{i^{*}+1},..,m_{N-1})\neq(m^{\prime}_{0},..,m^{\prime}_{i^{*}-1},m^{\prime}_{i^{*}+1},..,m^{\prime}_{N-1})\) with the above properties is negligible in \(\lambda\), implying that homomorphic Merkle trees are proof-binding.

## 7 Conclusion

Dynamic VCs with sublinear update are the key to reducing the size of the global update information while minimizing the runtime of clients synchronizing with the latest commitment. In this work, we propose a construction that can achieve an update information size of \(\Theta(k^{\nu})\) and a proof update time of \(\Theta(k^{1-\nu})\) in the number of changed messages \(k\). Our construction combines a novel update algorithm (Alg. 1) with homomorphic Merkle trees [31, 30, 33] that allow each inner node to be expressed as a linear function of the underlying messages. It achieves the smallest asymptotic complexity for the update information size and proof update time. We also provide update algorithms for the Verkle trees proposed for stateless clients on Ethereum. The existing instantiations of homomorphic Merkle trees are based on lattices and require relatively large parameters for security. Consequently, despite the appealing asymptotic complexity of our construction, its performance for concrete parameters is dominated by Verkle trees. As such, designing asymptotically optimal and practically efficient dynamic VCs remains an open problem. An interesting direction is to design a more performant homomorphic Merkle tree construction.

**Acknowledgments.** This work was partially funded by NSF, DARPA, the Simons Foundation, and NTT Research. Additional support was provided by the Stanford Center for Blockchain Research. Opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.
2306.02905
Inner and Partial non-degeneracy of mixed functions
Mixed polynomials $f:\mathbb{C}^2\to\mathbb{C}$ are polynomial maps in complex variables $u$ and $v$ as well as their complex conjugates $\bar{u}$ and $\bar{v}$. They are therefore identical to the set of real polynomial maps from $\mathbb{R}^4$ to $\mathbb{R}^2$. We generalize Mondal's notion of partial non-degeneracy from holomorphic polynomials to mixed polynomials, introducing the concepts of partially non-degenerate and strongly partially non-degenerate mixed functions. We prove that partial non-degeneracy implies the existence of a weakly isolated singularity, while strong partial non-degeneracy implies an isolated singularity. We also compare (strong) partial non-degeneracy with other types of non-degeneracy of mixed functions, such as (strong) inner non-degeneracy, and find that, in contrast to the holomorphic setting, the different properties are not equivalent for mixed polynomials. We then introduce additional conditions under which strong partial non-degeneracy becomes equivalent to the existence of an isolated singularity. Furthermore, we prove that mixed polynomials that are strongly inner non-degenerate satisfy the strong Milnor condition, resulting in an explicit Milnor (sphere) fibration.
Benjamin Bode, Eder L. Sanchez Quiceno
2023-06-05T14:11:50Z
http://arxiv.org/abs/2306.02905v1
# Inner and Partial Non-Degeneracy of Mixed Functions

###### Abstract

Mixed polynomials \(f:\mathbb{C}^{2}\to\mathbb{C}\) are polynomial maps in complex variables \(u\) and \(v\) as well as their complex conjugates \(\bar{u}\) and \(\bar{v}\). They are therefore identical to the set of real polynomial maps from \(\mathbb{R}^{4}\) to \(\mathbb{R}^{2}\). We generalize Mondal's notion of partial non-degeneracy from holomorphic polynomials to mixed polynomials, introducing the concepts of partially non-degenerate and strongly partially non-degenerate mixed functions. We prove that partial non-degeneracy implies the existence of a weakly isolated singularity, while strong partial non-degeneracy implies an isolated singularity. We also compare (strong) partial non-degeneracy with other types of non-degeneracy of mixed functions, such as (strong) inner non-degeneracy, and find that, in contrast to the holomorphic setting, the different properties are not equivalent for mixed polynomials. We then introduce additional conditions under which strong partial non-degeneracy becomes equivalent to the existence of an isolated singularity. Furthermore, we prove that mixed polynomials that are strongly inner non-degenerate satisfy the strong Milnor condition, resulting in an explicit Milnor (sphere) fibration.

_Keywords:_ mixed function, isolated singularity, Newton non-degenerate, Milnor fibration.

_Mathematics Subject Classification:_ Primary 14B05; Secondary 14J17, 14M25, 14P05, 32S05, 32S55.

## 1 Introduction

Singularities of holomorphic polynomials are quite well studied. In particular, there are several non-degeneracy conditions that imply isolatedness of singularities and that lead to invariants associated with the Newton boundary that allow insights into topological properties of the singularity. In this paper we study real polynomial maps or, equivalently in the dimensions that we consider, mixed polynomials. We introduce the notion of a partially Newton non-degenerate mixed function, generalizing the corresponding definition from the complex setting, and compare it with other non-degeneracy conditions of mixed functions, in particular with those of Oka [14] and our previous work with Araujo dos Santos [1]. Complex polynomials that satisfy certain non-degeneracy conditions are known to be accessible to a topological analysis. For a special class of polynomials defined by Kouchnirenko [11], for example, which are called non-degenerate with respect to their Newton boundary (or Kouchnirenko non-degenerate), the topological type of the singularity, i.e., the link type of the singularity, is determined by the terms on the Newton boundary, that is, its _Newton principal part_ [10, 9]. Moreover, several singularity invariants are determined by combinatorial aspects of the Newton boundary, for instance, the formula of the Milnor number for a convenient, non-degenerate holomorphic polynomial presented by Kouchnirenko [11]. Wall [17] defined inner non-degenerate polynomials, generalizing Kouchnirenko's formula to include weighted homogeneous holomorphic polynomials with isolated singularity. Later, Mondal generalized Wall's non-degeneracy, defining the concept of a partially non-degenerate polynomial [13]. These two notions are known to be different in a field of positive characteristic, but are conjectured to be equivalent in fields of characteristic zero. The conjecture was proved for polynomials in 2 or 3 variables by Mondal.
A similar development has occurred over the last years in the study of real polynomial mappings. We consider polynomial mappings \(f:\mathbb{R}^{4}\to\mathbb{R}^{2}\), with \(f(O)=0\), where \(O\) denotes the origin in \(\mathbb{R}^{4}\). These are the natural real analogues of holomorphic polynomials \(\mathbb{C}^{2}\to\mathbb{C}\). In order to take better advantage of this analogy we may write \(f\) as a mixed polynomial, a complex-valued function in two complex variables and their conjugates. The only reason for restricting our attention to mixed polynomials of these dimensions is that our original motivation was the study of the (classical 1-dimensional) links of singularities, see also [1]. We expect that our definitions and results have analogues for any mixed polynomial \(f:\mathbb{C}^{k}\to\mathbb{C}\). The set of such mixed polynomials is of course identical to the set of real polynomial mappings of the appropriate dimensions. The singular set \(\Sigma_{f}\) of a mixed polynomial, which is the set of points where the real Jacobian matrix does not have maximal rank, can be equivalently defined as the set of solutions to a system of equations of mixed polynomials, to be described in more detail in the later sections. We say that the origin \(O\in\mathbb{R}^{4}\) is a weakly isolated singularity if it is a singularity and there is an open neighbourhood \(U\subset\mathbb{R}^{4}\) of the origin such that \(U\cap V_{f}\cap\Sigma_{f}=\{O\}\), where \(V_{f}\) is the variety defined by the equation \(f=0\). We say that \(O\) is an isolated singularity if it is a singularity and there is a neighbourhood \(U\) of the origin such that \(U\cap\Sigma_{f}=\{O\}\). The difference between weakly isolated singularities and isolated singularities marks an important departure from the complex case, where the two notions are equivalent. The concepts of a Newton polygon and Newton boundary of holomorphic polynomials have an analogous definition for mixed functions as introduced by Oka [14]. Following this analogy with the holomorphic setting, different non-degeneracy conditions have been put forward. Oka introduced the notions of Newton non-degeneracy and strong Newton non-degeneracy, which imply the existence of a weakly isolated singularity and an isolated singularity, respectively, if another property ("convenience") of the Newton boundary is assumed. We defined the concept of inner non-degenerate and strongly inner non-degenerate mixed polynomials in [1]. Again these non-degeneracy conditions only depend on the terms on the Newton boundary. It was shown that inner non-degeneracy and strong inner non-degeneracy imply a weakly isolated singularity and an isolated singularity, respectively. Furthermore, Oka's convenient and (strongly) non-degenerate mixed functions were found to be special cases of our (strongly) inner non-degenerate functions. The topological type of an inner non-degenerate polynomial with an additional "nice" property is determined by the terms on the mixed Newton boundary. An assumption of an isolated singularity is not necessary to prove this statement, as opposed to the results in [15]. Therefore, these classes of mixed polynomials are interesting in that they possess certain properties that are already established for holomorphic polynomials. Of particular interest in this context is the family of semiholomorphic polynomials that was introduced in [7] and studied in detail in [1].
This family consists of those mixed polynomials that are holomorphic with respect to one of the two complex variables. In this paper, we define partial Newton non-degeneracy and strong partial Newton non-degeneracy (Definition 3.1 and Definition 3.2), which are mixed versions of the partial non-degeneracy introduced by Mondal. As with other types of non-degeneracy we often drop "Newton" and simply say that a polynomial is partially non-degenerate. We prove that partial Newton non-degeneracy (PND) and strong partial Newton non-degeneracy (SPND) generalize previous conditions that imply a weakly isolated singularity and an isolated singularity, namely convenience and non-degeneracy (ND), and convenience and strong non-degeneracy (SND), defined in [14], as well as inner non-degeneracy (IND) and strong inner non-degeneracy (SIND), defined in [1].

**Theorem 1.1**.: _Let \(f\) be a (strongly) inner non-degenerate mixed polynomial. Then \(f\) is (strongly) partially non-degenerate._

**Theorem 1.2**.: _Let \(f\) be a partially non-degenerate mixed polynomial. Then \(f\) has a weakly isolated singularity at the origin. If \(f\) is strongly partially non-degenerate, then it has an isolated singularity at the origin._

We can see the relations between these different types of non-degeneracy and the existence of isolated singularities in the following diagram.

Figure 1: Relations between the different types of non-degeneracy and isolatedness of singularities.

We provide a number of examples that show that for all implications in the diagram the converse is not true in general. This is in contrast to the complex setting, where (strong) inner non-degeneracy and (strong) partial non-degeneracy are equivalent.

**Theorem 1.3**.: _None of the implications in Figure 1 is an equivalence._

Having proved that strong partial non-degeneracy implies the existence of an isolated singularity, we would like to know how far this implication is from being an equivalence. We prove that if we assume that a polynomial map satisfies some extra properties, then the notion of strong partial non-degeneracy is equivalent to the existence of an isolated singularity. We find such conditions for the semiholomorphic case (Proposition 4.2) and the general mixed case (Proposition 4.3). As a consequence we obtain a criterion of non-isolatedness of the singularity at the origin (Corollary 4.4), which emphasizes the known fact that isolated singularities are very rare. To our knowledge, this is one of the first results that deduce the non-isolatedness of the singularity of a mixed polynomial from data obtained from a suitable Newton polygon. All holomorphic polynomials \(f:\mathbb{C}^{2}\to\mathbb{C}\) with an isolated singularity satisfy the strong Milnor condition (SMC), which means that \(f/|f|\) defines an \(S^{1}\)-fibration on the complement of \(V_{f}\) in sufficiently small 3-spheres. This property is in general not shared by mixed polynomials with isolated singularities [12]. However, we find that strong inner non-degeneracy is a sufficient condition for such a Milnor fibration.

**Theorem 1.4**.: _Let \(f:(\mathbb{C}^{2},0)\to(\mathbb{C},0)\) be a strongly inner non-degenerate mixed polynomial. Then \(f\) satisfies the strong Milnor condition._

This includes the case of convenient and strongly non-degenerate mixed polynomials that was proved in [14] and the case of radially weighted homogeneous polynomials with isolated singularity, which was proved in [2].
**Proposition 1.5**.: _Let \(f:(\mathbb{C}^{2},0)\to(\mathbb{C},0)\) be a radially weighted homogeneous mixed polynomial. Then the following properties are equivalent:_

* \(f\) _has a weakly isolated singularity at the origin and satisfies the strong Milnor condition._
* \(f\) _has an isolated singularity at the origin._
* \(f\) _is strongly partially non-degenerate._
* \(f\) _is strongly inner non-degenerate._

The rest of the paper is structured as follows. Section 2 reviews the definitions of non-degeneracy of mixed functions introduced in [14] and [1], while Section 3 introduces partial non-degeneracy and compares the different notions, proving the implications shown in Figure 1 and stated in Theorem 1.1 and Theorem 1.2. We also provide examples that prove Theorem 1.3. Section 4 discusses conditions under which strong partial non-degeneracy becomes equivalent to the existence of an isolated singularity, while Section 5 studies the strong Milnor condition, resulting in proofs of Theorem 1.4 and Proposition 1.5.

**Acknowledgments:** B. Bode acknowledges funding from the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie grant agreement No 101023017, and E. Sanchez Quiceno acknowledges the support of grants 2019/11415-3 and 2017/25902-8, Sao Paulo Research Foundation (FAPESP). This research was started during the visit of the second author to ICMAT, which was financed by FAPESP. We express our gratitude to the institute for their warm reception, particularly to Professor Daniel Peralta Salas for the invaluable support during the visit. The authors are thankful to Professor Osamu Saeki from Kyushu University, Japan, and Professor Raimundo N. Araujo dos Santos from ICMC-USP, Brazil, for their valuable discussions and comments that contributed to the paper.

## 2 Preliminaries

In this section we review some background on mixed singularities and mixed hypersurfaces. A more detailed account of the concepts and tools concerning mixed polynomials and their Newton boundaries can be found in [1, 14]. We consider the germ of a mixed polynomial \(f:(\mathbb{C}^{2},0)\to(\mathbb{C},0)\), \[f(z,\bar{z})=\sum_{\nu,\mu}c_{\nu,\mu}z^{\nu}\bar{z}^{\mu},\] where \(z=(u,v)\), \(\bar{z}=(\bar{u},\bar{v})\), \(z^{\nu}=u^{\nu_{1}}v^{\nu_{2}}\) for \(\nu=(\nu_{1},\nu_{2})\) (respectively \(\bar{z}^{\mu}=\bar{u}^{\mu_{1}}\bar{v}^{\mu_{2}}\) for \(\mu=(\mu_{1},\mu_{2})\)). The support of \(f\) is defined as \(supp(f):=\{\nu+\mu:c_{\nu,\mu}\neq 0\}\subset\mathbb{N}^{2}\). For mixed polynomials \(f\) we consider the singular set \(\Sigma_{f}\), the set of solutions of \[\Sigma_{f}:=\begin{cases}&s_{1,f}:=f_{u}\overline{f_{\bar{v}}}-\overline{f_{\bar{u}}}f_{v}=0,\\ &s_{2,f}:=|f_{u}|^{2}-|f_{\bar{u}}|^{2}=0\\ &s_{3,f}:=|f_{v}|^{2}-|f_{\bar{v}}|^{2}=0\end{cases} \tag{1}\] as a germ of sets at the origin, where \(f_{x}\) denotes the partial derivative with respect to \(x\). Note that this definition of the singular set is equivalent to the usual one [1], which was also given in the introduction. When \(f\) is \(u\)-semiholomorphic, i.e., \(f\) does not depend on the variable \(\bar{u}\), the singular set can be defined as the set of solutions of \[\Sigma_{f}:=\begin{cases}&f_{u}=0\\ &s_{3,f}=0\end{cases} \tag{2}\] as a germ of sets at the origin. If there is a neighbourhood \(U\) of the origin \(0\in\mathbb{C}^{2}\) with \(U\cap\Sigma_{f}=\{0\}\), we say that the origin is an isolated mixed singularity of \(f\). If there is a neighbourhood \(U\) of the origin \(0\) with \(U\cap\Sigma_{f}\cap V_{f}=\{0\}\), we say that the origin is a weakly isolated mixed singularity of \(f\).
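To make Eq. (1) concrete, the following sketch (our own illustration; the example polynomial is ours) computes \(s_{1,f}\), \(s_{2,f}\) and \(s_{3,f}\) symbolically by treating \(u,\bar{u},v,\bar{v}\) as independent variables. The formal conjugation below assumes real coefficients; complex coefficients would also have to be conjugated.

```python
import sympy as sp

u, ub, v, vb = sp.symbols('u ubar v vbar')   # treat z and zbar as independent

f = u**2 + vb**3                             # example mixed polynomial (ours)

fu, fub = sp.diff(f, u), sp.diff(f, ub)
fv, fvb = sp.diff(f, v), sp.diff(f, vb)

# formal complex conjugation: swap u <-> ubar and v <-> vbar simultaneously
conj = lambda g: g.subs({u: ub, ub: u, v: vb, vb: v}, simultaneous=True)

s1 = sp.expand(fu*conj(fvb) - conj(fub)*fv)  # f_u conj(f_vbar) - conj(f_ubar) f_v
s2 = sp.expand(fu*conj(fu) - fub*conj(fub))  # |f_u|^2 - |f_ubar|^2
s3 = sp.expand(fv*conj(fv) - fvb*conj(fvb))  # |f_v|^2 - |f_vbar|^2
print(s1, s2, s3, sep='\n')                  # 6*u*v**2, 4*u*ubar, -9*v**2*vbar**2
```

For this example the three polynomials vanish simultaneously only at \(u=v=0\), so the origin is an isolated mixed singularity.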
Oka [14] defined the Newton boundary \(\Gamma_{f}\) of a mixed polynomial, which in the two variable case is formed by 0-dimensional ("vertices") and compact 1-dimensional ("edges") faces. The _Newton principal part_ \(f_{\Gamma}\) of \(f\) is defined by \[f_{\Gamma}(z,\bar{z})=\sum_{\nu+\mu\in\Gamma_{f}}c_{\nu,\mu}z^{\nu}\bar{z}^{\mu}. \tag{3}\] When \(f\equiv f_{\Gamma}\), we say that \(f\) is a _boundary polynomial_. For a positive weight vector \(P=(p_{1},p_{2})\in\mathbb{N}^{2}\) we can define the radial degree of each monomial of \(f\) relative to \(P\) by setting \(rdeg_{P}(M_{\nu,\mu}):=\sum_{j=1}^{2}p_{j}(\nu_{j}+\mu_{j})\), where \(M_{\nu,\mu}=c_{\nu,\mu}z^{\nu}\bar{z}^{\mu}\). To every \(P\) we associate a face function \(f_{P}\) which corresponds to the monomials of \(f\) on which \(rdeg_{P}\) is minimal among all monomials of \(f\). We denote the corresponding minimal value by \(d(P;f)\). This face function can be seen as the restriction of \(f\) to one face \(\Delta(P)\) of \(\Gamma_{f}\): \[f_{P}(z,\bar{z})=f_{\Delta(P)}(z,\bar{z}):=\sum_{\nu+\mu\in\Delta(P)}c_{\nu,\mu}z^{\nu}\bar{z}^{\mu}.\] We call \(f(z,\bar{z})\) a _radially weighted homogeneous polynomial of radial type_ if there is a positive weight vector \(P\in\mathbb{N}^{2}\) such that \(f=f_{P}\). The face functions play an important role in the study of the topology and the singularities of \(f\) [14, 1].
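As an illustration (ours, reusing the symbolic setup above), the radial degree \(rdeg_{P}\), the minimal value \(d(P;f)\) and the face function \(f_{P}\) can be read off the support directly:

```python
import sympy as sp

u, ub, v, vb = sp.symbols('u ubar v vbar')
P = (3, 2)                            # a positive weight vector
f = u**2 + vb**3 + u*v*vb**2          # example mixed polynomial (ours)

def rdeg(term):
    # exponents (nu1, mu1, nu2, mu2) of u, ubar, v, vbar in one monomial
    e = sp.Poly(term, u, ub, v, vb).monoms()[0]
    return P[0]*(e[0] + e[1]) + P[1]*(e[2] + e[3])

terms = sp.Add.make_args(sp.expand(f))
d = min(rdeg(t) for t in terms)                 # d(P; f)
fP = sum(t for t in terms if rdeg(t) == d)      # face function f_P
print(d, fP)                          # 6, u**2 + vbar**3
```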
**Definition 2.1** (Oka [14]).: _A face function \(f_{\Delta}\) of a mixed polynomial \(f\) for some compact face \(\Delta\) (0- or 1-dimensional) of \(\Gamma_{f}\) is called_ **Newton non-degenerate (ND)** _if \(V_{f_{\Delta}}\cap\Sigma_{f_{\Delta}}\cap(\mathbb{C}^{*})^{2}=\emptyset\). It is called_ **strongly Newton non-degenerate (SND)** _if \(\Sigma_{f_{\Delta}}\cap(\mathbb{C}^{*})^{2}=\emptyset\). We say that \(f\) is (strongly) Newton non-degenerate if \(f_{\Delta}\) is (strongly) Newton non-degenerate for all compact faces \(\Delta\) of \(\Gamma_{f}\)._

Let \(f:\mathbb{C}^{2}\to\mathbb{C}\) be a mixed polynomial with a Newton boundary that has at least one compact 1-face. A 0-dimensional face of the boundary of \(\Gamma_{+}(f)\) that bounds a compact 1-face is called an _extreme vertex_ if it is the boundary of a unique 1-face. Otherwise, we call it a _non-extreme vertex_. Let \(\{P_{1},P_{2},\ldots,P_{N}\}\) be the sequence of weight vectors for which \(\Delta(P_{i})\) is a compact 1-face of \(\Gamma_{f}\), ordered as in [1], i.e., \(P_{i}=(p_{i,1},p_{i,2})\succ P_{j}=(p_{j,1},p_{j,2})\) if and only if \(k_{i}>k_{j}\), where \(k_{\ell}=\frac{p_{\ell,1}}{p_{\ell,2}}\). With these notions we consider the following definitions.

**Definition 2.2** ([1, Definition 3.1]).: _We say that \(f\) is_ **inner Newton non-degenerate (IND)** _if both of the following conditions hold:_

1. _the face functions_ \(f_{P_{1}}\) _and_ \(f_{P_{N}}\) _have no critical points in_ \(V_{f_{P_{1}}}\cap(\mathbb{C}^{2}\setminus\{v=0\})\) _and_ \(V_{f_{P_{N}}}\cap(\mathbb{C}^{2}\setminus\{u=0\})\)_, respectively._
2. _for each 1-face and non-extreme vertex_ \(\Delta\)_, the face function_ \(f_{\Delta}\) _has no critical points in_ \(V_{f_{\Delta}}\cap(\mathbb{C}^{*})^{2}\)_._

As in the other notions of non-degeneracy, there is also a related "strong" version of the property.

**Definition 2.3** ([1, Definition 6.1]).: _We say that \(f\) is_ **strongly inner Newton non-degenerate (SIND)** _if both of the following conditions hold:_

1. _the face functions_ \(f_{P_{1}}\) _and_ \(f_{P_{N}}\) _have no critical points in_ \(\mathbb{C}^{2}\setminus\{v=0\}\) _and_ \(\mathbb{C}^{2}\setminus\{u=0\}\)_, respectively._
2. _for each 1-face and non-extreme vertex_ \(\Delta\)_, the face function_ \(f_{\Delta}\) _has no critical points in_ \((\mathbb{C}^{*})^{2}\)_._

We may extend both definitions to mixed polynomials whose Newton boundary consists of a single vertex, so that \(N=0\), by saying that such a mixed polynomial \(f\) is strongly inner Newton non-degenerate if \(f_{\Gamma}\) has no critical points in \(\mathbb{C}^{2}\setminus\{(0,0)\}\). It is inner Newton non-degenerate if \(f_{\Gamma}\) has no critical points in \(V_{f_{\Delta}}\cap(\mathbb{C}^{2}\setminus\{(0,0)\})\). Such functions are, for example, also included in the complex setting [13]. The proofs of our results are often written in terms of the face functions \(f_{P_{i}}\) associated with 1-faces. The same arguments apply to a mixed polynomial \(f\) with a Newton boundary without compact 1-faces if we set \(P_{1}=P_{N}=(1,1)\), \(f_{P_{1}}=f_{P_{N}}=f_{\Gamma}\), as well as \(k_{1}=k_{N}=1\). However, the origin is not necessarily a singular point of such polynomials, as is shown by the example \(f(u,v)=u\). Hence the corresponding results for these functions should be interpreted as "If \(f\) has a singular point at the origin, it is an isolated singularity." as opposed to simply "\(f\) has an isolated singularity at the origin.", for example.

In [1] we proved that the concept of an inner non-degenerate mixed polynomial is a natural generalization of the definition of a non-degenerate, convenient mixed polynomial. Furthermore, inner non-degenerate mixed polynomials have weakly isolated singularities. The analogous statement for the family of non-degenerate, convenient mixed polynomials was originally proved by Oka [14]. Likewise, strongly inner non-degenerate mixed polynomials have isolated singularities and generalize convenient, strongly non-degenerate mixed polynomials.

## 3 (Strongly) inner and partially non-degenerate mixed polynomials

In this section we introduce the notion of partial non-degeneracy and strong partial non-degeneracy of mixed functions. We study their relations with inner non-degeneracy and strong inner non-degeneracy of [1] as well as Oka's non-degeneracy and strong non-degeneracy.

**Definition 3.1**.: _We say that \(f\) is_ **partially Newton non-degenerate (PND)** _if both of the following conditions hold for every positive weight vector \(P\):_

1. _the mixed polynomials_ \[(s_{1,f}(0,v))_{P}=(s_{2,f}(0,v))_{P}=(s_{3,f}(0,v))_{P}=(f(0,v))_{P}=0\] (4) _have no common solutions in_ \((\mathbb{C}^{*})^{2}\) _and_ \[(s_{1,f}(u,0))_{P}=(s_{2,f}(u,0))_{P}=(s_{3,f}(u,0))_{P}=(f(u,0))_{P}=0\] (5) _have no common solutions in_ \((\mathbb{C}^{*})^{2}\)_._
2. _the mixed polynomials_ \[(s_{1,f})_{P}=(s_{2,f})_{P}=(s_{3,f})_{P}=f_{P}=0\] (6) _have no common solution in_ \((\mathbb{C}^{*})^{2}\)_._

Note that the functions in condition (i) only depend on one of the two complex variables, since the other has been set to \(0\). The condition that Eq. (4) has no solutions in \((\mathbb{C}^{*})^{2}\) is thus equivalent to the condition that these functions (considered as functions in \(v\)) have no common solutions in \(\mathbb{C}^{*}\). The analogous statement for Eq. (5) holds.
**Definition 3.2**.: _We say that \(f\) is_ **strongly partially Newton non-degenerate (SPND)** _if both of the following conditions hold for every positive weight vector \(P\):_

1. _the mixed polynomials_ \[(s_{1,f}(0,v))_{P}=(s_{2,f}(0,v))_{P}=(s_{3,f}(0,v))_{P}=0\] (7) _have no common solutions in_ \((\mathbb{C}^{*})^{2}\) _and_ \[(s_{1,f}(u,0))_{P}=(s_{2,f}(u,0))_{P}=(s_{3,f}(u,0))_{P}=0\] (8) _have no common solutions in_ \((\mathbb{C}^{*})^{2}\)_;_
2. _the mixed polynomials_ \[(s_{1,f})_{P}=(s_{2,f})_{P}=(s_{3,f})_{P}=0\] (9) _have no common solution in_ \((\mathbb{C}^{*})^{2}\)_._

Both definitions reduce to that of partial non-degeneracy of holomorphic functions if \(f\) is holomorphic. Note that the functions \(s_{i,f}(0,v)\), \(s_{i,f}(u,0)\) with \(i=1,2,3\), \(f(0,v)\) and \(f(u,0)\) are mixed polynomials whose Newton boundary consists of a single vertex. Therefore, the condition (i) in both Definition 3.1 and Definition 3.2 is independent of the choice of weight vector \(P\). We want to compare the new definitions with inner non-degeneracy. As in corresponding definitions for holomorphic polynomials, partial non-degeneracy refers to properties of the face functions of \(s_{i,f}\), the functions that define the singular set of \(f\), while inner non-degeneracy is concerned with \(s_{i,f_{P}}\), the functions that define the singular set of \(f_{P}\), a face function of \(f\). This implies one crucial difference between the two notions. While it is obvious from the definition that (strong) inner non-degeneracy only depends on the Newton principal part of a mixed function, the same is not true for (strong) partial non-degeneracy. This is a manifestation of the fact that taking derivatives and taking \(P\)-parts are operations that do not commute, so that for example \((f_{u})_{P}\neq(f_{P})_{u}\) in general, as was already explored in [1].
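A minimal symbolic check of this non-commutativity (our own example, reusing the conventions of the earlier sketches):

```python
import sympy as sp

u, ub, v, vb = sp.symbols('u ubar v vbar')
P = (1, 1)
f = v**2 + u**3                       # example (ours): f_P = v**2 for P = (1,1)

def rdeg(term):
    e = sp.Poly(term, u, ub, v, vb).monoms()[0]
    return P[0]*(e[0] + e[1]) + P[1]*(e[2] + e[3])

def face(g):                          # P-part: terms of minimal radial degree
    ts = sp.Add.make_args(sp.expand(g))
    d = min(rdeg(t) for t in ts)
    return sum(t for t in ts if rdeg(t) == d)

print(face(sp.diff(f, u)))            # (f_u)_P = 3*u**2
print(sp.diff(face(f), u))            # (f_P)_u = 0
```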
\tag{10}\] This can be written in terms of the face functions of the partial derivatives of \(f\): \[(s_{1,f})_{P}(a,b)=\begin{cases}((f_{u})_{P}\cdot(\overline{f_{v}})_{P})(a,b), &\text{ if }d(P;f_{u})+d(P;f_{e})<d(P;f_{v})+d(P;f_{a})\\ -((f_{v})_{P}(\overline{f_{u}})_{P})(a,b),&\text{ if }d(P;f_{v})+d(P;f_{a})<d(P;f_{u })+d(P;f_{e})\\ (s_{1,f})_{P}(a,b),&\text{ if }d(P;f_{u})+d(P;f_{e})=d(P;f_{v})+d(P;f_{a}) \end{cases} \tag{11}\] \[(s_{2,f})_{P}(a,b)=\begin{cases}|(f_{u})_{P}(a,b)|^{2},&\text{ if }d(P;f_{u})<d(P;f_{a})\\ -|(f_{\bar{u}})_{P}(a,b)|^{2},&\text{ if }d(P;f_{a})<d(P;f_{u})\\ (s_{2,f})_{P}(a,b),&\text{ if }d(P;f_{u})=d(P;f_{a})\end{cases} \tag{12}\] \[(s_{3,f})_{P}(a,b)=\begin{cases}|(f_{v})_{P}(a,b)|^{2},&\text{ if }d(P;f_{v})<d(P;f_{e})\\ -|(f_{\bar{v}})_{P}(a,b)|^{2},&\text{ if }d(P;f_{v})<d(P;f_{v})\\ (s_{3,f})_{P}(a,b),&\text{ if }d(P;f_{v})=d(P;f_{e})\end{cases} \tag{13}\] where \(((f_{u})_{P}\cdot(\overline{f_{v}})_{P})(a,b)=(f_{u})_{P}(a,b)\cdot(\overline{f_{v }})_{P}(a,b)\). Since \(k_{1}\geq\frac{p_{1}}{p_{2}}\geq k_{N}\), the face function \(f_{P}\) is neither of type \(u\)- and \(\bar{u}\)-semiholomorphic nor \(v\)- and \(\bar{v}\)-semiholomorphic. Therefore, applying [1, Lemma 3.5] we have \[\begin{cases}&d(P;f_{u})>d(P;f_{\bar{u}})=d(P;f)-p_{1},\text{ if }f_{P}\text{ is }\bar{u}\text{- semiholomorphic}\\ &d(P;f_{\bar{u}})>d(P;f_{u})=d(P;f)-p_{1},\text{ if }f_{P}\text{ is }u\text{- semiholomorphic}\\ &d(P;f_{\bar{u}})=d(P;f_{u})=d(P;f)-p_{1},\text{ if }f_{P}\text{ depends on both }u\text{ and }\bar{u}\end{cases}\] and \[\begin{cases}&d(P;f_{v})>d(P;f_{\bar{v}})=d(P;f)-p_{2},\text{ if }f_{P}\text{ is }\bar{v}\text{- semiholomorphic}\\ &d(P;f_{\bar{v}})>d(P;f_{v})=d(P;f)-p_{2},\text{ if }f_{P}\text{ is }v\text{- semiholomorphic}\\ &d(P;f_{\bar{v}})=d(P;f_{v})=d(P;f)-p_{2},\text{ if }f_{P}\text{ depend in }v\text{ and also }\bar{v}.\end{cases}\] Depending on the combination of these different cases, there are 9 cases to consider in total. We show the calculation for the case that \(d(P;f_{u})>d(P;f_{\bar{u}})=d(P;f)-p_{1}\) and \(d(P;f_{\bar{v}})=d(P;f_{v})=d(P;f)-p_{2}\). The other cases with \(k_{1}\geq\frac{p_{1}}{p_{2}}\geq k_{N}\) follow analogously. Comparing Eqs. (11)-(13) with Eq. (10) we find \[(s_{1,f})_{P}(a,b) =(f_{v})_{P}(a,b)\cdot(\overline{f_{u}})_{P}(a,b)=0\] \[(s_{2,f})_{P}(a,b) =|(f_{\bar{u}})_{P}(a,b)|^{2}=0 \tag{14}\] \[(s_{3,f})_{P}(a,b) =0.\] Applying [1, Lemma 3.5] in these cases we have \((f_{\bar{u}})_{P}=(f_{P})_{\bar{u}}\), \((f_{v})_{P}=(f_{P})_{v}\) and \((f_{\bar{v}})_{P}=(f_{P})_{\bar{v}}\). Moreover, \(f_{P}\) is a \(\bar{u}\)-semiholomorphic polynomial, i.e., \((f_{P})_{u}\equiv 0\). Thus, we have two subcases to consider: If \(s_{3,f_{P}}\not\equiv 0\), then \[s_{3,f_{P}}=|(f_{P})_{v}|^{2}-|(f_{P})_{\bar{v}}|^{2}=|(f_{v})_{P}|^{2}-|(f_{ \bar{v}})_{P}|^{2}\not\equiv 0. \tag{15}\] Note that, \[s_{3,f}= ((f_{v})_{P}+M_{1})\overline{((f_{v})_{P}+M_{1})}-((f_{\bar{v}}) _{P}+M_{2})\overline{((f_{\bar{v}})_{P}+M_{2})}\] \[= (f_{v})_{P}(\overline{f_{v}})_{P}+(f_{v})_{P}\overline{M_{1}}+M_{ 1}\overline{(f_{v})}_{P}+M_{1}\overline{M_{1}}-\] \[((f_{\bar{v}})_{P}(\overline{f_{\bar{v}}})_{P}+(f_{\bar{v}})_{P} \overline{M_{2}}+M_{2}(\overline{f_{\bar{v}}})_{P}+M_{2}\overline{M_{2}}), \tag{16}\] where \(M_{1}=f_{v}-(f_{v})_{P}\) and \(M_{2}=f_{\bar{v}}-(f_{\bar{v}})_{P}\) are mixed polynomials satisfying \(d(P;(f_{v})_{P})<d(P;M_{1})\) and \(d(P;(f_{\bar{v}})_{P})<d(P;M_{2})\) if \(M_{1}\not\equiv 0\) and \(M_{2}\not\equiv 0\), respectively. 
Thus, by Eq. (15) we have that \((s_{3,f})_{P}=(f_{v})_{P}(\overline{f_{v}})_{P}-(f_{\bar{v}})_{P}(\overline{f_{ \bar{v}}})_{P}=s_{3,f_{P}}\). Therefore, it follows that \[s_{1,f_{P}} =(f_{P})_{u}\overline{(f_{P})_{\bar{v}}}-\overline{(f_{P})_{\bar{u }}}(f_{P})_{v}=-\overline{(f_{P})_{\bar{u}}}(f_{P})_{v}=-\overline{(f_{\bar{u} })_{P}}(f_{v})_{P}=(s_{1,f})_{P} \tag{17}\] \[s_{2,f_{P}} =|(f_{P})_{u}|^{2}-|(f_{P})_{\bar{u}}|^{2}=-|(f_{P})_{\bar{u}}| ^{2}=-|(f_{\bar{u}})_{P}|^{2}=(s_{2,f})_{P}\] (18) \[s_{3,f_{P}} =(s_{3,f})_{P}, \tag{19}\] and a solution \((a,b)\) in Eq. (14) implies a solution \(s_{1,f_{P}}(a,b)=s_{2,f_{P}}(a,b)=s_{3,f_{P}}(a,b)=0\), which contradicts SIND-(ii). If \(s_{3,f_{P}}\equiv 0\), then the system \(s_{1,f_{P}}=s_{2,f_{P}}=s_{3,f_{P}}=0\) is equal to \(s_{1,f_{P}}=s_{2,f_{P}}=0\). Since Eqs. (17)-(18) still hold in this case, the same solution \((a,b)\) in Eq. (14), \((s_{1,f})_{P}(a,b)=(s_{2,f})_{P}(a,b)=(s_{3,f})_{P}(a,b)=0\), implies a solution \(s_{1,f_{P}}(a,b)=s_{2,f_{P}}(a,b)=0\), which contradicts SIND-(ii). Therefore, the weight vector \(P\) satisfies \(\frac{p_{1}}{p_{2}}>k_{1}\), \(\frac{p_{1}}{p_{2}}<k_{N}\) with \(ab\neq 0\) or SPND-(i) does not hold. In order to deal with these remaining cases we may write \(f_{P_{1}}(u,\bar{u},v,\bar{v})=g(u,\bar{u},v,\bar{v})+B(v,\bar{v})u+C(v,\bar{v })\bar{u}+A(v,\bar{v})\), where \(A(v,\bar{v})\) consists of all those terms of \(f_{P_{1}}\) that depend neither on \(u\) nor on \(\bar{u}\), \(B(v,\bar{v})u\) is its linear term in \(u\), \(C(v,\bar{v})\bar{u}\) its linear term in \(\bar{u}\) and \(g(u,\bar{u},v,\bar{v})\) is the sum of all remaining terms in \(f_{P_{1}}\). The condition that \(f_{P_{1}}\) has no critical point in \(\mathbb{C}^{2}\setminus\{v=0\}\) is equivalent to the following system having no solutions: If \(f\) is \(v\)-convenient, \[\begin{cases}&B(v,\bar{v})\overline{A_{\bar{v}}(v,\bar{v})}-\overline{C(v, \bar{v})}A_{v}(v,\bar{v})=0\\ &|B(v,\bar{v})|^{2}-|C(v,\bar{v})|^{2}=0\\ &|A_{v}(v,\bar{v})|^{2}-|A_{\bar{v}}(v,\bar{v})|^{2}=0.\end{cases} \tag{20}\] If \(f\) is not \(v\)-convenient, i.e., \(A(v,\bar{v})\equiv 0\), \[\begin{cases}&B(v,\bar{v})|^{2}-|C(v,\bar{v})|^{2}=0.\end{cases} \tag{21}\] Now, suppose that SPND-(i) or SPND-(ii) is not satisfied for a weight vector \(P=(p_{1},p_{2})\) with \(\frac{p_{1}}{p_{2}}>k_{1}\) and the system (9) has a solution of the form \((a,b)\) or \((0,b)\). Then we get: If \(f\) is \(v\)-convenient, \[\begin{cases}&B(b,\bar{b})\overline{A_{\bar{v}}(b,\bar{b})}-\overline{C(b, \bar{b})}A_{v}(b,\bar{b})=0\\ &|B(b,\bar{b})|^{2}-|C(b,\bar{b})|^{2}=0\\ &|A_{v}(b,\bar{b})|^{2}-|A_{\bar{v}}(b,\bar{b})|^{2}=0.\end{cases} \tag{22}\] If \(f\) is not \(v\)-convenient, i.e., \(A(v,\bar{v})\equiv 0\), \[\begin{cases}&B(b,\bar{b})\overline{(B_{\bar{v}}(b,\bar{b})a+C_{\bar{v}}(b, \bar{b})\bar{a})}-(B_{v}(b,\bar{b})a+C_{v}(b,\bar{b})\bar{a})\overline{C(b, \bar{b})}=0\\ &|B(b,\bar{b})|^{2}-|C(b,\bar{b})|^{2}=0\\ &|B_{v}(b,\bar{b})a+C_{v}(b,\bar{b})\bar{a}|^{2}-|B_{\bar{v}}(b,\bar{b})a+C_{ \bar{v}}(b,\bar{b})\bar{a}|^{2}=0.\end{cases} \tag{23}\] In both cases, convenient and non-convenient, a solution of Eq. (22) or Eq. (23) with \((a,b),b\neq 0\) gives a solution of Eq. (20) and Eq. (21), respectively, which is a contradiction to SIND-(i). Analogously, we can use SIND-(i) in \(f_{P_{N}}\) to see the case \(\frac{p_{1}}{p_{2}}<k_{N}\) and the case where Eq. (8) has a solution of the form \((a,0)\). In either case we obtain a contradiction. 
Recall that the condition SPND-(i) is independent of the weight vector \(P\), so if SPND-(i) fails, it fails for a weight vector of the form discussed above. Therefore, our assumption that \(f\) is not SPND was wrong, which proves that SIND implies SPND. The proof of the implication from IND to PND follows the same argument. The only difference is that there is one more equation \(f_{P}=0\) in the systems of equations above. The following example shows that partially non-degenerate mixed functions are not necessarily inner non-degenerate. **Example 3.3**.: _The mixed polynomial \(f(u,\bar{u},v,\bar{v})=v\bar{v}-u\bar{u}+\bar{v}u^{2}\) is partially non-degenerate, but not inner non-degenerate._ _Its Newton boundary only has one compact 1-face with weight vector \(P=(1,1)\) and \(f_{P}(u,\bar{u},v,\bar{v})=v\bar{v}-u\bar{u}\). This shows that \(f\) is not inner non-degenerate (or "inner degenerate"), since every point in \(\mathbb{C}^{2}\) is a singular point of \(f_{P}\)._ _However, \((f(0,v))_{P}=f(0,v)=v\bar{v}\) and \((f(u,0))_{P}=f(u,0)=u\bar{u}\), which both have no zeros in \(\mathbb{C}^{*}\), so that condition (i) in Definition 3.1 is satisfied._ _If \((u_{*},v_{*})\in(\mathbb{C}^{*})^{2}\) is a root of \(f_{P}\) for any weight vector \(P\), then \(P=(1,1)\) and \(|u_{*}|=|v_{*}|\). We calculate for \(P=(1,1)\)_ \[(s_{3,f})_{P}(u_{*},\overline{u_{*}},v_{*},\overline{v}_{*})=-\overline{u_{*} }^{2}v_{*}-u_{*}\overline{v_{*}}^{2}. \tag{24}\] _If this equals 0, we have \(\text{Re}(u_{*}^{2}\overline{v_{*}})=0\) and this implies_ \[2\varphi_{1}-\varphi_{2}=\pm\pi\text{ mod }2\pi, \tag{25}\] _where \(\varphi_{1}=\arg(u_{*})\) and \(\varphi_{2}=\arg(v_{*})\). Substituting this into \((s_{1,f})_{P}\) leads to_ \[(s_{1,f})_{P}(u_{*},v_{*}) =-u_{*}^{3}+2u_{*}\overline{v_{*}}^{2}\] \[=|u_{*}|^{3}\mathrm{e}^{-3\varphi_{1}i}=\overline{u_{*}}^{3}, \tag{26}\] _since \(|u_{*}|=|v_{*}|\). Since \(u_{*}\neq 0\), there is no common solution to Eq. (6) in \((\mathbb{C}^{*})^{2}\) and hence \(f\) is partially non-degenerate._ In fact, examples of polynomials that are partially non-degenerate, but not inner non-degenerate, can also be found among the family of semiholomorphic polynomials, such as \(f(u,v,\bar{v})=u^{3}-2(v+\bar{v})+\left(\mathrm{i}v\bar{v}+\frac{1}{4}(v^{2}- \bar{v}^{2})\right)u\). To test the non-degeneracies PND and SPND for the semiholomorphic case we can use the following result. **Lemma 3.4**.: _Let \(f\) be a mixed polynomial and \(P\) a positive weight vector. Then_ * _if_ \(f_{P}\) _is_ \(u\)_-semiholomorphic and_ \[d(P;(f_{u})_{P})+d(P;(f_{\bar{v}})_{P})\neq d(P;(f_{v})_{P})+d(P;(f_{\bar{u}}) _{P}),\] _then_ \((s_{1,f})_{P}=(s_{2,f})_{P}=(s_{3,f})_{P}=0\) _if and only if_ \((f_{P})_{u}=(s_{3,f})_{P}=0\)_._ * _if_ \(f\) _is_ \(u\)_-semiholomorphic, then_ \((s_{1,f})_{P}=(s_{2,f})_{P}=(s_{3,f})_{P}=0\) _if and only if_ \((f_{u})_{P}=(s_{3,f})_{P}=0\)_._ Proof.: * Since \(f_{P}\) is \(u\)-semiholomorphic, then by [1, Lemma 3.5] we have that \(d(P;f)-p_{1}=d(P;(f_{u})_{P})<d(P;(f_{\bar{u}})_{P})\), \((f_{P})_{u}=(f_{u})_{P}\) and \((f_{P})_{\bar{u}}\equiv 0\). 
Thus \[(s_{2,f})_{P}=|(f_{u})_{P}|^{2}=|(f_{P})_{u}|^{2}=s_{2,f_{P}}.\] (27) Since \(d(P;(f_{u})_{P})+d(P;(f_{\bar{v}})_{P})\neq d(P;(f_{v})_{P})+d(P;(f_{\bar{u}}) _{P})\), we have two cases to consider: If \(d(P;(f_{u})_{P})+d(P;(f_{\bar{v}})_{P})<d(P;(f_{v})_{P})+d(P;(f_{\bar{u}})_{P})\), then \((s_{1,f})_{P}=(f_{u})_{P}(f_{\bar{v}})_{P}=(f_{P})_{u}(f_{\bar{v}})_{P}\) and thus \((s_{1,f})_{P}=(s_{2,f})_{P}=0\) if and only if \((f_{P})_{u}=0\). If \(d(P;(f_{u})_{P})+d(P;(f_{\bar{v}})_{P})>d(P;(f_{v})_{P})+d(P;(f_{\bar{u}})_{P})\), then \[(s_{1,f})_{P}=(f_{v})_{P}(f_{\bar{u}})_{P}\] (28) and since \(d(P;(f_{u})_{P})<d(P;(f_{\bar{u}})_{P})\), we have \[d(P;(f_{v})_{P})<d(P;(f_{\bar{v}})_{P}).\] This inequality implies that \((s_{3,f})_{P}=|(f_{v})_{P}|^{2}\). It follows from Eqs. (27)-(28) that \((s_{1,f})_{P}=(s_{2,f})_{P}=(s_{3,f})_{P}=0\) if and only if \((f_{P})_{u}=(f_{v})_{P}=0\). 2. Since \(f_{\bar{u}}\equiv 0\), we have \[s_{1,f}=f_{u}\overline{f_{\bar{v}}}-\overline{f_{\bar{u}}}f_{v}=f _{u}\overline{f_{\bar{v}}}\] (29) \[s_{2,f}=|f_{u}|^{2}-|f_{\bar{u}}|^{2}=|f_{u}|^{2}.\] (30) Taking the \(P\)-part of the mixed polynomials of Eqs. (29)-(30) yields \[(s_{1,f})_{P} =(f_{u})_{P}(\overline{f_{\bar{v}}})_{P}\] (31) \[(s_{2,f})_{P} =|(f_{u})_{P}|^{2}.\] (32) Therefore, \((f_{u})_{P}=0\) if and only if \((s_{1,f})_{P}=(s_{2,f})_{P}=0\). **Example 3.5**.: _The mixed polynomial \(f(u,\bar{u},v,\bar{v})=u^{10}+u^{2}v+(v\bar{v})^{n}+\bar{u}v^{2n-1}\) with \(n>1\) is strongly partially non-degenerate, but not strongly inner non-degenerate._ _The polynomial has two 1-faces with weight vectors \(P_{1}=(2n-1,2)\) and \(P_{2}=(1,8)\), and_ \[f_{P_{1}}(u,v,\bar{v}) =u^{2}v+(v\bar{v})^{n}\] \[f_{P_{2}}(u,v,\bar{v}) =u^{10}+u^{2}v.\] _The face function \(f_{P_{1}}\) is semiholomorphic and satisfies the inequality from Lemma 3.4(i). Therefore, we can find its critical points by solving \((f_{P_{1}})_{u}=s_{3,f_{P_{1}}}=0\). We find that_ \[(f_{P_{1}})_{u}(u,v,\bar{v}) =2uv\] \[s_{3,f_{P_{1}}}(u,v,\bar{v}) =n\bar{u}^{2}v^{n-1}\bar{v}^{n}+(u\bar{u})^{2}(1+nv^{n}\bar{v}^{n -1}).\] _It follows that \((0,v)\) is a critical point of \(f_{P_{1}}\) for all \(v\in\mathbb{C}\). Therefore, \(f\) is not strongly inner non-degenerate, since it does not satisfy Condition (i) in Definition 2.3._ _To prove SPND for \(f\) we first calculate_ \[s_{2,f}(u,\bar{u},v,\bar{v})= 100u^{9}\bar{u}^{9}+20u\bar{u}^{9}v+20u^{9}\bar{u}\bar{v}+4u\bar{ u}v\bar{v}\] \[-(v\bar{v})^{2n-1}\] _We find that \(s_{2,f}(0,v)=-(v\bar{v})^{2n-1}\) and \(s_{2,f}(u,0)=100u^{9}\bar{u}^{9}\). Both of these functions have no zeros in \((\mathbb{C}^{*})^{2}\), so that Condition (i) in Definition 3.2 is satisfied._ _Furthermore, we have_ \[s_{1,f}(u,\bar{u},v,\bar{v})= 10nu^{9}v^{n-1}\bar{v}^{n}-u^{2}\bar{v}^{2n-1}+\bar{u}v^{2n-2}\bar{ v}^{2n-1}\] \[-2n\bar{u}v^{2n-2}\bar{v}^{2n-1}+2nu(v\bar{v})^{n}-n\bar{v}^{2n-1} v^{n-1}\bar{v}^{n}.\] _The Newton polygon of this function has two compact 1-faces with \(Q_{1}=(2n-2,1)\), \(Q_{2}=(1,1)\) and_ \[(s_{1,f})_{Q_{1}}(u,\bar{u},v,\bar{v}) =2nu(v\bar{v})^{n}-n\bar{v}^{3n-1}v^{n-1} \tag{33}\] \[(s_{1,f})_{Q_{2}}(u,\bar{u},v,\bar{v}) =-u^{2}\bar{v}^{2n-1}+2nu(v\bar{v})^{n}.\] _Note that if \(P\notin\{Q_{1},Q_{2}\}\), then \((s_{1,f})_{P}\) consists of exactly one summand of the expressions above. 
Thus these functions have no zeros in \((\mathbb{C}^{*})^{2}\) and it follows that Condition (ii) of Definition 3.2 is satisfied for all such \(P\)._ _For \(P=Q_{2}\) check that \((s_{2,f})_{P}(u,\bar{u},v,\bar{v})=4u\bar{u}v\bar{v}\), which is non-zero on \((\mathbb{C}^{*})^{2}\)._ _For \(P=Q_{1}\) we get_ \[(s_{2,f})_{P}(u,\bar{u},v,\bar{v})=4u\bar{u}v\bar{v}-(v\bar{v})^{2n-1}. \tag{34}\] _Assume that \((u_{*},v_{*})\in(\mathbb{C}^{*})^{2}\) is a zero of \((s_{2,f})_{P}\). Then \(|u_{*}|=\frac{1}{4}|v_{*}|^{2n-2}\). But then the first summand in Eq. (33) has modulus \(2n|v_{*}|^{4n-2}\), while the second summand has modulus \(n|v_{*}|^{4n-2}\). Therefore, \((s_{1,f})_{Q_{1}}(u_{*},v_{*})\neq 0\), since \(v_{*}\neq 0\)._ _We thus have a contradiction. There is no common zero \((u_{*},v_{*})\in(\mathbb{C}^{*})^{2}\) of \((s_{1,f})_{P}\) and \((s_{2,f})_{P}\) for any weight vector \(P\). Therefore, Condition (ii) in Definition 3.2 is satisfied and \(f\) is strongly partially non-degenerate._ We now prove that PND and SPND imply the existence of a weakly isolated singularity and an isolated singularity, respectively. This follows the same reasoning as the proof of Propositions 3.6 and 6.2 in [1]. Proof of Theorem 1.2.: Assume that the origin is not an isolated singularity. Then via the curve selection lemma there exists a real analytic curve \(z(\tau)\) of critical points starting at the origin. i.e., a curve \(z(\tau)=(u(\tau),v(\tau))=(a\tau^{p_{1}}+h.o.t.,b\tau^{p_{2}}+h.o.t.),0\leq\tau\leq 1\), where \(h.o.t.\) refers to higher order terms in \(\tau\), satisfying * \(z(0)=0\) and \(z(\tau)\in\mathbb{C}^{2}\) for \(\tau>0\). * \(s_{1,f}(z(\tau))=s_{2,f}(z(\tau))=s_{3,f}(z(\tau))=0\). If \(u(\tau)\not\equiv 0\) and \(v(\tau)\not\equiv 0\), we set \(P=(p_{1},p_{2})\). If \(u(\tau)\equiv 0\), we set \(a=0\) and \(P=(0,p_{2})\), similarly, \(b=0\) and \(P=(p_{1},0)\) if \(v(\tau)\equiv 0\). Expanding the equations of (ii), we find that the coefficient of \(\tau^{d(P;s_{j,f})}\) is \((s_{j,f})_{P}(a,b)\) for all \(j\in\{1,2,3\}\) (compare also the proof of Lemma 5.8). Since the right hand side of (ii) vanishes, these coefficients have to be zero as well. Thus we get a common solution \((a,b)\in(\mathbb{C}^{*})^{2}\), \((a,0)\) or \((0,b)\) of the systems (9), (8) or (7), respectively, which yields a contradiction of SPND-(i) or SPND-(ii). The case of partial non-degeneracy follows in the same way by including the equation \(f(z(\tau))=0\). **Corollary 3.6**.: _Let \(f\) be a radially weighted homogeneous polynomial. Then the following conditions are equivalent:_ 1. \(f\) _has a weakly isolated singularity (isolated singularity) at the origin._ 2. \(f\) _is inner non-degenerate (strongly inner non-degenerate)._ 3. \(f\) _is partially non-degenerate (strongly partially non-degenerate)._ Proof.: (iii) implies (i) by Proposition 1.2. The fact that (ii) implies (iii) follows from Proposition 1.1. Finally, (i) implies (ii) because if \(f\) is radially weighted homogeneous, then the existence of a weakly isolated singularity (isolated singularity) implies conditions IND-(i) and IND-(ii) (SIND-(i) and SIND-(ii)) for the weight vector \(P\) associated to the unique \(1\)-face of \(\Gamma_{f}\). Proof of Theorem 1.3.: In Example 3.4 in [1] we give an example of a semiholomorphic polynomial that is inner non-degenerate, but Newton degenerate. 
There are plenty of (strongly) inner non-degenerate mixed polynomials that are not convenient, even among the holomorphic ones (for which (S)IND and (S)PND are equivalent). Consider for example \(f(u,v)=u(u^{2}-v^{2})\). It is not convenient, but since \(f_{u}=3u^{2}-v^{2}\), \(f_{v}=-2uv\), it is strongly inner non-degenerate. Example 3.3 and Example 3.5 illustrate that in general (S)PND does not imply (S)IND for mixed polynomials. In [5, 6] the first author constructs families of semiholomorphic polynomials \(f\) with isolated singularities or weakly isolated singularities. In both cases, their Newton boundary consists of a single compact \(1\)-face. There is only one term above the Newton boundary, which only depends on \(v\) and \(\bar{v}\). The face function \(f_{P}\) corresponding to the \(1\)-face is by construction degenerate, i.e., there exists a \((u_{*},v_{*})\in(\mathbb{C}^{*})^{2}\) with \[f_{P}(u_{*},v_{*})=(f_{P})_{u}(u_{*},v_{*})=s_{3,f_{P}}(u_{*},v_{*})=0. \tag{35}\] Since \(f_{P}\) is semiholomorphic, we have \((f_{u})_{P}=(f_{P})_{u}\) and by construction we have \(s_{3,f_{P}}=(s_{3,f})_{P}\). It thus follows that \(f\) is not SPND. In fact, it is not even PND. A different example (with an isolated singularity whose link is the figure-eight knot) is given in [16]. Since isolated singularities are also weakly isolated, this concludes the proof of Theorem 1.3. ## 4 When does an isolated singularity imply non-degeneracy? We have seen in Section 3 that SPND mixed polynomials have isolated singularities. The analogous result for SIND mixed polynomials was shown in [1]. However, we know from Theorem 1.3 that not every isolated singularity comes from such a non-degenerate polynomial. The examples by Rudolph [16] and the first author [6] that illustrate this are all semiholomorphic and are not even inner non-degenerate. Therefore, they might give the impression that once inner (or partial) non-degeneracy is assumed, strong inner (or partial) non-degeneracy and the existence of an isolated singularity are equivalent. This is not the case. In this section we give an example of a semiholomorphic polynomial with an isolated singularity that is inner non-degenerate and partially non-degenerate, but not strongly inner non-degenerate and not strongly partially non-degenerate. This leads us to a set of conditions on the polynomials that (if satisfied) imply the equivalence between SPND and the existence of an isolated singularity. The example also highlights the need for a distinction between IND and SIND, as well as between PND and SPND, even for semiholomorphic polynomials. Recall that for holomorphic functions in these dimensions all of these notions are equivalent. First, we recall some results from [1]. The face functions play an important role in the study of the topology and the singularities of \(f\)[14, 1] and some of these results come from the following properties of these face functions and their relations with \(f\): Denote the weight vectors whose associated faces are \(1\)-faces of \(\Gamma_{f}\) by \(P_{i}\), \(i=1,2,\ldots,N\). Writing \(v=r\mathrm{e}^{\mathrm{i}t}\), \(\bar{v}=r\mathrm{e}^{-\mathrm{i}t}\), we may associate to the \(1\)-face \(\Delta(P_{i})\) corresponding to a weight vector \(P_{i}\) a function that does not depend on \(r\) anymore, as follows. 
For a positive weight vector \(P_{i}=(p_{i,1},p_{i,2})\) let \(g_{i}:\mathbb{C}\times S^{1}\to\mathbb{C}\) be given by \[g_{i}(u,\bar{u},\mathrm{e}^{\mathrm{i}t}):=r^{-\frac{d(P_{i};f)}{p_{i,2}}}f_{P _{i}}\left(r^{k_{i}}u,r^{k_{i}}\bar{u},r\mathrm{e}^{\mathrm{i}t},r\mathrm{e}^{ -\mathrm{i}t}\right), \tag{36}\] where \(k_{i}:=\frac{p_{i,1}}{p_{i,2}}\) and \(d(P_{i};f)\) is the radial degree of \(f\) related with \(P_{i}\). Note that the right hand side does not depend on \(r\), because \(f_{P_{i}}\) is radially weighted homogeneous with weight vector \(P_{i}\). By [1, Lemma 4.2] we have that the function \(f\) admits the decomposition \[f(u,\bar{u},r\mathrm{e}^{\mathrm{i}t},r\mathrm{e}^{-\mathrm{i}t})=r^{\frac{d( P_{i};f)}{p_{i,2}}}f_{i}\left(\frac{u}{r^{k_{i}}},\frac{\bar{u}}{r^{k_{i}}},r,t \right), \tag{37}\] where \(f_{i}\) is a \(r\)-parameter deformation of \(g_{i}\), that is, the difference \(f_{i}(u,\bar{u},r,t)-g_{i}(u,\bar{u},\mathrm{e}^{\mathrm{i}t})\) goes to \(0\), as \(r\to 0\). If \(f\) is radially weighted homogeneous with weight vector \(P=P_{1}\), then \(f_{1}\) does not depend on \(r\) and \(f_{1}=g_{1}\). If \(f\) is \(u\)-semiholomorphic, we may interpret \(g_{i}\) as a loop in the space of complex polynomials in one variable \(u\), whose coefficients are finite Fourier series in \(\mathrm{e}^{\mathrm{i}t}\) and \(\mathrm{e}^{-\mathrm{i}t}\). We can thus associate to each face function of \(f\) a loop in the space of polynomials in one variable. If it is clear which \(g_{i}\) we are referring to, we often drop the index \(i\) and simply write \(g:\mathbb{C}\times S^{1}\to\mathbb{C}\) or \(g_{t}:\mathbb{C}\to\mathbb{C}\) with \(g_{t}(u)=g(u,\mathrm{e}^{\mathrm{i}t})\). Conversely, we may (as in [3] for example) use loops \(g_{t}\) in the space of polynomials of degree \(s\) as above, whose coefficients satisfy certain extra conditions, to obtain radially weighted homogeneous semiholomorphic polynomials. For example, if the loop is \(2\)-periodic, i.e., all of its coefficients are polynomials in \(\mathrm{e}^{2\mathrm{i}t}\) and \(\mathrm{e}^{-2\mathrm{i}t}\), so that \(g_{t+\pi}=g_{t}\) for all \(t\in[0,2\pi]\), then we can define \[f(u,v,\bar{v})=f(u,r\mathrm{e}^{\mathrm{i}t},\mathrm{e}^{-\mathrm{i}t})=r^{sk }g\left(\frac{u}{r^{k}},\mathrm{e}^{\mathrm{i}t}\right), \tag{38}\] which is a semiholomorphic polynomial for sufficiently large even values of \(k\in 2\mathbb{Z}\). The constructed semiholomorphic polynomial \(f\) has a weakly isolated singularity if and only if the roots of \(g_{t}\) are distinct for all \(t\in[0,2\pi]\). It has an isolated singularity if and only if the roots of \(g_{t}\) are distinct for all \(t\in[0,2\pi]\) and \(\arg(g):(\mathbb{C}\times S^{1})\backslash g^{-1}(0)\to S^{1}\) has no critical points and thus defines a fibration. [3]. Since each \(g_{t}\) is holomorphic with respect to \(u\), this condition can be expressed in terms of the critical values of the complex polynomials \(g_{t}\). Let \(c_{j}(t)\), \(j=1,2,\ldots,s-1\), denote the critical points of \(g_{t}\) and let \(v_{j}(t)=g_{t}(c_{j}(t))\) be the corresponding critical values. Then \((u_{*},t_{*})\) is a critical point of \(\arg(g)\) if and only if there is a \(j\in\{1,2,\ldots,s-1\}\) such that \(u_{*}=v_{j}(t_{*})\) and \(\frac{\partial\arg(v_{j}(t))}{\partial t}|_{t=t_{*}}=0\). 
To every loop \(g_{t}\) in the space of polynomials, we can associate a corresponding loop \(\{v_{1}(t),v_{2},\ldots,v_{s-1}(t)\}\) in the space of critical values \(\mathbb{C}^{s-1}/S_{s-1}\), where the symmetric group \(S_{s-1}\) acts on an \(s-1\)-tuple by permutation. A useful technique in the construction of semiholomorphic polynomials is to start with a loop in \(V_{s}:=(\mathbb{C}^{*})^{s-1}/S_{s-1}\) that satisfies \(\frac{\partial\arg(v_{j}(t))}{\partial t}\neq 0\) for all \(t\) and all \(j\). Then try to find a corresponding lifted loop in the space of polynomials with the given \(v_{j}(t)\)'s as its critical values. If such a lifted loop \(g_{t}\) exists, then its roots are simple for each \(t\in[0,2\pi]\) and they thus form a braid \(B\). If \(g_{t}\) is also 2-periodic, we can define a semiholomorphic, radially weighted homogeneous polynomial \(f\) as in Eq. (38). Since \(f\) is radially weighted homogeneous, it has an isolated singularity if and only if it strongly inner non-degenerate and strongly partially non-degenerate, which is equivalent to \(\arg(g)\) being a fibration, i.e., \(\frac{\partial\arg(v_{j}(t))}{\partial t}\neq 0\) for all \(t\) and \(j\). In this case, the link of the singularity is the closure of \(B\). In the following example, we thus want to start with a loop in \(V_{s}\) that does not have this property, so that the resulting semiholomorphic polynomial \(f\) is not SPND and not IPND. However, we want the critical points of \(\arg(g)\) to be degenerate, i.e., if \(\frac{\partial\arg(v_{j}(t))}{\partial t}|_{t=t_{*}}=0\), then \(\frac{\partial^{2}\arg(v_{j})}{\partial t^{2}}(t_{*})=0\). We then deform \(f\) by adding terms above its Newton boundary that do not change the fact that \(f\) is neither SIND nor SPND, but yield an isolated singularity. **Example 4.1**.: _We start with a loop \(v(t)\) in \(\mathbb{C}^{*}\), which is the space of critical values \(V_{2}\) of complex polynomials with distinct roots and degree 2. Following the observation above we want this loop to have argument-critical points, i.e., values \(t_{*}\in[0,2\pi]\) such that \(\frac{\partial\arg(v)}{\partial t}(t_{*})=0\). However, we want these critical points to be degenerate, i.e., \(\frac{\partial^{2}\arg(v)}{\partial t^{2}}(t_{*})=0\)._ _Take for example \(v(t)=\cos(t)-\cos(2t)+1-\mathrm{i}(\sin(t)-\frac{1}{3}\sin(3t))\), which is shown in Figure 2a) and has argument-critical points at \(t=0\) and \(t=\pi\). The graph of \(\frac{\partial\arg(v)}{\partial t}\) is shown in Figure 2b). We may now consider the loop of polynomials \(g_{t}(u)=u^{2}+v(2t)\). It's critical values are exactly \(v(2t)\), i.e., the loop \(v(t)\) is traversed twice as \(t\) goes from 0 to \(2\pi\). Arguments as in [4, Section 5.1] show that the roots of this loop of polynomials form a braid that closes to the Hopf link._ _Since \(g_{t}\) has the required even symmetry, we ob Figure 2: a) The parametrized loop \(v(t)\). b) The graph of \(\frac{\partial\arg(v)}{\partial t}\). 
non-degenerate, radially weighted homogeneous semiholomorphic polynomial_ \[f(u,v,\overline{v})= u^{2}+(v\overline{v})^{3}+(v\overline{v})^{2}\left(\frac{1}{2}(v^{2 }+\overline{v}^{2})\right)-v\overline{v}\left(\frac{1}{2}(v^{4}+\overline{v}^ {4})\right)\] \[-\mathrm{i}\left((v\overline{v})^{2}\left(\frac{1}{2\mathrm{i}}(v ^{2}-\overline{v}^{2})\right)-\frac{1}{3}\left(\frac{1}{2\mathrm{i}}(v^{6}- \overline{v}^{6})\right)\right).\] _Because \(v(2t)\) has argument-critical points, \(f\) does not have an isolated singularity and is not strongly inner non-degenerate._ _Consider now_ \[F(u,v,\overline{v})=f(u,v,\overline{v})-\frac{1}{2\mathrm{i}}(v-\overline{v}) ^{8}.\] _The added term lies above the Newton boundary of \(f\), so that \(f\) and \(F\) have the same principal part and in particular, \(F\) is inner non-degenerate, but not strongly inner non-degenerate. Likewise, we find that it is not strongly partially non-degenerate, since_ \[(f_{u})_{P}(0,x)=(f_{P})_{u}(0,x)=s_{3,f_{P}}(0,x)=(s_{3,f})(0,x)=0 \tag{39}\] _for any \(x\in\mathbb{R}\) and \(P=(3,1)\)._ _Note that any singular point \((u,v)\) of \(F\) must satisfy \(s_{1,F}(u,v,\overline{v})=2u=0\). We can thus find all its singular points by solving \(s_{3,F}(0,v,\overline{v})=0\)._ _Using Mathematica we find that for \(-0.01<\mathrm{Re}(v)<0.01\) the zeros of \(s_{3,F}(0,v)\) consist of the origin and two connected components, whose \(\mathrm{Im}(v)\)-coordinates are parametrized by \(\mathrm{Re}(v)\) and contain \((\mathrm{Re}(v),\mathrm{Im}(v))=(0,-0.866025)\) and \((\mathrm{Re}(v),\mathrm{Im}(v))=(0,0.866025)\), respectively. Numbers are rounded to relevant digits. This shows that the origin is an isolated singularity of \(F\)._ This example shows that if we want to establish an equivalence between isolated singularities and strong partial non-degeneracy for semiholomorphic polynomial we have to exclude radially weighted homogeneous polynomials for which \(\frac{\partial\arg(v_{i}(t))}{\partial t}\) has roots that are local maxima or minima of \(\frac{\partial\arg(v_{i}(t))}{\partial t}\). We want to generalize this condition to semiholomorphic polynomials that are not radially weighted homogeneous. We mentioned above that the first author proved in [3] that an isolated singularity is equivalent to the roots of \(g\) being distinct and \(\frac{\partial\arg(v_{i}(t))}{\partial t}\) having no zeros. The calculation in that proof can be rewritten in terms of \(s_{3,f}\) as follows. Let \(f\) be a radially weighted homogeneous semiholomorphic polynomial with a weakly isolated singularity and let \(g\) be as in Eq. (36). As above we write \(v_{j}(t)\) for the critical values of \(g(\cdot,\mathrm{e}^{\mathrm{i}t})\). We also write \(c_{j}(t)\) for the critical points of \(g(\cdot,\mathrm{e}^{\mathrm{i}t})\), i.e., the roots of \(\frac{\partial g}{\partial u}\), with \(g(c_{j}(t),\mathrm{e}^{\mathrm{i}t})=v_{j}(t)\). Then \(f_{u}(r^{k}c_{j}(t),\mathrm{re}^{\mathrm{i}t})=0\) and \(s_{3,f}(r^{k}c_{j}(t),r\mathrm{e}^{\mathrm{i}t})\) is a positive multiple of \(\frac{\partial\arg(v_{i}(t))}{\partial t}\), where \(k=\frac{p_{1}}{p_{2}}\) is determined by the weight vector \(P=(p_{1},p_{2})\) associated with the unique 1-face of \(f\). 
Therefore, if we have a semiholomorphic polynomial \(f\) whose Newton boundary has several 1-faces, then the function that plays the role of \(\frac{\partial\arg(v_{j}(t))}{\partial t}\) in the radially weighted homogeneous case should be something like the \(P_{i}\)-part of \(s_{3,f}\), where \(P_{i}\), \(i=1,2,\ldots,N\), are the weight vectors associated with the 1-faces of \(f\). Guided by this analogy we formulate the following two properties. Let \(f:\mathbb{C}^{2}\to\mathbb{C}\) be a semiholomorphic polynomial satisfying * the mixed polynomial \(f_{u}\) is inner non-degenerate, * if \(t_{*}\) is a local extremum (minimum or maximum) of \(((s_{3,f})_{P_{i}})_{i}(u(t),t)\) for any \(P_{i}\) and parametrization \((u(t),t)\)1 of the roots of \(((f_{u})_{P_{i}})_{i}\), then \((s_{3,f})_{P_{i}}(u(t_{*}),t_{*})\neq 0\). Footnote 1: By [1] such parametrization exists if \(f_{u}\) is inner non-degenerate. **Proposition 4.2**.: _Let \(f\) be a semiholomorphic polynomial that satisfies the above conditions (S-i) and (S-ii). Then \(f\) has an isolated singularity if and only if \(f\) is strongly partially non-degenerate._ Proof.: \((\Leftarrow)\) Follows from Proposition 1.2. \((\Rightarrow)\) As usual we denote the weight vectors associated with the \(1\)-face of the Newton boundary of \(f\) by \(P_{i}\), \(i=1,2,\ldots,N\). First, consider a weight vector \(P\) that is different from all \(P_{i}\), so that its corresponding face is a vertex \(\Delta\) of the Newton boundary. Since \(f\) is semiholomorphic, it is of the form \(f_{P}(u,\mathrm{e}^{\mathrm{i}t},r\mathrm{e}^{-\mathrm{i}t})=f_{\Delta}(u,r \mathrm{e}^{\mathrm{i}t},r\mathrm{e}^{-\mathrm{i}t})=u^{\mu}r^{\nu}\Phi( \mathrm{e}^{\mathrm{i}t})\) for some trigonometric polynomial \(\Phi(\mathrm{e}^{\mathrm{i}t})\) and natural numbers \(\mu,\nu\). Then the inner non-degeneracy of \(f_{u}\) implies that \(\Phi\) has no zeros, which implies that \((f_{u})_{P}\) has no zeros in \((\mathrm{C}^{*})^{2}\) and \((f_{u}(0,v))_{P}\) has no zeros in \((\mathrm{C}^{*})^{2}\), so that \(f\) satisfies SPND-(i) and SPND-(ii) for every weight vector \(P\) that is not \(P_{i}\) for some \(i\). For the remaining weight vectors \(P_{i}\) consider the decompositions of \(f_{u}\) and \(s_{3,f}\) associated with \(P_{i}\) as in Eq. (37) \[f_{u}(u,r\mathrm{e}^{\mathrm{i}t},r\mathrm{e}^{-\mathrm{i}t}) =r^{\frac{d(P_{i};f_{u})}{p_{i,2}}}(f_{u})_{i}\left(\frac{u}{rk_{ i}},r,t\right)\] \[s_{3,f}(u,r\mathrm{e}^{\mathrm{i}t},r\mathrm{e}^{-\mathrm{i}t}) =r^{\frac{d(P_{i};s_{3,f})}{p_{i,2}}}(s_{3,f})_{i}\left(\frac{u}{ rk_{i}},r,t\right).\] Since \(f_{u}\) is inner Newton non-degenerate, we can find a parametrization of the roots of \(f_{u}\), \((r^{ki}u(r,t),r\mathrm{e}^{\mathrm{i}t})\), where \((u(r,t),r\mathrm{e}^{\mathrm{i}t})\) is a root of \((f_{u})_{i}\), \(r<r_{0}\) and \(t\in[0,2\pi]\), \(r_{0}>0\) small enough, by [1, Proposition 4.5]. Note that \(\lim_{r\to 0}u(r,t)=u(0,t)\) is the \(u\)-coordinate of a root of the family of univariate polynomials \(((f_{u})_{P_{i}})_{i}(\cdot,0,t)\). We know that one inequality \[s_{3,f}(r^{k}u(r,t),r\mathrm{e}^{\mathrm{i}t})=r^{\frac{d(P_{i};s_{3,f})}{p_{ i,2}}}(s_{3,f})_{i}(u(r,t),r,t)>0\text{ or} \tag{40}\] \[s_{3,f}(r^{k}u(r,t),r\mathrm{e}^{\mathrm{i}t},r\mathrm{e}^{-\mathrm{i}t})=r^{ \frac{d(P_{i};s_{3,f})}{p_{i,2}}}(f_{u})_{i}(u(r,t),r,t)<0 \tag{41}\] holds for \((r,t)\in(0,r_{0})\times[0,2\pi]\), with \(r_{0}>0\) small enough. 
Otherwise the Intermediate Value Theorem gives us a critical point of \(f\) arbitrarily close to the origin. Therefore, on the limit points \(r=0\), \[(s_{3,f})_{i}(u(0,t),0,t)=((s_{3,f})_{P_{i}})_{i}(u(0,t),0,t)\geq 0\] or \[(s_{3,f})_{i}(u(0,t),0,t)=((s_{3,f})_{P_{i}})_{i}(u(0,t),0,t)\leq 0\] and by the condition (S-ii) \((u(0,t),0,t)\) is not a common solution of \(((f_{u})_{P_{i}})_{i}\) and \(((s_{3,f})_{P_{i}})_{i}\) for any \(t\). Therefore, \(f\) is strongly partially non-degenerate for \(P_{i}\). We know from Theorem 1.2 that every strongly partially non-degenerate semiholomorphic polynomial has an isolated singularity. Proposition 4.2 tells us how far this condition is from being a complete characterisation of semiholomorphic polynomials with isolated singularity. It follows that a semiholomorphic polynomial that has an isolated singularity, but that is not strongly partially non-degenerate, does not satisfy (S-i) or (S-ii). In the following we present a new perspective on condition (S-ii), which should make it easier to understand. Let \(f\) be a semiholomorphic polynomial that is not a radially weighted homogeneous polynomial and has a unique 1-face, i.e., \(f\neq f_{\Gamma}=f_{P}\), for some weight vector \(P\). Consider the critical values \(v_{j}(t)\), \(j=1,2,\ldots,\ell\), of \(g_{1}(\cdot,\mathrm{e}^{it})\), where \(g_{1}\) is the g-polynomial associated to \(P=P_{1}\). As in the radially weighted homogeneous case, we can (under small additional assumptions) show that \(((s_{3,f})_{P_{1}})_{1}(r^{k}c_{j}(t),t)\) is a positive multiple of \(\frac{\partial\arg(v_{j}(t))}{\partial t}\). In particular, it is 0 if and only if \(\frac{\partial\arg(v_{j}(t))}{\partial t}=0\) and always has the same sign as \(\frac{\partial\arg(v_{j}(t))}{\partial t}\). Then (S-ii) says that for every \(t\) where \(\frac{\partial\arg(v_{j}(t))}{\partial t}=0\) we have that \(\frac{\partial^{2}\arg(v_{j}(t))}{\partial t}\neq 0\). Figure (3) shows segments of the graph of a function \(\frac{\partial\arg(v_{j}(t))}{\partial t}\) that does not have this property. At \(t=t_{1}\) the function has a zero with \(\frac{\partial^{2}\arg v_{j}}{dt}(t_{1})=0\), while at \(t=t_{2}\) the function evaluates to zero with \(\frac{\partial^{2}\arg v_{j}}{dt}(t_{2})\neq 0\). These two cases correspond to a tangential and a transverse intersection of the graph with the \(t\)-axis, respectively. Figure 4: Possible graphs of \((s_{3,f})_{i}(u(r,t),r,t)\) as functions of \(t\) for a fixed small positive value of \(r\). We know that as \(r\) approaches zero \((s_{3,f})_{1}(u(r,t),r,t)\) converges to the function \(((s_{3,f})_{P_{1}})_{1}(u(0,t),t)=((s_{3,f})_{P_{1}})_{1}(r^{k}c_{j}(t),t)\), which is a positive multiple of \(\frac{\partial\arg(v_{j}(t))}{\partial t}\). Therefore, for any fixed small positive value of \(r\) the function \((s_{3,f})_{1}(u(r,t),r,t)\) still has a root in a neighborhood of \(t=t_{2}\) and this root still arises as a transverse intersection of the graph of the function and the \(t\)-axis. In a neighborhood of \(t=t_{1}\) however, the function might have two (or more) zeros as depicted in Figure ((a)a) or no zeros at all as in Figure ((b)b). In other words, the only roots of \(\partial\arg(v_{j}(t))/\partial t\) that do not necessarily correspond to roots of \((s_{3,f})_{i}\) are roots that are also local maxima or minima. Note that since \(f\) is semiholomorphic, the roots of \(s_{3,f}(r^{k}u(r,t),r\mathrm{e}^{\mathrm{i}t})\) correspond exactly to the critical points of \(f\). 
Furthermore, the roots of the function \(s_{3,f}(r^{k}u(r,t),r\mathrm{e}^{\mathrm{i}t})\) also directly correspond to the roots of \((s_{3,f})_{1}(u(r,t),r,t)\). Therefore, Proposition 4.2 guarantees that if \(f\) has an isolated singularity, then \(f_{P}\) cannot admit singularities like the one corresponding to \(t=t_{2}\) in the example. In other words, this type of singularity at \(t_{2}\) associated to \(f_{P}\) implies a non-isolatedness of the singularity at the origin. Thus we can see Proposition 4.2 as a criterion not just to say when \(f\) has an isolated singularity but also when \(f\) has a non-isolated singularity at the origin. Generalizing Theorem 4.2 to the general mixed case comes with a difficulty. The singular set of a mixed function is defined by 3 equations, \(s_{1,f},s_{2,f}\) and \(s_{3,f}\), while the singular set of a semiholomorphic function only requires two (see Eq. (2)). This forces us to impose an extra condition on the face functions of partial derivatives of \(f\) as follows. Let \(f\) be a mixed polynomial. Let \(P_{i}\), \(i=1,2,\ldots,N\), denote the weight vectors associated with the 1-faces of \(\Gamma_{s_{1,f}}\). Consider the following conditions. * The mixed polynomial \(s_{1,f}\) is nice and inner Newton non-degenerate, see [1]. * For any \(P_{i}\), \(i\in\{1,2,\ldots,N\}\), \(\tau_{0}\in[0,2\pi]\) with \(((s_{j,f})_{P_{i}})_{i}(u(\tau_{0}),t(\tau_{0}))=0,\ j=1,2\), the point \((u(\tau_{0}),t(\tau_{0}))\) is not a local extremum of the function \(((s_{j,f})_{P_{i}})_{i}(u(\tau),t(\tau))\), \(j=1,2\), where \((u(\tau),t(\tau))\) is a local parametrization of the roots of \(((s_{1,f})_{P_{i}})_{i}\).2 Footnote 2: By [1] such parametrization exists if \(s_{1,f}\) is inner non-degenerate and has a nice Newton boundary. * For any \(P_{i}\), \(i\in\{1,2,\ldots,N\}\), the systems \[(f_{v}(u,0))_{P_{i}}=(f_{\bar{v}}(u,0))_{P_{i}}=(f_{u}(u,0))_{P_{i}}=(f_{\bar{ u}}(u,0))_{P_{i}}=0,\] \[(f_{v}(0,v))_{P_{i}}=(f_{\bar{v}}(0,v))_{P_{i}}=(f_{u}(0,v))_{P_{i}}=(f_{\bar{ u}}(0,v))_{P_{i}}=0\] and \[(f_{v})_{P_{i}}=(f_{\bar{v}})_{P_{i}}=(f_{u})_{P_{i}}=(f_{\bar{u}})_{P_{i}}=0\] have no solution in \((\mathbb{C}^{*})^{2}\). **Proposition 4.3**.: _Let \(f\) be a mixed polynomial then satisfies the conditions (M-i), (M-ii) and (M-iii). Then \(f\) has an isolated singularity at the origin if and only if \(f\) is strongly partially non-degenerate._ Proof.: \((\Leftarrow)\) Follows directly from Proposition 1.2. \((\Rightarrow)\) Let \(P\) be a weight vector such that the corresponding face in \(\Gamma_{s_{1,f}}\) is a non-extreme vertex. Since \(s_{1,f}\) has a nice Newton boundary, it follows that \((s_{1,f})_{P}\) has no roots in \((\mathbb{C}^{*})^{2}\). Therefore, \(f\) satisfies SPND-(ii) for \(P\). It remains to prove SPND-(ii) for weight vectors \(P\) that are associated with a \(1\)-face or an extreme vertex of \(\Gamma_{s_{1,f}}\) and to prove SNPD-(i). We split the proof in two cases, the first discusses \(P\) associated with a \(1\)-face and the second contains the case of extreme vertex as well as the proof of the condition SPND-(i). We index the \(1\)-faces of \(\Gamma_{s_{1,f}}\) by \(i\in\{1,2,\ldots,N\}\) with weight vectors \(P_{i}\) as usual. Let \(P_{i}=(p_{i,1},p_{i,2})\) and \(k_{i}=\frac{p_{i,1}}{p_{i,2}}\). Case 1: \(P\) corresponds to a \(1\)-face of \(\Gamma_{s_{1,f}}\) In this case, \(P=P_{i}\) for some \(i\in\{1,2,\ldots,N\}\). Consider the decomposition from Eq. 
(37) of each \(s_{1,f}\), \(s_{2,f}\) and \(s_{3,f}\) associated to \(P_{i}\), i.e., \[s_{1,f}(u,r\mathrm{e}^{it}) =r^{\frac{d(P_{i};s_{1,f})}{p_{i,2}}}(s_{1,f})_{i}\left(\frac{u}{ r^{k_{i}}},r,t\right)\] \[s_{2,f}(u,r\mathrm{e}^{it}) =r^{\frac{d(P_{i};s_{2,f})}{p_{i,2}}}(s_{2,f})_{i}\left(\frac{u}{ r^{k_{i}}},r,t\right)\] \[s_{3,f}(u,r\mathrm{e}^{it}) =r^{\frac{d(P_{i};s_{3,f})}{p_{i,2}}}(s_{3,f})_{i}\left(\frac{u}{ r^{k_{i}}},r,t\right)\] Since \(s_{1,f}\) is nice and inner non-degenerate, we can find a parametrization \((r^{k_{i}}u(r,\tau),r\mathrm{e}^{it(r,\tau)})\in\mathbb{C}^{*2}\) of the roots of \((s_{1,f})\), where \((u(r,\tau),r\mathrm{e}^{it(r,\tau)})\) is a root of \((s_{1,f})_{i}\) (cf. [1, Theorem 4.6]). Note that \(\lim_{r\to 0}(u(r,\tau),r,t(r,\tau))=(u(0,\tau),0,t(0,\tau))\in\mathbb{C}^{*2}\) is a root of \(((s_{1,f})_{P_{i}})_{i}\). In particular, the roots of \((s_{1,f})_{P_{i}}\) are given by \((r^{ki}u(0,\tau),r\mathrm{e}^{it(0,\tau)})\). Suppose that some inequality of \[s_{2,f}(r^{ki_{i}}u(r,\tau),r\mathrm{e}^{it(r,\tau)}) =r^{\frac{d(P_{i};s_{2,f})}{p_{i,2}}}(s_{2,f})_{i}(u(r,\tau),r,t( r,\tau))>0 \tag{42}\] \[s_{2,f}(r^{ki_{i}}u(r,\tau),r\mathrm{e}^{it(r,\tau)}) =r^{\frac{d(P_{i};s_{2,f})}{p_{i,2}}}(s_{2,f})_{i}(u(r,\tau),r,t( r,\tau))<0\] (43) \[s_{3,f}(r^{ki_{i}}u(r,\tau),r\mathrm{e}^{it(r,\tau)}) =r^{\frac{d(P_{i};s_{3,f})}{p_{i,2}}}(s_{3,f})_{i}(u(r,\tau),r,t( r,\tau))>0\] (44) \[s_{3,f}(r^{ki_{i}}u(r,\tau),r\mathrm{e}^{it(r,\tau)}) =r^{\frac{d(P_{i};s_{3,f})}{p_{i,2}}}(s_{3,f})_{i}(u(r,\tau),r,t( r,\tau))<0 \tag{45}\] holds for \((r,\tau)\in(0,r_{0})\times(\tau_{0}-\epsilon,\tau_{0}+\epsilon)\), with \(r_{0}>0\) and \(\epsilon>0\) small enough, for instance Eq. (42). By condition (M-ii) we obtain on the limit point \(r=0\) that \[(s_{2,f})_{i}(u(0,\tau),0,t(0,\tau))=((s_{2,f})_{P_{i}})_{i}(u(0,\tau),0,t(0, \tau))>0.\] Therefore \((u(0,\tau),0,t(0,\tau))\) is not a common zero of \(((s_{1,f})_{P_{i}})_{i},\ ((s_{2,f})_{P_{i}})_{i}\) and \(((s_{3,f})_{P_{i}})_{i}\) for any \(\tau\in(\tau_{0}-\epsilon,\tau_{0}+\epsilon)\). Since at \(r=0\) these functions are equal to the \(g\)-polynomial associated with \((s_{1,f})_{P_{i}}\), \((s_{2,f})_{P_{i}}\) and \((s_{3,f})_{P_{i}}\), respectively, we find that \((u(0,\tau),\mathrm{e}^{it(0,\tau)})\) is not a common zero of these \(g\)-polynomials, which by definition means that \((r^{ki}u(0,\tau),r\mathrm{e}^{it(0,\tau)})\) is not a common root of \((s_{1,f})_{P_{i}}\), \((s_{2,f})_{P_{i}}\) and \((s_{3,f})_{P_{i}}\). But since these are by construction all the roots of \((s_{1,f})_{P_{i}}\), there are no common zeros at all. On the other hand, suppose that there are no \(r_{0},\epsilon>0\) small enough such that some inequality (42)-(45) holds. 
Then by the Intermediate Value Theorem there are for every \(r>0\), \(\varepsilon>0\), tuples \((r_{1},\tau_{1}^{\prime})\) and \((r_{1}^{\prime},\tau_{1}^{\prime})\) with \(r_{1},r_{1}^{\prime}<r\) and \(\tau_{1},\tau_{1}^{\prime}\in[\tau_{0}-\varepsilon,\tau_{0}+\varepsilon]\) such that \[s_{1,f}(r_{1}^{k_{i}}u(r_{1},\tau_{1}),r_{1}\mathrm{e}^{\mathrm{i}t(r_{1}, \tau_{1})})=s_{2,f}(r_{1}^{k_{i}}u(r_{1},\tau_{1}),r_{1}\mathrm{e}^{\mathrm{i}t (r_{1},\tau_{1})})=0, \tag{46}\] with \[f_{u}(r_{1}^{k_{i}}u(r_{1},\tau_{1}),r_{1}\mathrm{e}^{\mathrm{i}t(r_{1},\tau_ {1})})=f_{\bar{u}}(r_{1}^{k_{i}}u(r_{1},\tau_{1}),r_{1}\mathrm{e}^{\mathrm{i}t (r_{1},\tau_{1})})=0\] and \[s_{1,f}((r_{1}^{\prime})^{k_{i}}u(r_{1}^{\prime},\tau_{1}^{\prime}),r_{1}^{ \prime}\mathrm{e}^{\mathrm{i}t(r_{1}^{\prime},\tau_{1}^{\prime})})=s_{3,f}((r_ {1}^{\prime})^{k_{i}}u(r_{1}^{\prime},\tau_{1}^{\prime}),r_{1}^{\prime}\mathrm{ e}^{\mathrm{i}t(r_{1}^{\prime},\tau_{1}^{\prime})})=0, \tag{47}\] with \[f_{v}((r_{1}^{\prime})^{k_{i}}u(r_{1}^{\prime},\tau_{1}^{\prime}),r_{1}^{ \prime}\mathrm{e}^{\mathrm{i}t(r_{1}^{\prime},\tau_{1}^{\prime})})=f_{\bar{v} }((r_{1}^{\prime})^{k_{i}}u(r_{1}^{\prime},\tau_{1}^{\prime}),r_{1}^{\prime} \mathrm{e}^{\mathrm{i}t(r_{1}^{\prime},\tau_{1}^{\prime})})=0.\] Otherwise, Eq. (46) and Eq. (47) imply \(s_{3,f}(r_{1}^{k_{i}}u(r_{1},\tau_{1}),r_{1}\mathrm{e}^{\mathrm{i}t(r_{1}, \tau_{1})})=0\) and \(s_{2,f}((r_{1}^{\prime})^{k_{i}}u(r_{1}^{\prime},\tau_{1}^{\prime}),r_{1}^{ \prime}\mathrm{e}^{\mathrm{i}t(r_{1}^{\prime},\tau_{1}^{\prime})})=0\), respectively, either of which is a contradiction to the isolatedness at the origin. Indeed, suppose that \(f_{u}(u_{*},v_{*})\neq 0\). Then \(s_{2,f}(u_{*},v_{*})=0\) implies \(|f_{u}(u_{*},v_{*})/\bar{f}_{\bar{u}}(u_{*},v_{*})|=1\) and \(s_{1,f}(u_{*},v_{*})=0\) implies \(f_{v}(u_{*},v_{*})=(f_{u}(u_{*},v_{*})/f_{\bar{u}}(u_{*},v_{*}))\cdot f_{\bar{ v}}(u_{*},v_{*})\), which is equivalent in this case to \(s_{3,f}(u_{*},v_{*})=0\). 
Thus we can construct sequences \[(r_{n}^{k_{i}}u(r_{n},\tau_{n}),r_{n}\mathrm{e}^{\mathrm{i}t(r_{n},\tau_{n})}) \text{ and }((r_{n}^{\prime})^{k_{i}}u(r_{n}^{\prime},\tau_{n}^{\prime}),r_{n}^{ \prime}\mathrm{e}^{\mathrm{i}t(r_{n}^{\prime},\tau_{n}^{\prime})}),\] with sequences \((r_{n},\tau_{n})\) and \((r_{n}^{\prime},\tau_{n}^{\prime})\in(0,1)\times(\tau_{0}-\epsilon,\tau_{0}+\epsilon)\) converging to \((0,\tau_{0})\) and such that \[f_{u}(r_{n}^{k_{i}}u(r_{n},\tau_{n}),r_{n}\mathrm{e}^{\mathrm{i}t(r_{n},\tau_{n })})=r_{n}^{\frac{d(P_{i};f_{u})}{p_{i,2}}}\,(f_{u})_{i}(u(r_{n},\tau_{n}),r_{ n},t(r_{n},\tau_{n}))=0,\] \[f_{\bar{u}}(r_{n}^{k_{i}}u(r_{n},\tau_{n}),r_{n}\mathrm{e}^{\mathrm{i}t(r_{n}, \tau_{n})})=r_{n}^{\frac{d(P_{i};f_{\bar{v}})}{p_{i,2}}}\,(f_{\bar{u}})_{i}(u(r_ {n},\tau_{n}),r_{n},t(r_{n},\tau_{n}))=0\] and \[f_{v}((r_{n}^{\prime})^{k_{i}}u(r_{n}^{\prime},\tau_{n}^{\prime}),r_{n}^{ \prime}\mathrm{e}^{\mathrm{i}t(r_{n}^{\prime},\tau_{n}^{\prime})})=(r_{n}^{ \prime})^{\frac{d(P_{i};f_{u})}{p_{i,2}}}\,(f_{v})_{i}(u(r_{n}^{\prime},\tau_{n} ^{\prime}),r_{n}^{\prime},t(r_{n}^{\prime},\tau_{n}^{\prime}))=0,\] \[f_{\bar{v}}((r_{n}^{\prime})^{k_{i}}u(r_{n}^{\prime},\tau_{n}^{\prime}),r_{n}^{ \prime}\mathrm{e}^{\mathrm{i}t(r_{n}^{\prime},\tau_{n}^{\prime})})=(r_{n})^{ \frac{d(P_{i};f_{\bar{u}})}{p_{i,2}}}\,(f_{\bar{v}})_{i}(u(r_{n}^{\prime},\tau_{n} ^{\prime}),r_{n}^{\prime},t(r_{n}^{\prime},\tau_{n}^{\prime}))=0.\] Then \[(f_{u})_{i}(u(r_{n}, \tau_{n}),r_{n},t(r_{n},\tau_{n}))=(f_{\bar{u}})_{i}(u(r_{n},\tau _{n}),r_{n},t(r_{n},\tau_{n}))=\] \[=(f_{v})_{i}(u(r_{n}^{\prime},\tau_{n}^{\prime}),r_{n}^{\prime},t(r_ {n}^{\prime},\tau_{n}^{\prime}))=(f_{\bar{v}})_{i}(u(r_{n}^{\prime},\tau_{n}^{ \prime}),r_{n}^{\prime},t(r_{n}^{\prime},\tau_{n}^{\prime}))=0.\] Therefore, taking the limit \((r_{n},\tau_{n})\to(0,\tau_{0})\) and \((r_{n}^{\prime},\tau_{n}^{\prime})\to(0,\tau_{0})\), we have \[((f_{u})_{P_{i}})_{i}(u(0,\tau_{0}),0,t(0,\tau_{0}))=((f_{\bar{u}})_{P_{i}})_{i}(u (0,\tau_{0}),0,t(0,\tau_{0}))=0\] \[((f_{v})_{P_{i}})_{i}(u(0,\tau_{0}),0,t(0,\tau_{0}))=((f_{\bar{v}})_{P_{i}})_{i}(u(0,\tau_{0}),0,t(0,\tau_{0}))=0,\] which is a contradiction to (M-iii). Therefore, \((s_{1,f})_{P_{i}},\ (s_{2,f})_{P_{i}}\) and \((s_{3,f})_{P_{i}}\) have no common zeros. Thus \(f\) satisfies SPND-(ii). Case 2: \(P=\frac{p_{1}}{p_{2}}\) with \(\frac{p_{1}}{p_{2}}>k_{1}\) or \(\frac{p_{1}}{p_{2}}<k_{N}\). We are going to prove that \(f\) satisfies SPND-(ii) for \(P\) with \(\frac{p_{1}}{p_{2}}>k_{1}\) and also that \[(s_{1,f}(0,v))_{P}=(s_{2,f}(0,v))_{P}=(s_{3,f}(0,v))_{P}=0 \tag{48}\] has no solution in \((\mathbb{C}^{*})^{2}\). The other cases are very similar. Since \(s_{1,f}\) is IND, we may apply the same arguments as in Case 1 to a parametrization of the zeros of \((s_{1,f})_{1}\) in \(\mathbb{C}^{2}\setminus\{v=0\}\). It follows that \((s_{1,f})_{P_{i}}=(s_{2,f})_{P_{i}}=(s_{3,f})_{P_{i}}=0\) has no solution in \(\mathbb{C}^{2}\setminus\{v=0\}\) for any \(i\in\{1,2,\ldots,N\}\). In particular, \[(s_{1,f})_{P_{1}}(0,v)=(s_{2,f})_{P_{1}}(0,v)=(s_{3,f})_{P_{1}}(0,v)=0 \tag{49}\] has no solution for any \(v\in\mathbb{C}^{*}\). It is clear that \[(s_{1,f})_{P_{1}}(0,v)=\begin{cases}(s_{1,f}(0,v))_{P_{1}}&\text{if $s_{1,f}$ is $v$- convenient}\\ \ \ 0&\text{otherwise.}\end{cases} \tag{50}\] For \(s_{j,f}\), \(j=2,3\), we denote by \(k_{1,j}\) the number associated to the first 1-face of \(\Gamma_{s_{2,f}}\) and \(\Gamma_{s_{3,f}}\), respectively. 
That is if \(P^{\prime}_{i}\), \(i\in\{1,2,\ldots,N^{\prime}\}\), and \(P^{\prime\prime}_{i}\), \(i\in\{1,2,\ldots,N^{\prime\prime}\}\), are the weight vectors corresponding to the 1-faces of \(s_{2,f}\) and \(s_{3,f}\), respectively, with \(P^{\prime}_{1}=(p^{\prime}_{1},p^{\prime}_{2})\) and \(P^{\prime\prime}_{1}=(p^{\prime\prime}_{1},p^{\prime\prime}_{2})\), then \(k_{1,2}=\frac{p^{\prime}_{1}}{p^{\prime}_{2}}\) and \(k_{1,3}=\frac{p^{\prime\prime}_{1}}{p^{\prime\prime}_{2}}\). Thus, \[(s_{2,f})_{P_{1}}(0,v)=\begin{cases}(s_{2,f}(0,v))_{P_{1}}&\text{if $k_{1}\geq k_{1,2}$} \\ \ 0&\text{otherwise,}\end{cases} \tag{51}\] and \[(s_{3,f})_{P_{1}}(0,v)=\begin{cases}(s_{3,f}(0,v))_{P_{1}}&\text{if $k_{1}\geq k_{1,3}$} \\ \ 0&\text{otherwise.}\end{cases} \tag{52}\] Since system (49) does not have solutions with \(v\in\mathbb{C}^{*}\), it follows from Eqs.(50)-(52) that a lack of solution of (49) implies a lack of solution of \[(s_{1,f}(0,v))_{P_{1}}=(s_{2,f}(0,v))_{P_{1}}=(s_{3,f}(0,v))_{P_{1}}=0 \tag{53}\] in \((\mathbb{C}^{*})^{2}\). More precisely, since there are no solutions to Eq. (49), not all of \((s_{1,f})_{P_{1}}(0,v)\), \((s_{2,f})_{P_{1}}(0,v)\) and \((s_{3,f})_{P_{1}}(0,v)\) are constant \(0\). Let \(J\subset\{1,2,3\}\) be the set of indices \(j\) with \((s_{j,f})_{P_{1}}(0,v)\not\equiv 0\). Then the lack of solutions to Eq. (49) implies that there are no common zeros of \((s_{j,f})_{P_{1}}(0,v)=(s_{j,f}(0,v))_{P_{1}}\), \(j\in J\), and in particular no solution to Eq. (53) in \((\mathbb{C}^{*})^{2}\). Since for any \(P\) and any \(i=1,2,3\), we have \((s_{i,f}(0,v))_{P_{1}}=(s_{i,f}(0,v))_{P}\), it follows that the system (48) has no solution in \((\mathbb{C}^{*})^{2}\). Making analogous arguments we get that for any \(P\) \[(s_{1,f}(u,0))_{P}=(s_{2,f}(u,0))_{P}=(s_{3,f}(u,0))_{P}=0 \tag{54}\] has no solution in \((\mathbb{C}^{*})^{2}\). In other words, \(f\) satisfies SPND-(i). Note that if \((s_{i,f})_{P_{1}}(0,v)\not\equiv 0\), for some \(i\in\{1,2,3\}\), then we have \((s_{i,f})_{P_{1}}(0,v)=(s_{i,f}(0,v))_{P_{1}}\). Moreover, \(s_{i,f}\) is \(v\)-convenient. Thus, we find by Eqs.(50)-(52) that \[(s_{i,f})_{P_{1}}(0,v)=(s_{i,f}(0,v))_{P_{1}}=(s_{i,f})_{P}(u,v), \tag{55}\] for all \(P=(p_{1},p_{2})\) with \(\frac{p_{1}}{p_{2}}>k_{1}\). Again, a lack of solution of (49) implies that for some \(i\in\{1,2,3\}\) Eq.(55) holds and thus for any \(P\) with \(\frac{p_{1}}{p_{2}}>k_{1}\) \[(s_{1,f})_{P}(u,v)=(s_{2,f})_{P}(u,v)=(s_{3,f})_{P}(u,v)=0 \tag{56}\] has no solution in \((\mathbb{C}^{*})^{2}\). Therefore, \(f\) satisfies SPND-(ii) for \(P\) with \(\frac{p_{1}}{p_{2}}>k_{1}\). We can prove with appropriate changes that \(f\) satisfies SPND-(ii) for \(P\) with \(\frac{p_{1}}{p_{2}}<k_{N}\). Hence, \(f\) is strongly partially non-degenerate. As a consequence of the proof of Theorem 4.2 we have the following sufficient condition to prove that a mixed polynomial has a non-isolated singularity. **Corollary 4.4**.: _Let \(f\) be a mixed polynomial and \(p=(u_{*},v_{*})\) a solution of the system \((s_{1,f})_{P}=(s_{2,f})_{P}=(s_{3,f})_{P}=0\), for some weight vector \(P\). 
If there exists a local parametrization of the roots of \((s_{1,f})_{P}\) around \(p\), which does not satisfy (M-ii) nor (M-iii), then \(f\) has a non-isolated singularity at the origin._ ## 5 The Strong Milnor Condition Recall that \(f\) satisfies _the strong Milnor condition_ if there is a positive number \(\rho_{0}>0\) such that \[\arg(f):=\frac{f}{|f|}:S_{\rho}^{3}\setminus V_{f}\to S^{1} \tag{57}\] is a locally trivial fibration for every radius \(\rho\leq\rho_{0}\). The Milnor set \(M_{\arg f}\) of the mapping \(\arg f:\mathbb{C}^{2}\setminus V_{f}\to S^{1}\) is defined as \[M_{\arg f}=\{z\in\mathbb{C}^{n}\setminus V_{f}\,|\,\exists\lambda\in\mathbb{R},\lambda z=\mathrm{i}(f\overline{df}-\bar{f}\bar{d}f)\}, \tag{58}\] where \(\mathrm{d}f=\left(\frac{\partial f}{\partial u},\frac{\partial f}{\partial v}\right)\) and \(\bar{d}f=\left(\frac{\partial f}{\partial\bar{u}},\frac{\partial f}{\partial \bar{v}}\right)\). By [2, Theorem 2.2] when \(f\) has a weakly isolated singularity, the strong Milnor condition is equivalent to \(U\cap M_{\arg f}=\emptyset\) for sufficiently small neighbourhoods \(U\) of the origin in \(0\mathbb{R}^{4}\). The fact that strongly Newton non-degenerate mixed polynomials with convenient Newton boundary satisfy the strong Milnor condition was proved in Theorem 33 in [14]. Now we show that this is in fact true for the larger class of strongly partially non-degenerate mixed polynomials. For this we need several lemmas. **Lemma 5.1**.: _Let \(f:(\mathbb{C}^{2},0)\to(\mathbb{C},0)\) be a strongly inner non-degenerate mixed polynomial. Suppose that \(f\) is not \(v\)-convenient. Then the corresponding extreme vertex \(f_{\Delta_{1}}\) is strongly Newton non-degenerate. Likewise, if \(f\) is not \(u\)-convenient, then \(f_{\Delta_{N+1}}\) is strongly Newton non-degenerate. It follows that if \(f\) is neither \(u\)-convenient nor \(v\)-convenient, then \(f\) is strongly Newton non-degenerate._ Proof.: Recall that if \(f\) is SIND, but not \(v\)-convenient, there is a \(n\in\mathbb{N}\) such that \((1,n)\in\mathit{supp}(f)\). Otherwise, all summands in all of \(s_{1,f_{P_{1}}}\), \(s_{2,f_{P_{1}}}\) and \(s_{3,f_{P_{1}}}\) involve \(u\) or \(\bar{u}\), so that \(s_{1,f_{P_{1}}}(0,v)=s_{2,f_{P_{1}}}(0,v)=s_{3,f_{P_{1}}}(0,v)=0\) for all \(v\in\mathbb{C}\), which contradicts SIND-(i). Since \(f\) is not \(v\)-convenient, we have \[(f_{P_{1}})_{v}(0,v)=(f_{P_{1}})_{\bar{v}}(0,v)=0 \tag{59}\] for all \(v\in\mathbb{C}\). Since there is a \(n\in\mathbb{N}\) such that \((1,n)\in\mathit{supp}(f)\), the face function \(f_{\Delta_{1}}\) of the extreme vertex \(\Delta_{1}\) is of the form \(B(v,\bar{v})u+\bar{u}C(v,\bar{v})\) for some polynomials \(B(v,\bar{v})\) and \(C(v,\bar{v})\). Thus \((f_{\Delta_{1}})_{u}=B(v,\bar{v})\) and \((f_{\Delta_{1}})_{\bar{u}}=C(v,\bar{v})\). Now suppose that \((u_{*},v_{*})\in(\mathbb{C}^{*})^{2}\) is a critical point of \(f_{\Delta_{1}}\). Then \(s_{2,f_{\Delta_{1}}}(u_{*},v_{*})=0\) implies that \(|B(v_{*},\bar{v_{*}})|^{2}-|C(v_{*},\bar{v_{*}})|^{2}=0\), which does not depend on \(u_{*}\). Therefore, \(s_{2,f_{\Delta_{1}}}(u,v_{*})=0\) for all \(u\in\mathbb{C}\), in particular for \(u=0\). Now note that \(s_{2,f_{\Delta_{1}}}(0,v_{*})=s_{2,f_{P_{1}}}(0,v_{*})\) and so \(s_{2,f_{P_{1}}}(0,v_{*})=0\). By Eq. (59) we also have \(s_{1,f_{P_{1}}}(0,v_{*})=0\) and \(s_{3,f_{P_{1}}}(0,v_{*})=0\). In other words, \((0,v_{*})\) is a critical point of \(f_{P_{1}}\), contradicting SIND-(i). 
The proof for mixed polynomials that are not \(u\)-convenient follows the same reasoning. Now recall that SIND-(ii) implies that all face functions of \(f\) except possibly \(f_{\Delta_{1}}\) and \(f_{\Delta_{N+1}}\) are strongly Newton non-degenerate. But we have shown that both of these are SND as well if \(f\) is neither \(u\)-convenient nor \(v\)-convenient. A mixed polynomial \(f:(\mathbb{C}^{2},0)\to(\mathbb{C},0)\) can be written as \[f(z,\bar{z})=\sum_{\nu+\mu\in\mathit{supp}(f)}c_{\nu,\mu}z^{\nu}\bar{z}^{\mu}, \tag{60}\] where \(z=(u,v)\), \(\bar{z}=(\bar{u},\bar{v})\), \(\nu=(\nu_{1},\nu_{2})\), \(\mu=(\mu_{1},\mu_{2})\), \(z^{\nu}=u^{\nu_{1}}v^{\nu_{2}}\) and \(\bar{z}^{\mu}=\bar{u}^{\mu_{1}}\bar{v}^{\mu_{2}}\). For any positive weight vector \(P=(p_{1},p_{2})\) we define \[f_{(d,P)}(u,v):=\sum_{\begin{subarray}{c}\nu+\mu\in\mathit{supp}(f)\\ p_{1}(\nu_{1}+\mu_{1})+p_{2}(\nu_{2}+\mu_{2})=d\end{subarray}}c_{\nu,\mu}z^{\nu }\bar{z}^{\mu}, \tag{61}\] that is, \(f_{(d,P)}\) is the sum of terms of \(f\) whose radial degree with respect to \(P\) is equal to \(d\). It follows that \(f_{(d,P)}=0\) if \(d<d(P;f)\) and \(f_{(d(P;f),P)}=f_{P}\) if \(d=d(P;f)\). **Lemma 5.2**.: _Let \(f:(\mathbb{C}^{2},0)\to(\mathbb{C},0)\) be a mixed polynomial, \(P\) be a positive weight vector. Let \(\alpha\) be some non-zero complex number and \(d=d(P;f)\). Define_ \[w_{1}(u,v) =\mathrm{i}(\alpha(\overline{f_{u}})_{P}-\bar{\alpha}(f_{\bar{u}} )_{P})_{(d-p_{1},P)}(u,v),\] \[w_{2}(u,v) =\mathrm{i}(\alpha(\overline{f_{v}})_{P}-\bar{\alpha}(f_{\bar{v}} )_{P})_{(d-p_{2},P)}(u,v). \tag{62}\] _Then the common zeros of \(w_{1}\) and \(w_{2}\) are critical points of \(f_{P}\)._ **Remark 5.3**.: _Observe that by [1, Lemma 3.5] we have \(d(P;f_{u})\geq d-p_{1}\) with equality if and only if \((f_{u})_{(d-p_{1},P)}=((f_{u})_{P})_{(d-p_{1},P)}\) and likewise for derivatives with respect to the other variables \(\bar{u}\), \(v\) and \(\bar{v}\). Therefore, \(w_{1}\) and \(w_{2}\) could equivalently be defined as_ \[w_{1}(u,v) =\mathrm{i}(\alpha(\overline{f_{u}})-\bar{\alpha}(f_{\bar{u}}))_{ (d-p_{1},P)}(u,v),\] \[w_{2}(u,v) =\mathrm{i}(\alpha(\overline{f_{v}})-\bar{\alpha}(f_{\bar{v}}))_ {(d-p_{2},P)}(u,v). \tag{63}\] Proof of Lemma 5.2.: By [1, Lemma 3.5] there are different cases to consider, depending on whether \(f_{P}\) is semiholomorphic with respect to any of the variables \(u\), \(\bar{u}\), \(v\) and \(\bar{v}\). We have \[w_{1}(u,v) =\begin{cases}0&\text{if }(f_{P})_{u}=(f_{P})_{\bar{u}}=0,\\ \mathrm{i}\alpha(\overline{f_{u}})_{P}(u,v)&\text{if }(f_{P})_{u}\neq 0,(f_{P})_{ \bar{u}}=0,\\ -\mathrm{i}\bar{\alpha}(f_{\bar{u}})_{P}(u,v)&\text{if }(f_{P})_{u}=0,(f_{P})_{ \bar{u}}\neq 0,\\ \mathrm{i}(\alpha(\overline{f_{u}})_{P}-\bar{\alpha}(f_{\bar{u}})_{P})(u,v)& \text{if }(f_{P})_{u}\neq 0,(f_{P})_{\bar{u}}\neq 0,\end{cases} \tag{64}\] \[w_{2}(u,v) =\begin{cases}0&\text{if }(f_{P})_{v}=(f_{P})_{\bar{v}}=0,\\ \mathrm{i}\alpha(\overline{f_{v}})_{P}(u,v)&\text{if }(f_{P})_{v}\neq 0,(f_{P})_{ \bar{v}}=0,\\ -\mathrm{i}\bar{\alpha}(f_{\bar{v}})_{P}(u,v)&\text{if }(f_{P})_{v}=0,(f_{P})_{ \bar{v}}\neq 0,\\ \mathrm{i}(\alpha(\overline{f_{v}})_{P}-\bar{\alpha}(f_{\bar{v}})_{P})(u,v)& \text{if }(f_{P})_{v}\neq 0,(f_{P})_{\bar{v}}\neq 0.\end{cases} \tag{65}\] Furthermore, we know from [1, Lemma 3.5] that if \(f_{P}\) is not \(\bar{x}\)-semiholomorphic, i.e., \((f_{P})_{x}\neq 0\), for some \(x\in\{u,\bar{u},v,\bar{v}\}\), then \((f_{P})_{x}=(f_{x})_{P}\). 
This leads to \[w_{1}(u,v) =\begin{cases}0&\text{if }(f_{P})_{u}=(f_{P})_{\bar{u}}=0,\\ \mathrm{i}\alpha(\overline{f_{P}})_{u}(u,v)&\text{if }(f_{P})_{u}\neq 0,(f_{P})_{ \bar{u}}=0,\\ -\mathrm{i}\bar{\alpha}(f_{P})_{\bar{u}}(u,v)&\text{if }(f_{P})_{u}=0,(f_{P})_{ \bar{u}}\neq 0,\\ \mathrm{i}(\alpha(\overline{f_{P}})_{u}-\bar{\alpha}(f_{P})_{\bar{u}})(u,v)& \text{if }(f_{P})_{u}\neq 0,(f_{P})_{\bar{u}}\neq 0,\end{cases} \tag{66}\] \[w_{2}(u,v) =\begin{cases}0&\text{if }(f_{P})_{v}=(f_{P})_{\bar{v}}=0,\\ \mathrm{i}\alpha(\overline{f_{P}})_{v}(u,v)&\text{if }(f_{P})_{v}\neq 0,(f_{P})_{ \bar{v}}=0,\\ -\mathrm{i}\bar{\alpha}(f_{P})_{\bar{v}}(u,v)&\text{if }(f_{P})_{v}=0,(f_{P})_{ \bar{v}}\neq 0,\\ \mathrm{i}(\alpha(\overline{f_{P}})_{v}-\bar{\alpha}(f_{P})_{\bar{v}})(u,v)& \text{if }(f_{P})_{v}\neq 0,(f_{P})_{\bar{v}}\neq 0.\end{cases} \tag{67}\] Recall that if \(f_{P}\) is \(x\)-semiholomorphic with \(x\in\{u,\bar{u},v,\bar{v}\}\), then the equations that define its critical set simplify to \((f_{P})_{x}=s_{j,f_{P}}=0\), where \(j=2\) if \(x\in\{v,\bar{v}\}\) and \(j=3\) if \(x\in\{u,\bar{u}\}\). Now let \((u_{*},v_{*})\in\mathbb{C}^{2}\) be such that \(w_{1}(u_{*},v_{*})=w_{2}(u_{*},v_{*})=0\). Then if \(f_{P}\) is \(x\)-semiholomorphic, the equation \(w_{i}(u_{*},v_{*})=0\) implies \((f_{P})_{x}(u_{*},v_{*})=0\), where \(i=1\) if \(x\in\{u,\bar{u}\}\) and \(i=2\) if \(x\in\{v,\bar{v}\}\). In particular, if \(f_{P}\) depends exactly on one of \(\{u,\bar{u}\}\), say \(x\), and exactly on one of \(\{v,\bar{v}\}\), say \(y\), then \(w_{1}\) and \(w_{2}\) are non-zero multiples of \((f_{x})_{P}=(f_{P})_{x}\) and \((f_{y})_{P}=(f_{P})_{y}\), respectively. Hence common zeros of \(w_{1}\) and \(w_{2}\) are critical points of \(f_{P}\). If \(f_{P}\) is neither \(v\)- nor \(\bar{v}\)-semiholomorphic, then the equation \(w_{2}(u_{*},v_{*})=0\) implies that \((\alpha(\overline{f_{v}})_{P}-\bar{\alpha}(f_{\bar{v}})_{P})(u_{*},v_{*})=0\). Since \(\alpha\neq 0\), we obtain \(s_{3,f_{P}}(u_{*},v_{*})=0\). The analogous statement holds for \(w_{1}(u_{*},v_{*})\) and \(s_{2,f_{P}}\) if \(f_{P}\) is not \(u\)-semiholomorphic and not \(\bar{u}\)-semiholomorphic. Thus, if \(f_{P}\) depends on exactly three variables out of \(\{u,\bar{u},v,\bar{v}\}\), then the common zeros of \(w_{1}\) and \(w_{2}\) are critical points of \(f_{P}\). Suppose now \(f_{P}\) depends on all of \(u,\bar{u},v,\bar{v}\). By the same argument as in the previous case we have \(s_{2,f_{P}}(u_{*},v_{*})=s_{3,f_{P}}(u_{*},v_{*})=0\). Furthermore, \(w_{1}(u_{*},v_{*})=w_{2}(u_{*},v_{*})=0\) implies that if \((f_{x})_{P}(u_{*},v_{*})=0\) for any \(x\in\{u,\bar{u},v,\bar{v}\}\), then \((f_{\bar{x}})_{P}(u_{*},v_{*})=0\) as well and so \(s_{1,f_{P}}(u_{*},v_{*})=0\). We can thus assume that \((f_{x})_{P}(u_{*},v_{*})\neq 0\) for all \(x\in\{u,\bar{u},v,\bar{v}\}\). It follows from \(w_{1}(u_{*},v_{*})=w_{2}(u_{*},v_{*})=0\) that \[\frac{\overline{(f_{P})_{u}}(u_{*},v_{*})}{(f_{P})_{\bar{u}}(u_{*},v_{*})}= \frac{\bar{\alpha}}{\alpha}=\frac{\overline{(f_{P})_{v}}(u_{*},v_{*})}{(f_{P} )_{\bar{v}}(u_{*},v_{*})} \tag{68}\] and so \(s_{1,f_{P}}(u_{*},v_{*})=0\). Lastly, suppose that \(f_{P}\) depends neither on \(u\) nor on \(\bar{u}\). The argument for \(v\) and \(\bar{v}\) is analogous. Then \(w_{1}(u,v)=(f_{P})_{u}(u,v)=(f_{P})_{\bar{u}}(u,v)=0\) for all \((u,v)\in\mathbb{C}^{2}\) and therefore both \(s_{1,f_{P}}\) and \(s_{2,f_{P}}\) are constant \(0\). 
The same reasoning as in the previous cases shows that a zero \((u_{*},v_{*})\) of \(w_{2}\) is also a zero of \(s_{3,f_{P}}\) and therefore a critical point of \(f_{P}\). Thus in all possible cases \((u_{*},v_{*})\) is a critical point of \(f_{P}\). We write \(P_{1},P_{2},\ldots,P_{N}\) for the weight vectors associated with the compact \(1\)-faces of the Newton boundary of \(f\), ordered in the usual way [14, 1]. For \(P_{i}=(p_{i,1},p_{i,2})\) we define \(k_{i}=\frac{p_{i,1}}{p_{i,2}}\). Let \(\Delta(P_{i};f)\), \(i\in\{1,2,\ldots,N\}\), denote the set of integer lattice points that lie on the compact \(1\)-face \(\Delta(P_{i})\) associated to \(P_{i}\). **Lemma 5.4**.: _Let \(f:(\mathbb{C}^{2},0)\to(\mathbb{C},0)\) be a \(v\)-convenient mixed polynomial with \((1,n)\notin\Delta(P_{1};f)\cap\text{supp}(f)\) for all \(n\in\mathbb{N}_{0}\) and let \(P\) be a positive weight vector with \(\frac{p_{1}}{p_{2}}>k_{1}\). Let \((w_{1},w_{2})\) be as in Lemma 5.2 and \((u_{*},v_{*})\in\mathbb{C}^{2}\) be such that \(w_{1}(u_{*},v_{*})=w_{2}(u_{*},v_{*})=0\). Then \((0,v_{*})\) is a critical point of \(f_{P_{1}}\). Likewise, \((u_{*},0)\) is a critical point of \(f_{P_{N}}\) if \(f\) is \(u\)-convenient with \((n,1)\notin\Delta(P_{N};f)\cap\text{supp}(f)\) for all \(n\in\mathbb{N}_{0}\) and \(k_{N}>\frac{p_{1}}{p_{2}}\)._ Proof.: We discuss the case of \(f\) being \(v\)-convenient with \((1,n)\notin\Delta(P_{1};f)\cap\text{supp}(f)\) for all \(n\in\mathbb{N}_{0}\) and \(\frac{p_{1}}{p_{2}}>k_{1}\). The case of \(f\) being \(u\)-convenient with \(k_{N}>\frac{p_{1}}{p_{2}}\) follows the same line of reasoning. We have seen in Lemma 5.2 that \((u_{*},v_{*})\) is a critical point of \(f_{P}\). Since \(f\) is \(v\)-convenient and \(\frac{p_{1}}{p_{2}}>k_{1}\), the corresponding face \(\Delta(P)\) is the extreme vertex \(\Delta_{1}\) and so \(f_{P}\) depends neither on \(u\) nor on \(\bar{u}\). Thus \((u,v_{*})\) is a critical point of \(f_{P}\) for any \(u\in\mathbb{C}\), in particular for \(u=0\). In this case, we have \((f_{P_{1}})_{u}(0,v)=(f_{P_{1}})_{\bar{u}}(0,v)=0\) for all \(v\in\mathbb{C}\), which means that \(s_{1,f_{P_{1}}}(0,v)=s_{2,f_{P_{1}}}(0,v)=0\) for all \(v\in\mathbb{C}\). Moreover, \((f_{P_{1}})_{v}(0,v)=(f_{P})_{v}(0,v)\) and \((f_{P_{1}})_{\bar{v}}(0,v)=(f_{P})_{\bar{v}}(0,v)\) for all \(v\in\mathbb{C}\), which means that \(s_{3,f_{P}}(0,v)=0\) if and only if \(s_{3,f_{P_{1}}}(0,v)=0\). Therefore, since \((0,v_{*})\) is a critical point of \(f_{P}\), it is also a critical point of \(f_{P_{1}}\). Let now \(f:(\mathbb{C}^{2},0)\to(\mathbb{C},0)\) be a mixed polynomial, \(P=(p_{1},p_{2})\) a positive weight vector and \(\alpha\in\mathbb{C}^{*}\). We define \(d_{1}:=\min\{d(P;f_{u}),d(P;f_{\bar{u}})\}\), \(d_{2}:=\min\{d(P;f_{v}),d(P;f_{\bar{v}})\}\) and \[\widetilde{w_{1}}(u,v) :=\mathrm{i}(\alpha(\overline{f_{u}})_{P}-\bar{\alpha}(f_{\bar{u}}) _{P})_{(d_{1},P)}(u,v),\] \[\widetilde{w_{2}}(u,v) :=\mathrm{i}(\alpha(\overline{f_{v}})_{P}-\bar{\alpha}(f_{\bar{v}}) _{P})_{(d_{2},P)}(u,v). \tag{69}\] As is the case for \((w_{1},w_{2})\), we could have defined \((\widetilde{w_{1}},\widetilde{w_{2}})\) equivalently using \(f_{x}\), \(x\in\{u,\bar{u},v,\bar{v}\}\), instead of \((f_{x})_{P}\), since the degrees \(d_{1}\) and \(d_{2}\) can by definition only be attained by \((f_{u})_{P}\) or \((f_{\bar{u}})_{P}\) and by \((f_{v})_{P}\) or \((f_{\bar{v}})_{P}\), respectively. There are many similarities between \((w_{1},w_{2})\) and \((\widetilde{w_{1}},\widetilde{w_{2}})\). 
In particular, by [1, Lemma 3.5] we have \(d_{1}\geq d-p_{1}\) and \(d_{2}\geq d-p_{2}\) with equalities if and only if \(f_{P}\) depends on \(u\) or \(\bar{u}\) and \(v\) or \(\bar{v}\), respectively. It follows that if \(f_{P}\) depends on \(v\) or \(\bar{v}\), then \(\widetilde{w_{2}}=w_{2}\). If \(f_{P}\) depends on \(u\) or \(\bar{u}\), then \(\widetilde{w_{1}}=w_{1}\). Furthermore, it follows that if \(w_{i}(u_{*},v_{*})\neq 0\) for some \(i\in\{1,2\}\) and some \((u_{*},v_{*})\in\mathbb{C}^{2}\), then \(d_{i}=d-p_{i}\) and therefore \(\widetilde{w_{i}}(u_{*},v_{*})=w_{i}(u_{*},v_{*})\neq 0\). The contrapositive of this statement implies the following lemma. **Lemma 5.5**.: _Let \(f:(\mathbb{C}^{2},0)\to(\mathbb{C},0)\) be a mixed polynomial, \(P\) be a positive weight vector and \(\alpha\in\mathbb{C}^{*}\). Let \((\widetilde{w_{1}},\widetilde{w_{2}})\) be as above. Then common zeros of \(\widetilde{w_{1}}\) and \(\widetilde{w_{2}}\) are critical points of \(f_{P}\)._ Proof.: We know from the above that a common zero \((u_{*},v_{*})\) of \(\widetilde{w_{1}}\) and \(\widetilde{w_{2}}\) is also a common zero of \(w_{1}\) and \(w_{2}\). But then Lemma 5.2 implies that \((u_{*},v_{*})\) is a critical point of \(f_{P}\). The same arguments imply the analogue of Lemma 5.4. In general, \(w_{1}\) and \(w_{2}\) are easier to handle than \(\widetilde{w_{1}}\) and \(\widetilde{w_{2}}\), because the case when \(f_{P}\) depends neither on \(x\) nor on \(\bar{x}\) for \(x\in\{u,\bar{u},v,\bar{v}\}\) is simpler. However, an advantage of working with \((\widetilde{w_{1}},\widetilde{w_{2}})\) is that the assumption on \((1,n)\) in Lemma 5.4 is no longer necessary for the corresponding result on \(\widetilde{w_{1}}\) and \(\widetilde{w_{2}}\). **Lemma 5.6**.: _Let \(f:(\mathbb{C}^{2},0)\to(\mathbb{C},0)\) be a \(v\)-convenient mixed polynomial, let \(P\) be a positive weight vector with \(\frac{p_{1}}{p_{2}}>k_{1}\) and \(\alpha\in\mathbb{C}^{*}\). Let \((\widetilde{w_{1}},\widetilde{w_{2}})\) be as in Eq. (69) and \((u_{*},v_{*})\in\mathbb{C}^{2}\) be such that \(\widetilde{w_{1}}(u_{*},v_{*})=\widetilde{w_{2}}(u_{*},v_{*})=0\). Then \((0,v_{*})\) is a critical point of \(f_{P_{1}}\). Likewise, \((u_{*},0)\) is a critical point of \(f_{P_{N}}\) if \(f\) is \(u\)-convenient and \(k_{N}>\frac{p_{1}}{p_{2}}\)._ Proof.: The case where \((1,n)\notin\Delta(P_{1};f)\cap\operatorname{supp}(f)\) for all \(n\in\mathbb{N}_{0}\) follows directly from the fact that zeros of \((\widetilde{w_{1}},\widetilde{w_{2}})\) are also zeros of \((w_{1},w_{2})\) and Lemma 5.4. Now suppose that there is a \(n\in\mathbb{N}_{0}\) with \((1,n)\in\Delta(P_{1};f)\cap\operatorname{supp}(f)\). Then \(f_{P_{1}}\) can be written as \(f_{P_{1}}(u,\bar{u},v,\bar{v})=A(v,\bar{v})+uB(v,\bar{v})+\bar{u}C(v,\bar{v})+ D(u,\bar{u},v,\bar{v})\) for polynomials \(A\), \(B\), \(C\) and \(D\), where each monomial in \(D\) is divisible by \(u^{2}\), \(u\bar{u}\) or \(\bar{u}^{2}\) and \(B\) and \(C\) are not both \(0\). Furthermore, \(A\) is not constant \(0\), because by assumption \(f\) is \(v\)-convenient. A direct calculation gives \(\widetilde{w_{2}}(u,v)=\operatorname{i}(\alpha\overline{A_{v}}(v,\bar{v})-\bar {\alpha}A_{\bar{v}}(v,\bar{v}))\) and \(\widetilde{w_{1}}(u,v)=\operatorname{i}(\alpha\overline{B}(v,\bar{v})-\bar{ \alpha}C(v,\bar{v}))\). 
Note that these expressions do not depend on \(u\), so that if \((u_{*},v_{*})\in\mathbb{C}^{2}\) is a common zero of \(\widetilde{w_{1}}\) and \(\widetilde{w_{2}}\), so is \((u,v_{*})\) for any \(u\in\mathbb{C}\). Since \((f_{P_{1}})_{v}(0,v)=A_{v}(v,\bar{v})\) and \((f_{P_{1}})_{\bar{v}}(0,v)=A_{\bar{v}}(v,\bar{v})\) for all \(v\in\mathbb{C}\), the equation \(\widetilde{w_{2}}(0,v_{*})=0\) implies that \(s_{3,f_{P_{1}}}(0,v_{*})=0\). Similarly, we have \((f_{P_{1}})_{u}(0,v)=B(v,\bar{v})\) and \((f_{P_{1}})_{\bar{u}}(0,v)=C(v,\bar{v})\), so that \(\widetilde{w_{1}}(0,v_{*})=0\) implies \(s_{2,f_{P_{1}}}(0,v_{*})=0\). It follows from an equation analogous to Eq. (68) that \(s_{1,f_{P_{1}}}(0,v_{*})=0\) as well. Therefore, \((0,v_{*})\) is a critical point of \(f_{P_{1}}\). Let \(z(\tau)=(u(\tau),v(\tau))\), \(0\leq\tau\leq 1\), be a real-analytic curve in \(\mathbb{C}^{2}\) with \(z(0)=(0,0)\) and let \(f:(\mathbb{C}^{2},0)\to(\mathbb{C},0)\) be a mixed polynomial. We denote the lowest order of \(\tau\) in \(f(z(\tau))\) by \(d_{z}(f)\). We write \(a_{z}(f)\) for the coefficient of \(\tau^{d_{z}(f)}\) in \(f(z(\tau))\). The following lemmas are fairly elementary facts on power series, but will be used frequently. **Lemma 5.7**.: _Let \(z(\tau)=(u(\tau),v(\tau))\), \(0\leq\tau\leq 1\), be a real-analytic curve in \(\mathbb{C}^{2}\) with \(z(0)=(0,0)\) and let \(f,g:(\mathbb{C}^{2},0)\to(\mathbb{C},0)\) be mixed polynomials._ * \(d_{z}(fg)=d_{z}(f)+d_{z}(g)\) _and_ \(a_{z}(fg)=a_{z}(f)a_{z}(g)\)_._ * \(d_{z}(f+g)\geq\min\{d_{z}(f),d_{z}(g)\}\) _with strict inequality if and only if_ \(d_{z}(f)=d_{z}(g)\) _and_ \(a_{z}(f)=-a_{z}(g)\)_. If_ \(d_{z}(f)<d_{z}(g)\)_, then_ \(a_{z}(f+g)=a_{z}(f)\)_, and if_ \(d_{z}(f)=d_{z}(g)\) _with_ \(a_{z}(f)\neq-a_{z}(g)\)_, then_ \(a_{z}(f+g)=a_{z}(f)+a_{z}(g)\)_._ **Lemma 5.8**.: _Let \(z(\tau)=(u(\tau),v(\tau))\), \(0\leq\tau\leq 1\), be a real-analytic curve in \(\mathbb{C}^{2}\) with \(z(0)=(0,0)\) and_ \[u(\tau) =a\tau^{p_{1}}+h.o.t., \tag{70}\] \[v(\tau) =b\tau^{p_{2}}+h.o.t., \tag{71}\] _for some \(a,b\in\mathbb{C}^{*}\) and \(p_{1},p_{2}\in\mathbb{N}\). The expression h.o.t. refers to higher order terms in \(\tau\). Let \(f:(\mathbb{C}^{2},0)\to(\mathbb{C},0)\) be a mixed polynomial and \(P=(p_{1},p_{2})\). If \(f(z(\tau))\not\equiv 0\), then the lowest order of \(\tau\) in \(f(z(\tau))\) is at least \(d(P;f)\)._ Proof.: A direct calculation gives \[f(z(\tau))=\sum_{d\in\mathbb{N}}f_{(d,P)}(z(\tau))=\sum_{d\in\mathbb{N}}(f_{(d,P)}(a,b)\tau^{d}+h.o.t), \tag{72}\] where \(h.o.t.\) in each summand represents terms whose degree in \(\tau\) is greater than \(d\). Thus the lowest order term possible is obtained from the lowest order term of the summand where \(d=d(P;f)\), i.e., \(f_{P}(a,b)\tau^{d(P;f)}\). However, since \(f_{P}(a,b)\) could be zero, the actual lowest order \(d_{z}(f)\) could also be greater. Proof of Theorem 1.4.: To prove that \(U\cap M_{\arg f}=\emptyset\), for sufficiently small neighbourhoods \(U\) of the origin \(0\in\mathbb{R}^{4}\), we proceed as in the proof of [14, Lemma 11]. Assume that \(f\) does not satisfy the strong Milnor condition. By the curve selection lemma we can find a real analytic curve \(z(\tau)=(u(\tau),v(\tau)),0\leq\tau\leq 1\), satisfying * (i) \(z(0)=0\) and \(z(\tau)\in\mathbb{C}^{2}\setminus V_{f}\) for \(\tau>0\); * (ii) \(\mathrm{i}(f\overline{df}-\bar{f}\bar{d}f)(z(\tau))=\lambda(\tau)z(\tau)\) for some real number \(\lambda(\tau)\). 
Since the zeros of \(\mathrm{i}(f\overline{df}-\bar{f}\bar{d}f)\) are exactly \(\Sigma_{f}\cup V_{f}\), it does not vanish close to the origin outside of \(V_{f}\) by Proposition 1.2, which implies that \(\lambda(\tau)\not\equiv 0\). Case 1: \(u(\tau)\not\equiv 0\) and \(v(\tau)\not\equiv 0\). Since \(z(\tau)\) is real analytic, it is given by a Taylor series: \[u(\tau) =a\tau^{p_{1}}+h.o.t.,\qquad a\neq 0,\,p_{1}>0,\] \[v(\tau) =b\tau^{p_{2}}+h.o.t.,\qquad b\neq 0,\,p_{2}>0.\] Define \(P=(p_{1},p_{2}),\ z_{0}=(a,b)\in(\mathbb{C}^{*})^{2}\) and \(d=d(P;f)\). We consider the expansions: \[f(z(\tau)) =\alpha\tau^{q}+h.o.t.; q\geq d=d(P;f),\alpha\neq 0 \tag{73}\] \[f_{x}(z(\tau)) =\beta_{x}\tau^{q_{x}}+h.o.t.; x\in\{u,\bar{u}\},\,q_{x}\geq d(P;f_{x})\geq d-p_{1},\beta_{x}\neq 0\] (74) \[f_{y}(z(\tau)) =\beta_{y}\tau^{q_{y}}+h.o.t.; y\in\{v,\bar{v}\},\,q_{y}\geq d(P;f_{y})\geq d-p_{2},\beta_{y}\neq 0\] (75) \[\lambda(\tau) =\lambda_{0}\tau^{s}+h.o.t., \lambda_{0}\in\mathbb{R},\,s\in\mathbb{Z}_{\geq 0}. \tag{76}\] Note that \(q_{x}=d(P;f_{x})\) if and only if \((f_{x})_{P}(a,b)\neq 0\), in which case \(\beta_{x}=(f_{x})_{P}(a,b)\), for \(x\in\{u,v,\bar{u},\bar{v}\}\). Assumption (ii) means in particular that the lowest order terms of the two functions \(\mathrm{i}(f\overline{df}-\bar{f}\bar{d}f)(z(\tau))\) and \(\lambda(\tau)z(\tau)\) must agree. We know from Lemma 5.7 that the lowest order of the left-hand side is at least \(\min\{d_{z}(f\overline{f_{u}}),d_{z}(\bar{f}f_{\bar{u}})\}\) for the first coordinate and at least \(\min\{d_{z}(f\overline{f_{v}}),d_{z}(\bar{f}f_{\bar{v}})\}\) for the second coordinate. Using again Lemma 5.7 and the fact that \(d_{z}(f)=d_{z}(\bar{f})=q\), we have at least \(q+\min\{d_{z}(\overline{f_{u}}),d_{z}(f_{\bar{u}})\}\) and \(q+\min\{d_{z}(\overline{f_{v}}),d_{z}(f_{\bar{v}})\}\), respectively. By Lemma 5.8 these are bounded from below by \(q+d_{1}\) and \(q+d_{2}\), respectively, which by [1, Lemma 3.5] are at least \(q+d-p_{1}\) and \(q+d-p_{2}\), respectively. Therefore, the coefficient of \(\tau^{q+d-p_{1}}\) in \(\mathrm{i}(f\overline{f_{u}}-\bar{f}f_{\bar{u}})(z(\tau))\) is either \(0\) or the lowest order coefficient and thus by Assumption (ii) equal to \(\lambda_{0}a\). Likewise, the coefficient of \(\tau^{q+d-p_{2}}\) in \(\mathrm{i}(f\overline{f_{v}}-\bar{f}f_{\bar{v}})(z(\tau))\) is either zero or equal to \(\lambda_{0}b\). Using again Lemma 5.7, Lemma 5.8 and the fact that we already know that \(a_{z}(f)=\alpha\), \(d_{z}(f)=q\), we can calculate these coefficients as \(\mathrm{i}(\alpha(\overline{f_{u}})_{P}-\bar{\alpha}(f_{\bar{u}})_{P})_{(d-p_{1},P)}(a,b)\) and \(\mathrm{i}(\alpha(\overline{f_{v}})_{P}-\bar{\alpha}(f_{\bar{v}})_{P})_{(d-p_{2},P)}(a,b)\). Note that these are exactly the definitions of \(w_{1}(a,b)\) and \(w_{2}(a,b)\) in Lemma 5.2. We claim that \(w_{1}(a,b)\) and \(w_{2}(a,b)\) are both zero. (Alternatively, we may consider the coefficients of \(\tau^{q+d_{1}}\) and \(\tau^{q+d_{2}}\), which are also either zero or the lowest order coefficients of the left-hand side in Assumption (ii). These coefficients are equal to \(\widetilde{w_{1}}(a,b)\) and \(\widetilde{w_{2}}(a,b)\).) 
We find that \[\left\langle\frac{dz(\tau)}{d\tau},\overline{df}(z(\tau))\right\rangle+ \left\langle\frac{d\bar{z}(\tau)}{d\tau},\overline{\bar{d}f}(z(\tau))\right\rangle\] \[= \left(f_{u}(z(\tau))\frac{du(\tau)}{d\tau}+f_{\bar{u}}(z(\tau)) \frac{d\bar{u}(\tau)}{d\tau}\right)+\left(f_{v}(z(\tau))\frac{dv(\tau)}{d\tau} +f_{\bar{v}}(z(\tau))\frac{d\bar{v}(\tau)}{d\tau}\right)\] \[= \frac{d}{d\tau}f(z(\tau))=q\alpha\tau^{q-1}+h.o.t. \tag{77}\] where \(\left\langle\,\cdot\,,\,\cdot\,\right\rangle\) denotes the Hermitian inner product in \(\mathbb{C}^{2}\). By Lemma 5.8 and [1, Lemma 3.5] the lowest orders of the left hand side are at least \(d-1\) with \[\left\langle\frac{dz(\tau)}{d\tau},\overline{df}(z(\tau))\right\rangle= \langle(p_{1}a,p_{2}b),((\overline{f_{u}})_{(d-p_{1},P)}(a,b),( \overline{f_{v}})_{(d-p_{2},P)}(a,b))\rangle\tau^{d-1}+h.o.t. \tag{78}\] \[\left\langle\frac{d\bar{z}(\tau)}{d\tau},\overline{\bar{d}f}(z( \tau))\right\rangle= \langle(p_{1}\bar{a},p_{2}\bar{b}),((\overline{f_{\bar{u}}})_{(d-p _{1},P)}(a,b),(\overline{f_{\bar{v}}})_{(d-p_{2},P)}(a,b))\rangle\tau^{d-1}+h.o.t., \tag{79}\] Recall from [1, Lemma 3.5] that for all \(x\in\{u,\bar{u},v,\bar{v}\}\) we have either \(f_{x}=0\) or \(d(P;f_{x})=d-p_{i}\), where \(i=1\) if \(x\in\{u,\bar{u}\}\) and \(i=2\) if \(x\in\{v,\bar{v}\}\). We may thus calculate the coefficient of \(\tau^{d-1}\) in \(\left(\left\langle\frac{dz(\tau)}{d\tau},\overline{df}(z(\tau))\right\rangle+ \left\langle\frac{d\bar{z}(\tau)}{d\tau},\overline{\bar{d}f}(z(\tau))\right\rangle\right)\) and get either zero or \(q\alpha\) (in which case \(d=q\)). Note that \[\langle(p_{1}a,p_{2}b),((\overline{f_{u}})_{(d-p_{1},P)}(a,b),( \overline{f_{v}})_{(d-p_{2},P)}(a,b))\rangle\] \[+\langle(p_{1}\bar{a},p_{2}\bar{b}),(\overline{f_{\bar{u}}(d-p_{1 },P)}(a,b),(\overline{f_{\bar{v}}})_{(d-p_{2},P)}(a,b))\rangle\] \[= (-{\rm i}\bar{\alpha})^{-1}\left(\langle(p_{1}a,p_{2}b),({\rm i} \alpha(\overline{f_{u}})_{(d-p_{1},P)}(a,b),{\rm i}\alpha(\overline{f_{v}})_{ (d-p_{2},P)}(a,b))\rangle\right.\] \[\left.+\langle(p_{1}\bar{a},p_{2}\bar{b}),({\rm i}\alpha \overline{f_{\bar{u}}}_{(d-p_{1},P)}(a,b),{\rm i}\alpha(\overline{f_{\bar{v}} })_{(d-p_{2},P)}(a,b))\rangle\right) \tag{80}\] and \[{\rm Re}\left(\langle(p_{1}a,p_{2}b),({\rm i}\alpha(\overline{f _{u}})_{(d-p_{1},P)}(a,b),{\rm i}\alpha(\overline{f_{v}})_{(d-p_{2},P)}(a,b))\rangle\right.\] \[\left.+\langle(p_{1}\bar{a},p_{2}\bar{b}),({\rm i}\alpha \overline{f_{\bar{u}}}_{(d-p_{1},P)}(a,b),{\rm i}\alpha(\overline{f_{\bar{v}} })_{(d-p_{2},P)}(a,b))\rangle\right)\] \[= \,{\rm Re}\left(\langle(p_{1}a,p_{2}b),({\rm i}\alpha(\overline{f _{u}})_{(d-p_{1},P)}(a,b),{\rm i}\alpha(\overline{f_{v}})_{(d-p_{2},P)}(a,b)) \rangle\right)\] \[+{\rm Re}\left(\langle(p_{1}\bar{a},p_{2}\bar{b}),({\rm i} \alpha\overline{f_{\bar{u}}}_{(d-p_{1},P)}(a,b),{\rm i}\alpha(\overline{f_{ \bar{v}}})_{(d-p_{2},P)}(a,b))\rangle\right)\] \[= \,{\rm Re}\left(\langle(p_{1}a,p_{2}b),({\rm i}\alpha(\overline{ f_{u}})_{(d-p_{1},P)}(a,b),-{\rm i}\bar{\alpha}(f_{\bar{v}})_{(d-p_{2},P)}(a,b)) \rangle\right)\] \[= \,{\rm Re}\left(\langle(p_{1}a,p_{2}b),({\rm i}\alpha(\overline{ f_{u}})_{(d-p_{1},P)}(a,b)-{\rm i}\bar{\alpha}(f_{\bar{u}})_{(d-p_{1},P)}(a,b),\right.\] \[\left.{\rm i}\alpha(\overline{f_{v}})_{(d-p_{2},P)}(a,b)-{\rm i} \bar{\alpha}(f_{\bar{v}})_{(d-p_{2},P)}(a,b))\rangle\right)\] \[= \,{\rm Re}\left(\langle(p_{1}a,p_{2}b),(w_{1}(a,b),w_{2}(a,b)) \rangle\right). \tag{81}\] Suppose that \(w_{1}(a,b)\) and \(w_{2}(a,b)\) are not both equal to \(0\). 
Recall that \(w_{1}(a,b)\) and \(w_{2}(a,b)\) are equal to \(\lambda_{0}a\) and \(\lambda_{0}b\), respectively, if they are non-zero. From this we obtain that \(\langle(p_{1}a,p_{2}b),(w_{1}(a,b),w_{2}(a,b))\rangle\) is equal to \(\lambda_{0}|a|^{2}p_{1}\), \(\lambda_{0}|b|^{2}p_{2}\) or \(\lambda_{0}(|a|^{2}p_{1}+|b|^{2}p_{2})\), depending on which of \(w_{1}(a,b)\) and \(w_{2}(a,b)\) vanishes. In either case, we obtain a non-zero real number if \(w_{1}(a,b)\neq 0\) or \(w_{2}(a,b)\neq 0\), i.e., \[{\rm Re}\left(\langle(p_{1}a,p_{2}b),(w_{1}(a,b),w_{2}(a,b))\rangle\right)\neq 0, \tag{82}\] which implies that \(d=q\) and \[\langle(p_{1}a,p_{2}b),\big{(}(\overline{f_{u}})_{(d-p_{1},P)}(a, b),(\overline{f_{v}})_{(d-p_{2},P)}(a,b))\rangle\] \[+\langle(p_{1}\bar{a},p_{2}\bar{b}),(\overline{f_{\bar{u}}}_{(d-p _{1},P)}(a,b),(\overline{f_{\bar{v}}})_{(d-p_{2},P)}(a,b)\Big{)}\rangle=q\alpha. \tag{83}\] Then Eq. (80) implies that \[\langle(p_{1}a,p_{2}b),\big{(}{\rm i}\alpha(\overline{f_{u}})_{ (d-p_{1},P)}(a,b),{\rm i}\alpha(\overline{f_{v}})_{(d-p_{2},P)}(a,b))\rangle\] \[+\langle(p_{1}\bar{a},p_{2}\bar{b}),({\rm i}\alpha\overline{f_{ \bar{u}}}_{(d-p_{1},P)}(a,b),{\rm i}\alpha(\overline{f_{\bar{v}}})_{(d-p_{2},P)}(a,b )\Big{)}\rangle=-{\rm i}q|\alpha|^{2}. \tag{84}\] Taking the real part on both sides, we get an obvious contradiction from Eqs. (81) and (82): \[0={\rm Re}\left(\langle(p_{1}a,p_{2}b),(w_{1}(a,b),w_{2}(a,b))\rangle\right)\neq 0. \tag{85}\] This proves the claim that \(w_{1}(a,b)\) and \(w_{2}(a,b)\) are both \(0\). If \(k_{1}\geq\frac{p_{1}}{p_{2}}\geq k_{N}\), then \(\Delta(P)\) is not an extreme vertex. Then by Lemma 5.2\((a,b)\in(\mathbb{C}^{*})^{2}\) is a critical point of \(f_{P}\), contradicting SIND-(ii). If \(\frac{p_{1}}{p_{2}}>k_{1}\) and \(f\) is not \(v\)-convenient, then \(f_{P}=f_{\Delta_{1}}\) is SND by Lemma 5.1. Again, \((a,b)\in(\mathbb{C}^{*})^{2}\) is a critical point of \(f_{P}\), which is a contradiction to \(f_{P}\) being strongly non-degenerate. The case of \(\frac{p_{1}}{p_{2}}<k_{N}\) and \(f\) not \(u\)-convenient is analogous. If \(\frac{p_{1}}{p_{2}}>k_{1}\), \(f\) is \(v\)-convenient and \((1,n)\notin\Delta(P_{1})\cap\operatorname{supp}(f)\) for all \(n\in\mathbb{N}_{0}\), then by Lemma 5.4\((0,b)\) is a critical point of \(f_{P_{1}}\), contradicting SIND-(i). The analogous statement holds if \(\frac{p_{1}}{p_{2}}<k_{N}\), \(f\) is \(u\)-convenient and \((n,1)\notin\Delta(P_{N})\cap\operatorname{supp}(f)\) for all \(n\in\mathbb{N}_{0}\). We are left with one last case (within Case 1), which is that \(\frac{p_{1}}{p_{2}}>k_{1}\), \(f\) is \(v\)-convenient and there is an \(n\in\mathbb{N}_{0}\) with \((1,n)\in\Delta(P_{1})\cap\operatorname{supp}(f)\) (and the analogous case where \(\frac{p_{1}}{p_{2}}<k_{N}\), \(f\) is \(u\)-convenient and \((n,1)\in\Delta(P_{N})\cap\operatorname{supp}(f)\) for some \(n\in\mathbb{N}_{0}\)). Since \(\frac{p_{1}}{p_{2}}>k_{1}\), we know that \(f_{P}=f_{\Delta_{1}}\) and it depends on \(v\) or \(\bar{v}\), so that \(\widetilde{w_{2}}=w_{2}\) and therefore \(\widetilde{w_{2}}(a,b)=w_{2}(a,b)=0\) by Eqs. (81)-(85). Now if \(\widetilde{w_{1}}(a,b)\) is \(0\) as well, then Lemma 5.6 implies that \((0,b)\) is a critical point of \(f_{P_{1}}\), contradicting SIND-(i). Thus \(\widetilde{w_{1}}(a,b)\neq 0\). Note that in this case \(\widetilde{w_{1}}(a,b)\) is the coefficient of \(\tau^{q+d_{1}}\) in \(\operatorname{i}(f\overline{f_{u}}-\bar{f}f_{\bar{u}})(z(\tau))\) and by Lemma 5.8 it is the lowest order coefficient. 
Therefore, Assumption (ii) implies that \(q+d_{1}=s+p_{1}\). On the other hand, \(\widetilde{w_{2}}(a,b)\), the coefficient of \(\tau^{q+d_{2}}\) of the second coordinate of the left-hand side in Assumption (ii), vanishes. This means (again by Lemma 5.8) that the lowest order of \(\tau\) in the second coordinate of the left-hand side, and hence of the right-hand side, is strictly greater than \(q+d_{2}\). Since the lowest orders of the right-hand side are \(s+p_{1}\) and \(s+p_{2}\), we obtain \[q+d_{1} =s+p_{1}\] \[q+d_{2} <s+p_{2}. \tag{86}\] Now note that if there is an \(n\in\mathbb{N}_{0}\) with \((1,n)\in\Delta(P_{1})\cap\operatorname{supp}(f)\), then \(k_{1}\geq 1\). Otherwise, the intersection of \(\Delta(P_{1})\) with the vertical axis would not be an integer lattice point. Since \(f\) is \(v\)-convenient, there is an intersection between \(\Delta(P_{1})\) and the vertical axis. Denote this intersection point by \((0,n_{0})\) with \(n_{0}\in\mathbb{N}\). Then \(n=n_{0}-k_{1}\) and since \(k_{1}\geq 1\), we have \(n_{0}-n\geq 1\). Note that \(d_{2}=(n_{0}-1)p_{2}\) and \(d_{1}=np_{2}\). It follows from Eq. (86) that \[q+np_{2}=s+p_{1}, \tag{87}\] \[q+(n_{0}-1)p_{2}<s+p_{2} \tag{88}\] and therefore \[q+(n_{0}-1)p_{2} <q+np_{2}-p_{1}+p_{2}\] \[\implies(n_{0}-n-1)p_{2} <p_{2}-p_{1}\] \[\implies 0\leq n_{0}-n-1 <1-\frac{p_{1}}{p_{2}}<1-k_{1}\leq 0, \tag{89}\] which is a contradiction and concludes our discussion of Case 1. Case 2: \(u(\tau)\equiv 0\) or \(v(\tau)\equiv 0\). We discuss the case of \(u(\tau)\equiv 0\); the case of \(v(\tau)\equiv 0\) is analogous. Since \(u(\tau)\) and \(v(\tau)\) cannot both be constant \(0\), we have \(v(\tau)\not\equiv 0\). It is thus given by a Taylor series \(v(\tau)=b\tau^{p_{2}}+h.o.t.\) with \(b\in\mathbb{C}^{*}\). We may now pick any positive weight vector \(P\) of the form \(P=(p_{1},p_{2})\) with \(\frac{p_{1}}{p_{2}}>k_{1}\), that is, we are free to choose \(p_{1}\) sufficiently large, while \(p_{2}\) is determined by the Taylor series of \(v(\tau)\). Since \(f(z(\tau))\neq 0\) for \(\tau>0\) by Assumption (i) and \(u(\tau)\equiv 0\), it follows that \(f\) is \(v\)-convenient. Otherwise, \(f(0,v)=0\) for all \(v\in\mathbb{C}\), contradicting Assumption (i). This implies that, as in Case 1, \(\widetilde{w_{1}}(0,b)\) and \(\widetilde{w_{2}}(0,b)\) cannot both be \(0\). Otherwise, \((0,b)\) would be a critical point of \(f_{P_{1}}\) by Lemma 5.6, contradicting SIND-(i). Since \(\widetilde{w_{1}}(0,b)\) is the coefficient of \(\tau^{q+d_{1}}\) in the first coordinate of the left-hand side of the equation in Assumption (ii), and the first coordinate of the right-hand side is constant \(0\), it must vanish. Since \(\widetilde{w_{1}}(0,b)=0\), we must therefore have \(\widetilde{w_{2}}(0,b)\neq 0\). Since \(\frac{p_{1}}{p_{2}}>k_{1}\), \(f_{P}=f_{\Delta_{1}}\) and it depends on \(v\) or \(\bar{v}\), which implies that \(\widetilde{w_{2}}=w_{2}\) and thus \(w_{2}(0,b)\neq 0\). But then the same calculation as in Case 1 results in \[0=\operatorname{Re}\left(\langle(0,p_{2}b),(w_{1}(0,b),w_{2}(0,b))\rangle\right)=\lambda_{0}|b|^{2}p_{2}\neq 0. \tag{90}\] We thus obtain a contradiction in both cases, which proves that \(f\) satisfies the strong Milnor condition. Recall from Corollary 3.6 that for a radially weighted homogeneous mixed polynomial \(f\), the presence of a weakly isolated singularity and the properties PND and IND are all equivalent, and also in the strong sense, i.e., an isolated singularity, SPND and SIND are all equivalent. 
By Theorem 1.4 and Corollary 3.6, a radially weighted homogeneous mixed polynomial \(f\) with isolated singularity satisfies \(\Sigma_{\arg(f)}=M_{\arg(f)}=\emptyset\). (Note that for radially weighted homogeneous polynomials \(M_{\arg(f)}=\emptyset\) and \(\Sigma_{\arg(f)}=\emptyset\) if and only if \(U\cap M_{\arg(f)}=\emptyset\) and \(U\cap\Sigma_{\arg(f)}=\emptyset\), respectively, for some open neighbourhood \(U\) of the origin.) It is known that if \(f\) is also semiholomorphic, then \(\Sigma_{f}\setminus V_{f}=\Sigma_{\arg(f)}\). This implies that for semiholomorphic radially weighted homogeneous polynomials, under the assumption of a weakly isolated singularity, the strong Milnor condition is equivalent to the existence of an isolated singularity at the origin. We will now see that this equivalence also holds for all radially weighted homogeneous mixed polynomials, not only semiholomorphic ones. **Proposition 5.9**.: _Let \(f\) be a radially weighted homogeneous mixed polynomial with weakly isolated singularity. Then \(M_{\arg f}=\Sigma_{\arg f}=\Sigma_{f}\setminus\{0\}\). Moreover, \(f\) satisfies the strong Milnor condition if and only if \(f\) has an isolated singularity at the origin._ Proof.: From Theorem 1.2 we have that \(M_{\arg f}=\Sigma_{\arg f}\). The fact that \(f\) has a weakly isolated singularity implies that \(\Sigma_{f}\setminus V_{f}=\Sigma_{f}\setminus\{0\}\). Thus it suffices to prove that \(\Sigma_{f}\setminus\{0\}=\Sigma_{\arg f}\). From a direct calculation using the definitions of both sets we have that \(\Sigma_{\arg f}\subseteq\Sigma_{f}\). Consider the real mapping \((|f|,\arg f)\), and let \(p=(u_{*},v_{*})\) be a regular point of \(\arg f\). Then there exist three linearly independent directions \(w_{1},w_{2},w_{3}\in T_{p}\mathbb{R}^{4}\) such that \(\frac{\partial\arg f}{\partial w_{i}}(p)=0,\ i=1,2,3\), and some direction \(w_{4}\) such that \(\frac{\partial\arg f}{\partial w_{4}}(p)\neq 0\). On the other hand, note that for \(T=(p_{1}\mathrm{Re}(u),p_{1}\mathrm{Im}(u),p_{2}\mathrm{Re}(v),p_{2}\mathrm{Im}(v))\in\mathbb{R}^{4}\), where \(P=(p_{1},p_{2})\) is the radial weight type of \(f\), we have by the radial action that \(T\cdot\nabla|f|(p)\neq 0\) and also that \(T\cdot\nabla\arg f(p)=0\). It follows that \(T\), interpreted as an element of \(T_{p}\mathbb{R}^{4}\), \(p=(\operatorname{Re}(u),\operatorname{Im}(u),\operatorname{Re}(v),\operatorname{Im}(v))\), is in the span of \(w_{1},w_{2}\) and \(w_{3}\), and we may (after redefining \(w_{1}\) and \(w_{2}\)) assume that \(w_{1},w_{2}\) and \(T\) span the tangent space of the level set of \(\arg(f)\) at \(p\). Therefore, the real Jacobian in the basis \((w_{1},w_{2},T,w_{4})\) is \[Jf(p)=\begin{pmatrix}\frac{\partial|f|}{\partial w_{1}}(p)&\frac{\partial|f|}{\partial w_{2}}(p)&\frac{\partial|f|}{\partial T}(p)\neq 0&\frac{\partial|f|}{\partial w_{4}}(p)\\ \\ 0&0&\frac{\partial\arg f}{\partial T}(p)=0&\frac{\partial\arg f}{\partial w_{4}}(p)\neq 0\end{pmatrix} \tag{91}\] Thus, \(p\) is a regular point of \(f\), which implies \(\Sigma_{f}\setminus V_{f}\subseteq\Sigma_{\arg f}\). Hence, under the assumption that \(f\) has a weakly isolated singularity, so that \(\Sigma_{f}\cap V_{f}=\{0\}\), we have \(M_{\arg f}=\emptyset\) if and only if \(\Sigma_{f}=\{0\}\). Since the strong Milnor condition is equivalent to \(M_{\arg f}=\emptyset\) in the case of a weakly isolated singularity, we have proved the proposition. 
A different proof of \(M_{\arg f}=\Sigma_{\arg f}=\Sigma_{f}\setminus\{0\}\) for any radially weighted homogeneous mixed polynomial \(f\) can be found in [8], where the author uses a different technique. Together with Corollary 3.6, the proposition proves Theorem 1.5.
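To make the radial-degree bookkeeping of Eq. (61) and Lemma 5.8 concrete, here is a minimal numerical illustration (plain Python; the mixed polynomial \(f(u,v)=u\bar{u}+v^{3}\), the weight vector \(P=(1,1)\) and the coefficients \(a,b\) below are illustrative choices, not taken from the text above). The two monomials have radial degrees \(2\) and \(3\), so \(d(P;f)=2\) and \(f_{P}=u\bar{u}\); since \(f_{P}(a,b)=|a|^{2}\neq 0\), the lowest order of \(f(z(\tau))\) along \(z(\tau)=(a\tau,b\tau)\) is exactly \(d(P;f)\), which the estimated log-log slopes below confirm.

```python
import math

# Mixed polynomial f(u, v) = u*conj(u) + v**3 with weight vector P = (1, 1):
# the radial degrees of the monomials are 1*(1+1) = 2 and 1*3 = 3, so d(P; f) = 2.
def f(u, v):
    return u * u.conjugate() + v ** 3

a, b = 1 + 2j, 0.5 - 1j   # illustrative nonzero leading coefficients of the curve
p1, p2 = 1, 1             # the weight vector P

taus = (1e-3, 1e-4, 1e-5)
vals = [abs(f(a * t ** p1, b * t ** p2)) for t in taus]
# Successive log-log slopes estimate the lowest order d_z(f) of tau in f(z(tau)).
slopes = [(math.log(vals[i + 1]) - math.log(vals[i]))
          / (math.log(taus[i + 1]) - math.log(taus[i])) for i in range(2)]
print(slopes)  # both ~ 2.0 = d(P; f), consistent with Lemma 5.8
```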
2302.01723
A phase transition in block-weighted random maps
We consider the model of random planar maps of size $n$ biased by a weight $u>0$ per $2$-connected block, and the closely related model of random planar quadrangulations of size $n$ biased by a weight $u>0$ per simple component. We exhibit a phase transition at the critical value $u_C=9/5$. If $u<u_C$, a condensation phenomenon occurs: the largest block is of size $\Theta(n)$. Moreover, for quadrangulations we show that the diameter is of order $n^{1/4}$, and the scaling limit is the Brownian sphere. When $u > u_C$, the largest block is of size $\Theta(\log(n))$, the scaling order for distances is $n^{1/2}$, and the scaling limit is the Brownian tree. Finally, for $u=u_C$, the largest block is of size $\Theta(n^{2/3})$, the scaling order for distances is $n^{1/3}$, and the scaling limit is the stable tree of parameter $3/2$.
William Fleurat, Zéphyr Salvy
2023-02-03T13:19:51Z
http://arxiv.org/abs/2302.01723v4
# A phase transition in block-weighted random maps ###### Abstract We consider the model of random planar maps of size \(n\) biased by a weight \(u>0\) per \(2\)-connected block, and the closely related model of random planar quadrangulations of size \(n\) biased by a weight \(u>0\) per simple component. We exhibit a phase transition at the critical value \(u_{C}=9/5\). If \(u<u_{C}\), a condensation phenomenon occurs: the largest block is of size \(\Theta(n)\). Moreover, for quadrangulations we show that the diameter is of order \(n^{1/4}\), and the scaling limit is the Brownian sphere. When \(u>u_{C}\), the largest block is of size \(\Theta(\log(n))\), the scaling order for distances is \(n^{1/2}\), and the scaling limit is the Brownian tree. Finally, for \(u=u_{C}\), the largest block is of size \(\Theta(n^{2/3})\), the scaling order for distances is \(n^{1/3}\), and the scaling limit is the stable tree of parameter \(3/2\). ## 1 Introduction Models of planar maps exhibit a form of _universality_: many "natural" classes of random maps exhibit a similar behaviour when the size grows to infinity. This can be made precise by considering _scaling limits_: if one takes an object \(\mathbf{M}_{n}\) uniformly among all objects of size \(n\) in some class, then, after an appropriate rescaling, the sequence converges towards a certain metric space. This was first proved for uniform quadrangulations by Miermont [14] and Le Gall [13], following a sequence of results on this subject [11, 12, 13, 14]. Since then, these results have been extended to other families of maps: the sequence converges towards the _Brownian sphere_ \(\mathcal{M}_{e}\) (also called Brownian map, see Fig. 1), always with a rescaling by \(cn^{1/4}\) for some model-dependent \(c>0\). The Gromov-Hausdorff topology gives meaning to the convergence of a sequence of maps to a certain limit, by considering them as (isometry classes of) compact metric spaces. In particular, uniform planar maps also converge towards the Brownian sphere [1], as well as other families such as uniform triangulations and uniform \(2q\)-angulations (\(q\geqslant 2\)) [13], uniform simple triangulations and uniform simple quadrangulations [1], bipartite planar maps with a prescribed face-degree sequence [10], \((2q+1)\)-angulations [1] and Eulerian triangulations [11]. On the other hand, "degenerate" classes of maps that "look like" trees also exhibit similar behaviours. In particular, up to rescaling by \(cn^{1/2}\), there is a convergence to the _Brownian tree_ \(\mathcal{T}^{(2)}\) (see Fig. 2), the scaling limit of uniform critical Galton-Watson trees [1]. This is the case for classes of maps with a tree-decomposition such as stack triangulations [1]; classes of maps with some particular boundary conditions, such as quadrangulations of a polygon [1], outerplanar maps [11]; or, more generally, for "subcritical" classes [14] (see [15] for the case of graphs). 
Figure 1: Approximation of the Brownian sphere. 
Models interpolating between the Brownian tree and the circle can be obtained by using _looptrees_ [13]. Curien and Kortchemski considered the boundary of large percolation clusters in the _uniform infinite planar triangulation_ (which is the local limit of large triangulations), where each vertex is coloured (independently) white with probability \(a\in(0,1)\) and black otherwise. 
They showed that if \(a\in(0,1/2)\), the scaling limit is the Brownian tree, if \(a\in(1/2,1)\) it is the unit circle, and if \(a=1/2\) it is the _stable looptree of parameter_ \(3/2\) [13], which corresponds to the _stable tree of parameter_ \(3/2\) (see Fig. 3) where each branching point is replaced by a circle. Richier [13] also showed that the boundary of critical Boltzmann planar maps with degrees of faces in the domain of attraction of a stable distribution with parameter \(\alpha\in(1,2]\) exhibits a similar phase transition: if \(\alpha\in(1,3/2)\), the scaling limit is the stable looptree of parameter \((\alpha-1/2)^{-1}\), and, with Kortchemski, Richier showed that it is the circle of unit length if \(\alpha\in(3/2,2]\) and conjectured that this holds also for \(\alpha=3/2\) [13]. In both cases, the parameter of the model allows the number of cut vertices appearing on the boundary to be adjusted, thus changing from a "round" to a "tree" phase. Some natural models also interpolate between the Brownian sphere and the Brownian tree. For example, consider random quadrangulations with \(n\) faces and a boundary of length \(\ell\), where \(\ell/\sqrt{n}\to\sigma\). When \(\sigma=0\), the scaling limit is the Brownian sphere, when \(\sigma=\infty\) it is the Brownian tree, and for all \(\sigma\in(0,\infty)\) it is the _Brownian disk_ with boundary length \(\sigma\) [14]. Model. The purpose of this paper is to propose yet another model interpolating between the Brownian sphere and the Brownian tree, but where the transition does not appear through the boundary. It relies on a parameter tuning the density of separating elements. In this model, a map \(\mathfrak{m}\) is sampled with a probability which depends on its number \(b(\mathfrak{m})\) of maximal \(2\)-connected components, or "blocks", for which a precise definition will be given later in Section 2. 
Figure 2: Approximation of the Brownian tree by a binary tree of size approximately \(70\ 000\). 
Figure 3: Approximation of the stable tree \(3/2\) by a tree of size approximately \(150\ 000\). 
In fact, we will consider two probability distributions on maps, both indexed by a parameter \(u>0\). The first one is a fixed-size model: for any \(n\in\mathbb{Z}_{\geqslant 0}\), we define \[\mathbb{P}_{n,u}\left(\mathfrak{m}\right)=\frac{u^{b(\mathfrak{m})}}{[z^{n}]M(z,u)}\quad\text{for any }\mathfrak{m}\in\mathcal{M}_{n}, \tag{1}\] where \(\mathcal{M}_{n}\) is the set of maps with \(n\) edges and \(M(z,u)=\sum_{\mathfrak{m}\in\mathcal{M}}u^{b(\mathfrak{m})}z^{|\mathfrak{m}|}\), with \(|\mathfrak{m}|\) the number of edges of \(\mathfrak{m}\). The second one is a Boltzmann-type distribution, which samples maps of random size: writing \(\rho(u)\) for the radius of convergence of \(z\mapsto M(z,u)\), we define \[\mathbb{P}_{u}\left(\mathfrak{m}\right)=\frac{u^{b(\mathfrak{m})}}{M(\rho(u),u)}\rho(u)^{|\mathfrak{m}|}\quad\text{for any }\mathfrak{m}\in\mathcal{M}. \tag{2}\] In this paper, blocks will be either maximal \(2\)-connected components of maps, or maximal simple components of quadrangulations. 
Indeed, both models have the same underlying structure, so one study gives results for both (see Sections 2.4 and 2.5), except for some of the scaling limit results, where some convergence results for \(2\)-connected maps are missing. However, our approach could be generalised to many other models with an underlying tree structure (see Table 3), such as the ones described in [10]. In particular, the case \(u=1\) corresponds to sampling a uniform map and \(u\to 0\) to sampling a uniform block. Block decompositions have already been used in the context of scaling limits, and some joint convergences are known: a quadrangulation, its largest \(2\)-connected block, and its largest simple block jointly converge to the same Brownian sphere [1]. The model with a weight per \(2\)-connected block was already analysed from a combinatorial point of view by Bonzom [14, §8] with physical applications in mind (see [13] for a thorough discussion). The so-called _quadric_ model studied in his work can be specialized to our model. Bonzom obtains rational parametrisations for the generating series, and exhibits the possible singular behaviours, which suggest the existence of three different regimes: a "map behaviour", a "tree behaviour", and, in between, a "proliferation of baby universes". Since his focus is much broader, he does not go into details to study this particular model from a probabilistic point of view, and this is the main topic of the present article. For \(u=1\), which corresponds to sampling maps uniformly, this model has also been studied from the point of view of block decomposition in [10] and [1]. Results. Our results are summarized in Table 1. In Section 4, we show that, with high probability, when \(u<9/5\), there is condensation with one block of size \(\Theta(n)\) and all others of size \(O(n^{2/3})\) (Theorem 2); when \(u>9/5\), the largest block has size \(\Theta(\log(n))\) (Theorem 3); and when \(u=9/5\), the largest block is of size \(\Theta(n^{2/3})\) (Theorem 4). In Section 5, we give a unified proof of the convergence towards \(\mathcal{T}^{(2)}\), after renormalising distances by \(n^{1/2}\), in the supercritical case \(u>9/5\); and towards \(\mathcal{T}^{(3/2)}\), after renormalising distances by \(n^{1/3}\), in the critical case \(u=u_{C}\) (Theorem 5). For \(u>9/5\), we retrieve a previous result by Stufler for more general weighted models [11]. All these results hold both for maps and their \(2\)-connected cores, and for quadrangulations and their simple cores. Finally, when \(u<9/5\), we show in Theorem 6 that quadrangulations converge towards the Brownian sphere when renormalising distances by \(n^{1/4}\). In the case of quadrangulations, these results are consistent with the existing literature for the case \(u=1\) [13, 14, 15], as well as when \(u\to 0\) [1]. We rely crucially on the convergence of uniform _simple_ quadrangulations with the same normalisation, which is proven in [1], and recalled in Proposition 22 below. A similar convergence result for uniform \(2\)-connected maps would be needed in order to prove a version of Theorem 6 for maps, see the discussion after the statement of Proposition 22. Such a convergence is expected to hold, and is hinted at, for instance, by Lehéricy's results [14], which show that graph distances on a uniform map of size \(n\) and on its quadrangulation via Tutte's bijection behave similarly when \(n\to\infty\). Section 2 and Theorem 1 introduce tools to prove these theorems. 
We show that maps and quadrangulations can be decomposed into blocks with an underlying tree structure, and that the law of such trees can be described by a Galton-Watson model (as in several papers cited above). From there, we exhibit in Section 3 a phase diagram going from a condensation phenomenon (\(u<9/5\)) to a critical "generic" regime (\(u>9/5\)), going through a "non-generic" critical point (\(u=9/5\)). ## 2 Tree decomposition of maps ### Maps and their enumeration A _planar map_ \(\mathfrak{m}\) is the proper embedding into the two-dimensional sphere of a connected planar finite multigraph, considered up to homeomorphisms. Let \(V(\mathfrak{m})\) be the set of its vertices, \(E(\mathfrak{m})\) the set of its edges and \(F(\mathfrak{m})\) the set of its faces. The size of a planar map \(\mathfrak{m}\) -- denoted by \(|\mathfrak{m}|\) -- is defined as its number of edges. A _half-edge_ \(e\) is an oriented edge from \(u\) to \(v\) (with possibly \(v=u\)) and is represented as half of an edge starting from \(u\). Its _starting vertex_ \(u\) is denoted by \(e^{-}\) and its _end vertex_ \(v\) is denoted by \(e^{+}\). Let \(\overrightarrow{E}(\mathfrak{m})\) be the set of half-edges of \(\mathfrak{m}\). A _corner_ is the angular sector between two consecutive edges in counterclockwise order around a vertex. Each half-edge is canonically associated to the corner that follows it in counterclockwise order around its starting vertex. The _degree_ of a face is the number of corners incident to it. All the maps considered in this paper are _rooted_, meaning that one of their half-edges (or one of their corners) is distinguished. The set of rooted planar maps -- simply called maps in the following -- is denoted by \(\mathcal{M}\). For \(n\) in \(\mathbb{Z}_{\geqslant 0}\), let \(m_{n}\) be the number of maps of size \(n\) and \(M(z)=\sum_{n\in\mathbb{Z}_{\geqslant 0}}m_{n}z^{n}\) be the associated generating series. By convention, we set \(m_{0}=1\), which corresponds to the _vertex map_: the map reduced to a single vertex. Similarly, define the _edge map_ as the map reduced to a single edge between two vertices. Rooting simplifies the study by avoiding symmetry problems; however, we expect our results to remain true in the non-rooted setting due to the general results of [11]. 
\begin{table} \begin{tabular}{c|c c c} & Largest block & Scaling & Scaling limit \\ \hline \(u<9/5\) & \(\Theta(n)\) & \(n^{1/4}\) & Brownian sphere \(\mathcal{M}_{e}\) \\ \(u=9/5\) & \(\Theta(n^{2/3})\) & \(n^{1/3}\) & Stable tree \(\mathcal{T}^{(3/2)}\) \\ \(u>9/5\) & \(\Theta(\log(n))\) & \(n^{1/2}\) & Brownian tree \(\mathcal{T}^{(2)}\) \\ \end{tabular} \end{table} Table 1: Behaviour of the model when \(u\) varies. 
The enumerative study of rooted planar maps was initiated by Tutte in the 60s. In particular, he obtained the following result: **Proposition 1** ([14]).: _The number \(m_{n}\) of maps of size \(n\) is equal to_ \[m_{n}=\frac{2(2n)!3^{n}}{(n+2)!n!}\sim\frac{2}{\sqrt{\pi}}12^{n}n^{-5/2},\quad n \to\infty. \tag{3}\] _This implies in particular that \(\rho_{M}=1/12\) and \(M(\rho_{M})<\infty\), where \(\rho_{M}\) denotes the radius of convergence of \(M(z)\)._ ### 2-connected maps and block decomposition **Definition 1**.: _Let \(\mathfrak{m}\in\mathcal{M}\) and \(v\in V(\mathfrak{m})\). Then, \(v\) is said to be a cut vertex if \(\mathfrak{m}\setminus\{v\}\) is not connected. A map \(\mathfrak{m}\) is said to be separable if it has at least one cut vertex. It is said to be 2-connected otherwise, see Fig. 
5._ For \(n\in\mathbb{Z}_{\geqslant 0}\), we write \(\mathcal{B}_{n}\) for the set of 2-connected maps of size \(n\), and \(b_{n}=|\mathcal{B}_{n}|\). From Fig. 5, we see that \(b_{0}=1\), \(b_{1}=2\) and \(b_{2}=1\). Notice in particular that the only 2-connected map with a loop is the map reduced to a loop-edge. **Definition 2**.: _A block of a planar map \(\mathfrak{m}\) is a maximal 2-connected submap of positive size. The number of blocks of \(\mathfrak{m}\) is denoted by \(b(\mathfrak{m})\)._ In the 60's, Tutte introduced the so-called "block decomposition of maps" [14], which roughly speaking corresponds to cutting the map at all cut vertices, and is illustrated in Fig. 6 (this is known for graphs as well and called _block-cut tree_, see _e.g._ [14]). We describe here this decomposition, drawing inspiration from Addario-Berry's presentation [1, §2]. Let \(\mathfrak{m}\) be a map and let \(\mathfrak{b}\) be the block containing its root. For each half-edge \(e\) of \(\mathfrak{b}\), we define the _pendant submap_ \(\mathfrak{m}_{e}\) of \(e\) as the maximal submap of \(\mathfrak{m}\) disjoint from \(\mathfrak{b}\) except at \(e^{-}\) and located to the left of \(e\) (it is possibly reduced to the vertex map). If \(\mathfrak{m}_{e}\) has at least one edge, we root it at the half-edge of \(\mathfrak{m}\) following \(e\) in counterclockwise order around \(e^{-}\) (see Fig. 7). From \(\mathfrak{b}\) and the \(2|E(\mathfrak{b})|\) pendant submaps \(\{\mathfrak{m}_{e},e\in\overrightarrow{E}(\mathfrak{b})\}\), it is possible to reconstruct the map \(\mathfrak{m}\): for each \(\mathfrak{m}_{e}\) rooted at the half-edge \(\rho\), insert \(\mathfrak{m}_{e}\) in the corner associated to \(e\) in such a way that \(\rho\) is the first edge after \(e\) in counterclockwise order and merge \(\rho^{-}\) and \(e^{-}\). Thus, a map can be encoded as a block where each edge is decorated by two maps. This decomposition induces an identity of generating series, thanks to the symbolic method [14, Ch. 1]. Letting \(B(y)=\sum_{n\in\mathbb{Z}_{\geqslant 0}}b_{n}y^{n}\), Tutte's block decomposition translates into the following equality of generating series: \[M(z)=B(zM(z)^{2}). \tag{4}\] Thanks to (4) and an explicit expansion for \(M(z)\) obtained in [13], Tutte obtained the following enumerative results for 2-connected maps. **Proposition 2** ([13]).: _The number \(b_{n}\) of 2-connected maps of size \(n\) is_ \[b_{0}=1,\qquad\text{and for }n\geqslant 1,\ b_{n}=\frac{2(3n-3)!}{n!(2n-1)!} \sim\sqrt{\frac{3}{\pi}}\frac{2}{27}\left(\frac{27}{4}\right)^{n}n^{-5/2}, \quad n\to\infty. \tag{5}\] _Moreover, writing \(\rho_{B}\) for the radius of convergence of the series \(B\), we have_ \[\rho_{B}=\frac{4}{27},\qquad B(\rho_{B})=4/3\qquad\text{and}\qquad\rho_{B} \times B^{\prime}(\rho_{B})=\sum_{n\in\mathbb{Z}_{\geqslant 0}}nb_{n}\rho_{B}^{ \,n}=4/9. \tag{6}\] In the following, we consider maps enumerated by both their number of edges and their number of blocks. Namely, we consider the following bivariate series: \(M(z,u)=\sum_{\mathfrak{m}\in\mathcal{M}}z^{|\mathfrak{m}|}u^{b(\mathfrak{m})}\) (recall that \(b(\mathfrak{m})\) is the number of blocks of \(\mathfrak{m}\) and \(|\mathfrak{m}|\) is its number of edges). 
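Before refining (4), note that the exact and asymptotic formulas of Propositions 1 and 2 lend themselves to a quick numerical sanity check. The following is a minimal sketch (plain Python, standard library only; the evaluation point \(n=10^{6}\) and the truncation order of the series are arbitrary choices):

```python
from math import lgamma, log, exp, sqrt, pi

# Proposition 1: m_n = 2*(2n)!*3^n/((n+2)!*n!) ~ (2/sqrt(pi)) * 12^n * n^(-5/2).
def log_m(n):
    return log(2) + lgamma(2 * n + 1) + n * log(3) - lgamma(n + 3) - lgamma(n + 1)

n = 10 ** 6
print(exp(log_m(n) - n * log(12) + 2.5 * log(n)), 2 / sqrt(pi))  # both ~ 1.12838

# Proposition 2, Eq. (6): partial sums of B(y) and of y*B'(y) at y = rho_B = 4/27,
# using the ratio b_{n+1}/b_n = (3n)(3n-1)(3n-2)/((n+1)*(2n)*(2n+1)).
rho_B = 4 / 27
B_val, deriv = 1.0, 0.0
t = 2 * rho_B  # t = b_n * rho_B**n, starting at n = 1 with b_1 = 2
for n in range(1, 200000):
    B_val += t
    deriv += n * t
    t *= rho_B * (3 * n) * (3 * n - 1) * (3 * n - 2) / ((n + 1) * (2 * n) * (2 * n + 1))
print(B_val, deriv)  # ~ 1.33333 (= 4/3) and ~ 0.444 (-> 4/9; slow, the terms decay like n^(-3/2))
```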
Tutte's decomposition of a map into blocks translates into the following refined version of (4): \[M(z,u)-1=u\left(B(zM(z,u)^{2})-1\right)\qquad\text{i.e.}\qquad M(z,u)=uB(zM(z,u )^{2})+1-u, \tag{7}\] where the term \(1-u\) accounts for the fact that the vertex map has no block by Definition 2 (even if it is 2-connected). For \(u>0\), denote by \(\rho(u)\) the radius of convergence of \(z\mapsto M(z,u)\). Since for \(z\geqslant 0\) and \(u\geqslant 1\) \[M(z,u)\leqslant\sum_{\mathfrak{m}\in\mathcal{M}}z^{|\mathfrak{m}|}u^{| \mathfrak{m}|}=M(uz),\] if \(|uz|<\rho_{M}=1/12\), then \(M(z,u)\) is a converging sum. Hence, for \(u\geqslant 1\), \(\rho(u)\geqslant\frac{1}{12u}>0\). On the other hand, since \(\rho(u)\) is decreasing, for \(u\leqslant 1\) we have \(\rho(u)\geqslant\rho(1)=\rho_{M}=1/12\) (and \(\rho(u)\leqslant\rho(0)=\rho_{B}=4/27\)). In view of the form of equation (7), and in particular of the fact that it is non-linear, it holds that \(M(\rho(u),u)<\infty\). Indeed, since \(B(y)\geqslant 1+2y\) for all \(y\geqslant 0\), we get \(M(z,u)\geqslant 1+2uzM(z,u)^{2}\). This shows that it is impossible that \(M(z,u)\xrightarrow[z\to\rho(u)^{-}]{}+\infty\). ### Block tree of a map and its applications Tutte's block decomposition can also be applied recursively, _i.e._ we consider first the root block and then apply the block decomposition to each of the pendant submaps. By doing so, for any map \(\mathfrak{m}\) we can obtain a decomposition tree \(T_{\mathfrak{m}}\), which was first explicitly described by Addario-Berry in [1, §2]. More precisely: 1. Let \(\mathfrak{b}=(\mathfrak{b},\rho)\) be the maximal \(2\)-connected submap containing the root \(\rho\). The root \(v_{\rho}\) of \(T_{\mathfrak{m}}\) represents \(\mathfrak{b}\), and has \(2|E(\mathfrak{b})|\) children (in particular, if \(\mathfrak{b}\) is of size \(0\), \(v_{\rho}\) is a leaf); 2. List the half-edges of \(\mathfrak{b}\) as \(a_{1},\ldots,a_{2|E(\mathfrak{b})|}\) according to an arbitrarily fixed deterministic order on half-edges (_e.g._ the order in a left-to-right depth first search). Let \(\mathfrak{m}_{i}\) be the pendant submap in the corner corresponding to the half-edge \(a_{i}\) in \(\mathfrak{b}\). The \(i\)-th pendant subtree of \(T_{\mathfrak{m}}\) is the tree \(T_{\mathfrak{m}_{i}}\) encoding \(\mathfrak{m}_{i}\). An example of such a correspondence is described in Fig. 8. This decomposition has three essential properties, that follow directly from its definition, and that we summarize in the following proposition. **Proposition 3** ([14, 15]).: * _The edges of_ \(T_{\mathfrak{m}}\) _correspond to the half-edges of_ \(\mathfrak{m}\)_;_ * _The internal nodes of_ \(T_{\mathfrak{m}}\) _correspond to the blocks of_ \(\mathfrak{m}\)_: if an internal node_ \(v\) _of_ \(T_{\mathfrak{m}}\) _has_ \(r\) _children, then the corresponding block_ \(\mathfrak{b}_{v}\) _of_ \(\mathfrak{m}\) _has size_ \(r/2\)_;_ * _The map_ \(\mathfrak{m}\) _is entirely determined by_ \((T_{\mathfrak{m}},(\mathfrak{b}_{v},v\in T_{\mathfrak{m}}))\) _where_ \(\mathfrak{b}_{v}\) _is the block of_ \(\mathfrak{m}\) _represented by_ \(v\) _in_ \(T_{\mathfrak{m}}\) _if_ \(v\) _is an internal node; else, by convention,_ \(\mathfrak{b}_{v}\) _is the vertex map._ 
Figure 8: Block tree corresponding to a planar map. 
By abuse of language, we might refer to \((\mathfrak{b}_{v},v\in T_{\mathfrak{m}})\) as the family of blocks (even though, strictly speaking, blocks have positive size while \(\mathfrak{b}_{v}\) may be the vertex map). 
A direct consequence of this proposition is that to study the block sizes of a map \(\mathfrak{m}\), it is sufficient to study the degree distribution of \(T_{\mathfrak{m}}\). This is precisely the strategy developed by Addario-Berry in [1]. It allows him to study the block sizes of a uniform random map \(\mathbf{M}_{n}\) of size \(n\), by describing \(T_{\mathbf{M}_{n}}\) as a Galton-Watson tree with an explicit degree distribution conditioned to have \(2n\) edges, and one of our contributions is to extend his result to our model. ### Block tree of a quadrangulation We describe in this section how a quadrangulation can be decomposed into maximal simple quadrangular components, in the same way that a map can be decomposed into maximal \(2\)-connected components. **Definition 3**.: _A quadrangulation is a map with all faces of degree \(4\)._ Planar quadrangulations are _bipartite_, _i.e._ their vertices can be properly bicolored in black and white. In the following, we always assume that they are endowed with the unique such coloring having a black root vertex. Although quadrangulations are maps, when an object is explicitly defined as a quadrangulation, its size will be its number of faces. Thus, a quadrangulation of size \(n\) has \(2n\) edges. **Definition 4**.: _A quadrangulation of the \(2\)-gon is a map where the root face -- the face containing the corner associated to the root -- has degree \(2\) and all other faces have degree \(4\)._ A quadrangulation of the \(2\)-gon with at least two faces can be identified with a quadrangulation of the sphere by simply gluing together both edges of the root face. **Definition 5**.: _A quadrangulation is called simple if it has neither loops nor multiple edges._ **Definition 6**.: _Let \(e_{1}e_{2}\) be a \(2\)-cycle of a quadrangulation \(\mathfrak{q}\); its interior is the submap of \(\mathfrak{q}\) between \(e_{1}\) and \(e_{2}\) (both included) which does not contain the root corner of \(\mathfrak{q}\). A \(2\)-cycle is maximal when it does not belong to the interior of another \(2\)-cycle._ **Definition 7**.: _Let \(e_{1}e_{2}\) be a maximal \(2\)-cycle of a quadrangulation \(\mathfrak{q}\); its pendant subquadrangulation is defined as its interior, which is turned into a quadrangulation of the \(2\)-gon by rooting it at the corner incident to the unique black vertex of \(e_{1}e_{2}\)._ _Let \(e\) be a half-edge of a quadrangulation \(\mathfrak{q}\). If \(e\) is oriented from black to white and there exists a half-edge \(f\) such that \(ef\) is a maximal \(2\)-cycle of \(\mathfrak{q}\), then the pendant subquadrangulation of \(e\) is the pendant subquadrangulation of \(ef\). Else, it is the edge map (which is also a quadrangulation of the \(2\)-gon)._ For \(\mathfrak{q}\) a quadrangulation, its _simple core_ \(\mathfrak{q}_{s}\) -- the simple block containing the root -- is obtained by collapsing the interior of every maximal \(2\)-cycle of \(\mathfrak{q}\). As for maps, a decomposition tree \(T_{\mathfrak{q}}^{(q)}\) can be associated to a quadrangulation \(\mathfrak{q}\), by recursively decomposing the pendant subquadrangulations at the simple core, see Fig. 9. _Simple blocks_ are recursively defined as the simple cores appearing in the underlying arborescent decomposition. We then have an exact parallel with the situation of maps and their 2-connected components. 
Given a simple quadrangulation \(\mathfrak{q}_{s}\) and a collection of \(|E(\mathfrak{q}_{s})|=2|\mathfrak{q}_{s}|\) quadrangulations of the 2-gon \(\{\mathfrak{m}_{e},e\in E(\mathfrak{q}_{s})\}\), it is possible to construct a quadrangulation: for each \(\mathfrak{m}_{e}\) with root \(\rho_{e}\), replace \(e\) by \(\mathfrak{m}_{e}\) in such a way that \(\rho_{e}\) has the orientation of \(e\). See Fig. 10 for an illustration. This transformation is invertible. Thus, a quadrangulation can be encoded as a simple quadrangulation where each edge is decorated by one quadrangulation of the 2-gon, _i.e._ each face is decorated by two quadrangulations of the 2-gon: \[Q(z,u)+1=uS(z(Q(z,u)+1)^{2})+1-u, \tag{8}\] where \(Q\) is the generating series for quadrangulations (with a weight \(z\) for faces, and \(u\) for simple blocks) and \(S\) is the generating series for simple quadrangulations (with a weight \(z\) for faces). Note that this equation is isomorphic to (7). 
Figure 9: The image of the map of Fig. 8 via Tutte's bijection, and its block tree. 
Figure 10: Reconstructing a quadrangulation from its simple core and the pendant subquadrangulations. 
This decomposition and the former one presented for general maps are in fact two sides of the same coin. Indeed, they can be related via Tutte's bijection, as we now present: there exists an explicit bijective construction between quadrangulations of size \(n\) and (general) maps of size \(n\). More precisely, for a map \(\mathfrak{m}\) (rooted at \(\rho\)), its image by \(\varphi\), called its _angular map_, can be constructed as follows, see Fig. 11. 1. Add a (white) vertex inside each face of \(\mathfrak{m}\) and draw an edge from this new (white) vertex to each corner around the face (respecting the order of the corners); 2. The half-edge \(e\) created in the corner of \(\rho\) is now the root, oriented from black to white; 3. Remove the original edges. **Proposition 4**.: _For \(n\in\mathbb{Z}_{>0}\), the function \(\varphi\) is a bijection between maps of size \(n\) and quadrangulations of size \(n\). Moreover, it maps bijectively 2-connected maps of size \(n\geqslant 1\) to simple quadrangulations of size \(n\)._ The construction \(\varphi\) is due to Tutte [14, §5] (he defines the notion of derived map, from which the angular map is extracted by deleting one of the 3 classes of vertices, as explained in [13, §7]). The specialization to 2-connected maps is explained _e.g._ in [13]. In particular, it implies that \(S(y)=B(y)\). Moreover, given Equations (7) and (8), this gives \(M(z,u)=Q(z,u)+1\). Finally, when constructing the decomposition tree \(T_{\mathfrak{q}}^{(q)}\), if the deterministic orders used for the half-edges of 2-connected maps and for the edges of simple quadrangulations are consistent via Tutte's bijection, then the decomposition trees of \(\mathfrak{m}\) and of \(\varphi(\mathfrak{m})\) are the same, and for each node \(v\) of the tree, the 2-connected map and the simple quadrangulation at \(v\) are in correspondence by Tutte's bijection; _e.g._ the example of Fig. 8 is consistent with the example of Fig. 9 via Tutte's bijection. This can be rephrased as the following result. **Proposition 5**.: _For all \(\mathfrak{m}\in\mathcal{M}\),_ \[T_{\varphi(\mathfrak{m})}^{(q)}=T_{\mathfrak{m}}\] _and, for all \(v\in T_{\varphi(\mathfrak{m})}^{(q)}\),_ \[\mathfrak{b}_{v}^{(q)}=\varphi(\mathfrak{b}_{v}).\] 
Figure 11: The quadrangulation corresponding to a map via Tutte's bijection. 
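Since (7) and (8) are isomorphic and \(M(z,u)=Q(z,u)+1\), the coefficients of both series can be extracted from a single fixed-point computation on truncated power series: starting from \(M=1\) and iterating \(M\mapsto 1+u\left(B(zM^{2})-1\right)\) fixes one further coefficient per pass, since \(zM^{2}\) has valuation \(1\). The following is a minimal sketch (plain Python; the truncation order \(N\) is an arbitrary choice), which for \(u=1\) recovers Tutte's formula (3):

```python
import math

def coeffs_B(N):
    """b_0, ..., b_N of B(y), from Proposition 2."""
    return [1] + [2 * math.factorial(3 * n - 3) // (math.factorial(n) * math.factorial(2 * n - 1))
                  for n in range(1, N + 1)]

def coeffs_M(u, N):
    """Coefficients of M(z, u) modulo z^(N+1), iterating Eq. (7)."""
    b = coeffs_B(N)
    M = [1] + [0] * N
    for _ in range(N):  # each pass fixes one more coefficient
        M2 = [sum(M[i] * M[k - i] for i in range(k + 1)) for k in range(N + 1)]
        y = [0] + M2[:N]      # y = z*M(z)^2, truncated at order N
        comp = [0] * (N + 1)  # accumulates B(y) - 1 = sum_{j >= 1} b_j * y^j
        ypow = [1] + [0] * N
        for j in range(1, N + 1):
            ypow = [sum(ypow[i] * y[k - i] for i in range(k + 1)) for k in range(N + 1)]
            comp = [c + b[j] * p for c, p in zip(comp, ypow)]
        M = [1] + [u * c for c in comp[1:]]
    return M

N = 8
print(coeffs_M(1, N))  # [1, 2, 9, 54, 378, 2916, 24057, 208494, 1876446]
print([2 * math.factorial(2 * n) * 3 ** n // (math.factorial(n + 2) * math.factorial(n))
       for n in range(N + 1)])  # the same list, by Proposition 1
```

For general \(u>0\), the same iteration yields the coefficients \([z^{n}]M(z,u)\) normalising the fixed-size law (1), and \(Q(z,u)=M(z,u)-1\) gives the quadrangulation series of (8).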
### Probabilistic consequences Recall the model defined in Equations (1) and (2) for general maps. As promised, we now define its analogue on quadrangulations, and show their equivalence. To that end, we set for all \(\mathfrak{m}\in\mathcal{M}_{n}\), and for all \(\mathfrak{q}\in\mathcal{Q}_{n}\), \[\mathbb{P}_{n,u}^{\mathrm{map}}\left(\mathfrak{m}\right)=\frac{u^{b( \mathfrak{m})}}{[z^{n}]M(z,u)}\propto u^{b(\mathfrak{m})}\qquad\text{and} \qquad\mathbb{P}_{n,u}^{\mathrm{quad}}\left(\mathfrak{q}\right)=\frac{u^{b( \mathfrak{q})}}{[z^{n}]Q(z,u)}\propto u^{b(\mathfrak{q})},\] and consider for all \(\mathfrak{m}\in\mathcal{M}\) and \(\mathfrak{q}\in\mathcal{Q}\) the singular Boltzmann laws (remember that, as explained in Section 2.2, \(M(\rho(u),u)=Q(\rho(u),u)<\infty\)) \[\mathbb{P}_{u}^{\mathrm{map}}\left(\mathfrak{m}\right)=\frac{u^{b(\mathfrak{ m})}\rho(u)^{|\mathfrak{m}|}}{M(\rho(u),u)}\qquad\text{and}\qquad\mathbb{P}_{u}^{ \mathrm{quad}}\left(\mathfrak{q}\right)=\frac{u^{b(\mathfrak{q})}\rho(u)^{| \mathfrak{q}|}}{Q(\rho(u),u)},\] then \[\mathbb{P}_{n,u}^{\mathrm{map}}=\mathbb{P}_{u}^{\mathrm{map}}\left(\cdot\mid \mathcal{M}_{n}\right)\qquad\text{and}\qquad\mathbb{P}_{n,u}^{\mathrm{quad}}= \mathbb{P}_{u}^{\mathrm{quad}}\left(\cdot\mid\mathcal{Q}_{n}\right).\] By Proposition 5, one has: **Proposition 6**.: _For all \(\mathfrak{q}\in\mathcal{Q}\) and \(n\in\mathbb{Z}_{\geqslant 0}\),_ \[\mathbb{P}_{n,u}^{\mathrm{quad}}\left(\mathfrak{q}\right)=\mathbb{P}_{n,u}^{ \mathrm{map}}\left(\varphi^{-1}(\mathfrak{q})\right)\qquad\text{and}\qquad \mathbb{P}_{u}^{\mathrm{quad}}\left(\mathfrak{q}\right)=\mathbb{P}_{u}^{ \mathrm{map}}\left(\varphi^{-1}(\mathfrak{q})\right),\] _so, denoting by \(*\) the pushforward, for all \(n\in\mathbb{Z}_{\geqslant 0}\),_ \[\mathbb{P}_{n,u}^{\mathrm{quad}}=\varphi_{*}\mathbb{P}_{n,u}^{\mathrm{map}} \qquad\text{and}\qquad\mathbb{P}_{u}^{\mathrm{quad}}=\varphi_{*}\mathbb{P}_{u }^{\mathrm{map}}.\] ### A word on the probabilistic setting We denote by \(\mathbf{M}:\mathcal{M}\to\mathcal{M}\) the canonical random variable on the space of maps, and let \(\mathbf{Q}=\varphi(\mathbf{M})\). We denote by \(\mathbf{T}\) the block tree associated to \(\mathbf{M}\) (and also to \(\mathbf{Q}\) by Proposition 5). In this way, under \(\mathbb{P}_{u}\) (resp. \(\mathbb{P}_{n,u}\)), \(\mathbf{M}\) has law \(\mathbb{P}_{u}^{\mathrm{map}}\) (resp. \(\mathbb{P}_{n,u}^{\mathrm{map}}\)), and, by Proposition 6, \(\mathbf{Q}\) has law \(\mathbb{P}_{u}^{\mathrm{quad}}\), (resp. \(\mathbb{P}_{n,u}^{\mathrm{quad}}\)). Therefore, we will simply use \(\mathbb{P}_{n,u}\) and \(\mathbb{P}_{u}\) as a shorthand notation for \(\mathbb{P}_{n,u}^{\mathrm{map}}\) and \(\mathbb{P}_{u}^{\mathrm{map}}\). Maximal simple components of quadrangulations will also be called "blocks" because everything that has been said about blocks (in the sense of maximum \(2\)-connected components) can also be said about the maximum simple quadrangular components of quadrangulations; and likewise in everything that follows. As a consequence, every result about the size of the blocks of a map of size \(n\) is valid for blocks of quadrangulations of size \(n\) as well. For \(v\) a vertex of \(\mathbf{T}\), we denote by \(\mathfrak{b}_{v}^{\mathbf{M}}\) (resp. \(\mathfrak{b}_{v}^{\mathbf{Q}}\)) the \(2\)-connected block of \(\mathbf{M}\) (resp. simple block of \(\mathbf{Q}\)) represented by \(v\) in \(\mathbf{T}\). 
By Proposition 5, it holds that \(\mathfrak{b}_{v}^{\mathbf{Q}}=\varphi(\mathfrak{b}_{v}^{\mathbf{M}})\) for all \(v\in\mathbf{T}\), where \(\varphi\) is Tutte's bijection. These random variables will be studied under probability measures \(\mathbb{P}_{u}\) and \((\mathbb{P}_{n,u})_{n\geqslant 1}\), which were introduced in Section 2.5. We write accordingly \(\mathbb{E}_{u}[\ldots]\) and \(\mathbb{E}_{n,u}[\ldots]\) the expectations with respect to these probability measures. Unless mentioned otherwise or if it is clear from context, other random variables shall be viewed as defined on some probability space \((\Omega,P)\), and the according expectations will be written as \(E\left[\ldots\right]\). In particular we will use the following random variables defined on \((\Omega,P)\): * For each \(u\geqslant 0\), the triplet \((\mathbf{T}_{n,u},\mathbf{M}_{n,u},\mathbf{Q}_{n,u})\) is \((\mathbf{T},\mathbf{M},\mathbf{Q})\) under the law \(\mathbb{P}_{n,u}\). * For each \(k\geqslant 1\), the pair \((B_{k}^{\mathrm{map}},B_{k}^{\mathrm{quad}})\) consists of a 2-connected map \(B_{k}^{\mathrm{map}}\) with \(k\) edges sampled uniformly, together with \(B_{k}^{\mathrm{quad}}=\varphi(B_{k}^{\mathrm{map}})\) its image by Tutte's bijection. By Proposition 4, the latter is a simple quadrangulation with \(k\) faces sampled uniformly. ## 3 Phase diagram For \(\mu\) a probability distribution on \(\mathbb{Z}_{\geqslant 0}\) and \(n\in\mathbb{Z}_{\geqslant 0}\), we denote by \(GW(\mu,n)\) the law of a Galton-Watson tree with offspring distribution \(\mu\) and conditioned to have \(n\) edges. Following [1], for \(u>0\) we aim at finding a measure \(\mu^{u}\) such that \(\mathbf{T}\) under \(\mathbb{P}_{n,u}\) has law \(GW(\mu^{u},2n)\). To that end, for any \(y\in[0,\rho_{B}]\) we introduce the following probability distribution \[\mu^{y,u}(2j):=\frac{b_{j}y^{j}u^{\mathbb{1}_{j\neq 0}}}{1+u(B(y)-1)}\qquad \text{for all }j\in\mathbb{Z}_{\geqslant 0}. \tag{9}\] where \(b_{j}\) and \(B\) are defined in Proposition 2. Moreover (see Remark 1 for a discussion), we set \[y(u):=\rho(u)M^{2}(\rho(u),u)\quad\text{and}\quad\mu^{u}:=\mu^{y(u),u}\qquad \text{for any }u>0, \tag{10}\] where we recall that \(\rho(u)\) is the radius of convergence of \(z\mapsto M(z,u)\). On Fig. 12, the value of \(y(u)\) is represented, using an explicit expression (see Remark 2). Notice that in view of (7), \(y(u)\leqslant\rho_{B}\) for all \(u>0\) and \[1+u(B(y(u))-1)=M(\rho(u),u). \tag{11}\] Then, by (5), for all \(u>0\), we have: \[\mu^{u}(\{2j\})\sim\sqrt{\frac{3}{\pi}}\frac{2}{27}\frac{u}{M(\rho(u),u)}\left( \frac{27}{4}y(u)\right)^{j}j^{-5/2},\quad\text{as }j\to\infty,\] so that by setting \[c(u)=\sqrt{\frac{3}{\pi}}\frac{2}{27}\frac{u}{M(\rho(u),u)}, \tag{12}\] it holds that \[\mu^{u}(\{2j\})\sim c(u)\left(\frac{27}{4}y(u)\right)^{j}j^{-5/2},\quad\text{ as }j\to\infty. \tag{13}\] The following proposition extends [1, Proposition 3.1] to our setting. **Proposition 7**.: _Let \((\mathbf{B}_{v},v\in\mathbf{T})\) be either the family \((\mathfrak{b}_{v}^{\mathbf{M}})_{v\in\mathbf{T}}\) of blocks of \(\mathbf{M}\), or \((\mathfrak{b}_{v}^{\mathbf{Q}})_{v\in\mathbf{T}}\) of blocks of \(\mathbf{Q}\). 
For every \(u>0\), under \(\mathbb{P}_{u}\), the law of the tree of blocks \((\mathbf{T},(\mathbf{B}_{v},v\in\mathbf{T}))\) can be described as follows._

* \(\mathbf{T}\) _follows the law_ \(GW(\mu^{u})\)_;_
* _Conditionally given_ \(\mathbf{T}=\mathfrak{t}\)_, the blocks_ \((\mathbf{B}_{v},v\in\mathfrak{t})\) _are independent random variables, and, for_ \(v\in\mathfrak{t}\)_,_ \(\mathbf{B}_{v}\) _follows a uniform distribution on the set of blocks of size_ \(k_{v}(\mathfrak{t})/2\)_, where_ \(k_{v}(\mathfrak{t})\) _is the number of children of_ \(v\) _in_ \(\mathfrak{t}\)_._

_For every \(n\geqslant 1\), the same statements hold under \(\mathbb{P}_{n,u}\), only replacing \(GW(\mu^{u})\) with \(GW(\mu^{u},2n)\)._

Proof.: It suffices to prove the first part of the statement. Let \(\mathfrak{t}\) be a tree with \(2n\) edges, where each vertex has an even number of children, and let \((\mathfrak{b}_{v},v\in\mathfrak{t})\) be a family of (\(2\)-connected, or simple) blocks, with \(2|\mathfrak{b}_{v}|=k_{v}(\mathfrak{t})\) for any \(v\in\mathfrak{t}\). Let \(\mathfrak{m}\) be the map (or quadrangulation) with block decomposition given by \((\mathfrak{t},(\mathfrak{b}_{v},v\in\mathfrak{t}))\). Then, we have

\[\mathbb{P}_{u}\left(\mathbf{T}=\mathfrak{t},\mathbf{B}_{v}=\mathfrak{b}_{v}\ \forall v\in\mathfrak{t}\right)=\mathbb{P}_{u}(\mathfrak{m})\]
\[=\frac{\rho(u)^{|\mathfrak{m}|}u^{b(\mathfrak{m})}}{M(\rho(u),u)}=\frac{\rho(u)^{\sum_{v\in\mathfrak{t}}k_{v}(\mathfrak{t})/2}u^{\sum_{v\in\mathfrak{t}}\mathbbm{1}_{k_{v}(\mathfrak{t})\neq 0}}}{M(\rho(u),u)}\prod_{v\in\mathfrak{t}}\frac{b_{\frac{k_{v}(\mathfrak{t})}{2}}}{b_{\frac{k_{v}(\mathfrak{t})}{2}}}\]
\[=\frac{1}{M(\rho(u),u)}\left(\frac{y(u)}{M^{2}(\rho(u),u)}\right)^{\sum_{v\in\mathfrak{t}}k_{v}(\mathfrak{t})/2}\prod_{v\in\mathfrak{t}}b_{\frac{k_{v}(\mathfrak{t})}{2}}u^{\mathbbm{1}_{k_{v}(\mathfrak{t})\neq 0}}\times\prod_{v\in\mathfrak{t}}\frac{1}{b_{\frac{k_{v}(\mathfrak{t})}{2}}}\]
\[=\frac{\prod_{v\in\mathfrak{t}}b_{\frac{k_{v}(\mathfrak{t})}{2}}y(u)^{k_{v}(\mathfrak{t})/2}u^{\mathbbm{1}_{k_{v}(\mathfrak{t})\neq 0}}}{M(\rho(u),u)^{1+\sum_{v\in\mathfrak{t}}k_{v}(\mathfrak{t})}}\times\prod_{v\in\mathfrak{t}}\frac{1}{b_{\frac{k_{v}(\mathfrak{t})}{2}}}\]
\[=\prod_{v\in\mathfrak{t}}\frac{b_{\frac{k_{v}(\mathfrak{t})}{2}}y(u)^{k_{v}(\mathfrak{t})/2}u^{\mathbbm{1}_{k_{v}(\mathfrak{t})\neq 0}}}{M(\rho(u),u)}\times\prod_{v\in\mathfrak{t}}\frac{1}{b_{\frac{k_{v}(\mathfrak{t})}{2}}}\]
\[=GW(\mu^{u})(\mathfrak{t})\times\prod_{v\in\mathfrak{t}}\frac{1}{b_{\frac{k_{v}(\mathfrak{t})}{2}}}\,.\]

This concludes the proof.

Figure 12: Plot of \(y\) as a function of \(u\).

**Theorem 1**.: _Recall the definition of \(c(u)\) given in (12)._
Then, depending on the value of \(u\), the model \(\mathbb{P}_{u}\) undergoes the following phase transition, driven by the properties of \(\mu^{u}\):_

**Subcritical case**.: _For \(u<u_{C}:=9/5\),_

\[E(u):=E\left[\mu^{u}\right]=\frac{8u}{3(3+u)}<1\qquad\text{and}\qquad\mu^{u}(\{2j\})\sim c(u)j^{-5/2} \tag{14}\]

_where \(c(u)=\sqrt{\frac{3}{\pi}}\frac{2u}{9(3+u)}\);_

**Critical case**.: _For \(u=u_{C}:=9/5\),_

\[E\left[\mu^{u}\right]=1\qquad\text{and}\qquad\mu^{u_{C}}(\{2j\})\sim\frac{1}{4\sqrt{3\pi}}j^{-5/2};\]

**Supercritical case**.: _For \(u>9/5\),_

\[E\left[\mu^{u}\right]=1\qquad\text{and}\qquad\mu^{u}(\{2j\})\sim c(u)\left(\frac{27}{4}y(u)\right)^{j}j^{-5/2},\]

_where \(y(u)<4/27\), so that \(\mu^{u}\) has exponential moments._

Notice that the case \(u=1\), which corresponds to uniform planar maps, as studied by Addario-Berry [1], falls in the subcritical regime.

Proof.: Let us first explain how the value \(u_{C}:=9/5\) appears. Let \(u>0\) and \(y\in(0,4/27]\). By (9),

\[E\left[\mu^{y,u}\right]=\sum_{j\in\mathbb{Z}_{\geqslant 0}}\frac{2jb_{j}y^{j}u^{\mathbb{1}_{j\neq 0}}}{1+u(B(y)-1)}=\frac{2uyB^{\prime}(y)}{1+u(B(y)-1)}. \tag{15}\]

It follows that

\[E\left[\mu^{y,u}\right]=1\Leftrightarrow u=\frac{1}{2yB^{\prime}(y)-B(y)+1}. \tag{16}\]

The mapping \(y\in(0,4/27]\mapsto d(y):=2yB^{\prime}(y)-B(y)+1\) is increasing. Indeed, for all \(y\in(0,4/27]\),

\[d(y)=\sum_{n\geqslant 1}2nb_{n}y^{n}-\sum_{n\geqslant 0}b_{n}y^{n}+1=\sum_{n\geqslant 1}(2n-1)b_{n}y^{n}.\]

Moreover, it follows from (6) that \(d(0)=0\) and \(d(4/27)=5/9\). So \(1/d(y)\) maps \((0,4/27]\) bijectively to \([9/5,+\infty)\). Therefore, there exists \(y\in(0,4/27]\) such that the law \(\mu^{y,u}\) is critical if and only if \(u\in[9/5,+\infty)\), and this \(y\) is unique.

We now conclude the proof of the theorem. For the sake of completeness, we recall an argument from [1, §8.2.2]. Recall (7):

\[M(z,u)=uB(zM(z,u)^{2})+1-u.\]

For a fixed \(u\), there are two possible sources of singularity:

1. The pair \((z_{0}=\rho(u),m_{0}=M(\rho(u),u))\) satisfies \(\frac{\partial H}{\partial m}(z_{0},m_{0})=0\) for \(H:(z,m)\mapsto m-uB(zm^{2})-1+u\), thus being a singularity by the contraposition of the implicit function theorem. In this case, \[1-2\rho(u)M(\rho(u),u)uB^{\prime}(\rho(u)M^{2}(\rho(u),u))=0,\qquad\text{so}\qquad 2\rho(u)M(\rho(u),u)uB^{\prime}(y(u))=1.\] Then, by (11), \[2y(u)B^{\prime}(y(u))-B(y(u))+1=\frac{2\rho(u)M^{2}(\rho(u),u)}{2u\rho(u)M(\rho(u),u)}-\frac{M(\rho(u),u)+u-1}{u}+1=\frac{1}{u}, \tag{17}\] which is to say that \(y(u)=\rho(u)M^{2}(\rho(u),u)\) satisfies (16). This is possible if and only if \(u\geqslant 9/5\). Then, it follows that \(E\left[\mu^{u}\right]=1\), and (13) gives the asymptotic behaviour of \(\mu^{u}(2j)\).
2. A singularity of \(B\) is reached, so that \(\rho(u)M^{2}(\rho(u),u)=\rho_{B}=4/27\), _i.e._ \(y(u)=4/27\). Then, the value of \(E(u)\) is obtained as an immediate consequence of Equations (6) and (15), and the asymptotic behaviour of \(\mu^{u}(2j)\) comes from Equations (12) and (13). This happens if and only if \(u\leqslant 9/5\).

Notice that at \(u=u_{C}\), both types of singularity are reached.

**Remark 1**.: _The proof of Theorem 1 highlights the reasons behind our choice of \(y(u)\). When \(u\geqslant 9/5\), we choose \(y(u)\) such that \(E(u)=1\)._
When \(u<9/5\), this is not possible, and we choose the value of \(y(u)\) maximising \(E(u)\) so that, when conditioning the trees to be of size \(2n\), the conditioning is as little degenerate as possible._

**Remark 2**.: _Using (16), we obtain an explicit expression for \(y\) in terms of \(u\) for \(u\geqslant u_{C}\). By [14], the series \(B\) is algebraic and for all \(y\in[0,4/27]\),_

\[B(y)^{3}-B(y)^{2}-18yB(y)+27y^{2}+16y=0. \tag{18}\]

_This gives an expression of \(B^{\prime}\) in terms of \(B\), and taking the resultant between this new equation and (16) allows one to eliminate \(B\). Initial conditions then give_

\[u=\frac{1}{2yB^{\prime}(y)-B(y)+1}\Leftrightarrow y=\left(1-\sqrt{1-\frac{1}{u}}\right)\left(1-\frac{1}{u}\right). \tag{19}\]
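Remark 2 is easy to check numerically. The Python sketch below (ours, not from the paper) picks the combinatorial branch of the cubic (18) — for \(y<4/27\) two real roots sit just below \(4/3\), and the combinatorial branch \(B(y)\), with \(B(0)=1\) and \(B(4/27)=4/3\), is the larger of them — computes \(B'\) by implicit differentiation of (18), and verifies that \(y\) given by (19) satisfies (16), i.e. that \(1/(2yB'(y)-B(y)+1)\) returns \(u\). We stay away from \(u_{C}\), where the two roots merge and the numerics degenerate.

```python
import numpy as np

def B_branch(y):
    """Value and derivative of the combinatorial branch of (18) at y < 4/27."""
    roots = np.roots([1.0, -1.0, -18.0 * y, 27.0 * y ** 2 + 16.0 * y])
    real = [r.real for r in roots if abs(r.imag) < 1e-6]
    B = max(r for r in real if r <= 4.0 / 3.0 + 1e-6)
    # Implicit differentiation of (18): 3B^2 B' - 2B B' - 18B - 18y B' + 54y + 16 = 0.
    Bp = (18.0 * B - 54.0 * y - 16.0) / (3.0 * B ** 2 - 2.0 * B - 18.0 * y)
    return B, Bp

for u in [3.0, 5.0, 10.0]:                                # supercritical values
    y = (1.0 - np.sqrt(1.0 - 1.0 / u)) * (1.0 - 1.0 / u)  # formula (19)
    B, Bp = B_branch(y)
    print(u, y, 1.0 / (2.0 * y * Bp - B + 1.0))           # last column ~ u, by (16)
```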
## 4 Study of the size of the largest blocks

### Subcritical case

To investigate the distribution of the size of the largest blocks in the subcritical case, we follow the approach developed in [1], which consists in studying the degrees in the block tree of a map. To that end, we rely on results about _condensation_ in Galton-Watson trees — exactly one of the nodes has a degree linear in the size — and more precisely on Janson's survey [13], which contains a refinement of the study of the largest degree of a subcritical Galton-Watson tree with condensation by Jonsson and Stefansson [11]. The condensation phenomenon is visible in the following result where, denoting by \(d_{TV}\) the total variation distance, we write \(X_{n}\stackrel{{(d)}}{{\approx}}Y_{n}\) if \(d_{TV}(X_{n},Y_{n})\to 0\) as \(n\to\infty\):

**Proposition 8** ([12, Theorem 19.34]).: _Let \(\mu\) be a probability distribution on \(\mathbb{Z}_{\geqslant 0}\) such that \(\mu(0)>0\), \(E\left[\mu\right]<1\) and there exists \(c\) satisfying \(\mu(k)\sim_{k\to\infty}ck^{-5/2}\). Let \(D_{n,1}\geqslant D_{n,2}\geqslant\ldots\geqslant D_{n,n}\) be the ranked list of the number of children of a \(\mu\)-Galton-Watson tree conditioned to have \(n\) edges. Then, letting \(\xi_{1},\ldots,\xi_{n-1}\) be a family of \(n-1\) independent random variables of law \(\mu\) and \(\left(\xi_{1}^{(n)},\ldots,\xi_{n-1}^{(n)}\right)\) their decreasing reordering, it holds that:_

\[(D_{n,1},\ldots,D_{n,n})\stackrel{{(d)}}{{\approx}}\left(n-\sum_{i=1}^{n-1}\xi_{i},\xi_{1}^{(n)},\ldots,\xi_{n-1}^{(n)}\right). \tag{20}\]

We combine this proposition with the fact that \(\mathbf{T}\) is a Galton-Watson tree under \(\mathbb{P}_{u}\) to get the following generalization of [1, Theorem 3.3] to every value of \(u\in(0,9/5)\). This is a rephrasing of the results for trees of [12], to which we add the proof of the joint convergence. For \(\mathfrak{m}\) a map of size \(n\), denote by \(\operatorname{LB}_{1}(\mathfrak{m})\geqslant\ldots\geqslant\operatorname{LB}_{b(\mathfrak{m})}(\mathfrak{m})\) the sizes of its blocks in decreasing order. By convention, we set \(\operatorname{LB}_{k}(\mathfrak{m})=0\) if \(k>b(\mathfrak{m})\).

**Theorem 2**.: _Let \(u\in(0,9/5)\). Recall that \(E(u)\) and \(c(u)\) are defined in Equations (12) and (14). Then,_

\[\operatorname{LB}_{1}(\mathbf{M}_{n,u})=(1-E(u))n+O_{\mathbb{P}}(n^{2/3})\quad\text{and}\quad\operatorname{LB}_{2}(\mathbf{M}_{n,u})=O_{\mathbb{P}}(n^{2/3}).\]

_Moreover, the following joint convergence holds:_

\[\left(\frac{1}{2nc(u)}\right)^{2/3}\left((1-E(u))n-\operatorname{LB}_{1}(\mathbf{M}_{n,u}),(\operatorname{LB}_{j}(\mathbf{M}_{n,u}),j\geqslant 2)\right)\xrightarrow[n\to\infty]{(d)}\left(L_{1},\left(\Delta L_{(j-1)},j\geqslant 2\right)\right) \tag{21}\]

_where \((L_{t})_{t\in[0,1]}\) is a Stable process of parameter \(3/2\) such that_

\[E\left[e^{-sL_{1}}\right]=e^{\Gamma(-3/2)s^{3/2}}\]

_and \(\Delta L_{(1)}\geqslant\Delta L_{(2)}\geqslant\ldots\) is the ranked sequence of its jumps._

When \(u\to 0\), \(1-E(u)\to 1\): as expected, if the map has only one block, its size is \(n\).

**Remark 3**.: _If \((L_{t})_{t\in[0,1]}\) is a Stable process of parameter \(3/2\) satisfying \(E\left[e^{-sL_{1}}\right]=e^{\Gamma(-3/2)s^{3/2}}\) for \(s\) such that \(Re(s)\geqslant 0\), then it is known that (see [1, Theorem 1] and its proof):_

\[L_{1}\stackrel{{(d)}}{{=}}\lim_{\varepsilon\to 0}\sum_{j:\Delta L_{(j)}\geqslant\varepsilon}\Delta L_{(j)}-\frac{2}{\sqrt{\varepsilon}}.\]
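Before turning to the proof, here is a quick numerical illustration of the condensation mechanism behind Theorem 2 (our own sketch, not part of the argument). The offspring law used below is an arbitrary subcritical law with a \(k^{-5/2}\) tail satisfying the hypotheses of Proposition 8, not the \(\mu^{u}\) of Theorem 1: one summand of the iid sample absorbs a linear fraction of the total, while the runners-up live at scale \(n^{2/3}\).

```python
import numpy as np

rng = np.random.default_rng(0)

# A subcritical offspring law with mu(k) ~ c k^{-5/2}: mass 1/2 at 0,
# mass 1/2 spread over {1,...,K} proportionally to k^{-5/2}.
K, n = 10**6, 10**6
w = np.arange(1, K + 1, dtype=float) ** -2.5
pmf = np.concatenate(([0.5], 0.5 * w / w.sum()))
mean = float(np.arange(K + 1) @ pmf)            # ~ 0.97 < 1: subcritical

xi = rng.choice(K + 1, size=n - 1, p=pmf)       # the xi_i of Proposition 8
print("(1 - E[mu]) n           =", (1.0 - mean) * n)
print("condensed degree, cf. (20):", n - xi.sum())
print("next largest degrees    :", np.sort(xi)[-3:], " (scale n^{2/3} =", n ** (2 / 3), ")")
```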
Proof.: Recall that the subcritical case corresponds to \(u\in(0,9/5)\), for which we have

\[\rho(u)M^{2}(\rho(u),u)=4/27.\]

We follow essentially the same lines of proof as in [1], refining the arguments so as to establish the joint convergence stated in (21). Theorem 1 shows that the hypotheses of Proposition 8 are satisfied in the subcritical case. Let \(\left(\xi_{i}\right)_{i\geqslant 1}\) be a family of iid random variables of law \(\mu^{u}\) and let \(\left(\xi_{1}^{(n)},\ldots,\xi_{n}^{(n)}\right)\) be the decreasing reordering of its first \(n\) variables (with the convention \(\xi_{i}^{(n)}=0\) if \(i>n\)). Let us consider the following cumulative process:

\[L_{t}^{(n)}=\frac{\sum_{i=1}^{[2nt]}\xi_{i}-2ntE(u)}{C(u)(2n)^{2/3}}\quad\text{for }t\in[0,1],\quad\text{where}\quad C(u)=2c(u)^{2/3}.\]

It is standard [11, Theorem XVII.5.2][14, Chapter VII, Corollary 3.6] that there exists a Levy process \((L_{t})_{t\in[0,1]}\) with Levy measure \(\pi(dx)=x^{-5/2}dx\mathbb{1}_{\{x>0\}}\) such that, for \(s\) with \(Re(s)\geqslant 0\),

\[E\left[e^{-sL_{1}}\right]=e^{\Gamma(-3/2)s^{3/2}},\]

and such that the following convergence holds in the Skorokhod topology

\[\left(L_{t}^{(n)}\right)_{t\in[0,1]}\xrightarrow[n\to\infty]{(d)}\left(L_{t}\right)_{t\in[0,1]}. \tag{22}\]

By definition of the process \(L_{t}^{(n)}\), \(\frac{\xi_{i}}{C(u)(2n)^{2/3}}\) is its \(i\)-th jump. In particular, denoting by \(\Delta P_{t}\) the jump of the process \((P_{t})\) at time \(t\) (which may equal \(0\)),

\[\frac{\xi_{1}^{(2n)}}{C(u)(2n)^{2/3}}=\sup_{0\leqslant t\leqslant 1}\Delta L_{t}^{(n)}.\]

By [14, Chapter VI, Proposition 2.4], (22) gives

\[\frac{\xi_{1}^{(2n)}}{C(u)(2n)^{2/3}}\xrightarrow[n\to\infty]{(d)}\sup_{0\leqslant t\leqslant 1}\Delta L_{t}:=\Delta L_{(1)}.\]

By construction of a Levy process, \((\Delta L_{(j)})_{j\geqslant 1}\) has the same law as the decreasing rearrangement of the atoms of a Poisson random measure with intensity \(\pi\) on \(\mathbb{R}^{+}\) (see e.g. [1, Theorem 1]). By denoting \(t_{1}^{(n)}\) the time at which the jump \(\xi_{1}^{(2n)}\) of the process \(L_{t}^{(n)}\) is realised, one has:

\[\frac{\xi_{2}^{(2n)}}{C(u)(2n)^{2/3}}=\sup_{0\leqslant t\leqslant 1}\Delta\left(L_{t}^{(n)}-\frac{\xi_{1}^{(2n)}}{C(u)(2n)^{2/3}}\mathbb{1}_{t\geqslant t_{1}^{(n)}}\right)_{t}.\]

So, applying again [14, Chapter VI, Proposition 2.4], one gets, denoting by \(t_{1}\) the time of the largest jump of \((L_{t})_{t\in[0,1]}\):

\[\frac{\xi_{2}^{(2n)}}{C(u)(2n)^{2/3}}\xrightarrow[n\to\infty]{(d)}\sup_{0\leqslant t\leqslant 1}\Delta\left(L_{t}-\Delta L_{(1)}\mathbb{1}_{t\geqslant t_{1}}\right)_{t}=\Delta L_{(2)}.\]

It is again possible to iterate by subtracting the largest jump: for all \(k\geqslant 1\),

\[\frac{1}{C(u)(2n)^{2/3}}\left(\xi_{1}^{(2n)},\ldots,\xi_{k}^{(2n)}\right)\xrightarrow[n\to\infty]{(d)}\left(\Delta L_{(1)},\ldots,\Delta L_{(k)}\right). \tag{23}\]

However, by Proposition 8 and (20), one has (recall that a map of size \(n\) has \(2n+1\) components, some of which might be empty):

\[2\left(\operatorname{LB}_{1}(\mathbf{M}_{n,u}),\ldots,\operatorname{LB}_{2n+1}(\mathbf{M}_{n,u})\right)\stackrel{{(d)}}{{\approx}}\left(2n-\sum_{i=1}^{2n}\xi_{i},\xi_{1}^{(2n)},\ldots,\xi_{2n}^{(2n)}\right).\]

Therefore, for all fixed \(k\geqslant 2\),

\[\left(\frac{(1-E(u))n-\operatorname{LB}_{1}(\mathbf{M}_{n,u})}{\frac{1}{2}C(u)(2n)^{2/3}},\frac{\operatorname{LB}_{2}(\mathbf{M}_{n,u})}{\frac{1}{2}C(u)(2n)^{2/3}},\ldots,\frac{\operatorname{LB}_{k}(\mathbf{M}_{n,u})}{\frac{1}{2}C(u)(2n)^{2/3}}\right)\stackrel{{(d)}}{{\approx}}\left(\frac{\sum_{i=1}^{2n}\xi_{i}-2E(u)n}{C(u)(2n)^{2/3}},\frac{\xi_{1}^{(2n)}}{C(u)(2n)^{2/3}},\ldots,\frac{\xi_{k}^{(2n)}}{C(u)(2n)^{2/3}}\right)\xrightarrow[n\to\infty]{(d)}\left(L_{1},\Delta L_{(1)},\ldots,\Delta L_{(k)}\right).\]

This allows us to conclude since \(k\) is arbitrary.

### Supercritical case

The supercritical case corresponds to \(u\in(9/5,+\infty)\) and \(y(u)=\rho(u)M^{2}(\rho(u),u)\in(0,4/27)\). Recall that in this case \(\mathbf{T}\) is distributed under \(\mathbb{P}_{u}\) as a critical Galton-Watson tree whose offspring distribution has exponential moments, by Proposition 7 and Theorem 1. Properties of the maximum degree of critical Galton-Watson trees have been extensively studied by Janson [14], building on work by Meir and Moon [13]. For the case where the offspring distribution admits exponential moments, Janson shows the following result.

**Proposition 9** ([14, Theorem 19.16]).: _Let \(\mu\) be a probability distribution on \(\mathbb{Z}_{\geqslant 0}\) such that \(\mu(0)>0\), and \(\mu(k+1)/\mu(k)\) converges to a finite limit as \(k\to\infty\). Let \(D_{n,i}\) be the \(i\)-th maximal number of children of nodes in a \(\mu\)-Galton-Watson tree conditioned to have \(n\) edges. Denote by \(\rho\) the radius of convergence of \(\Phi:t\mapsto\sum_{k\in\mathbb{Z}_{\geqslant 0}}\mu(k)t^{k}\), and \(\nu=\rho\frac{\Phi^{\prime}(\rho)}{\Phi(\rho)}\). Suppose \(\nu>1\). Then, denoting \(k(n)=\max\{k\in\mathbb{Z}_{\geqslant 0}\mid\mu(k)\geqslant 1/n\}\), for all \(j\geqslant 1\),_

\[D_{n,j}=k(n)+O_{\mathbb{P}}(1).\]

In our case, the asymptotics of \(k(n)\) can be computed thanks to results about the Lambert \(W\) function, the inverse of the increasing bijection \(x\in[-1,+\infty)\mapsto xe^{x}\in[-e^{-1},+\infty)\). This gives the following theorem.

**Theorem 3**.: _Let \(u>u_{C}\)._
For all fixed \(j\geqslant 1\), it holds as \(n\to\infty\) that_

\[\operatorname{LB}_{j}(\mathbf{M}_{n,u})=\frac{\ln(n)}{2\ln\left(\frac{4}{27y(u)}\right)}-\frac{5\ln(\ln(n))}{4\ln\left(\frac{4}{27y(u)}\right)}+O_{\mathbb{P}}(1).\]

Proof.: The probability \(\mu^{u}(\{2k\})\) is decreasing in \(k\). So, by (13), for \(n\) large enough, to study \(k(n)\) it is sufficient to study for which \(k\) one has

\[c(u)\left(\frac{27}{4}\rho(u)M^{2}(\rho(u),u)\right)^{k}k^{-5/2}\left(1+o(1)\right)\geqslant\frac{1}{n}.\]

For the sake of compactness, set \(w(u)=\left(\frac{27}{4}\rho(u)M^{2}(\rho(u),u)\right)^{-1}=\left(\frac{27}{4}y(u)\right)^{-1}\). Note that \(w(u)>1\) since \(u>u_{C}\). Consequently, the previous inequality is equivalent to

\[w(u)^{k}k^{5/2}\leqslant c(u)n\left(1+o(1)\right),\]

which in turn is equivalent to

\[\frac{2}{5}\ln(w(u))k\times e^{\frac{2}{5}\ln(w(u))k}\leqslant\frac{2}{5}\ln(w(u))(nc(u))^{2/5}\left(1+o(1)\right).\]

Therefore, \(k(n)\) is the largest integer such that:

\[\frac{2}{5}\ln(w(u))k(n)\leqslant W\left(\frac{2}{5}\ln(w(u))(nc(u))^{2/5}\left(1+o(1)\right)\right)\]

where \(W\) denotes the Lambert \(W\) function. It is known that \(W\) satisfies, as \(x\to\infty\),

\[W(x)=\ln(x)-\ln(\ln(x))+o(1),\]

which concludes the proof.

### Critical case

The critical case corresponds to \(u=9/5\) and \(\rho(u)M^{2}(\rho(u),u)=4/27\). As shown in Theorem 1, the offspring distribution has a power-law tail in \(cj^{-\alpha-1}\), where \(\alpha=3/2\in(1,2)\). In this case, the variance is infinite, so the method of Section 4.2 cannot be used. However, this case is directly treated in Janson's survey [12, Example 19.27 and Remark 19.28].

**Theorem 4**.: _The following convergence holds:_

\[\left(\frac{\operatorname{LB}_{j}(\mathbf{M}_{n,u_{C}})}{n^{2/3}},j\geqslant 1\right)\xrightarrow[n\to\infty]{(d)}\left(E_{(j)},j\geqslant 1\right),\]

_where the \(\left(E_{(j)}\right)\) are the ordered atoms of a point process \(E\) on \([0,\infty]\), satisfying that the random variable \(E_{a,b}=\#\left(E\cap[a,b]\right)\) has a probability generating function convergent for all \(z\in\mathbb{C}\) with_

\[E\left[z^{E_{a,b}}\right]=\frac{1}{2\pi g(0)}\int_{-\infty}^{\infty}\exp\left(c\Gamma(-3/2)(-it)^{3/2}+(z-1)c\int_{a}^{b}x^{-5/2}e^{itx}dx\right)dt,\]

_where_

\[g:x\mapsto\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-ixt+c\Gamma(-3/2)(-it)^{3/2}}dt.\]

_The intensity measure \(\pi\) of \(E\) satisfies, for \(x>0\),_

\[\pi(x)=cx^{-5/2}\frac{g(-x)}{g(0)}dx,\]

_and, for all \(j\geqslant 1\),_

\[E_{(j)}>0\qquad\text{almost surely}.\]
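The three regimes give strikingly different orders of magnitude for the largest block, which the following Python sketch (ours) makes concrete: the leading term \((1-E(u))n\) of Theorem 2, the \(n^{2/3}\) scale of Theorem 4, and the two-term logarithmic asymptotic of Theorem 3, with \(y(u)\) taken from (19) and the \(O_{\mathbb{P}}(1)\) term dropped.

```python
import numpy as np

def lb_subcritical(n, u):            # Theorem 2, u < 9/5: leading term
    return (1.0 - 8.0 * u / (3.0 * (3.0 + u))) * n

def lb_supercritical(n, u):          # Theorem 3, u > 9/5: two-term asymptotic
    y = (1.0 - np.sqrt(1.0 - 1.0 / u)) * (1.0 - 1.0 / u)   # formula (19)
    lw = np.log(4.0 / (27.0 * y))
    return np.log(n) / (2.0 * lw) - 5.0 * np.log(np.log(n)) / (4.0 * lw)

# For n = 10^6: roughly 3.3e5 (u = 1), 1e4 (critical), and ~19 (u = 3).
for n in [10**6, 10**9]:
    print(n, lb_subcritical(n, 1.0), n ** (2.0 / 3.0), lb_supercritical(n, 3.0))
```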
## 5 Scaling limits

The preceding sections exhibited, _via_ a study of the block-tree, a phase transition of a combinatorial nature, in terms of the size of the largest blocks, when the parameter \(u\) reaches \(u_{C}=9/5\), both for the model on general maps and the one on quadrangulations. The goal of the present section is to expand on this phase transition by considering metric properties of the models in each phase, in the sense of taking _scaling limits_, see Section 5.1 for definitions. Because Tutte's bijection commutes with the block decomposition of both models under consideration, as stated in Proposition 5, the combinatorial picture of Section 4 is the same for both models. However, obtaining global metric properties under either model requires a good understanding of the metric behaviour of the underlying blocks. As of now, the required results exist only for simple quadrangulations. Consequently, our scaling limit results are complete only for the quadrangulation model.

In Section 5.1, we introduce the relevant formalism to state our scaling limit results, as well as a deviation estimate for the diameters of blocks, which will be useful for all values of \(u\). In Section 5.2, we prove Theorem 5, which identifies scaling limits simultaneously when \(u>u_{C}\) and \(u=u_{C}\). For both models, there is convergence after suitable rescaling to a random continuous tree, namely a _Brownian tree_ when \(u>u_{C}\) and a \(3/2\)-_stable tree_ when \(u=u_{C}\). This convergence holds in the Gromov-Hausdorff (GH) sense, between metric spaces. In the quadrangulation model, this convergence is strengthened to a Gromov-Hausdorff-Prokhorov (GHP) convergence between measured metric spaces when quadrangulations are equipped with their mass measure on vertices. Doing the same with the model on maps is harder, so we only state a GHP convergence for maps equipped with an _ad hoc_ measure, the degree-biased measure on their vertices. Finally, in Section 5.3, we prove Theorem 6, which deals with the GHP scaling limit when \(u<u_{C}\). In this phase, the one big block identified in Theorem 2 converges after rescaling to a scalar multiple of the _Brownian sphere_, and the contribution of all other blocks is negligible. This result is proved only for the quadrangulation model since it relies crucially on the scaling limit result for uniform simple quadrangulations obtained in [1]. No such result is available yet for uniform \(2\)-connected general maps, although one expects that it should hold.

### Preliminaries

#### 5.1.1 The Gromov-Hausdorff and Gromov-Hausdorff-Prokhorov topologies

Originating from the ideas of Gromov, the following notions of metric geometry have become widely used in probability theory to state scaling limit results. We refer the interested reader to [1] for general background on metric geometry and [14, Section 6] for an exposition of the main properties of the Gromov-Hausdorff and Gromov-Hausdorff-Prokhorov topologies, and especially their definition _via_ correspondences and couplings that we use here.

Define a _correspondence_ between two sets \(X\) and \(Y\) as a subset \(C\) of \(X\times Y\) such that for all \(x\in X\), there exists \(y\in Y\) such that \((x,y)\in C\), and _vice versa_. The set of correspondences between \(X\) and \(Y\) is denoted as \(\operatorname{Corr}(X,Y)\). If \((X,d_{X})\) and \((Y,d_{Y})\) are compact metric spaces and \(C\in\operatorname{Corr}(X,Y)\) is a correspondence, one may define its _distortion_:

\[\operatorname{dis}(C;d_{X},d_{Y})=\sup\Bigl{\{}|d_{X}(x,\widetilde{x})-d_{Y}(y,\widetilde{y})|\colon(x,y)\in C,\,(\widetilde{x},\widetilde{y})\in C\Bigr{\}}.\]

This allows one to define the _Gromov-Hausdorff_ distance between (isometry classes of) compact metric spaces

\[d_{\operatorname{GH}}\bigl{(}(X,d_{X}),(Y,d_{Y})\bigr{)}=\frac{1}{2}\inf\Bigl{\{}\operatorname{dis}(C;d_{X},d_{Y})\colon C\in\operatorname{Corr}(X,Y)\Bigr{\}}.\]
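The definition via correspondences lends itself to a (very inefficient) direct computation on finite spaces. The following sketch — ours, purely illustrative — enumerates all correspondences between two tiny metric spaces given by distance matrices and returns \(d_{\operatorname{GH}}\); for the 3-point path versus the unit triangle it finds \(1/2\).

```python
from itertools import product

def gh_distance(dX, dY):
    """Brute-force d_GH between finite metric spaces (distance matrices)."""
    nX, nY = len(dX), len(dY)
    pairs = [(i, j) for i in range(nX) for j in range(nY)]
    best = float("inf")
    for mask in range(1, 1 << len(pairs)):
        C = [pairs[k] for k in range(len(pairs)) if mask >> k & 1]
        if {i for i, _ in C} != set(range(nX)):
            continue       # not a correspondence: some point of X uncovered
        if {j for _, j in C} != set(range(nY)):
            continue       # not a correspondence: some point of Y uncovered
        dis = max(abs(dX[i][a] - dY[j][b]) for (i, j), (a, b) in product(C, C))
        best = min(best, dis)
    return best / 2.0

path = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]   # three aligned points
tri = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]    # unit equilateral triangle
print(gh_distance(path, tri))               # 0.5
```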
One can modify this notion of distance in order to get a distance between compact measured metric spaces. For measured spaces \((X,\nu_{X})\) and \((Y,\nu_{Y})\) such that \(\nu_{X}\) and \(\nu_{Y}\) are _probability_ measures, let us denote by \(\operatorname{Coupl}(\nu_{X},\nu_{Y})\) the set of couplings between \(\nu_{X}\) and \(\nu_{Y}\), _i.e._ the set of measures \(\gamma\) on \(X\times Y\) with respective marginals \(\nu_{X}\) and \(\nu_{Y}\). Then the _Gromov-Hausdorff-Prokhorov_ distance is defined as

\[d_{\operatorname{GHP}}\bigl{(}(X,d_{X},\nu_{X}),(Y,d_{Y},\nu_{Y})\bigr{)}=\inf\Bigl{\{}\max\Bigl{(}\tfrac{1}{2}\operatorname{dis}(C;d_{X},d_{Y}),\gamma\bigl{(}(X\times Y)\setminus C\bigr{)}\Bigr{)}\colon C\in\operatorname{Corr}(X,Y),\,\gamma\in\operatorname{Coupl}(\nu_{X},\nu_{Y})\Bigr{\}}.\]

When \((X,d_{X})\) and \((Y,d_{Y})\) are the same metric space, one can bound this distance by the _Prokhorov distance_ between the measures \(\nu_{X}\) and \(\nu_{Y}\). This distance is defined for \(\nu_{1}\) and \(\nu_{2}\) two Borel measures on the same metric space \((X,d)\) by

\[d_{\operatorname{P}}^{(X,d)}(\nu_{1},\nu_{2})=\inf\Bigl{\{}\varepsilon>0\colon\nu_{1}(A)\leqslant\nu_{2}(A^{\varepsilon})+\varepsilon\text{ and }\nu_{2}(A)\leqslant\nu_{1}(A^{\varepsilon})+\varepsilon,\forall A\in\mathcal{B}(X)\Bigr{\}},\]

where \(A^{\varepsilon}\) is the set of points \(x\in X\) such that \(d(x,A)<\varepsilon\). The bound mentioned above then corresponds to the inequality

\[d_{\operatorname{GHP}}\bigl{(}(X,d,\nu_{1}),(X,d,\nu_{2})\bigr{)}\leqslant d_{\operatorname{P}}^{(X,d)}(\nu_{1},\nu_{2}), \tag{24}\]

which is a consequence of Strassen's Theorem, see [13, Section 11.6]. Finally, we will use the following fact, the proof of which is left to the reader. For \(a\in[0,1)\) and Borel probability measures \(\mu\), \(\nu\) and \(\nu^{\prime}\) on some metric space \((X,d)\), it holds that

\[d_{\operatorname{P}}^{(X,d)}\bigl{(}a\mu+(1-a)\nu,a\mu+(1-a)\nu^{\prime}\bigr{)}=(1-a)d_{\operatorname{P}}^{(X,d)}(\nu,\nu^{\prime}). \tag{25}\]

#### 5.1.2 Formulation of the GHP-scaling limit problem

Let us begin by setting the notation for the measured metric spaces that one can canonically associate to the combinatorial objects under consideration. We associate to a tree (resp. map or quadrangulation) the following measured metric spaces:

* For \(\mathfrak{t}\) a _tree_ with at least one edge, denote by \(V_{+}(\mathfrak{t})\) the set of its non-root vertices, \(d_{\mathfrak{t}}\) the distance that the graph distance induces on \(V_{+}(\mathfrak{t})\), \(\nu_{\mathfrak{t}}\) the uniform probability measure on \(V_{+}(\mathfrak{t})\) and \(\underline{\mathfrak{t}}\) the measured metric space \(\underline{\mathfrak{t}}=(V_{+}(\mathfrak{t}),d_{\mathfrak{t}},\nu_{\mathfrak{t}})\). Recall that for \(v\in\mathfrak{t}\), the number of children of \(v\) is denoted by \(k_{v}(\mathfrak{t})\).
* For \(\mathfrak{m}\) a _map_, recall that \(V(\mathfrak{m})\) is its vertex set, and denote by \(d_{\mathfrak{m}}\) the graph distance on \(V(\mathfrak{m})\), \(\nu_{\mathfrak{m}}\) the uniform probability measure on \(V(\mathfrak{m})\) and \(\underline{\mathfrak{m}}\) the measured metric space \(\underline{\mathfrak{m}}=(V(\mathfrak{m}),d_{\mathfrak{m}},\nu_{\mathfrak{m}})\). Denote also by \(\nu_{\mathfrak{m}}^{\mathrm{B}}\) the _degree-biased_ measure on \(V(\mathfrak{m})\), _i.e._ \(\nu_{\mathfrak{m}}^{\mathrm{B}}(\{x\})=\deg(x)/(2|E(\mathfrak{m})|)\). Accordingly set \(\underline{\mathfrak{m}}^{\mathrm{B}}=(V(\mathfrak{m}),d_{\mathfrak{m}},\nu_{\mathfrak{m}}^{\mathrm{B}})\).
* For \(\mathfrak{q}\) a _quadrangulation_, denote by \(V(\mathfrak{q})\) its vertex set, \(d_{\mathfrak{q}}\) the graph distance on \(V(\mathfrak{q})\), \(\nu_{\mathfrak{q}}\) the uniform probability measure on \(V(\mathfrak{q})\) and \(\underline{\mathfrak{q}}\) the measured metric space \(\underline{\mathfrak{q}}=(V(\mathfrak{q}),d_{\mathfrak{q}},\nu_{\mathfrak{q}})\).
Similarly as above, define \(\underline{\mathfrak{q}}^{\mathrm{B}}=(V(\mathfrak{q}),d_{\mathfrak{q}},\nu_{\mathfrak{q}}^{\mathrm{B}})\) its degree-biased version.

The problem of finding a GHP-scaling limit consists in finding a suitable rescaling of a sequence of random compact measured metric spaces so that it admits a non-trivial limit in distribution for the GHP-topology. Let us introduce a convenient notation for the rescaling operation on a measured metric space. For \(\underline{X}=(X,d,\nu)\) a measured metric space and \(\lambda>0\), we denote by \(\lambda\cdot\underline{X}\) the measured metric space \((X,\lambda d,\nu)\).

#### 5.1.3 A useful deviation estimate

We shall now prove a deviation estimate for the diameters of the blocks of \(\mathbf{M}\) and \(\mathbf{Q}\). It will prove useful for all values of \(u>0\). We recall the definition of _stretched-exponential_ quantities, as this notion provides a concise way to deal with the probabilities of exceptional events.

**Definition 8**.: _A sequence \((p_{n})\) of real numbers is said to be stretched-exponential as \(n\to\infty\) if there exist constants \(\gamma,C,c>0\) such that_

\[|p_{n}|\leqslant C\exp(-cn^{\gamma}).\]

As is evident from the definition, if \((p_{n})_{n}\) and \((q_{n})_{n}\) are stretched-exponential sequences, then so are the sequences \((p_{n}+q_{n})_{n}\), \((p_{n}q_{n})_{n}\), \((n^{\alpha}p_{n})_{n}\) and \((n^{\alpha}\sup_{k\geqslant n^{\beta}}p_{k})_{n}\) with arbitrary \(\alpha,\beta>0\).

The input we shall rely on to derive our estimate is a deviation estimate for the diameter of **one** block, in both the case of \(2\)-connected blocks of maps and simple blocks of quadrangulations.

**Proposition 10**.: _For any \(\varepsilon>0\), the probabilities_

\[P\left(\mathrm{diam}(B_{k}^{\mathrm{map}})\geqslant k^{1/4+\varepsilon}\right)\qquad\text{ and }\qquad P\left(\mathrm{diam}(B_{k}^{\mathrm{quad}})\geqslant k^{1/4+\varepsilon}\right)\]

_are stretched-exponential as \(k\to\infty\)._

Proof.: The estimate for uniform \(2\)-connected maps \((B_{k}^{\text{map}})_{k\geqslant 0}\) is obtained from [15, Theorem 3.7, specialized to \(x=1\)]. To obtain the estimate for uniform simple blocks of quadrangulations \((B_{k}^{\text{quad}})_{k\geqslant 0}\), one easily checks that for any path of length \(l\geqslant 0\) in a map \(\mathfrak{m}\), there exists a path with the same endpoints and length at most \(2l\) in \(\varphi(\mathfrak{m})\), its image by Tutte's bijection. Therefore for every map \(\mathfrak{m}\) one has \(\operatorname{diam}(\varphi(\mathfrak{m}))\leqslant 2\operatorname{diam}(\mathfrak{m})\). In particular \(\operatorname{diam}(B_{k}^{\text{quad}})\leqslant 2\operatorname{diam}(B_{k}^{\text{map}})\), and the conclusion follows from the estimate for \((B_{k}^{\text{map}})_{k\geqslant 0}\).

This deviation estimate for the diameter of **one** block allows us to control the deviations of the diameter of **every** block of \(\mathbf{M}_{n,u}\) and \(\mathbf{Q}_{n,u}\), in the sense of the following corollary.
**Corollary 11**.: _For all \(u>0\) and all \(\delta>0\), the probabilities_

\[P\left(\exists v\in\mathbf{T}_{n,u},\,\operatorname{diam}(\mathfrak{b}_{v}^{\mathbf{M}_{n,u}})\geqslant\max\left(n^{1/6},k_{v}(\mathbf{T}_{n,u})^{(1+\delta)/4}\right)\right),\quad n\geqslant 1, \tag{26}\]
\[P\left(\exists v\in\mathbf{T}_{n,u},\,\operatorname{diam}(\mathfrak{b}_{v}^{\mathbf{Q}_{n,u}})\geqslant\max\left(n^{1/6},k_{v}(\mathbf{T}_{n,u})^{(1+\delta)/4}\right)\right),\quad n\geqslant 1, \tag{27}\]

_are stretched-exponential as \(n\to\infty\)._

Proof.: Let \(\mathfrak{b}\) be either a \(2\)-connected map, or a simple quadrangulation. Then \(\operatorname{diam}(\mathfrak{b})\) is bounded by its number of edges, which is \(|\mathfrak{b}|\) if \(\mathfrak{b}\) is a map, and \(2|\mathfrak{b}|\) if it is a quadrangulation. In particular, recalling that the outdegrees in the block-tree are twice the sizes of the respective blocks, we get for all \(u>0\) and \(n\geqslant 1\), that

\[\forall v\in\mathbf{T}_{n,u},\,\left[\operatorname{diam}(\mathfrak{b}_{v}^{\mathbf{M}_{n,u}})\leqslant k_{v}(\mathbf{T}_{n,u})/2\right]\text{ and }\left[\operatorname{diam}(\mathfrak{b}_{v}^{\mathbf{Q}_{n,u}})\leqslant\,2\cdot k_{v}(\mathbf{T}_{n,u})/2\right].\]

Denote by \(\operatorname{A}(\mathbf{M}_{n,u})\) the "bad" subset of \(\mathbf{T}_{n,u}\) made of the vertices \(v\) such that both \(k_{v}(\mathbf{T}_{n,u})/2\geqslant n^{1/6}\) and \(\operatorname{diam}(\mathfrak{b}_{v}^{\mathbf{M}_{n,u}})\geqslant k_{v}(\mathbf{T}_{n,u})^{(1+\delta)/4}\). By the above trivial bound on diameters, to show that the probabilities (26) are stretched-exponential as \(n\to\infty\), it suffices to see that the probability of the event \(\{\operatorname{A}(\mathbf{M}_{n,u})\neq\emptyset\}\) is stretched-exponential as \(n\to\infty\). By Proposition 7, conditionally on \(\mathbf{T}_{n,u}\), each block \(\mathfrak{b}_{v}^{\mathbf{M}_{n,u}}\) is sampled uniformly from \(2\)-connected maps with size \(k_{v}(\mathbf{T}_{n,u})/2\) respectively. Therefore, conditionally on \(\mathbf{T}_{n,u}\), for each vertex \(v\) in \(\mathbf{T}_{n,u}\) we have

\[P\big{(}v\in\operatorname{A}(\mathbf{M}_{n,u})\,|\,\mathbf{T}_{n,u}\big{)}=\mathbbm{1}_{\left\{k_{v}(\mathbf{T}_{n,u})/2\geqslant n^{1/6}\right\}}P\left(\operatorname{diam}(B_{k/2}^{\text{map}})\geqslant k^{(1+\delta)/4}\right)\Big{|}_{k=k_{v}(\mathbf{T}_{n,u})}\leqslant\sup_{k/2\geqslant n^{1/6}}P\left(\operatorname{diam}(B_{k/2}^{\text{map}})\geqslant k^{(1+\delta)/4}\right)\leqslant\sup_{k\geqslant n^{1/6}}P\left(\operatorname{diam}(B_{k}^{\text{map}})\geqslant k^{(1+\delta)/4}\right).\]

Since \(\mathbf{T}_{n,u}\) has \(2n+1\) vertices, this yields by a union bound,

\[P(\operatorname{A}(\mathbf{M}_{n,u})\neq\emptyset)\leqslant(2n+1)\sup_{k\geqslant n^{1/6}}P\left(\operatorname{diam}(B_{k}^{\text{map}})\geqslant k^{(1+\delta)/4}\right),\]

which is stretched-exponential as \(n\to\infty\) by Proposition 10, as announced. A similar use of Proposition 10 proves that the probabilities (27) are stretched-exponential as \(n\to\infty\).

### The supercritical and critical cases

#### 5.2.1 Statement of the result

For \(1<\theta\leqslant 2\), let us denote by \(\mathcal{T}^{(\theta)}\) a \(\theta\)-stable Levy tree equipped with its mass measure. There are several equivalent constructions of these objects. A common way is to define them _via_ excursions of \(\theta\)-stable Levy processes.
Namely, \(\mathcal{T}^{(\theta)}\) is the real tree encoded by the height process of an excursion of length one of a \(\theta\)-stable Levy process, see [10]. To fix a normalization for \(\mathcal{T}^{(\theta)}\), we consider in the construction an excursion obtained by a cyclic shift from a \(\theta\)-stable Levy bridge with Laplace exponent \(\lambda\mapsto\lambda^{\theta}\). Note that the measured metric space \(\mathcal{T}^{(2)}\) corresponds to \(\sqrt{2}\) times the _Brownian Continuum Random Tree_, which is encoded by an excursion of length \(1\) of the standard Brownian motion. The precise definition _via_ excursions is not important for our statement and one can take Proposition 14 below as an alternative definition.

**Theorem 5**.: _There exist positive constants \((\kappa_{u}^{\mathrm{map}},\kappa_{u}^{\mathrm{quad}})_{u\geqslant u_{C}}\) such that we have the following joint convergences in distribution, in the Gromov-Hausdorff-Prokhorov sense:_

1. _If_ \(u>u_{C}\)_, we have_ \[\frac{\sigma(u)}{\sqrt{2}}(2n)^{-1/2}\cdot\left(\underline{\mathbf{T}}_{n,u},\underline{\mathbf{M}}_{n,u}^{\mathrm{B}},\underline{\mathbf{Q}}_{n,u}\right)\xrightarrow[n\to\infty]{\mathrm{GHP},\,(d)}\left(\mathcal{T}^{(2)},\kappa_{u}^{\mathrm{map}}\cdot\mathcal{T}^{(2)},\kappa_{u}^{\mathrm{quad}}\cdot\mathcal{T}^{(2)}\right),\] _where we set_ \[\sigma(u)^{2}=1+\frac{4u\,\left(y(u)\right)^{2}\,B^{\prime\prime}\left(y(u)\right)}{uB\left(y(u)\right)+1-u}=\frac{3u-3+2\sqrt{u\left(u-1\right)}}{5u-9}. \tag{28}\]
2. _If_ \(u=u_{C}=9/5\)_, we have_ \[\frac{2}{3}(2n)^{-1/3}\cdot\left(\underline{\mathbf{T}}_{n,u_{C}},\underline{\mathbf{M}}_{n,u_{C}}^{\mathrm{B}},\underline{\mathbf{Q}}_{n,u_{C}}\right)\xrightarrow[n\to\infty]{\mathrm{GHP},\,(d)}\left(\mathcal{T}^{(3/2)},\kappa_{u_{C}}^{\mathrm{map}}\cdot\mathcal{T}^{(3/2)},\kappa_{u_{C}}^{\mathrm{quad}}\cdot\mathcal{T}^{(3/2)}\right).\]

_Additionally, the constants \((\kappa_{u}^{\mathrm{map}},\kappa_{u}^{\mathrm{quad}})_{u\geqslant u_{C}}\) can be expressed as follows:_

\[\kappa_{u}^{\mathrm{map}}=\sum_{j\geqslant 1}2j\mu^{u}(2j)\mathcal{D}_{j}^{\mathrm{map}}\qquad\text{ and }\qquad\kappa_{u}^{\mathrm{quad}}=\sum_{j\geqslant 1}2j\mu^{u}(2j)\mathcal{D}_{j}^{\mathrm{quad}},\]

_where \(\mathcal{D}_{j}^{\mathrm{map}}\) (resp. \(\mathcal{D}_{j}^{\mathrm{quad}}\)) is the expectation, in a uniform \(2\)-connected map with \(j\) edges (resp. simple quadrangulation with \(j\) faces), of the distance from the root vertex to the base vertex of a uniform corner (resp. to the closest endpoint of a uniform edge)._

**Remark 4**.: _We believe that the same statement holds with \(\underline{\mathbf{M}}\) instead of \(\underline{\mathbf{M}}^{\mathrm{B}}\); the choice of the latter is for convenience._

**Remark 5**.: _Let us explain how one gets the second equality of (28), which allows one to draw Fig. 13. From the proof of the convergence, one gets:_

\[\sigma(u)^{2}=1+\frac{4u\left(y(u)\right)^{2}B^{\prime\prime}(y(u))}{uB(y(u))+1-u}.\]

_Differentiating the algebraic equation (18) satisfied by \(B\) with respect to \(y\), and taking (16) as a fourth equation, gives a polynomial system from which one can get \(B(y(u))\) and \(B^{\prime\prime}(y(u))\) as functions of \(u\) only (using the resultant or a Gröbner basis algorithm)._
_Replacing these in the expression of \(\sigma(u)\) then allows one to conclude._

**Remark 6**.: _The quantity \(\kappa_{u}^{\mathrm{quad}}\) could in principle be obtained via the explicit formula obtained in [1] for the generating function \(g_{\ell}\) of simple edge-rooted quadrangulations with a distinguished edge at prescribed distance \(\ell\) from the root vertex._

#### 5.2.2 Discussion and overview of the proof

Let \(u\geqslant u_{C}\). Consider a geodesic in either \(\mathbf{M}_{n,u}\) or \(\mathbf{Q}_{n,u}\) between two distant blocks \(\mathfrak{b}\) and \(\widetilde{\mathfrak{b}}\), respectively indexed by \(v\) and \(\widetilde{v}\) in the block-tree. This geodesic must go through all the blocks whose index \(w\) in the block-tree \(\mathbf{T}_{n,u}\) is on the path from \(v\) to \(\widetilde{v}\), in the order induced by this path in the tree. We have seen in Proposition 7 that under the law of \(\mathbf{M}_{n,u}\) or \(\mathbf{Q}_{n,u}\), the blocks are independent conditionally on the block-tree, and when \(u\geqslant u_{C}\) they tend to all have non-macroscopic \(o(n)\) size by Theorems 3 and 4. One therefore expects that when \(n\) is large, the distance between two distant blocks \(\mathfrak{b}\) and \(\widetilde{\mathfrak{b}}\) falls into a _law of large numbers_ behaviour and is of the same order as \(d_{\mathbf{T}_{n,u}}(v,\widetilde{v})\).

Figure 13: Plot of \(\sigma\) as a function of \(u\). The vertical line corresponds to \(u=u_{C}\).

According to this heuristic, the macroscopic distances in \(\mathbf{M}_{n,u}\) and \(\mathbf{Q}_{n,u}\) should be concentrated around a deterministic scalar multiple of the distances in \(\mathbf{T}_{n,u}\). But \(\mathbf{T}_{n,u}\) is a critical Galton-Watson tree conditioned to have \(2n+1\) vertices, with explicit tail asymptotic given by Theorem 1 for its offspring distribution, yielding that its scaling limit is a stable tree. To make this heuristic work, one needs to understand the typical distribution of degrees on a typical path in the tree. It turns out that on a typical path in a size-\(n\) critical Galton-Watson tree, the degrees are asymptotically independent and identically distributed, with distribution the size-biased version of the offspring distribution. This will be obtained by a spine decomposition for trees, adapted to our context.

We bring the attention of the reader to the fact that a proof similar in spirit has been done for the Gromov-Hausdorff metric in the general abstract setup of enriched trees by Stufler [20, Theorem 6.60], and we could readily apply this result to deal with the case \(u>u_{C}\), modulo a technical complication regarding the additivity of distances in the quadrangulation case. When \(u=u_{C}\) however, the distances within blocks have fat tails, so we fall outside the scope of Stufler's result. To deal with this, our last technical ingredient is a suitable large deviation estimate: we show that after an adequate truncation of the variables depending on \(n\), large (and moderate) deviation events still have very small probability.

We now proceed with the proof.

#### 5.2.3 Additivity of the distances along consecutive blocks

In Lemmas 12 and 13, we justify that a macroscopic distance is indeed a sum of distances on "in-between" blocks, in the case of blocks lying on the same branch in the block-tree.
The map case. For \(\mathfrak{b}\) a \(2\)-connected map, and \(l\) an integer in \(\{1,\dots,2|\mathfrak{b}|\}\), let us denote by \(D(\mathfrak{b},l)\) the graph distance in \(\mathfrak{b}\) between its root vertex and the vertex on which lies the \(l\)-th corner of \(\mathfrak{b}\) in breadth-first order (or in whatever arbitrary ordering rule is chosen in the block-tree decomposition, see Section 2.3).

Fix a vertex \(x\) of \(\mathfrak{m}\). Let \(c_{\star}\) be any corner incident to \(x\), and \(v_{\star}\) be the vertex corresponding to \(c_{\star}\) in the block-tree \(\mathfrak{t}\). Denote by \(h_{\star}:=h_{\mathfrak{t}}(v_{\star})\), and \((v_{i})_{0\leqslant i\leqslant h_{\star}}\) the ancestor line of \(v_{\star}\) in \(\mathfrak{t}\), with \(v_{0}\) the root and \(v_{h_{\star}}=v_{\star}\). For \(0\leqslant i\leqslant h_{\star}\), set \(x_{i}\) the root vertex of \(\mathfrak{b}_{v_{i}}^{\mathfrak{m}}\), so that in particular \(x=x_{h_{\star}}\). Finally, let \((l_{i})_{0\leqslant i<h_{\star}}\) be the respective breadth-first index of the corner in \(\mathfrak{b}_{v_{i}}^{\mathfrak{m}}\) in which the block \(\mathfrak{b}_{v_{i+1}}^{\mathfrak{m}}\) is attached. The situation is illustrated on Fig. 14.

Figure 14: Situations of Lemmas 12 and 13.

**Lemma 12**.: _For \(0\leqslant i\leqslant h_{\star}\), we have_

\[d_{\mathfrak{m}}(x,x_{i})=\sum_{i\leqslant j<h_{\star}}D(\mathfrak{b}_{v_{j}}^{\mathfrak{m}},l_{j}).\]

Proof.: By definition, \(D(\mathfrak{b}_{v_{j}}^{\mathfrak{m}},l_{j})=d_{\mathfrak{m}}(x_{j},x_{j+1})\). Recalling that \(x_{h_{\star}}=x\), we get by the triangle inequality that the left-hand side is at most the right-hand side. Therefore it suffices to show that any geodesic path in \(\mathfrak{m}\) from \(x=x_{h_{\star}}\) to \(x_{i}\) visits each of the points \((x_{j})_{i<j<h_{\star}}\), in decreasing order of \(j\).

Let \(j\) be such that \(i<j<h_{\star}\). Denote by \(\mathfrak{t}_{j}\) the tree of descendants of \(v_{j}\) in \(\mathfrak{t}\), and by \(\mathfrak{m}_{j}\) and \(\widetilde{\mathfrak{m}}_{j}\) the submaps of \(\mathfrak{m}\) made of the blocks \((\mathfrak{b}_{v}^{\mathfrak{m}})_{v\in\mathfrak{t}_{j}}\) and \((\mathfrak{b}_{v}^{\mathfrak{m}})_{v\in\mathfrak{t}\setminus\mathfrak{t}_{j}}\) respectively. By the recursive description of the block-tree, the submaps \(\mathfrak{m}_{j}\) and \(\widetilde{\mathfrak{m}}_{j}\) share only the vertex \(x_{j}\). But \(x\) is a vertex of \(\mathfrak{m}_{j}\) since \(v_{\star}\) is a descendant of \(v_{j}\), and \(x_{i}\) is a vertex of \(\widetilde{\mathfrak{m}}_{j}\) since \(v_{i}\) is an ancestor of \(v_{j}\). Hence any injective path between \(x\) and \(x_{i}\) must visit the \(x_{j}\), in decreasing order of \(j\); in particular, so does any geodesic path. Notice that this does not require the \((x_{j})\) to be mutually distinct. This concludes the proof.

The quadrangulation case. A slight complication arises for quadrangulations because the "interface" between two blocks is a double edge, containing two vertices instead of a single vertex in the map case. At first sight it is thus unclear through which of these vertices a geodesic should go. We show that there is a canonical choice: the vertex, among these two, which is closest to the root vertex. This relies crucially on the fact that quadrangulations are bipartite.

Fix a quadrangulation \(\mathfrak{q}\).
For \(\mathfrak{b}\) a simple block of \(\mathfrak{q}\), and \(l\) an integer in \(\{1,\dots,2|\mathfrak{b}|\}\), let us denote by \(D_{\mathfrak{q}}(\mathfrak{b},l)\) the graph distance in \(\mathfrak{b}\) between the endpoint of the root edge of \(\mathfrak{b}\) which is closest to the root vertex of \(\mathfrak{q}\), and the nearer of the two endpoints of the \(l\)-th edge of \(\mathfrak{b}\), in the ordering described below. The order on the edges of \(\mathfrak{b}\) that we use is the image of the lexicographic order on vertices of \(\mathfrak{t}\) _via_ the block-tree decomposition. This is consistent with the ordering of corners in the map case.

Fix a vertex \(x\) of \(\mathfrak{q}\), let \(e_{\star}\) be any edge incident to \(x\), and let \(v_{\star}\) be the vertex corresponding to \(e_{\star}\) in the block-tree \(\mathfrak{t}\). Define the height \(h_{\star}\) and the ancestor line \((v_{i})_{0\leqslant i\leqslant h_{\star}}\) as in the map case. Let also \((x_{i})_{0\leqslant i\leqslant h_{\star}}\) be the respective root vertices of the \(\mathfrak{b}_{v_{i}}\). In particular \(x_{h_{\star}}\) is either \(x\) or the other endpoint of \(e_{\star}\). Finally, let \((l_{i})_{1\leqslant i\leqslant h_{\star}}\) be the respective breadth-first index of the edge in \(\mathfrak{b}_{v_{i-1}}^{\mathfrak{q}}\) to which the root edge of \(\mathfrak{b}_{v_{i}}^{\mathfrak{q}}\) is attached. The situation is illustrated on Fig. 14.

**Lemma 13**.: _For all \(0\leqslant i\leqslant h_{\star}\), there exists \(\delta_{x,x_{i}}\in\{0,\pm 1,\pm 2\}\) such that_

\[d_{\mathfrak{q}}(x,x_{i})=\delta_{x,x_{i}}+\sum_{i\leqslant j<h_{\star}}D_{\mathfrak{q}}(\mathfrak{b}_{v_{j}}^{\mathfrak{q}},l_{j}).\]

Proof.: The idea is similar in principle to that of the preceding lemma, except that consecutive blocks share two vertices in the quadrangulation case, instead of one. For \(0\leqslant j\leqslant h_{\star}\), denote by \(y_{j}\) the endpoint of the root edge of \(\mathfrak{b}_{v_{j}}^{\mathfrak{q}}\) which is closest to the root vertex of \(\mathfrak{q}\). In particular \(y_{0}\) is the root vertex of \(\mathfrak{q}\).

Let \(0\leqslant i\leqslant h_{\star}\). Then by construction, \(y_{i}\) and \(x_{i}\) are adjacent to the root edge of \(\mathfrak{b}^{\mathfrak{q}}_{v_{i}}\), and similarly \(y:=y_{h_{\star}}\) and \(x\) are adjacent to the root edge of \(\mathfrak{b}^{\mathfrak{q}}_{v_{h_{\star}}}\). Therefore, there exists some \(\delta_{x,x_{i}}\in\{0,\pm 1,\pm 2\}\) such that

\[d_{\mathfrak{q}}(x,x_{i})=\delta_{x,x_{i}}+d_{\mathfrak{q}}(y,y_{i}).\]

We shall prove the following identities, which are sufficient to conclude:

1. for \(0\leqslant i\leqslant h_{\star}\), the identity \(d_{\mathfrak{q}}(y,y_{i})=\sum_{i\leqslant j<h_{\star}}d_{\mathfrak{q}}(y_{j+1},y_{j})\),
2. for \(0\leqslant j<h_{\star}\), the identity \(d_{\mathfrak{q}}(y_{j+1},y_{j})=D_{\mathfrak{q}}(\mathfrak{b}^{\mathfrak{q}}_{v_{j}},l_{j})\).

Let us argue that it is sufficient to show the following:

\[\forall 0\leqslant i\leqslant j\leqslant k\leqslant h_{\star},\quad d_{\mathfrak{q}}(y_{i},y_{k})=d_{\mathfrak{q}}(y_{i},y_{j})+d_{\mathfrak{q}}(y_{j},y_{k}). \tag{29}\]

Assuming this is true, we directly get, by applying it iteratively, that \(d_{\mathfrak{q}}(y,y_{i})=\sum_{i\leqslant j<h_{\star}}d_{\mathfrak{q}}(y_{j+1},y_{j})\). To verify the second set of identities, recall that \(y_{j}\) is defined as the endpoint of the root edge of \(\mathfrak{b}^{\mathfrak{q}}_{v_{j}}\) which is closest to the root vertex \(y_{0}\) of \(\mathfrak{q}\).
Denote by \(y^{\prime}_{j}\) the other endpoint. Then, for \(0\leqslant j<h_{\star}\) we have

\[D_{\mathfrak{q}}(\mathfrak{b}^{\mathfrak{q}}_{v_{j}},l_{j})=\min\left(d_{\mathfrak{b}^{\mathfrak{q}}_{v_{j}}}(y_{j+1},y_{j}),d_{\mathfrak{b}^{\mathfrak{q}}_{v_{j}}}(y^{\prime}_{j+1},y_{j})\right)=\min\left(d_{\mathfrak{q}}(y_{j+1},y_{j}),d_{\mathfrak{q}}(y^{\prime}_{j+1},y_{j})\right).\]

The first equality is equivalent to the definition of \(D_{\mathfrak{q}}(\mathfrak{b}^{\mathfrak{q}}_{v_{j}},l_{j})\). The second one comes from the fact that, within a block \(\mathfrak{b}\) of \(\mathfrak{q}\), the graph distance relative to \(\mathfrak{q}\) and the graph distance relative to \(\mathfrak{b}\) coincide. But assuming (29), it holds that

\[d_{\mathfrak{q}}(y_{j+1},y_{j})=d_{\mathfrak{q}}(y_{j+1},y_{0})-d_{\mathfrak{q}}(y_{j},y_{0})\leqslant d_{\mathfrak{q}}(y^{\prime}_{j+1},y_{0})-d_{\mathfrak{q}}(y_{j},y_{0})\leqslant d_{\mathfrak{q}}(y^{\prime}_{j+1},y_{j}),\]

where the first inequality comes from the definition of \(y_{j+1}\), and the second inequality from the triangle inequality. In particular, the above minimum is \(d_{\mathfrak{q}}(y_{j+1},y_{j})\) and we have, as needed, \(d_{\mathfrak{q}}(y_{j+1},y_{j})=D_{\mathfrak{q}}(\mathfrak{b}^{\mathfrak{q}}_{v_{j}},l_{j})\).

We still have to prove (29). Let us first prove the case \(i=0\) and then deduce the general case. Let \(0\leqslant j\leqslant k\leqslant h_{\star}\), and let \(\gamma\) be a geodesic path from \(y_{0}\) to \(y_{k}\). If \(\gamma\) visits \(y_{j}\), we readily have

\[d_{\mathfrak{q}}(y_{0},y_{k})=d_{\mathfrak{q}}(y_{0},y_{j})+d_{\mathfrak{q}}(y_{j},y_{k}). \tag{30}\]

Otherwise it visits \(y^{\prime}_{j}\); denote by \(\gamma_{1}\), \(\gamma_{2}\) the portions of \(\gamma\) from \(y_{0}\) to \(y^{\prime}_{j}\), and from \(y^{\prime}_{j}\) to \(y_{k}\) respectively. By definition of \(y_{j}\), we have \(d_{\mathfrak{q}}(y_{0},y_{j})\leqslant d_{\mathfrak{q}}(y_{0},y^{\prime}_{j})\). But since \(\mathfrak{q}\) is a quadrangulation, it is bipartite and the inequality is strict: \(d_{\mathfrak{q}}(y_{0},y_{j})<d_{\mathfrak{q}}(y_{0},y^{\prime}_{j})\). Form \(\widetilde{\gamma}_{1}\) as the concatenation of a geodesic path from \(y_{0}\) to \(y_{j}\) and of the oriented edge \((y_{j},y^{\prime}_{j})\). Then, from the strict inequality we mentioned, \(\operatorname{len}(\widetilde{\gamma}_{1})\leqslant\operatorname{len}(\gamma_{1})\), and in particular the concatenation of \(\widetilde{\gamma}_{1}\) and \(\gamma_{2}\) is a geodesic path from \(y_{0}\) to \(y_{k}\) which visits \(y_{j}\). Therefore we also have the identity (30) in this case.

Finally, let us deduce the case \(i\neq 0\). Let \(0\leqslant i\leqslant j\leqslant k\leqslant h_{\star}\). We have

\[d_{\mathfrak{q}}(y_{i},y_{k})=d_{\mathfrak{q}}(y_{0},y_{k})-d_{\mathfrak{q}}(y_{0},y_{i})=(d_{\mathfrak{q}}(y_{0},y_{k})-d_{\mathfrak{q}}(y_{0},y_{j}))+(d_{\mathfrak{q}}(y_{0},y_{j})-d_{\mathfrak{q}}(y_{0},y_{i}))=d_{\mathfrak{q}}(y_{j},y_{k})+d_{\mathfrak{q}}(y_{i},y_{j}).\]

This proves (29) and concludes the proof of the lemma.

#### 5.2.4 Scaling limit and largest degree of critical Galton-Watson trees

A slight technical complication that arises in our setting is that the block-tree has a _lattice_ offspring distribution with span \(2\), in the sense of the following definition.

**Definition 9**.: _A measure \(\mu\) on \(\mathbb{Z}\) is called lattice if its support is included in a subset \(b+d\mathbb{Z}\) of \(\mathbb{Z}\), for some \(d\geqslant 2\)._
The largest such \(d\) is called its span. If \(d=1\), \(\mu\) is called non-lattice._ The results that we need [13, Theorem 3] are stated for non-lattice offspring distributions. This turns out to be purely for convenience and we state the following more general result that is suited to our needs. We recall that a probability distribution \(\mu\) with mean \(m_{\mu}\) is said to be in the domain of attraction of a stable law of index \(\theta\in(1,2]\) if there exist positive constants \((C_{n})_{n\geqslant 0}\) such that we have the following convergence in distribution \[\frac{U_{1}+\dots+U_{n}-nm_{\mu}}{C_{n}}\xrightarrow[n\to\infty]{(d)}X^{( \theta)}, \tag{31}\] where \((U_{1},\dots U_{n})\) are i.i.d. samples of the law \(\mu\), and \(X^{(\theta)}\) is a random variable with Laplace transform \(E\left[\exp(-\lambda X^{(\theta)})\right]=\exp(\lambda^{\theta})\). **Proposition 14**.: _For all \(1<\theta\leqslant 2\), there exists a random measured metric space \(\mathcal{T}^{(\theta)}=\left(\mathcal{T}^{(\theta)},d^{(\theta)},\nu^{( \theta)}\right)\) satisfying the following scaling limit result._ _Let \(\mu\) be a probability distribution on \(\mathbb{Z}_{\geqslant 0}\), with \(\mu(1)\neq 1\), and which is assumed to be critical. Assume additionally that it is in the domain of attraction of a stable law of index \(\theta\in(1,2]\). Let \(d\geqslant 1\) be the span of the measure \(\mu\). Then under those assumptions, we have_ 1. _For all_ \(m\) _large enough, the_ \(\operatorname{GW}_{\mu}(\mathrm{d}T)\)_-probability that_ \(T\) _has_ \(dm\) _edges is positive. This probability is equivalent to_ \(c_{\theta}/(C_{dm}dm)\) _for some constant_ \(c_{\theta}>0\)_._ 2. _If we denote by_ \(T_{n}\) \(a\) \(\operatorname{GW}_{\mu}\)_-tree conditioned to have_ \(n\) _edges, then_ \[\left(\frac{C_{dm}}{dm}\right)\cdot\underline{T}_{dm}\xrightarrow[m\to\infty]{ (d)}\mathcal{T}^{(\theta)},\] _in the Gromov-Hausdorff-Prokhorov sense, with_ \((C_{n})_{n\geqslant 0}\) _the sequence in (_31_)._ 3. _The largest degree in_ \(T_{dm}\) _is of order at most_ \(C_{dm}\)_, in the sense that for any_ \(\varepsilon>0\)__ \[P\left(\exists v\in T_{dm},k_{v}(T_{dm})\geqslant(C_{dm})^{1+\varepsilon} \right)\xrightarrow[m\to\infty]{}0.\] Proof.: The first statement can be obtained by a straightforward adaptation of the proof of [13, Lemma 1], which relies on a local limit theorem and the cycle lemma. We specify below how this local limit theorem should be adapted. The cycle lemma adapts straightforwardly. For the second statement, let us justify that [13, Theorem 3] still applies when the _non-lattice_ (or _aperiodic_) assumption is dropped, but with the number of vertices \(n+1\) taken only along the subsequence \((dm+1)_{m\geqslant 0}\). This will prove functional convergence of the contour functions of the trees \((T_{dm})_{m}\) when properly rescaled, to the contour function of \(\mathcal{T}^{(\theta)}\). This convergence of contour functions is sufficient to get the announced Gromov-Hausdorff-Prokhorov convergence. The local limit theorem [13, Theorem 2, (ii)] changes as follows \[\lim_{n\to\infty}\sup_{k\in\mathbb{Z}}\left|\frac{a_{n}}{d}P\left(Y_{n}=k\right)- p_{1}\left(\frac{k}{a_{n}}\right)\right|=0.\] See for instance [11, Theorem 4.2.1]. Notice that the only difference with the _non-lattice_\((d=1)\) local limit theorem is the factor \(1/d\) in the last display. 
Examining the details of Kortchemski's arguments, the extra \(1/d\) factor in the local limit theorem above would appear only in the discrete absolute continuity relations which are used in the proof. But in each instance, it would appear in both the numerator and the denominator of some fraction. Hence the fraction simplifies and this factor has no impact on the proof, which carries over without change, except that the integer \(n\), which in the paper is the number of _vertices_, should now only be taken in \(d\mathbb{Z}+1\). Finally, in order to get the third statement, one can take as a basis the local limit theorem above. From this, one can get the functional convergence of the Łukasiewicz path of \(T_{dm}\), when it is rescaled by \(dm\) in time and \(C_{dm}\) in space. In particular, \((C_{dm})^{-1}\) times the largest degree in \(T_{dm}\) is tight, and one obtains the claimed probabilistic bound. One could for instance use the same arguments as in the proof of [13, Proposition 3.4]. Corollary 15 then just identifies the explicit scaling constants in specific instances of the above-mentioned scaling limit theorem. **Corollary 15**.: _Let \(\mu\) be a critical probability distribution on \(\mathbb{Z}_{\geqslant 0}\) with span \(d\geqslant 1\), and with \(\mu(1)\neq 1\). Denote by \(T_{n}\) a \(\operatorname{GW}_{\mu}\)-tree conditioned to have \(n\) edges, for \(n\in d\mathbb{Z}\) large enough. Then the following holds._ 1. _If_ \(\mu\) _has finite variance_ \(\sigma^{2}\)_, then_ \(P(|T|=dm)\sim cm^{-3/2}\) _for some constant_ \(c>0\)_, and_ \[(dm)^{-1/2}\cdot\underline{T}_{dm}\xrightarrow[m\to\infty]{\mathrm{GHP},\,(d)}\frac{\sqrt{2}}{\sigma}\cdot\mathcal{T}^{(2)}.\] _Additionally for all_ \(\varepsilon>0\) _the largest degree of_ \(T_{dm}\) _is_ \(o(m^{1/2+\varepsilon})\) _in probability._ 2. _If_ \(\mu\left([x,+\infty)\right)\underset{x\to\infty}{\sim}cx^{-\theta}\) _for some_ \(c>0\) _and_ \(\theta\in(1,2)\)_, then_ \(P(|T|=dm)\sim c_{\theta}^{\prime}m^{-(1+1/\theta)}\) _for some constant_ \(c_{\theta}^{\prime}>0\)_, and_ \[(dm)^{-(1-1/\theta)}\cdot\underline{T}_{dm}\xrightarrow[m\to\infty]{\mathrm{GHP},\,(d)}\left[\frac{\theta-1}{c\,\Gamma(2-\theta)}\right]^{1/\theta}\cdot\mathcal{T}^{(\theta)}.\] _Additionally for all_ \(\varepsilon>0\) _the largest degree of_ \(T_{dm}\) _is_ \(o(m^{1/\theta+\varepsilon})\) _in probability._ Proof.: Note that in the case where \(\mu\) has exponential moments, [10] treats the case of lattice distributions. That would suffice for our applications when \(u>u_{C}\). We still need the second statement to treat the case \(u=u_{C}\). Let us apply the preceding proposition and identify the right constants, in these two cases. **Statement 1.** If \(\mu\) has finite variance \(\sigma^{2}\), then by the Central Limit Theorem, for i.i.d. samples \((U_{i})_{i}\) of the law \(\mu\), we have the convergence in distribution \[\frac{U_{1}+\dots+U_{n}-n}{\sigma\cdot n^{1/2}}\xrightarrow[n\to\infty]{(d)}\mathcal{G},\] where \(\mathcal{G}\) is a standard normal variable. In particular, \(\mathcal{G}\) has the same law as \(\frac{1}{\sqrt{2}}\cdot X^{(2)}\). Therefore, the hypotheses of Proposition 14 are satisfied, with \[C_{n}=\tfrac{\sigma}{\sqrt{2}}\cdot n^{1/2},\] and the conclusion follows from this proposition. **Statement 2.** We consider the case where \(\mu\left([x,+\infty)\right)\underset{x\to\infty}{\sim}cx^{-\theta}\) with \(\theta\in(1,2)\) and \(c>0\).
Let \(U:=U_{1}\) and let us also introduce the notations \[M_{1}(x) =\int_{x}^{\infty}\mu(\mathrm{d}y)=\mu\left([x,+\infty)\right),\] \[M_{2}(x) =\int_{x}^{\infty}M_{1}(y)\,\mathrm{d}y,\] \[M_{3}(x) =\int_{0}^{x}M_{2}(y)\,\mathrm{d}y.\] The function \(M_{3}\) is non-decreasing and, using the assumed tail asymptotic of \(\mu\), one has the asymptotic \(M_{3}(x)\sim cx^{2-\theta}/\big((2-\theta)(\theta-1)\big)\). We may therefore use the Karamata Tauberian theorem [1, Theorem 1.7.1] to get \[\widehat{M}_{3}(h)\sim\frac{c\,\Gamma(3-\theta)}{(2-\theta)(\theta-1)}h^{\theta-2}=\frac{c\,\Gamma(2-\theta)}{\theta-1}h^{\theta-2},\] where \(\widehat{M}_{3}\) is the Laplace–Stieltjes transform of \(M_{3}\) [1, Paragraph 1.7.0b]. Then, if we integrate by parts three times, we obtain \[E\left[\exp(-h(U-1))\right]=\int_{0}^{\infty}\mathrm{e}^{-h(x-1)}\,\mu(\mathrm{d}x) =\mathrm{e}^{h}-h\mathrm{e}^{h}M_{2}(0)+h^{3}\mathrm{e}^{h}\int_{0}^{\infty}\mathrm{e}^{-hx}M_{3}(x)\,\mathrm{d}x =\mathrm{e}^{h}-h\mathrm{e}^{h}M_{2}(0)+h^{2}\mathrm{e}^{h}\,\widehat{M}_{3}(h).\] This, together with the fact that \(M_{2}(0)=1\) since it is the expectation of \(\mu\), yields the following expansion when \(h\to 0^{+}\), \[E\left[\exp(-h(U-1))\right]=1+\frac{c\,\Gamma(2-\theta)}{\theta-1}\cdot h^{\theta}\left(1+o(1)\right). \tag{32}\] Now, if we set \[C_{n}=\left(\frac{c\,\Gamma(2-\theta)}{\theta-1}\right)^{1/\theta}n^{1/\theta},\] and plug \(h=\lambda/C_{n}\) into (32), we get for all \(\lambda\geqslant 0\), \[E\left[\exp\left(-\lambda\,\frac{U_{1}+\dots+U_{n}-n}{C_{n}}\right)\right]=\left(E\left[\exp(-\tfrac{\lambda}{C_{n}}(U-1))\right]\right)^{n}\xrightarrow[n\to\infty]{}\exp\left(\lambda^{\theta}\right).\] Hence there is convergence in distribution of \(\frac{U_{1}+\dots+U_{n}-n}{C_{n}}\) to \(X^{(\theta)}\), as required in Proposition 14. So this proposition applies with the above-chosen sequence \((C_{n})_{n}\), and the conclusion follows.

#### 5.2.5 The spine decomposition and size-biased laws

In this section we present a _size-biasing_ relation for the block-tree, in the sense of [13]. Actually, we extend this size-biasing relation in a straightforward way to our setting, where we have a Galton-Watson tree and some decorations, namely the blocks. More precisely, consider the following measure on maps with a distinguished corner \((\mathfrak{m},c_{\star})\) \[\mathbb{P}_{u}(\mathrm{d}\mathbf{M})\sum_{\text{$c$ corner of $\mathbf{M}$}}\delta_{c}(\mathrm{d}B_{\star}),\] where \(\delta_{c}\) is the Dirac measure \(A\mapsto\delta_{c}(A)=\mathbb{1}_{\{c\in A\}}\). Then this \(\sigma\)-finite measure can be decomposed as a sum of probability measures \(\sum_{h\geqslant 1}\widehat{\mathbb{P}}_{u,h}(\mathrm{d}\mathbf{M},\mathrm{d}B_{\star})\), where under \(\widehat{\mathbb{P}}_{u,h}\) there are exactly \(h\) blocks on the path from the root corner to the distinguished corner, those blocks having a _size-biased_ law as defined below. The present section makes this precise. **Description of \(\widehat{\mathbb{P}}_{u,h}\).** **Definition 10**.: _Let \(\nu\) be a probability distribution on \(\mathbb{Z}_{\geqslant 0}\) with finite expectation \(m_{\nu}\).
Then the size-biased distribution \(\widehat{\nu}\) is defined by_ \[\forall k\in\mathbb{Z}_{\geqslant 0},\quad\widehat{\nu}(k)=\frac{k\,\nu(k)}{m_{\nu}}.\] When \(\nu\) is a (sub-)critical offspring distribution with \(\nu(0),\nu(1)\neq 0\), denote by \(\left(\widehat{\mathrm{GW}}_{\nu,h}\right)_{h\geqslant 0}\) the following family of laws, on the sets of discrete trees with a distinguished vertex at height \(h\) respectively. It may be described algorithmically:

* Each vertex is either mutant or normal, and the numbers of offspring are sampled independently from each other;
* Normal vertices have only normal children, whose number is sampled according to \(\nu\);
* Mutant vertices of height less than \(h\) have a number of children sampled according to the size-biased distribution \(\widehat{\nu}\), all of which are normal except one, chosen uniformly, which is mutant;
* The only mutant vertex at height \(h\) reproduces like a normal vertex and is the distinguished vertex \(V_{\star}\).

This yields a pair \((T,V_{\star})\), where \(T\) is a discrete tree and \(V_{\star}\) is a distinguished vertex of \(T\) with height \(h\). We denote by \((V_{i})_{0\leqslant i\leqslant h_{\mathbf{T}}(V_{\star})}\) the ancestor line of \(V_{\star}\), and by \(L_{i}\) the order of \(V_{i+1}\) among the children of \(V_{i}\). Observe that the construction gives that \((k_{V_{i}}(\mathbf{T}))_{i}\) are i.i.d. with law \(\widehat{\nu}\), and that conditionally on those variables, the variables \((L_{i})_{i}\) are independent with uniform law on \(\{1,\ldots,k_{V_{i}}(\mathbf{T})\}\) respectively. We may now define the family of probability measures \((\widehat{\mathbb{P}}_{u,h})_{h\geqslant 0}\) as follows. Let \(h\geqslant 0\).

* Sample \((\mathbf{T},V_{\star})\) according to the law \(\widehat{\mathrm{GW}}_{\mu^{u},h}\).
* For each \(v\in\mathbf{T}\), sample independently and uniformly a \(2\)-connected map \(\mathfrak{b}_{v}^{\mathbf{M}}\) with \(k_{v}(\mathbf{T})/2\) edges.
* Build the map \(\mathbf{M}\) whose block decomposition is \((\mathfrak{b}_{v}^{\mathbf{M}})_{v\in\mathbf{T}}\), and \(\mathbf{Q}=\varphi(\mathbf{M})\) its image by Tutte's bijection.
* Denote by \(B_{\star}\) the corner of \(\mathbf{M}\), resp. \(E_{\star}\) the edge of \(\mathbf{Q}\), that the block-tree decomposition associates to the vertex \(V_{\star}\) of \(\mathbf{T}\).

We are now equipped to state the size-biasing relation. **Proposition 16**.: _For \(u\geqslant u_{C}\), the \(\sigma\)-finite measure \(\mathbb{P}_{u}(\mathrm{d}\mathbf{M})\sum_{c\text{ corner of }\mathbf{M}}\delta_{c}(\mathrm{d}B_{\star})\) on maps with a distinguished corner decomposes as the following sum of probability measures,_ \[\mathbb{P}_{u}(\mathrm{d}\mathbf{M})\sum_{c\text{ corner of }\mathbf{M}}\delta_{c}(\mathrm{d}B_{\star})=\sum_{h\geqslant 0}\widehat{\mathbb{P}}_{u,h}(\mathrm{d}\mathbf{M},\mathrm{d}B_{\star}).\] Proof.: The standard size-biasing relation for (sub-)critical Galton-Watson trees reads \[\mathrm{GW}_{\nu}(\mathrm{d}\mathfrak{t})\sum_{v\in\mathfrak{t}}\delta_{v}(\mathrm{d}v_{\star})=\sum_{h\geqslant 0}(m_{\nu})^{h}\cdot\delta_{h}(h_{\mathfrak{t}}(v_{\star}))\cdot\widehat{\mathrm{GW}}_{\nu,h}(\mathrm{d}\mathfrak{t},\mathrm{d}v_{\star}).\] When \(u\geqslant u_{C}\), the offspring distribution \(\mu^{u}\) is critical, so \(m_{\mu^{u}}=1\).
Specializing the last display to \(\nu=\mu^{u}\) and to the value of \((\mathfrak{t},v_{\star})\) associated to some map with a distinguished corner \((\mathfrak{m},c_{\star})\), this gives for all such \((\mathfrak{m},c_{\star})\), \[\mathrm{GW}_{\mu^{u}}(\mathfrak{t})=\sum_{h\geqslant 0}\delta_{h}(h_{\mathfrak{t}}(v_{\star}))\widehat{\mathrm{GW}}_{\mu^{u},h}(\mathfrak{t},v_{\star}).\] Therefore, if we multiply both sides by \(\prod_{v\in\mathfrak{t}}\frac{1}{b_{k_{v}(\mathfrak{t})/2}}\), we get the following by Proposition 7: \[\mathbb{P}_{u}(\mathfrak{m})=\sum_{h\geqslant 0}\delta_{h}(h_{\mathfrak{t}}(v_{\star}))\cdot\widehat{\mathrm{GW}}_{\mu^{u},h}(\mathfrak{t},v_{\star})\cdot\prod_{v\in\mathfrak{t}}\frac{1}{b_{k_{v}(\mathfrak{t})/2}}=\sum_{h\geqslant 0}\widehat{\mathbb{P}}_{u,h}(\mathfrak{m},c_{\star}).\] Since \(\sum_{c\text{ corner of }\mathfrak{m}}\delta_{c}(c_{\star})=1\), the last display expresses the measure \(\mathbb{P}_{u}(\mathrm{d}\mathbf{M})\sum_{c\text{ corner of }\mathbf{M}}\delta_{c}(\mathrm{d}B_{\star})\) as a sum of the probability measures \((\widehat{\mathbb{P}}_{u,h})_{h\geqslant 0}\). **Probabilistic properties of \(\widehat{\mathbb{P}}_{u,h}\).** Since we need metric information on blocks whose size follows the size-biased law \(\widehat{\mu}^{u}\), let us introduce adequate notations. Let \(u\geqslant u_{C}\). Denote by \(\widehat{\xi}_{u}\) a sample of the distribution \(\widehat{\mu}^{u}\) on some probability space \((\Omega,P)\). Then jointly define the random variables \(\widehat{\mathbf{B}}_{u}^{\mathrm{map}}\) and \(\widehat{\mathbf{B}}_{u}^{\mathrm{quad}}\) as sampled uniformly among the respective blocks with size \(\widehat{\xi}_{u}/2\), in such a way that they are linked by Tutte's bijection, i.e. their joint law satisfies \[\left(\widehat{\mathbf{B}}_{u}^{\mathrm{map}},\widehat{\mathbf{B}}_{u}^{\mathrm{quad}}\right)\stackrel{{(d)}}{{=}}\left(B_{\widehat{\xi}_{u}/2}^{\mathrm{map}},B_{\widehat{\xi}_{u}/2}^{\mathrm{quad}}\right).\] Furthermore, conditionally on \(\widehat{\xi}_{u}\), sample independently \(U\) a uniform label in \(\{1,\ldots,\widehat{\xi}_{u}\}\). This yields the following 4-tuple \[\big{(}\widehat{\xi}_{u}\,,\,\widehat{\mathbf{B}}_{u}^{\mathrm{map}}\,,\,\widehat{\mathbf{B}}_{u}^{\mathrm{quad}}\,,\,U\big{)}.\] **Lemma 17**.: _For all \(h\geqslant 1\), we have the identity in law_ \[\mathrm{Law}\left(\left(k_{V_{i}}(\mathbf{T})\,,\,\mathfrak{b}_{V_{i}}^{\mathbf{M}}\,,\,\mathfrak{b}_{V_{i}}^{\mathbf{Q}}\,,\,L_{i}\right)_{0\leqslant i<h};\,\widehat{\mathbb{P}}_{u,h}\right)=\left[\mathrm{Law}\Big{(}\big{(}\widehat{\xi}_{u}\,,\,\widehat{\mathbf{B}}_{u}^{\mathrm{map}}\,,\,\widehat{\mathbf{B}}_{u}^{\mathrm{quad}}\,,\,U\big{)}\,;\,P\Big{)}\right]^{\otimes h}\] _where \(\mathrm{Law}(X;Q)\) is the law of \(X\) under \(Q\)._ Proof.: Recall that under \(\widehat{\mathbb{P}}_{u,h}\), the pair \((\mathbf{T},V_{\star})\) has law \(\widehat{\mathrm{GW}}_{\mu^{u},h}\). By definition of the law \(\widehat{\mathrm{GW}}_{\mu^{u},h}\), the ancestor line of the distinguished vertex \(V_{\star}\) in \(\mathbf{T}\) is made of mutant vertices. This means that the family \((k_{V_{i}}(\mathbf{T}))_{0\leqslant i<h}\) is i.i.d. sampled from the size-biased distribution \(\widehat{\mu}^{u}\), which is the law of \(\widehat{\xi}_{u}\), and that independently of each other, each \(V_{i+1}\) has uniform rank \(L_{i}\) among the \(k_{V_{i}}(\mathbf{T})\) children of \(V_{i}\).
Hence we have the identity in law \[\mathrm{Law}\left(\left(k_{V_{i}}(\mathbf{T})\,,\,L_{i}\right)_{0\leqslant i<h};\,\widehat{\mathbb{P}}_{u,h}\right)=\left[\mathrm{Law}\big{(}\widehat{\xi}_{u}\,,\,U\big{)}\right]^{\otimes h}.\] Now under \(\widehat{\mathbb{P}}_{u,h}\) the conditional law of the blocks \((\mathfrak{b}_{v}^{\mathbf{M}})_{v\in\mathbf{T}}\) with respect to \(\mathbf{T}\) is that of independent blocks, sampled uniformly from blocks with size \((k_{v}(\mathbf{T})/2)_{v\in\mathbf{T}}\) respectively. In particular, the blocks \((\mathfrak{b}_{V_{i}}^{\mathbf{M}})_{0\leqslant i<h}\) are sampled independently, uniformly from blocks with size \((k_{V_{i}}(\mathbf{T})/2)_{0\leqslant i<h}\) respectively. Therefore the preceding identity in law extends to the following one \[\mathrm{Law}\left(\left(k_{V_{i}}(\mathbf{T})\,,\,\mathfrak{b}_{V_{i}}^{\mathbf{M}}\,,\,L_{i}\right)_{0\leqslant i<h};\,\widehat{\mathbb{P}}_{u,h}\right)=\left[\mathrm{Law}\big{(}\widehat{\xi}_{u}\,,\,\widehat{\mathbf{B}}_{u}^{\mathrm{map}}\,,\,U\big{)}\right]^{\otimes h}.\] Finally, recall from Proposition 5 that \(\mathfrak{b}_{V_{i}}^{\mathbf{Q}}\) is the image of \(\mathfrak{b}_{V_{i}}^{\mathbf{M}}\) by Tutte's bijection. Since by definition \(\widehat{\mathbf{B}}_{u}^{\mathrm{quad}}\) is also the image of \(\widehat{\mathbf{B}}_{u}^{\mathrm{map}}\) by this bijection, the identity in law extends to the one stated in the lemma. We get in particular from Lemma 17 that the variables \(\big{(}D(\mathfrak{b}_{V_{i}}^{\mathbf{M}},L_{i})\big{)}_{0\leqslant i<h}\) are i.i.d. under \(\widehat{\mathbb{P}}_{u,h}\). It is a bit less clear that the variables \(\big{(}D_{\mathbf{Q}}(\mathfrak{b}_{V_{i}}^{\mathbf{Q}},L_{i})\big{)}_{0\leqslant i<h}\) from Lemma 13 are also i.i.d., since they seem to simultaneously depend on global metric properties of \(\mathbf{Q}\). **Lemma 18**.: _Denote by \(D(\mathfrak{b},l)\) the distance in a simple quadrangulation \(\mathfrak{b}\) between its root vertex and the closest endpoint of the \(l\)-th edge in the order induced by the block-tree decomposition, the same order as the one introduced before Lemma 13. Then for all \(h\geqslant 1\), there is the identity in law_ \[\mathrm{Law}\left((D_{\mathbf{Q}}(\mathfrak{b}_{V_{i}}^{\mathbf{Q}},L_{i}))_{0\leqslant i<h};\,\widehat{\mathbb{P}}_{u,h}\right)=\left[\mathrm{Law}\Big{(}D(\widehat{\mathbf{B}}_{u}^{\mathrm{quad}},U)\;;\,P\Big{)}\right]^{\otimes h}.\] Proof.: Recall from the notations introduced for Lemma 13 that for \(\mathfrak{b}\) a simple block of a quadrangulation \(\mathfrak{q}\), and \(l\) an integer in \(\{1,\dots,2|\mathfrak{b}|\}\), \(D_{\mathfrak{q}}(\mathfrak{b},l)\) is the graph distance in \(\mathfrak{b}\) between the endpoints of the \(l\)-th edge of \(\mathfrak{b}\) in breadth-first order, and the endpoint of the root edge of \(\mathfrak{b}\) which is closest to the root vertex of \(\mathfrak{q}\). Denote by \(\mathfrak{b}\mapsto F(\mathfrak{b})\) the mapping which reverses the oriented root edge of a simple quadrangulation. Introduce also, for \(\mathfrak{b}\) a simple quadrangulation, \(f_{\mathfrak{b}}\) the permutation of \(\{1,\dots,2|\mathfrak{b}|\}\) which maps the breadth-first order on \(\mathfrak{b}\) to the breadth-first order on \(F(\mathfrak{b})\). Finally, define \(\mathcal{E}_{i}\) as the event that the root vertex of \(\mathfrak{b}_{V_{i}}^{\mathbf{Q}}\) is closer to the root vertex of \(\mathbf{Q}\) than the other endpoint of the root edge of \(\mathfrak{b}_{V_{i}}^{\mathbf{Q}}\).
Then by definition, for all \(0\leqslant i<h\) we have that \[D_{\mathbf{Q}}(\mathfrak{b}_{V_{i}}^{\mathbf{Q}},L_{i})=\mathbb{1}_{\{\mathcal{E}_{i}\}}\cdot D\Big{(}\mathfrak{b}_{V_{i}}^{\mathbf{Q}},L_{i}\Big{)}+\big{(}1-\mathbb{1}_{\{\mathcal{E}_{i}\}}\big{)}\cdot D\Big{(}F(\mathfrak{b}_{V_{i}}^{\mathbf{Q}}),f_{\mathfrak{b}_{V_{i}}^{\mathbf{Q}}}(L_{i})\Big{)}.\] Let \(\mathcal{F}_{i}\) denote the sigma-algebra generated by the variables \((k_{V_{j}}(\mathbf{T})\,,\,\mathfrak{b}_{V_{j}}^{\mathbf{M}}\,,\,\mathfrak{b}_{V_{j}}^{\mathbf{Q}}\,,\,L_{j})_{0\leqslant j<i}\). Then by Lemma 17, we have that the tuple \((k_{V_{i}}(\mathbf{T})\,,\,\mathfrak{b}_{V_{i}}^{\mathbf{M}}\,,\,\mathfrak{b}_{V_{i}}^{\mathbf{Q}}\,,\,L_{i})\) is independent of \(\mathcal{F}_{i}\), and has the same law as \(\big{(}\widehat{\xi}_{u}\,,\,\widehat{\mathbf{B}}_{u}^{\mathrm{map}}\,,\,\widehat{\mathbf{B}}_{u}^{\mathrm{quad}}\,,\,U\big{)}\). Now the crucial point is that the event \(\mathcal{E}_{i}\) is \(\mathcal{F}_{i}\)-measurable, since whether or not it holds can be decided by looking only at the first \(i\) blocks on the spine. In particular it is independent of \((k_{V_{j}}(\mathbf{T})\,,\,\mathfrak{b}_{V_{j}}^{\mathbf{M}}\,,\,\mathfrak{b}_{V_{j}}^{\mathbf{Q}}\,,\,L_{j})_{j\geqslant i}\). This implies the following \[\mathrm{Law}\left(\Big{(}D_{\mathbf{Q}}(\mathfrak{b}_{V_{i}}^{\mathbf{Q}},L_{i})\Big{)}_{0\leqslant i<h}\ ;\,\widehat{\mathbb{P}}_{u,h}\right)\\ =\bigotimes_{0\leqslant i<h}\left[\widehat{\mathbb{P}}_{u,h}(\mathcal{E}_{i})\cdot\mathrm{Law}\Big{(}D(\widehat{\mathbf{B}}_{u}^{\mathrm{quad}},U)\Big{)}+(1-\widehat{\mathbb{P}}_{u,h}(\mathcal{E}_{i}))\cdot\mathrm{Law}\Big{(}D\big{(}F(\widehat{\mathbf{B}}_{u}^{\mathrm{quad}}),f_{\widehat{\mathbf{B}}_{u}^{\mathrm{quad}}}(U)\big{)}\Big{)}\right].\] The lemma is therefore proved if we justify the identity in law \[D(\widehat{\mathbf{B}}_{u}^{\mathrm{quad}},U)\stackrel{{(d)}}{{=}}D\big{(}F(\widehat{\mathbf{B}}_{u}^{\mathrm{quad}}),f_{\widehat{\mathbf{B}}_{u}^{\mathrm{quad}}}(U)\big{)}. \tag{33}\] To check this, first notice that \(F\) is a bijection since it is involutive, so that in particular the uniform law on simple quadrangulations with \(k\) edges is invariant under \(F\). By definition, for \(\mathfrak{b}\) a simple quadrangulation, \(f_{\mathfrak{b}}\) is also a bijection, so that the uniform measure on \(\{1,\ldots,2|\mathfrak{b}|\}\) is invariant under it. Denoting by \(U_{k}\) a uniform random variable on \(\{1,\ldots,2k\}\), this gives for each \(k\geqslant 1\) the identity in law \[D(B_{k}^{\mathrm{quad}},U_{k})\stackrel{{(d)}}{{=}}D\big{(}F(B_{k}^{\mathrm{quad}}),f_{B_{k}^{\mathrm{quad}}}(U_{k})\big{)}.\] Since the pair \((\widehat{\mathbf{B}}_{u}^{\mathrm{quad}},U)\) is the \(\widehat{\xi}_{u}/2\)-mixture of the laws \((B_{k},U_{k})_{k\geqslant 1}\), the identity in law (33) also holds and this concludes the proof. **Moments of typical distances in a size-biased block.** We may now examine how heavy the tails of this i.i.d. family of distances along the spine are, since we wish to sum these distances. **Proposition 19**.: _Let \(D\) be either the variable \(D(\widehat{\mathbf{B}}_{u}^{\mathrm{map}},U)\) or \(D(\widehat{\mathbf{B}}_{u}^{\mathrm{quad}},U)\). Then for \(u>u_{C}\), there exists \(\varepsilon>0\) such that \(E[\exp(tD)]<\infty\) for all real \(t<\varepsilon\).
And for \(u=u_{C}\), we have \(E\left[D^{\beta}\right]<\infty\) for all \(0<\beta<2\)._ Proof.: The variable \(D\) is defined as a distance in \(\widehat{\mathbf{B}}_{u}\), where \(\widehat{\mathbf{B}}_{u}\) is either \(\widehat{\mathbf{B}}_{u}^{\mathrm{map}}\) or \(\widehat{\mathbf{B}}_{u}^{\mathrm{quad}}\). Hence it suffices to prove that the above moments are finite when we replace \(D\) by \(\mathrm{diam}(\widehat{\mathbf{B}}_{u})\). Let \(u>u_{C}\). Then \(\mathrm{diam}(\widehat{\mathbf{B}}_{u})\leqslant\widehat{\xi}_{u}\), and the latter variable has exponential moments since \(P(\widehat{\xi}_{u}\geqslant x)=\sum_{2j\geqslant x}2j\mu^{u}(\{2j\})\), where \(\mu^{u}\) has a tail decaying exponentially fast by Theorem 1. Now take \(u=u_{C}=9/5\) and let \(\delta\in(0,2)\). Also let \(\varepsilon>0\), to be chosen later depending on \(\delta\). Using the notation \(B_{k}\) for \(B_{k}^{\mathrm{map}}\) or \(B_{k}^{\mathrm{quad}}\), set \[p_{\varepsilon}(k)=P\left(\mathrm{diam}(B_{k})\geqslant k^{1/4+\varepsilon}\right).\] By Proposition 10, we have that \(p_{\varepsilon}(k)\) decays stretched-exponentially as \(k\to\infty\). Therefore we get a constant \(C>0\) such that \(k^{2}p_{\varepsilon}(k)\leqslant C\) for all \(k\). Recall that we have \(\operatorname{diam}(\widehat{\mathbf{B}}_{u})\leqslant\widehat{\xi}_{u}\). Distinguishing upon whether \(\operatorname{diam}(\widehat{\mathbf{B}}_{u})\leqslant(\widehat{\xi}_{u})^{1/4+\varepsilon}\) or \(\operatorname{diam}(\widehat{\mathbf{B}}_{u})>(\widehat{\xi}_{u})^{1/4+\varepsilon}\) and taking a conditional expectation with respect to \(\widehat{\xi}_{u}\), we get \[E\left[\left(\operatorname{diam}(\widehat{\mathbf{B}}_{u})\right)^{2-\delta}\right] \leqslant E\left[\left(\left(\widehat{\xi}_{u}\right)^{1/4+\varepsilon}\right)^{2-\delta}\mathbb{1}_{\left\{\operatorname{diam}(\widehat{\mathbf{B}}_{u})\leqslant(\widehat{\xi}_{u})^{1/4+\varepsilon}\right\}}\right]+E\left[\left(\widehat{\xi}_{u}\right)^{2-\delta}p_{\varepsilon}\Big{(}\widehat{\xi}_{u}\Big{)}\right]\leqslant E\left[\left(\widehat{\xi}_{u}\right)^{(1/4+\varepsilon)(2-\delta)}\right]+C=\sum_{2j\geqslant 0}(2j)^{(1/4+\varepsilon)(2-\delta)}\cdot 2j\mu^{u_{C}}(\{2j\})+C.\] If \(\varepsilon\) is small enough so that \((1/4+\varepsilon)(2-\delta)<1/2\), then the last sum is finite, since by Theorem 1 we have \(\mu^{u_{C}}(\{2j\})=O(j^{-5/2})\). Therefore \(E\left[\left(\operatorname{diam}(\widehat{\mathbf{B}}_{u})\right)^{2-\delta}\right]<\infty\). Let us make a brief comment, and justify that when \(u=u_{C}\), Proposition 19 is optimal, in the sense that \(D(\widehat{\mathbf{B}}_{u_{C}}^{\operatorname{quad}},U)\) does not have moments of order \(\beta\) for \(\beta\geqslant 2\). Firstly, one easily checks that functionals on pointed measured metric spaces of the form \[(X,x_{0},d_{X},\nu_{X})\mapsto\int_{X}\nu_{X}(\mathrm{d}x)\left(d_{X}(x_{0},x)\right)^{\beta}\] are continuous with respect to the Gromov-Hausdorff-Prokhorov topology. Secondly, Addario-Berry and Albenque [1] prove the GHP convergence of size-\(k\) uniform simple quadrangulations, rescaled by \(\operatorname{cst}\cdot k^{-1/4}\), to the measured Brownian sphere \((\mathcal{S},D^{*},\lambda)\). By [1], this holds when putting on the vertices of \(B_{k}^{\operatorname{quad}}\) either the uniform measure or the size-biased one.
In particular, by the above-mentioned continuity, we have the convergence in distribution \[E\left[\left(\operatorname{cst}\cdot k^{-1/4}D(B_{k}^{\operatorname{quad}},U_{k})\right)^{\beta}\Bigm{|}B_{k}^{\operatorname{quad}}\right]\xrightarrow[k\to\infty]{(d)}\int_{\mathcal{S}}\lambda(\mathrm{d}x)\big{(}D^{*}(x_{0},x)\big{)}^{\beta},\] where \(U_{k}\) is uniform on \(\{1,\dots,2k\}\) and \(x_{0}\) is the distinguished point on the Brownian sphere. Since the variable \(\int_{\mathcal{S}}\lambda(\mathrm{d}x)\big{(}D^{*}(x_{0},x)\big{)}^{\beta}\) is almost surely positive, the left-hand side forms a tight sequence of \((0,\infty)\)-valued random variables. Therefore it is bounded away from \(0\) with _uniform_ positive probability. This implies a lower bound \(E\left[D(B_{k}^{\operatorname{quad}},U_{k})^{\beta}\right]\geqslant c(k^{1/4})^{\beta}\), for some \(c=c(\beta)>0\). In particular, \[E\left[D(\widehat{\mathbf{B}}_{u_{C}}^{\operatorname{quad}},U)^{\beta}\right]=\sum_{2j\geqslant 0}E\left[D(B_{j}^{\operatorname{quad}},U_{j})^{\beta}\right]\cdot\widehat{\mu}^{u_{C}}(\{2j\}) \geqslant\sum_{2j\geqslant 0}cj^{\beta/4}\cdot 2j\mu^{u_{C}}(\{2j\})\\ =\sum_{2j\geqslant 0}\Theta(j^{\beta/4+1-5/2}).\] The latter sum is infinite when \(\beta\geqslant 2\), which proves that \(D(\widehat{\mathbf{B}}_{u_{C}}^{\operatorname{quad}},U)\) does not have moments of order \(\beta\) for \(\beta\geqslant 2\). The same argument would hold for \(D(\widehat{\mathbf{B}}_{u_{C}}^{\operatorname{map}},U)\), but we lack at the moment the GHP convergence of size-\(k\) uniform \(2\)-connected maps.

#### 5.2.6 Moderate deviations estimate

When the increments of a random walk possess only a polynomial moment of order \(\beta>1\), as is the case of \(D(\widehat{\mathbf{B}}_{u}^{\text{map}},U)\) and \(D(\widehat{\mathbf{B}}_{u}^{\text{quad}},U)\) when \(u=u_{C}\), moderate and large deviation events can possibly have probabilities which decay slowly, that is polynomially in \(n\). In the case of heavy-tailed increments this indeed happens: such moderate and large deviation events can be realised by taking one single large increment, and this _one-big-jump_ scenario is in fact precisely how they are typically realised. This phenomenon, which we have already encountered in Section 3 for \(u<u_{C}\), is known as condensation. For a more precise statement, see [12, 19]. One could hope that if we prevent the variables from condensing, we could still get stretched-exponentially small probabilities for large deviation events. We make this precise in the following proposition, by stating that this is the case when we suitably truncate the increments. We were not able to find an instance of such an estimate in the literature, although it has certainly been encountered in some form. We thus include a short proof, which as usual relies on a Chernoff bound. **Proposition 20**.: _Let \(X\) be a real random variable with i.i.d. copies \((X_{i})_{i\geqslant 1}\).
Assume that there exists \(\beta\in(1,2]\) such that \(E\left[|X|^{\beta}\right]<\infty\) and that we have \(E\left[X\right]=0\)._ _Then, for all \(\delta>0\), \(\gamma\in(0,1/\beta+\delta)\), and \(\nu\in\big{(}0,\delta\wedge(1/\beta+\delta-\gamma)\big{)}\), there exists a constant \(C>0\) such that for all \(n\geqslant 1\),_ \[P\left(\max_{1\leqslant k\leqslant n}\sum_{i=1}^{k}X_{i}\mathbb{1}_{\{X_{i}\leqslant n^{\gamma}\}}>n^{1/\beta+\delta}\right)\leqslant C\exp(-n^{\nu}).\] **Remark 7**.: _A straightforward adaptation of the proof shows that the conclusion still holds if the only assumptions on the variables \((X_{i})_{i}\) are \(E\big{[}X_{i}\bigm{|}X_{1},\ldots,X_{i-1}\big{]}\leqslant 0\) and \(\sup_{i\geqslant 1}E\big{[}|X_{i}|^{\beta}\bigm{|}X_{1},\ldots,X_{i-1}\big{]}<\infty\)._ Proof.: Fix an arbitrary \(\theta\) such that \(\max(\gamma,1/\beta)<\theta<1/\beta+\delta\). By Chernoff's bound, we get for all \(1\leqslant k\leqslant n\), \[P\left(\sum_{i=1}^{k}X_{i}\mathbb{1}_{\{X_{i}\leqslant n^{\gamma}\}}>n^{1/\beta+\delta}\right) \leqslant\exp(-n^{1/\beta+\delta-\theta})\Big{(}E\left[\exp\big{(}n^{-\theta}X\mathbb{1}_{\{X\leqslant n^{\gamma}\}}\big{)}\right]\Big{)}^{k}\leqslant\exp(-n^{1/\beta+\delta-\theta})\Big{(}1\lor E\left[\exp\big{(}n^{-\theta}X\mathbb{1}_{\{X\leqslant n^{\gamma}\}}\big{)}\right]\Big{)}^{n}.\] Therefore we obtain by a union bound the estimate \[P\left(\max_{1\leqslant k\leqslant n}\sum_{i=1}^{k}X_{i}\mathbb{1}_{\{X_{i}\leqslant n^{\gamma}\}}>n^{1/\beta+\delta}\right)\leqslant n\cdot\exp(-n^{1/\beta+\delta-\theta})\Big{(}1\lor E\left[\exp\big{(}n^{-\theta}X\mathbb{1}_{\{X\leqslant n^{\gamma}\}}\big{)}\right]\Big{)}^{n}.\] Since \(\theta\) is arbitrary in the interval \(\big{(}\max(\gamma,1/\beta)\,,\,1/\beta+\delta\big{)}\), the exponent \(\nu:=1/\beta+\delta-\theta\) is arbitrary in the interval \(\big{(}0,\delta\wedge(1/\beta+\delta-\gamma)\big{)}\). As a consequence, to prove the proposition it is sufficient to show that \[E\left[\exp\Big{(}n^{-\theta}X\mathbb{1}_{\{X\leqslant n^{\gamma}\}}\Big{)}\right]\leqslant 1+O(n^{-1}). \tag{34}\] Notice that since \(\beta\in(1,2]\), there exists a constant \(M>0\) such that \[\exp(t)\leqslant 1+t+M|t|^{\beta}\quad\text{ for all }t\in(-\infty,1];\] indeed, the ratio \((\exp(t)-1-t)/|t|^{\beta}\) stays bounded near \(0\) because \(\beta\leqslant 2\), near \(-\infty\) because \(\beta>1\), and on the remaining compact range by continuity. Fix such a constant \(M\). Given \(\lambda,s\geqslant 0\), distinguishing upon whether \(\lambda x\in(-\infty,1]\) or not, and using that \(x\mathbb{1}_{\{x\leqslant s\}}\leqslant x\) and \(x\mathbb{1}_{\{x\leqslant s\}}\leqslant s\) (even when \(x<0\)), we get for all \(x\in\mathbb{R}\), \[\exp\Bigl{(}\lambda x\mathbb{1}_{\{x\leqslant s\}}\Bigr{)} \leqslant\Bigl{(}1+\lambda x\mathbb{1}_{\{x\leqslant s\}}+M\lambda^{\beta}|x|^{\beta}\mathbb{1}_{\{x\leqslant s\}}\Bigr{)}\cdot\mathbb{1}_{\{\lambda x\leqslant 1\}}+\exp(\lambda x\mathbb{1}_{\{x\leqslant s\}})\cdot\mathbb{1}_{\{\lambda x>1\}}\leqslant 1+\lambda x+M\lambda^{\beta}|x|^{\beta}+\exp(\lambda s)\cdot\mathbb{1}_{\{\lambda x>1\}}. \tag{35}\] Applying this inequality with \(x=X\), \(\lambda=n^{-\theta}\), \(s=n^{\gamma}\) and taking expectations, we obtain \[E\left[\exp\Bigl{(}n^{-\theta}X\mathbb{1}_{\{X\leqslant n^{\gamma}\}}\Bigr{)}\right]\leqslant 1+n^{-\theta}E\left[X\right]+Mn^{-\beta\theta}E\left[|X|^{\beta}\right]+\exp\bigl{(}n^{\gamma-\theta}\bigr{)}\cdot P\left(X\geqslant n^{\theta}\right).\] Recall that \(E[X]=0\) by hypothesis, that \(\gamma-\theta<0\) by choice of \(\theta\), and use Markov's inequality. This yields \[E\left[\exp\Bigl{(}n^{-\theta}X\mathbb{1}_{\{X\leqslant n^{\gamma}\}}\Bigr{)}\right]\leqslant 1+0+Mn^{-\beta\theta}E\left[|X|^{\beta}\right]+\exp(1)\cdot n^{-\beta\theta}E\left[|X|^{\beta}\right].\] Since by hypothesis \(E\left[|X|^{\beta}\right]<\infty\), we have \[E\left[\exp\Bigl{(}n^{-\theta}X\mathbb{1}_{\{X\leqslant n^{\gamma}\}}\Bigr{)}\right]\leqslant 1+O(n^{-\beta\theta})\leqslant 1+O(n^{-1}),\] where the last inequality comes from the choice of \(\theta\), which is greater than \(1/\beta\). Therefore (34) is satisfied and the proposition is proved.

#### 5.2.7 A lemma to compare \(\mathfrak{m}\), \(\mathfrak{q}\) and \(\mathfrak{t}\)

Let us state a lemma which elaborates on the additivity of distances on consecutive blocks, so that we can bound the GHP-distance between a map (resp. a quadrangulation) and its block-tree scaled by some constant. Let \(\kappa_{1}\) and \(\kappa_{2}\) be positive constants. Let \(\mathfrak{m}\) be a map, \(\mathfrak{q}\) its associated quadrangulation by Tutte's bijection, and \(\mathfrak{t}\) their block-tree. For \(c_{\star}\) a corner of \(\mathfrak{m}\), denote by \(e_{\star}\) the associated edge in \(\mathfrak{q}\) and \(v_{\star}\) the associated vertex in \(\mathfrak{t}\). Then, as in Lemmas 12, 13 and 17, set \(h_{\star}:=h_{\mathfrak{t}}(v_{\star})\) the height of \(v_{\star}\) in \(\mathfrak{t}\), and \((v_{i})_{0\leqslant i\leqslant h_{\star}}\) the ancestor line of \(v_{\star}\) in \(\mathfrak{t}\), with \(v_{0}\) the root and \(v_{h_{\star}}=v_{\star}\). Also denote by \(x_{i}\) the root vertex of \(\mathfrak{b}_{v_{i}}\). Let \((l_{i})_{0\leqslant i<h_{\star}}\) be the respective breadth-first index of the corner in \(\mathfrak{b}_{v_{i}}^{\mathfrak{m}}\) to which the root corner of \(\mathfrak{b}_{v_{i+1}}^{\mathfrak{m}}\) is attached, which is also the index of the edge in \(\mathfrak{b}_{v_{i}}^{\mathfrak{q}}\) to which the root edge of \(\mathfrak{b}_{v_{i+1}}^{\mathfrak{q}}\) is attached. Finally, denoting by \(\Delta(\mathfrak{m})\) (resp. \(\Delta(\mathfrak{q})\)) the largest diameter of the blocks of \(\mathfrak{m}\) (resp.
\(\mathfrak{q}\)), set the quantities \[R(\mathfrak{m},c_{\star},\kappa_{1}) =\max_{0\leqslant i<h_{\star}}\left|\sum_{j=i}^{h_{\star}-1}\left(D(\mathfrak{b}_{v_{j}}^{\mathfrak{m}},l_{j})-\kappa_{1}\right)\right|,\qquad R(\mathfrak{q},e_{\star},\kappa_{2}) =\max_{0\leqslant i<h_{\star}}\left|\sum_{j=i}^{h_{\star}-1}\left(D_{\mathfrak{q}}(\mathfrak{b}_{v_{j}}^{\mathfrak{q}},l_{j})-\kappa_{2}\right)\right|.\] **Lemma 21**.: _With this notation, we have for all \(\varepsilon>0\),_ \[\mathrm{d}_{\mathrm{GHP}}\left(\varepsilon\cdot\underline{\mathfrak{m}}^{\mathrm{B}}\,,\,\varepsilon\kappa_{1}\cdot\underline{\mathfrak{t}}\right)\leqslant\frac{\varepsilon}{2}\Delta(\mathfrak{m})+\varepsilon\max_{c_{\star}\text{ corner of }\mathfrak{m}}R(\mathfrak{m},c_{\star},\kappa_{1}),\] _and_ \[\mathrm{d}_{\mathrm{GHP}}\left(\varepsilon\cdot\underline{\mathfrak{q}}\,,\,\varepsilon\kappa_{2}\cdot\underline{\mathfrak{t}}\right)\leqslant 3\varepsilon+\frac{\varepsilon}{2}\Delta(\mathfrak{q})+\varepsilon\max_{e_{\star}\text{ edge of }\mathfrak{q}}R(\mathfrak{q},e_{\star},\kappa_{2})+\max(\varepsilon,1/|V(\mathfrak{q})|).\] Proof.: Recall the notation introduced in Section 2.6. Let us treat the inequality involving \(\mathfrak{q}\), which is a bit more involved. Consider the correspondence \(C\) between \(V(\mathfrak{q})\) and \(V_{+}(\mathfrak{t})\) defined as follows. A vertex \(x\) of \(\mathfrak{q}\) is set in correspondence with a non-root vertex \(v\) of \(\mathfrak{t}\) if and only if it belongs to the edge of \(\mathfrak{q}\) that the block-tree decomposition associates to \(v\). Let \(\gamma\) be the uniform measure on the set \(C\subset V(\mathfrak{q})\times V_{+}(\mathfrak{t})\). The projection on \(V_{+}(\mathfrak{t})\) is \(2\)-to-\(1\), so that the image measure of \(\gamma\) is uniform on \(V_{+}(\mathfrak{t})\), _i.e._ it is \(\nu_{\mathfrak{t}}\). The projection on \(V(\mathfrak{q})\) of the measure \(\gamma\) is biased on the other hand, as the weight each vertex receives is proportional to its degree. Therefore, \(\gamma\) tautologically defines a coupling between \(\nu_{\mathfrak{q}}^{\mathrm{B}}\) and \(\nu_{\mathfrak{t}}\) supported on \(C\), _i.e._ \(\gamma\big{(}(V(\mathfrak{q})\times V_{+}(\mathfrak{t}))\setminus C\big{)}=0\). By the triangle inequality and the preceding observations, we have \[d_{\mathrm{GHP}}\left(\varepsilon\cdot\underline{\mathfrak{q}}\,,\,\varepsilon\kappa_{2}\cdot\underline{\mathfrak{t}}\right) \leqslant d_{\mathrm{GHP}}\left(\varepsilon\cdot\underline{\mathfrak{q}}^{\mathrm{B}}\,,\,\varepsilon\kappa_{2}\cdot\underline{\mathfrak{t}}\right)+d_{\mathrm{GHP}}\left(\varepsilon\cdot\underline{\mathfrak{q}}\,,\,\varepsilon\cdot\underline{\mathfrak{q}}^{\mathrm{B}}\right)\leqslant\frac{\varepsilon}{2}\mathrm{dis}(C;d_{\mathfrak{q}},\kappa_{2}d_{\mathfrak{t}})+d_{\mathrm{GHP}}\left(\varepsilon\cdot\underline{\mathfrak{q}}\,,\,\varepsilon\cdot\underline{\mathfrak{q}}^{\mathrm{B}}\right)\leqslant\frac{\varepsilon}{2}\mathrm{dis}(C;d_{\mathfrak{q}},\kappa_{2}d_{\mathfrak{t}})+d_{\mathrm{P}}^{(V(\mathfrak{q}),\varepsilon d_{\mathfrak{q}})}(\nu_{\mathfrak{q}},\nu_{\mathfrak{q}}^{\mathrm{B}}).\] The second inequality uses that the coupling \(\gamma\) is supported on \(C\), and the last inequality uses (24). Now, [1, Lemma 5.1] bounds the Prokhorov distance between the uniform and degree-biased measures on \((V(\mathfrak{q}),\varepsilon d_{\mathfrak{q}})\).
Namely, \[d_{\mathrm{P}}^{(V(\mathfrak{q}),\varepsilon d_{\mathfrak{q}})}(\nu_{\mathfrak{q}},\nu_{\mathfrak{q}}^{\mathrm{B}})\leqslant\max(\varepsilon,1/|V(\mathfrak{q})|).\] It remains to bound the distortion \(\mathrm{dis}(C;d_{\mathfrak{q}},\kappa_{2}d_{\mathfrak{t}})\). Take \(x\), \(\widetilde{x}\) vertices of \(\mathfrak{q}\), and let \(v,\widetilde{v}\) be any vertices of \(\mathfrak{t}\) such that \((x,v)\) and \((\widetilde{x},\widetilde{v})\) are in correspondence in \(C\). We shall bound \(|d_{\mathfrak{q}}(x,\widetilde{x})-\kappa_{2}d_{\mathfrak{t}}(v,\widetilde{v})|\). Denote by \(e_{\star}\) and \(\widetilde{e}_{\star}\) the edges that the block-tree decomposition associates to \(v\) and \(\widetilde{v}\) respectively. By definition of the correspondence \(C\), \(e_{\star}\) is incident to \(x\) and \(\widetilde{e}_{\star}\) to \(\widetilde{x}\). Define accordingly, as in the statement of Lemma 13, \(h_{\star}\), \((v_{i})_{0\leqslant i\leqslant h_{\star}}\), \((l_{i})_{0\leqslant i<h_{\star}}\), \((x_{i})_{0\leqslant i<h_{\star}}\), and respectively \(\widetilde{h}_{\star}\), \((\widetilde{v}_{i})_{0\leqslant i\leqslant\widetilde{h}_{\star}}\), \((\widetilde{l}_{i})_{0\leqslant i\leqslant\widetilde{h}_{\star}}\), \((\widetilde{x}_{i})_{0\leqslant i\leqslant\widetilde{h}_{\star}}\). Then, let \(i\) be such that \(v_{i}=\widetilde{v}_{i}\) is the last common ancestor of \(v\) and \(\widetilde{v}\) in \(\mathfrak{t}\). First remark that there exists \(\delta_{0}\in\{0,\pm 1,\pm 2\}\) such that \[d_{\mathfrak{q}}(x,\widetilde{x})+\delta_{0}=d_{\mathfrak{q}}(x,x_{i+1})+d_{\mathfrak{b}_{v_{i}}^{\mathfrak{q}}}\left(x_{i+1},\widetilde{x}_{i+1}\right)+d_{\mathfrak{q}}(\widetilde{x}_{i+1},\widetilde{x}). \tag{36}\] Indeed, similarly as in the proof of Lemma 13, a geodesic from \(x\) to \(\widetilde{x}\) must visit, once and in that order,

* the vertex \(x\),
* either \(x_{i+1}\), or \(x^{\prime}_{i+1}\) the other endpoint of the root edge of \(\mathfrak{b}_{v_{i+1}}^{\mathfrak{q}}\),
* either \(\widetilde{x}_{i+1}\), or \(\widetilde{x}^{\prime}_{i+1}\) the other endpoint of the root edge of \(\mathfrak{b}_{\widetilde{v}_{i+1}}^{\mathfrak{q}}\),
* the vertex \(\widetilde{x}\).

Since \(x_{i+1}\) and \(x_{i+1}^{\prime}\), and also \(\widetilde{x}_{i+1}\) and \(\widetilde{x}_{i+1}^{\prime}\), are at distance \(1\) from each other respectively, and since a geodesic between points of \(\mathfrak{b}_{v_{i}}\) must stay in \(\mathfrak{b}_{v_{i}}\), we get that (36) holds, for some \(\delta_{0}\in\{0,\pm 1,\pm 2\}\). Then, Lemma 13 applies to decompose the distances \(d_{\mathfrak{q}}(x,x_{i+1})\) and \(d_{\mathfrak{q}}(\widetilde{x}_{i+1},\widetilde{x})\), with some \(\delta,\widetilde{\delta}\) in \(\{0,1\}\). Combining this with (36), this gives \[d_{\mathfrak{q}}(x,\widetilde{x})-\kappa_{2}d_{\mathfrak{t}}(v,\widetilde{v})=\delta+\widetilde{\delta}-\delta_{0}+d_{\mathfrak{b}_{v_{i}}^{\mathfrak{q}}}(x_{i+1},\widetilde{x}_{i+1})+\sum_{i+1\leqslant j<h_{\star}-1}\big{(}D_{\mathfrak{q}}(\mathfrak{b}_{v_{j}}^{\mathfrak{q}},l_{j})-\kappa_{2}\big{)}\\ +\sum_{i+1\leqslant j<\widetilde{h}_{\star}-1}\big{(}D_{\mathfrak{q}}(\mathfrak{b}_{\widetilde{v}_{j}}^{\mathfrak{q}},\widetilde{l}_{j})-\kappa_{2}\big{)}.\] The sum of the first three terms has absolute value at most \(6\), the fourth one at most \(\Delta(\mathfrak{q})\), and the two sums have absolute value at most \(R(\mathfrak{q},e_{\star},\kappa_{2})\) and \(R(\mathfrak{q},\widetilde{e}_{\star},\kappa_{2})\) respectively.
Therefore by the triangle inequality, \[|d_{\mathfrak{q}}(x,\widetilde{x})-\kappa_{2}d_{\mathfrak{t}}(v,\widetilde{v})|\leqslant 6+\Delta(\mathfrak{q})+2\max_{e_{\star}\text{ edge of }\mathfrak{q}}R(\mathfrak{q},e_{\star},\kappa_{2}).\] Since this holds for every \((x,v)\in C\) and \((\widetilde{x},\widetilde{v})\in C\), the right-hand side is actually a bound on the distortion \(\operatorname{dis}(C;d_{\mathfrak{q}},\kappa_{2}d_{\mathfrak{t}})\), which is precisely what we needed to conclude. For the inequality involving \(\mathfrak{m}\), the reasoning is quite similar, except that we keep the degree-biased measure and do not compare it to the uniform measure. Take \(C\) the correspondence such that \(x\in V(\mathfrak{m})\) is in correspondence with \(v\in V_{+}(\mathfrak{t})\) if and only if the vertex \(x\) is the origin of the corner associated to the non-root vertex \(v\). Similarly, the uniform measure \(\gamma\) on \(C\) defines a coupling between the measure \(\nu_{\mathfrak{t}}\) and the degree-biased measure \(\nu_{\mathfrak{m}}^{\mathrm{B}}\) on vertices of \(\mathfrak{m}\), and this coupling is tautologically supported on \(C\). The distortion of \(C\) is bounded by a very similar argument as above, involving Lemma 12 instead of Lemma 13, except that there are no terms \(\delta_{0},\delta,\widetilde{\delta}\) to introduce. We leave the details to the reader. All in all we get \[\begin{split}\mathrm{d}_{\mathrm{GHP}}\left(\varepsilon\cdot\underline{\mathfrak{m}}^{\mathrm{B}}\,,\,\varepsilon\kappa_{1}\cdot\underline{\mathfrak{t}}\right)&\leqslant\max\Bigl{(}\tfrac{1}{2}\mathrm{dis}(C;\varepsilon d_{\mathfrak{m}},\varepsilon\kappa_{1}d_{\mathfrak{t}}),\gamma\bigl{(}(V(\mathfrak{m})\times V_{+}(\mathfrak{t}))\setminus C\bigr{)}\Bigr{)}\\&=\frac{\varepsilon}{2}\operatorname{dis}(C;d_{\mathfrak{m}},\kappa_{1}d_{\mathfrak{t}})\\&\leqslant\frac{\varepsilon}{2}\Delta(\mathfrak{m})+\varepsilon\max_{c_{\star}\text{ corner of }\mathfrak{m}}R(\mathfrak{m},c_{\star},\kappa_{1}).\end{split}\]

#### 5.2.8 Proof of Theorem 5

Let \(u\geqslant u_{C}\). Let us first prove the claimed scaling limit for the block-tree \(\mathbf{T}_{n,u}\). By Proposition 7, \(\mathbf{T}_{n,u}\) has law \(\operatorname{GW}(\mu^{u},2n)\), where the offspring distribution \(\mu^{u}\) has span \(2\). **Scaling limit of \(\mathbf{T}_{n,u}\) for \(u>u_{C}\).** If \(u>u_{C}\), then by the third statement of Theorem 1, \(\mu^{u}\) is critical and admits a variance \(\sigma(u)^{2}<\infty\). Corollary 15 thus gives the announced scaling limit for \(\mathbf{T}_{n,u}\), \[(2n)^{-1/2}\cdot\underline{\mathbf{T}}_{n,u}\xrightarrow[n\to\infty]{\mathrm{GHP},\,(d)}\frac{\sqrt{2}}{\sigma(u)}\cdot\mathcal{T}^{(2)}.\] The expression for \(\sigma(u)\) given in the statement comes from a straightforward computation based on the generating function of \(\mu^{u}\), which by (9) is \[\sum_{k\geqslant 0}x^{k}\mu^{u}(k)=\frac{uB(x^{2}y(u))+1-u}{uB(y(u))+1-u}.\] **Scaling limit of \(\mathbf{T}_{n,u}\) for \(u=u_{C}\).** If \(u=u_{C}\), then by the second statement of Proposition 7, \(\mu^{u_{C}}\) is critical and satisfies \(\mu^{u_{C}}(\{2j\})\sim\frac{1}{4\sqrt{3\pi}}j^{-5/2}\).
Therefore we get the equivalent \[\mu^{u_{C}}([x,\infty))=\sum_{2j\geqslant x}\mu^{u_{C}}(\{2j\})\sim\int_{x/2}^{\infty}\frac{1}{4\sqrt{3\pi}}s^{-5/2}\,\mathrm{d}s=\frac{1}{3}\sqrt{\frac{2}{3\pi}}x^{-3/2}.\] Therefore, using Corollary 15 with \(\theta=3/2\), we get \[(2n)^{-(1-2/3)}\cdot\underline{\mathbf{T}}_{n,u_{C}}\xrightarrow[n\to\infty]{\mathrm{GHP},\,(d)}\left[\frac{\frac{3}{2}-1}{\frac{1}{3}\sqrt{\frac{2}{3\pi}}\,\Gamma(2-\frac{3}{2})}\right]^{2/3}\cdot\mathcal{T}^{(3/2)}.\] Using that \(\Gamma(1/2)=\sqrt{\pi}\), the constant on the right-hand side simplifies to \(\big{[}(3/2)^{3/2}\big{]}^{2/3}=3/2\), and this translates as announced into \[\frac{2}{3}(2n)^{-1/3}\cdot\underline{\mathbf{T}}_{n,u_{C}}\xrightarrow[n\to\infty]{\mathrm{GHP},\,(d)}\mathcal{T}^{(3/2)}.\] **Restatement of the problem.** We let \(\alpha=2\) when \(u>u_{C}\), and \(\alpha=3/2\) when \(u=u_{C}\). We have identified the GHP-limit of \(n^{-(\alpha-1)/\alpha}\cdot\underline{\mathbf{T}}_{n,u}\). Let \(\kappa_{1}=\kappa_{u}^{\text{map}}\) and \(\kappa_{2}=\kappa_{u}^{\text{quad}}\). It remains to compare, in the GHP sense, the metric spaces \(n^{-(\alpha-1)/\alpha}\cdot\underline{\mathbf{M}}_{n,u}^{\text{B}}\) and \(n^{-(\alpha-1)/\alpha}\cdot\underline{\mathbf{Q}}_{n,u}\) with \(n^{-(\alpha-1)/\alpha}\kappa_{1}\cdot\underline{\mathbf{T}}_{n,u}\) and \(n^{-(\alpha-1)/\alpha}\kappa_{2}\cdot\underline{\mathbf{T}}_{n,u}\) respectively. That is to say, we want to show that both quantities \[\mathrm{d}_{\text{GHP}}\left(n^{-\frac{\alpha-1}{\alpha}}\cdot\underline{\mathbf{M}}_{n,u}^{\text{B}}\,,\,n^{-\frac{\alpha-1}{\alpha}}\kappa_{1}\cdot\underline{\mathbf{T}}_{n,u}\right)\quad\text{ and }\quad\mathrm{d}_{\text{GHP}}\big{(}n^{-\frac{\alpha-1}{\alpha}}\cdot\underline{\mathbf{Q}}_{n,u}\,,\,n^{-\frac{\alpha-1}{\alpha}}\kappa_{2}\cdot\underline{\mathbf{T}}_{n,u}\big{)} \tag{37}\] converge to \(0\) in probability.
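Before introducing the corresponding "bad" events, it may help to see numerically why quantities of the type \(R(\cdot,\cdot,\kappa)\) from Section 5.2.7 should be \(o(n^{(\alpha-1)/\alpha})\): along the spine they are maximal fluctuations of a centered random walk whose increments have the moments given by Proposition 19. The Python sketch below is a toy model only; the Pareto law and the centering constant are placeholders and are not the actual laws of block distances in the paper.

```python
import random

def max_tail_fluctuation(steps, kappa):
    # max over 0 <= i < h of |sum_{j=i}^{h-1} (steps[j] - kappa)|,
    # computed by scanning the partial sums from the right.
    best = running = 0.0
    for x in reversed(steps):
        running += x - kappa
        best = max(best, abs(running))
    return best

random.seed(0)
alpha_tail = 1.9                          # tail index < 2: only moments of order < 1.9
kappa = alpha_tail / (alpha_tail - 1.0)   # mean of a Pareto(alpha_tail) sample, so
                                          # the increments x - kappa are centered
for h in (10**2, 10**3, 10**4, 10**5):
    steps = [random.paretovariate(alpha_tail) for _ in range(h)]
    # The fluctuation grows roughly like h**(1/1.9), so the ratio to h
    # should visibly decrease as the spine length h grows.
    print(h, max_tail_fluctuation(steps, kappa) / h)
```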
For ease of reading, we introduce for \(\eta,\delta>0\) the following "bad" events, \[B_{n,\eta}^{\mathbf{M}} =\Big{\{}d_{\text{GHP}}\left(n^{-\frac{\alpha-1}{\alpha}}\cdot\underline{\mathbf{M}}^{\text{B}}\,,\,n^{-\frac{\alpha-1}{\alpha}}\kappa_{1}\cdot\underline{\mathbf{T}}\right)\geqslant 2\eta\Big{\}},\] \[B_{n,\eta}^{\mathbf{Q}} =\Big{\{}d_{\text{GHP}}\left(n^{-\frac{\alpha-1}{\alpha}}\cdot\underline{\mathbf{Q}}\,,\,n^{-\frac{\alpha-1}{\alpha}}\kappa_{2}\cdot\underline{\mathbf{T}}\right)\geqslant 2\eta\Big{\}},\] as well as auxiliary events \[A_{1;n,\eta}^{\mathbf{M}} =\Big{\{}\exists c_{\star}\in\mathbf{M},\,R(\mathbf{M},c_{\star},\kappa_{1})\geqslant\eta n^{\frac{\alpha-1}{\alpha}}\Big{\}}\quad\text{ and }\quad A_{1;n,\eta}^{\mathbf{Q}}=\left\{\exists e_{\star}\in\mathbf{Q},\,R(\mathbf{Q},e_{\star},\kappa_{2})\geqslant\eta n^{\frac{\alpha-1}{\alpha}}\right\},\] \[A_{2;n,\delta}^{\mathbf{M}} =\left\{\Delta(\mathbf{M})\leqslant n^{(1+\delta)^{2}(\alpha-1)/2\alpha}\right\}\quad\text{ and }\quad A_{2;n,\delta}^{\mathbf{Q}}=\left\{\Delta(\mathbf{Q})\leqslant n^{(1+\delta)^{2}(\alpha-1)/2\alpha}\right\},\] \[A_{3;n,\eta}^{\mathbf{M}} =\left\{\frac{n^{-\frac{\alpha-1}{\alpha}}}{2}\Delta(\mathbf{M})\geqslant\eta\right\}\quad\text{ and }\quad A_{3;n,\eta}^{\mathbf{Q}}=\left\{3n^{-\frac{\alpha-1}{\alpha}}+\frac{n^{-\frac{\alpha-1}{\alpha}}}{2}\Delta(\mathbf{Q})+\max\left(n^{-\frac{\alpha-1}{\alpha}},\frac{1}{|V(\mathbf{Q})|}\right)\geqslant\eta\right\}.\] With this notation, what we have to show is \[\lim_{\eta\to 0}\limsup_{n\to\infty}\,\mathbb{P}_{n,u}(B_{n,\eta}^{\mathbf{M}})=0\quad\text{ and }\quad\lim_{\eta\to 0}\limsup_{n\to\infty}\,\mathbb{P}_{n,u}(B_{n,\eta}^{\mathbf{Q}})=0.\] **Using Lemma 21.** Thanks to the GHP upper bounds in Lemma 21, we have \[\mathbb{P}_{n,u}(B_{n,\eta}^{\mathbf{M}})\leqslant\mathbb{P}_{n,u}(A_{1;n,\eta}^{\mathbf{M}})+\mathbb{P}_{n,u}(A_{3;n,\eta}^{\mathbf{M}})\quad\text{ and }\quad\mathbb{P}_{n,u}(B_{n,\eta}^{\mathbf{Q}})\leqslant\mathbb{P}_{n,u}(A_{1;n,\eta}^{\mathbf{Q}})+\mathbb{P}_{n,u}(A_{3;n,\eta}^{\mathbf{Q}}). \tag{38}\] **Bounding the diameters of the blocks.** By Corollary 11, for \(\delta>0\), the maximum diameter of the blocks of either \(\mathbf{M}_{n,u}\) or \(\mathbf{Q}_{n,u}\) is bounded with probability \(1-o(1)\) by \(\max(n^{1/6},W(\mathbf{T}_{n,u})^{(1+\delta)/4})\), where \(W(\mathfrak{t})\) denotes the largest degree of \(\mathfrak{t}\). By Corollary 15, \(W(\mathbf{T}_{n,u})\) is \(o\left(n^{(1+\delta)/\alpha}\right)\) in probability, hence \(W(\mathbf{T}_{n,u})^{(1+\delta)/4}\) is \(o\left(n^{(1+\delta)^{2}/4\alpha}\right)\) in probability. Since \(1/6<(1+\delta)^{2}(\alpha-1)/2\alpha\) and \((1+\delta)^{2}/4\alpha\leqslant(1+\delta)^{2}(\alpha-1)/2\alpha\) whenever \(\alpha\geqslant 3/2\) and \(\delta>0\), what precedes gives that for all \(\delta>0\), \[\max\bigl{(}\Delta(\mathbf{M}_{n,u}),\Delta(\mathbf{Q}_{n,u})\bigr{)}=o\left(n^{(1+\delta)^{2}(\alpha-1)/2\alpha}\right)\qquad\text{in probability.}\]
This implies that for all \(\delta>0\), we have \[\limsup_{n\to\infty}\,\mathbb{P}_{n,u}\Bigl{(}(A_{2;n,\delta}^{\mathbf{M}})^{C}\Bigr{)}=0\quad\text{ and }\quad\limsup_{n\to\infty}\,\mathbb{P}_{n,u}\Bigl{(}(A_{2;n,\delta}^{\mathbf{Q}})^{C}\Bigr{)}=0.\] We also obtain from this bound on diameters the following \[\lim_{\eta\to 0}\limsup_{n\to\infty}\,\mathbb{P}_{n,u}(A_{3;n,\eta}^{\mathbf{M}})=0\quad\text{ and }\quad\lim_{\eta\to 0}\limsup_{n\to\infty}\,\mathbb{P}_{n,u}(A_{3;n,\eta}^{\mathbf{Q}})=0.\] Thanks to (38), what remains to be shown is that for sufficiently small \(\delta>0\), \[\lim_{\eta\to 0}\limsup_{n\to\infty}\,\mathbb{P}_{n,u}(A_{1;n,\eta}^{\mathbf{M}}\cap A_{2;n,\delta}^{\mathbf{M}})=0\quad\text{ and }\quad\lim_{\eta\to 0}\limsup_{n\to\infty}\,\mathbb{P}_{n,u}(A_{1;n,\eta}^{\mathbf{Q}}\cap A_{2;n,\delta}^{\mathbf{Q}})=0.\] **Bounding the height of \(\mathbf{T}_{n,u}\).** We have identified above the scaling limit of \(\mathbf{T}_{n,u}\) and the appropriate normalization of distances. In particular, \(n^{-(\alpha-1)/\alpha}\cdot\underline{\mathbf{T}}_{n,u}\) is tight in the GHP-topology. An immediate consequence is that \(n^{-(\alpha-1)/\alpha}H(\mathbf{T}_{n,u})\) is tight, where \(H(\mathbf{T}_{n,u})\) is the height of \(\mathbf{T}_{n,u}\). In particular, our problem reduces once more to showing that for sufficiently small \(\delta>0\), \[\lim_{\eta\to 0}\limsup_{n\to\infty}\,\mathbb{P}_{n,u}\Bigl{(}A_{1;n,\eta}^{\mathbf{M}}\cap A_{2;n,\delta}^{\mathbf{M}}\cap\Bigl{\{}H(\mathbf{T})\leqslant\eta^{-1}n^{\frac{\alpha-1}{\alpha}}\Bigr{\}}\Bigr{)}=0\\ \text{and}\quad\lim_{\eta\to 0}\limsup_{n\to\infty}\,\mathbb{P}_{n,u}\Bigl{(}A_{1;n,\eta}^{\mathbf{Q}}\cap A_{2;n,\delta}^{\mathbf{Q}}\cap\Bigl{\{}H(\mathbf{T})\leqslant\eta^{-1}n^{\frac{\alpha-1}{\alpha}}\Bigr{\}}\Bigr{)}=0. \tag{39}\] **Using the spine decomposition.** Fix \(\delta>0\), as small as necessary. Let us only treat the term involving \(\mathbf{Q}\) in (39), as the expression for \(R(\mathfrak{q},e_{\star},\kappa_{2})\) we used to define the event \(A_{1;n,\eta}^{\mathbf{Q}}\) carries more dependence than that of \(R(\mathfrak{m},c_{\star},\kappa_{1})\): indeed, the summands \(D_{\mathfrak{q}}(\mathfrak{b}_{v_{i}}^{\mathfrak{q}},l_{i})\) involve in their definition a global metric property of \(\mathfrak{q}\). The case of the term involving \(\mathbf{M}\) is similar and simpler. Recall that by definition, the law \(\mathbb{P}_{n,u}\) is the law \(\mathbb{P}_{u}\) conditioned on the event \(\{|\mathbf{T}|=n\}\). Since \(\mathbb{P}_{u}(|\mathbf{T}|=n)\) decays polynomially by Corollary 15, we may get rid of the conditioning if the unconditional version of the probabilities we wish to bound decays sufficiently fast. Namely, it suffices to prove that for all \(\eta>0\), the following (unconditional) probability is stretched-exponentially small in \(n\) \[\mathbb{P}_{u}\Bigl{(}A_{1;n,\eta}^{\mathbf{Q}}\cap A_{2;n,\delta}^{\mathbf{Q}}\cap\{H(\mathbf{T})\leqslant\eta^{-1}n^{\frac{\alpha-1}{\alpha}}\}\Bigr{)}.
\tag{40}\] By a union bound and then by Proposition 16, using the notations introduced above it, one can bound this by \[\mathbb{E}_{u}\left[\sum_{e_{\star}\text{ edge of }\mathbf{Q}}\mathbb{1}_{\left\{R(\mathbf{Q},e_{\star},\kappa_{2})\geqslant\eta n^{\frac{\alpha-1}{\alpha}}\right\}}\mathbb{1}_{\left\{H(\mathbf{T})\leqslant\eta^{-1}n^{\frac{\alpha-1}{\alpha}}\right\}}\mathbb{1}_{\left\{A^{\mathbf{Q}}_{2;n,\delta}\right\}}\right]\] \[=\sum_{h\geqslant 1}\widehat{\mathbb{P}}_{u,h}\left(\{R(\mathbf{Q},E_{\star},\kappa_{2})\geqslant\eta n^{\frac{\alpha-1}{\alpha}}\}\cap\{H(\mathbf{T})\leqslant\eta^{-1}n^{\frac{\alpha-1}{\alpha}}\}\cap A^{\mathbf{Q}}_{2;n,\delta}\right)\] \[=\sum_{h=1}^{\eta^{-1}n^{\frac{\alpha-1}{\alpha}}}\widehat{\mathbb{P}}_{u,h}\left(\{R(\mathbf{Q},E_{\star},\kappa_{2})\geqslant\eta n^{\frac{\alpha-1}{\alpha}}\}\cap A^{\mathbf{Q}}_{2;n,\delta}\right)\] \[=\sum_{h=1}^{\eta^{-1}n^{\frac{\alpha-1}{\alpha}}}\widehat{\mathbb{P}}_{u,h}\left(\left\{\max_{0\leqslant i<h}\left|\sum_{j=i}^{h-1}\left(D_{\mathbf{Q}}(\mathfrak{b}^{\mathbf{Q}}_{V_{j}},L_{j})-\kappa_{2}\right)\right|\geqslant\eta n^{\frac{\alpha-1}{\alpha}}\right\}\cap A^{\mathbf{Q}}_{2;n,\delta}\right)\] \[\leqslant\sum_{h=1}^{\eta^{-1}n^{\frac{\alpha-1}{\alpha}}}\Bigg{[}\widehat{\mathbb{P}}_{u,h}\Bigg{(}\max_{0\leqslant i<h}\sum_{j=i}^{h-1}\psi_{n,\delta}\left(D_{\mathbf{Q}}(\mathfrak{b}^{\mathbf{Q}}_{V_{j}},L_{j})-\kappa_{2}\right)\geqslant\eta n^{\frac{\alpha-1}{\alpha}}\Bigg{)}+\widehat{\mathbb{P}}_{u,h}\Bigg{(}\max_{0\leqslant i<h}\sum_{j=i}^{h-1}\psi_{n,\delta}\left(\kappa_{2}-D_{\mathbf{Q}}(\mathfrak{b}^{\mathbf{Q}}_{V_{j}},L_{j})\right)\geqslant\eta n^{\frac{\alpha-1}{\alpha}}\Bigg{)}\Bigg{]},\] where \[\psi_{n,\delta}(x)=x\mathbb{1}_{\left\{x\leqslant\max(\kappa_{2},n^{(1+\delta)^{2}(\alpha-1)/2\alpha})\right\}}.\] The last inequality may require some explanation. First we apply a union bound with respect to the sign of the expression under the absolute value. Then we use the control that \(A^{\mathbf{Q}}_{2;n,\delta}\) offers on \(\Delta(\mathbf{Q})\), the maximum diameter of the blocks of \(\mathbf{Q}\), together with the positivity of the distances \(D_{\mathbf{Q}}(\mathfrak{b}^{\mathbf{Q}}_{V_{j}},L_{j})\), to insert an indicator function. Hence the appearance of \(\psi_{n,\delta}\). **Reducing to a large deviation event with truncated variables.** We let \(\left(\widehat{\xi}_{u,j}\,,\,\widehat{\mathbf{B}}^{\text{quad}}_{u,j}\,,\,U_{j}\right)_{j\geqslant 0}\) be an i.i.d. sequence of copies of the triple \(\left(\widehat{\xi}_{u}\,,\,\widehat{\mathbf{B}}^{\text{quad}}_{u}\,,\,U\right)\). We also let \(X_{j}=D\big{(}\widehat{\mathbf{B}}^{\text{quad}}_{u,j},U_{j}\big{)}-\kappa_{2}\). Then by Lemma 18, the variables \(\big{(}D_{\mathbf{Q}}(\mathfrak{b}^{\mathbf{Q}}_{V_{j}},L_{j})-\kappa_{2}\big{)}_{0\leqslant j<h}\) appearing in the last upper bound are actually i.i.d. under \(\widehat{\mathbb{P}}_{u,h}\), with the law of \((X_{j})_{0\leqslant j<h}\).
Therefore this last upper bound is equal to \[\sum_{h=1}^{\eta^{-1}n^{\frac{\alpha-1}{\alpha}}}\left[P\left(\max_{0\leqslant i<h}\sum_{j=i}^{h-1}\psi_{n,\delta}\left(X_{j}\right)\geqslant\eta n^{\frac{\alpha-1}{\alpha}}\right)+P\left(\max_{0\leqslant i<h}\sum_{j=i}^{h-1}\psi_{n,\delta}\left(-X_{j}\right)\geqslant\eta n^{\frac{\alpha-1}{\alpha}}\right)\right].\] Since the sequence \((X_{j})_{0\leqslant j<h}\) is i.i.d., we re-order the terms of the two sums which appear inside the probabilities in the last display, so that they run on indices \(j=1,\ldots,i\). Hence, if we set \(h_{n}=n^{(\alpha-1)/\alpha}\), then we can bound the last display by \[\eta^{-1}h_{n}\left[P\left(\max_{0\leqslant i<\eta^{-1}h_{n}}\sum_{j=1}^{i}\psi_{n,\delta}\left(X_{j}\right)\geqslant\eta h_{n}\right)+P\left(\max_{0\leqslant i<\eta^{-1}h_{n}}\sum_{j=1}^{i}\psi_{n,\delta}\left(-X_{j}\right)\geqslant\eta h_{n}\right)\right].\] **Using the moderate deviations estimate.** Let \(\gamma=\gamma(\delta)=(1+\delta)^{2}/2\). Then \((h_{n})^{\gamma}\geqslant\kappa_{2}\) for \(n\) large, so \[\psi_{n,\delta}(x)=x\mathbb{1}_{\{x\leqslant\max(\kappa_{2},(h_{n})^{\gamma})\}}=x\mathbb{1}_{\{x\leqslant(h_{n})^{\gamma}\}}.\] By definition of \(\kappa_{2}\), the variables \((X_{j})\) are centered, and by Proposition 19 they possess moments of order \(\beta\) for all \(1\leqslant\beta<2\). Moreover, for \(\delta\) sufficiently small, we have \(\gamma<1\). Therefore Proposition 20 yields that \[P\left(\max_{0\leqslant i<\eta^{-1}h_{n}}\sum_{j=1}^{i}X_{j}\mathbb{1}_{\{X_{j}\leqslant(h_{n})^{\gamma}\}}\geqslant\eta h_{n}\right)\] is stretched-exponentially small as \(n\to\infty\), and the same holds when replacing \((X_{j})\) by \((-X_{j})\). This proves that for each \(\eta>0\), the probability (40) is indeed stretched-exponentially small in \(n\), and concludes the proof.

### 5.3 Scaling limit of the quadrangulations in the subcritical case

Let us finally identify the scaling limit of the quadrangulation \(\underline{\mathbf{Q}}_{n,u}\) when \(u<u_{C}\).

#### 5.3.1 Statement of the result

Denote by \(\mathcal{S}=(\mathcal{S},D^{*},\lambda)\) the _Brownian sphere_, also known as the _Brownian Map_. One may take Proposition 22 below as a definition. **Theorem 6**.: _Assume \(u<u_{C}=9/5\). We have the following convergence in distribution for the Gromov-Hausdorff-Prokhorov topology_ \[\left(\frac{9(3+u)}{8(9-5u)}\right)^{1/4}n^{-1/4}\cdot\underline{\mathbf{Q}}_{n,u}\xrightarrow[n\to\infty]{(d),\mathrm{GHP}}\mathcal{S}.\] In the case \(u=1\), one recovers the Brownian sphere as the scaling limit of uniform quadrangulations with \(n\) faces, which has been proven in [13] and [11]. It is also the scaling limit of uniform _simple_ quadrangulations with \(n\) faces, which was proven in [1]. The latter corresponds informally to the case \(u\to 0\). We emphasize that those results, and especially the one of [1], serve as an input in our proof and we do not provide a new proof of them. Accordingly, let us precisely state the latter result, so that we can use it in the subsequent proof. **Proposition 22** ([1]).: _Uniform simple quadrangulations with \(k\) faces admit the Brownian sphere as scaling limit, with the following normalization_ \[\left(\frac{3}{8k}\right)^{1/4}\cdot B_{k}^{\mathrm{quad}}\xrightarrow[k\to\infty]{(d),\mathrm{GHP}}\mathcal{S}.\] This is precisely the result [1, Theorem 1.1], restricted to the case of simple quadrangulations.
Notice that in their result, the scaling limit is stated in terms of \(M_{n}\), a uniform simple quadrangulation with \(n\) vertices, not faces. This is not a problem since by Euler's formula, a quadrangulation has \(n\) vertices if and only if it has \(n-2\) faces. Therefore \(B_{k}^{\mathrm{quad}}\) has the same law as \(M_{k+2}\). Note that Theorem 6 only deals with the quadrangulation \(\underline{\mathbf{Q}}_{n,u}\), but not the map \(\underline{\mathbf{M}}_{n,u}\). Let us detail what would be needed to obtain a similar statement for \(\underline{\mathbf{M}}_{n,u}\). * To obtain a Gromov-Hausdorff scaling limit, the missing ingredient is the equivalent for \(2\)-connected maps of the result of [1], that is to say \(\mathrm{GH}(\mathrm{P})\) convergence of uniform \(2\)-connected maps with \(n\) edges, rescaled by a constant times \(n^{-1/4}\), to the Brownian sphere. * In order to strengthen this to \(\mathrm{GHP}\) convergence when the map is equipped with the uniform measure on vertices, one would need the above mentioned convergence of \(2\)-connected maps, but in the \(\mathrm{GHP}\) sense. It would also require a way to compare, in the Prokhorov sense, the degree-biased measure on vertices of \(\underline{\mathbf{M}}_{n,u}\), and the uniform measure. For quadrangulations on the other hand, this comparison can be done using [1, Lemma 5.1]. The paper [1] makes precise the relationship between the convergence of **uniform** quadrangulations with \(n\) faces [13, 14], and the convergence of **simple uniform** quadrangulations with \(n\) faces [1]. It is shown that a quadrangulation sampled uniformly among those which have size \(n\) and whose biggest block has size \(k(n)\sim cn\) with an adequate \(c>0\), converges jointly with said biggest block to the Brownian sphere, in the \(\mathrm{GHP}\) sense. The proof of Gromov-Hausdorff convergence for these quadrangulations amounts to showing that pendant submaps that are grafted on the macroscopic block have negligible diameter, that is \(o\left(n^{1/4}\right)\), which is done by [1, Proposition 1.12]. The strategy of proof is not directly applicable here, since it uses an _a priori_ diameter bound on the pendant submaps, which we do not have for general \(u\). As explained in what follows, it is sufficient to have an _a priori_ diameter bound on single blocks themselves, which is why we need Proposition 10. To strengthen \(\mathrm{GH}\) convergence to \(\mathrm{GHP}\) convergence however, we use the same arguments as those exposed in [1] modulo some technical details. #### 5.3.2 Sketch of the proof On the combinatorics side, Theorem 2 characterizes the phase \(u<u_{C}\) by a condensation phenomenon: when \(n\) is large, there is precisely one block of linear size, while others have size \(O(n^{2/3})\). This theorem is stated for a map with law \(\mathbb{P}_{n,u}\), that is the law of \(\mathbf{M}_{n,u}\), but by Section 2.5, Tutte's bijection commutes with the block decomposition, so that the same happens for \(\mathbf{Q}_{n,u}\). On the metric side, there is not much more going on. The block-tree is subcritical in this phase by Theorem 1 and therefore has small height. Combining this with the \(O(n^{2/3})\) bound on the size of non-macroscopic blocks, and the deviation estimate of Proposition 10 on diameters of blocks, we get that \(\mathbf{Q}_{n,u}\) is approximately equal to its largest block, in the Gromov-Hausdorff sense in the scale \(n^{1/4}\). 
The argument sketched above is rather general and should be easy to adapt to other models of graphs or maps with a block-tree decomposition under a condensation regime. In order to strengthen this convergence to one in the Gromov-Hausdorff-Prokhorov sense, we use the rather general result [1, Corollary 7.2], by comparing the mass measure on vertices with a projection on the macroscopic block, which is, modulo some technical details, an exchangeable vector on the edges where the pendant submaps are attached. This corollary tells us that this random measure is well-approximated by its expectation, which is uniform on the edges of the macroscopic block, or equivalently that it is degree-biased on its vertices. The last part of the argument is specific to quadrangulations, for which we can compare the degree-biased and the uniform measure on vertices by [1, Lemma 6.1]. #### 5.3.3 Comparison of a quadrangulation and its biggest block Let us introduce some notation. Denote by \(\mathfrak{t}[v]\) the subtree of descendants of a node \(v\) in \(\mathfrak{t}\), rooted at \(v\). For an edge \(e\) of \(\mathfrak{q}\), and \(v\) the vertex of \(\mathfrak{t}\) that the block-decomposition associates to \(e\), we will denote by \(\mathfrak{q}[e]\) the quadrangulation whose block-tree decomposition is \((\mathfrak{b}^{\mathfrak{q}}_{w})_{w\in\mathfrak{t}[v]}\). Recall that by convention, if \(v\) is a leaf then \(\mathfrak{q}[e]\) is the edge map, with \(2\) vertices and \(1\) edge, the edge \(e\). Let \(v^{\circ}\) be the vertex of \(\mathfrak{t}\) with largest outdegree, choosing one arbitrarily if there are several, and let \(\mathfrak{q}^{\circ}=\mathfrak{b}^{\mathfrak{q}}_{v^{\circ}}\). Write also \(\mathfrak{q}^{+}\) for the quadrangulation whose block decomposition is \((\mathfrak{b}^{\mathfrak{q}}_{v})_{v\in\mathfrak{t}[v^{\circ}]}\). In particular, \(\mathfrak{q}^{\circ}\) is the simple core of \(\mathfrak{q}^{+}\), and its other blocks are the blocks of the pendant subquadrangulations \((\mathfrak{q}[e])_{e\in E(\mathfrak{q}^{\circ})}\). Finally, let \(\pi^{\mathfrak{q}^{+}}_{\mathfrak{q}^{\circ}}\) be the probability measure on vertices of \(\mathfrak{q}^{\circ}\) obtained by projection of the contribution to \(\nu_{\mathfrak{q}}\) of each pendant map \((\mathfrak{q}[e])_{e\in E(\mathfrak{q}^{\circ})}\) to the biggest block \(\mathfrak{q}^{\circ}\). More formally, for each edge \(e\) of \(\mathfrak{q}^{\circ}\), let \(\{e^{+},e^{-}\}\) be its extremities. Then, \[\pi^{\mathfrak{q}^{+}}_{\mathfrak{q}^{\circ}}=\frac{1}{|V(\mathfrak{q}^{+})|-|V(\mathfrak{q}^{\circ})|}\sum_{e\in E(\mathfrak{q}^{\circ})}\big{(}\big{|}V(\mathfrak{q}[e])\big{|}-2\big{)}\,\big{(}\tfrac{1}{2}\delta_{e^{-}}+\tfrac{1}{2}\delta_{e^{+}}\big{)}.\] Observe that since \(\mathfrak{q}^{\circ}\) shares exactly \(2\) vertices with each pendant map \((\mathfrak{q}[e])_{e\in E(\mathfrak{q}^{\circ})}\), the last display indeed defines a probability measure.
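To make the last observation explicit, here is the one-line check (ours), anticipating the partitioning of \(V(\mathfrak{q}^{+})\) used in the proof of Lemma 23 below: each vertex of \(\mathfrak{q}^{+}\) outside \(\mathfrak{q}^{\circ}\) lies in exactly one pendant map \(\mathfrak{q}[e]\), and each \(\mathfrak{q}[e]\) meets \(\mathfrak{q}^{\circ}\) exactly in \(\{e^{+},e^{-}\}\), so that \[\sum_{e\in E(\mathfrak{q}^{\circ})}\big{(}\big{|}V(\mathfrak{q}[e])\big{|}-2\big{)}=\big{|}V(\mathfrak{q}^{+})\big{|}-\big{|}V(\mathfrak{q}^{\circ})\big{|},\] and the weights in the definition of \(\pi^{\mathfrak{q}^{+}}_{\mathfrak{q}^{\circ}}\) indeed sum to the normalizing constant.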
**Lemma 23**.: _It holds that_ \[d_{\mathrm{GHP}}(\varepsilon\cdot\underline{\mathfrak{q}},\varepsilon\cdot\underline{\mathfrak{q}}^{\circ})\leqslant 2r_{\mathrm{GH}}+r_{\mathrm{P}}+\big{(}1-\tfrac{|V(\mathfrak{q}^{\circ})|}{|V(\mathfrak{q}^{+})|}\big{)}d_{\mathrm{P}}^{(V(\mathfrak{q}^{\circ}),\varepsilon d_{\mathfrak{q}})}\Big{(}\pi^{\mathfrak{q}^{+}}_{\mathfrak{q}^{\circ}},\nu_{\mathfrak{q}^{\circ}}\Big{)},\] _where_ \[r_{\mathrm{GH}}=2\varepsilon H(\mathfrak{t})\,\max_{v\neq v^{\circ}}\mathrm{diam}(\mathfrak{b}^{\mathfrak{q}}_{v})\quad\text{ and }\quad r_{\mathrm{P}}=\frac{2\big{|}V(\mathfrak{q})\setminus V(\mathfrak{q}^{+})\big{|}}{|V(\mathfrak{q})|}.\] Proof.: There are successive comparisons to be made for the GHP distance. **Metric comparison.** The term \(r_{\mathrm{GH}}\) bounds how distant the spaces \(\varepsilon\cdot\mathfrak{q}\), \(\varepsilon\cdot\mathfrak{q}^{+}\) and \(\varepsilon\cdot\mathfrak{q}^{\circ}\) are, from a metric point of view, _i.e._ in the GH sense. Recall that we can see \(\mathfrak{q}\) and \(\mathfrak{q}^{+}\) as their biggest block \(\mathfrak{q}^{\circ}\), together with some maps attached to it. Therefore one needs to bound the maximal diameter of the attached maps. We use a crude bound on the diameter of the non-macroscopic blocks by their maximal diameter, together with a bound on the number of consecutive blocks in the attached maps. This number is bounded by \(\mathrm{diam}(\mathfrak{t})\leqslant 2H(\mathfrak{t})\). Therefore the maximal diameter of attached maps in \(\varepsilon\cdot\mathfrak{q}\) or \(\varepsilon\cdot\mathfrak{q}^{+}\) is bounded by \[r_{\mathrm{GH}}:=2\varepsilon H(\mathfrak{t})\,\max_{v\neq v^{\circ}}\mathrm{diam}(\mathfrak{b}^{\mathfrak{q}}_{v}).\] In particular, take the correspondence \(B_{1}\) on \(V(\mathfrak{q})\times V(\mathfrak{q}^{+})\) such that \(x\in V(\mathfrak{q})\) is in correspondence with only itself if it belongs to \(V(\mathfrak{q}^{+})\), or otherwise with both endpoints of the root-edge of \(\mathfrak{q}^{+}\) if it belongs to \(V(\mathfrak{q})\setminus V(\mathfrak{q}^{+})\). The uniform measure on \(B_{1}\) is a coupling between \(\nu_{\mathfrak{q}}\) and some measure \(\mu^{+}\) on \(V(\mathfrak{q}^{+})\). One therefore gets, using the triangle inequality and (24), \[d_{\mathrm{GHP}}(\varepsilon\cdot\underline{\mathfrak{q}},\varepsilon\cdot\underline{\mathfrak{q}}^{+})\leqslant r_{\mathrm{GH}}+d_{\mathrm{P}}^{(V(\mathfrak{q}^{+}),\varepsilon d_{\mathfrak{q}})}(\mu^{+},\nu_{\mathfrak{q}^{+}}). \tag{41}\] Similarly, take the correspondence \(B_{2}\) on \(V(\mathfrak{q}^{+})\times V(\mathfrak{q}^{\circ})\) such that \(x\in V(\mathfrak{q}^{+})\) is in correspondence with only itself if it belongs to \(V(\mathfrak{q}^{\circ})\), or otherwise with both endpoints \(\{e^{+},e^{-}\}\) of the root-edge of \(\mathfrak{q}[e]\) if \(x\) belongs to \(V(\mathfrak{q}[e])\setminus\{e^{+},e^{-}\}\) for some edge \(e\in E(\mathfrak{q}^{\circ})\). Then the uniform measure on \(B_{2}\) is a coupling between \(\nu_{\mathfrak{q}^{+}}\) and some measure \(\mu^{\circ}\) on \(V(\mathfrak{q}^{\circ})\). We get as above \[d_{\mathrm{GHP}}(\varepsilon\cdot\underline{\mathfrak{q}}^{+},\varepsilon\cdot\underline{\mathfrak{q}}^{\circ})\leqslant r_{\mathrm{GH}}+d_{\mathrm{P}}^{(V(\mathfrak{q}^{\circ}),\varepsilon d_{\mathfrak{q}})}(\mu^{\circ},\nu_{\mathfrak{q}^{\circ}}).
\tag{42}\] **Comparing the uniform measures on vertices of \(\mathfrak{q}\) and \(\mathfrak{q}^{+}\).** Observe that \(\nu_{\mathfrak{q}^{+}}\) is the counting measure on \(V(\mathfrak{q}^{+})\) renormalized to a probability distribution, while \(\mu^{+}\) is the renormalized version of the same counting measure but with additional mass \[m:=|V(\mathfrak{q})\setminus V(\mathfrak{q}^{+})|-2,\] the latter being split equally on the endpoints of the root-edge of \(\mathfrak{q}^{+}\). Elementarily, this yields a total variation bound, as follows \[d_{\mathrm{TV}}(\mu^{+},\nu_{\mathfrak{q}^{+}})\leqslant\frac{2m}{|V(\mathfrak{q})|}\leqslant\frac{2|V(\mathfrak{q})\setminus V(\mathfrak{q}^{+})|}{|V(\mathfrak{q})|}=:r_{\mathrm{P}}.\] Since the Prokhorov distance is bounded by the total variation distance, we have \[d_{\mathrm{P}}^{(V(\mathfrak{q}^{+}),\varepsilon d_{\mathfrak{q}})}(\mu^{+},\nu_{\mathfrak{q}^{+}})\leqslant r_{\mathrm{P}}. \tag{43}\] **Comparing the uniform measures on vertices of \(\mathfrak{q}^{+}\) and \(\mathfrak{q}^{\circ}\).** From the definition of \(\pi_{\mathfrak{q}^{\circ}}^{\mathfrak{q}^{+}}\) and from the partitioning \[V(\mathfrak{q}^{+})=V(\mathfrak{q}^{\circ})\bigsqcup_{e\in E(\mathfrak{q}^{\circ})}V(\mathfrak{q}[e])\setminus\{e^{+},e^{-}\},\] observe that the measure \(\mu^{\circ}\) obtained from the correspondence \(B_{2}\) above decomposes as follows \[\mu^{\circ}=\tfrac{|V(\mathfrak{q}^{\circ})|}{|V(\mathfrak{q}^{+})|}\nu_{\mathfrak{q}^{\circ}}+\tfrac{|V(\mathfrak{q}^{+})|-|V(\mathfrak{q}^{\circ})|}{|V(\mathfrak{q}^{+})|}\pi_{\mathfrak{q}^{\circ}}^{\mathfrak{q}^{+}}.\] In particular, we obtain from (25) that \[d_{\mathrm{P}}^{(V(\mathfrak{q}^{\circ}),\varepsilon d_{\mathfrak{q}})}(\mu^{\circ},\nu_{\mathfrak{q}^{\circ}})=\left(1-\tfrac{|V(\mathfrak{q}^{\circ})|}{|V(\mathfrak{q}^{+})|}\right)d_{\mathrm{P}}^{(V(\mathfrak{q}^{\circ}),\varepsilon d_{\mathfrak{q}})}\Big{(}\pi_{\mathfrak{q}^{\circ}}^{\mathfrak{q}^{+}},\nu_{\mathfrak{q}^{\circ}}\Big{)}. \tag{44}\] **Concluding the proof.** By the triangle inequality, we have \[d_{\mathrm{GHP}}(\varepsilon\cdot\underline{\mathfrak{q}},\varepsilon\cdot\underline{\mathfrak{q}}^{\circ})\leqslant d_{\mathrm{GHP}}(\varepsilon\cdot\underline{\mathfrak{q}},\varepsilon\cdot\underline{\mathfrak{q}}^{+})+d_{\mathrm{GHP}}(\varepsilon\cdot\underline{\mathfrak{q}}^{+},\varepsilon\cdot\underline{\mathfrak{q}}^{\circ}).\] Using (41) and (43) to bound the first term, and (42) and (44) to bound the second one, we get the claimed inequality. #### 5.3.4 Exchangeable decorations We aim to use Addario-Berry & Wen's argument for [1, Lemma 6.2], which shows that for exchangeable attachments of mass on edges of \(Q_{n}\), a quadrangulation with \(n\) faces sampled uniformly, the resulting measure on \(Q_{n}\) is asymptotically close to the uniform measure on vertices, in the sense of the Prokhorov distance on \(n^{-1/4}\cdot Q_{n}\). They use the following ingredients: 1. A concentration inequality [1, Lemma 5.2] which compares the measure with exchangeable attachments of mass on edges to the degree-biased measure on vertices. 2. A Prokhorov comparison [1, Lemma 5.1] between the degree-biased and uniform measure on vertices of a quadrangulation. 3. GHP convergence of \(n^{-1/4}\cdot\underline{Q}_{n}\) to the Brownian sphere. 4. Properties of the Brownian sphere such as compactness and re-rooting invariance. The first ingredient is rather general and actually stated for any graph in [1, Lemma 5.3].
We will ever-so-slightly adapt its proof since there is a double edge in their setting which we do not have, and the mass is not projected on vertices in the exact same way. The second ingredient is specific to quadrangulations and one may need different arguments to compare the degree-biased and uniform measures for other classes of maps. Let us state which result we extract for our purpose from Addario-Berry & Wen's paper. For \(\mathbf{n}=(\mathbf{n}(e))_{e\in E(G)}\) a family of nonnegative numbers indexed by edges of a graph \(G\), we denote its \(p\)-norm for \(p\geqslant 1\) by \[|\mathbf{n}|_{p}:=\left(\sum_{e\in E(G)}(\mathbf{n}(e))^{p}\right)^{1/p}.\] Then define the following measure on \(V(G)\): \[\mu_{G}^{\mathbf{n}}:=\frac{1}{|\mathbf{n}|_{1}}\sum_{e\in E(G)}\mathbf{n}(e)\big{(}\tfrac{1}{2}\delta_{e^{+}}+\tfrac{1}{2}\delta_{e^{-}}\big{)},\] with \(\{e^{+},e^{-}\}\) the set of endpoints of the edge \(e\). Notice that this definition is slightly different from that of \(\nu_{G}^{\mathbf{n}}\) in [1, Section 5], because the mass of an edge is projected uniformly and independently on either of its endpoints in their case, while we deterministically split this mass on both endpoints. This does not change much except that we find it easier to work with. One of their results translates as follows. **Proposition 24** ([1, Corollary 6.2]).: _Let \(Q_{k}=B_{k}^{\mathrm{quad}}\), which is a simple quadrangulation with \(k\) faces, sampled uniformly. Consider, for each \(k\geqslant 1\), a random family \(\mathbf{n}_{k}=(\mathbf{n}_{k}(e))_{e\in E(Q_{k})}\) of nonnegative numbers, such that conditionally on \(Q_{k}\) it is an exchangeable family. Assume that \(|\mathbf{n}_{k}|_{2}/|\mathbf{n}_{k}|_{1}\to 0\) in probability as \(k\to\infty\). Then there holds the convergence in probability_ \[d_{\mathrm{P}}^{(V(Q_{k}),\varepsilon_{k}d_{Q_{k}})}\Big{(}\mu_{Q_{k}}^{\mathbf{n}_{k}},\nu_{Q_{k}}\Big{)}\xrightarrow[k\to\infty]{\mathbb{P}}0,\] _where \(\nu_{Q_{k}}\) is the uniform measure on vertices of \(Q_{k}\) and \(\varepsilon_{k}=k^{-1/4}\)._ This is the statement of [1, Corollary 6.2], adapted to our setting. The proof goes _mutatis mutandis_, except for an adjustment in the concentration inequality [1, Lemma 5.3], which we adapt below in Lemma 25. **Lemma 25** ([1, Lemma 5.3]).: _Let \(G\) be a graph and \(\mathbf{n}=(\mathbf{n}(e))_{e\in E(G)}\) a random and exchangeable family of nonnegative numbers with \(|\mathbf{n}|_{2}>0\) almost surely. Then for any \(V\subset V(G)\), and any \(t>0\),_ \[\mathbb{P}\left(\left|\mu_{G}^{\mathbf{n}}(V)-\nu_{G}^{\mathrm{B}}(V)\right|>\frac{2t}{|\mathbf{n}|_{1}}\ \middle|\ |\mathbf{n}|_{2}\right)\leqslant 2\exp\left(-\frac{2t^{2}}{|\mathbf{n}|_{2}^{2}}\right).\] The proof goes the same way as that of [1, Lemma 5.3], except that we do not have a double edge here, and the mass on edges is projected deterministically on vertices in our case, instead of randomly. The reader may notice that there is an extra term inside the probability in their lemma. This term accounts for the double edge, which we do not have here. The same line of arguments still works though. Indeed, we have \[\mu_{G}^{\mathbf{n}}(V)=\sum_{e\in E(G[V])}\frac{\mathbf{n}(e)}{|\mathbf{n}|_{1}}+\frac{1}{2}\sum_{e\in\partial_{e}V}\frac{\mathbf{n}(e)}{|\mathbf{n}|_{1}},\] with \(G[V]\) the graph induced by \(G\) on \(V\), and \(\partial_{e}V\) the subset of edges of \(G\) which have exactly one endpoint in \(V\).
By exchangeability, we have the expectation \[\mathbb{E}\left[\sum_{e\in E(G[V])}\mathbf{n}(e)+\frac{1}{2}\sum_{e\in\partial_{e}V}\mathbf{n}(e)\ \middle|\ |\mathbf{n}|_{1}\right]=|\mathbf{n}|_{1}\frac{|E(G[V])|}{|E(G)|}+|\mathbf{n}|_{1}\frac{\tfrac{1}{2}|\partial_{e}V|}{|E(G)|}=|\mathbf{n}|_{1}\nu_{G}^{\mathrm{B}}(V).\] The last equality holds because the degree-biased measure counts each edge of \(G[V]\) twice, since this edge appears in the degree of both its endpoints, while the edges of \(\partial_{e}V\) are only counted once, in the degree of their unique endpoint in \(V\). Then one concludes as in the proof of [1, Lemma 5.3], by a Hoeffding-type bound for exchangeable vectors. #### 5.3.5 Proof of Theorem 6 **Scaling limit of the biggest block.** By Proposition 3 and Proposition 4, the biggest block of \(\mathbf{Q}_{n,u}\), whose size we denote \(C(n,u)\), is, conditionally on its size, a uniform simple quadrangulation of size \(C(n,u)\). Also, by Theorem 2, this size satisfies, asymptotically in probability, \[C(n,u)=(1-E(u))n+O_{\mathbb{P}}(n^{2/3})=\frac{9-5u}{3(3+u)}n+O_{\mathbb{P}}(n^{2/3}).\] By conditioning on \(C(n,u)\) and using Proposition 22, we therefore get the following GHP scaling limit for the biggest block \[\left(\frac{3}{8C(n,u)}\right)^{1/4}\cdot\underline{\mathbf{Q}}_{n,u}^{\circ}\xrightarrow[n\to\infty]{(d),\mathrm{GHP}}\mathcal{S},\] which, by the preceding equivalent in probability for \(C(n,u)\) (so that \(\frac{3}{8C(n,u)}\sim\frac{9(3+u)}{8(9-5u)n}\)), reduces to \[\left(\frac{9(3+u)}{8(9-5u)}\right)^{1/4}n^{-1/4}\cdot\underline{\mathbf{Q}}_{n,u}^{\circ}\xrightarrow[n\to\infty]{(d),\mathrm{GHP}}\mathcal{S}.\] **GHP comparison of \(\mathbf{Q}_{n,u}\) with its biggest block.** By the preceding scaling limit, and the use of Lemma 23 with \(\mathfrak{q}=\mathbf{Q}_{n,u}\) and \(\varepsilon=n^{-1/4}\), the proof of the theorem reduces to showing the convergence to \(0\) in probability of the following quantities \[r_{\mathrm{GH}}:=\frac{2}{n^{1/4}}H(\mathbf{T}_{n,u})\max_{v\neq v^{\circ}}\operatorname{diam}(\mathfrak{b}_{v}^{\mathbf{Q}_{n,u}})\] \[r_{\mathrm{P}}:=\frac{2\big{|}V(\mathbf{Q}_{n,u})\setminus V(\mathbf{Q}_{n,u}^{+})\big{|}}{|V(\mathbf{Q}_{n,u})|}\] \[d_{\mathrm{P}}:=d_{\mathrm{P}}^{(V(\mathbf{Q}_{n,u}^{\circ}),\varepsilon_{n}d_{\mathbf{Q}_{n,u}})}\Big{(}\pi_{\mathbf{Q}_{n,u}^{\circ}}^{\mathbf{Q}_{n,u}^{+}},\nu_{\mathbf{Q}_{n,u}^{\circ}}\Big{)},\] where \(\varepsilon_{n}=n^{-1/4}\). **Bounding \(r_{\mathrm{GH}}\).** By Theorem 2, the second-biggest block of \(\mathbf{Q}_{n,u}\) has size \(O(n^{2/3})\) in probability. Combining this with Corollary 11, one gets for all \(\delta>0\) the bound in probability \[\max_{v\neq v^{\circ}}\operatorname{diam}(\mathfrak{b}_{v}^{\mathbf{Q}_{n,u}})=o\left(n^{(1+\delta)/6}\right).\] Also, by Theorem 1, \(\mathbf{T}_{n,u}\) is a _non-generic subcritical_ Galton-Watson tree conditioned to have \(2n+1\) vertices, in the terminology of [13]. We may therefore use [13, Theorem 4] to get for all \(\delta>0\) the bound in probability \[H(\mathbf{T}_{n,u})=o\left(n^{\delta}\right).\] Combining the two preceding estimates, we get in probability \[r_{\mathrm{GH}}=o\left(n^{-\tfrac{1}{4}+\delta+\tfrac{(1+\delta)}{6}}\right)\xrightarrow[n\to\infty]{}0,\] provided that we choose \(\delta>0\) small enough so that \(\delta+(1+\delta)/6<1/4\).
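Explicitly, the condition on \(\delta\) is elementary arithmetic (our check): \(\delta+(1+\delta)/6<1/4\) is equivalent to \(6\delta+1+\delta<3/2\), that is \[7\delta<\tfrac{1}{2},\qquad\text{i.e.}\qquad\delta<\tfrac{1}{14},\] so any \(\delta\in(0,1/14)\) is suitable.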
**Bounding \(r_{\mathrm{P}}\).** First, notice that since \(\mathbf{Q}_{n,u}\) is a quadrangulation with \(n\) faces we have \[|E(\mathbf{Q}_{n,u})|=2n\qquad\text{and}\qquad|V(\mathbf{Q}_{n,u})|=n+2,\] and by the block-tree decomposition which puts in correspondence edges of \(\mathbf{Q}_{n,u}\) and edges of \(\mathbf{T}_{n,u}\), we also have \[\big{|}V(\mathbf{Q}_{n,u})\setminus V(\mathbf{Q}_{n,u}^{+})\big{|}=\big{|}E(\mathbf{Q}_{n,u})\big{|}-\big{|}E(\mathbf{Q}_{n,u}^{+})\big{|}=\big{|}E(\mathbf{T}_{n,u})\big{|}-\big{|}E(\mathbf{T}_{n,u}^{+})\big{|}=\big{|}E(\mathbf{T}_{n,u}\setminus\mathbf{T}_{n,u}^{+})\big{|}.\] Therefore we have to bound the size of the subtree \(\mathbf{T}_{n,u}\setminus\mathbf{T}_{n,u}^{+}\). A moment of thought shows that it is bounded by \[U_{\to}(\mathbf{T}_{n,u})+U_{\leftarrow}(\mathbf{T}_{n,u}),\] where \(U_{\to}(\mathfrak{t})\) is the index in lexicographical order of the vertex with largest degree of the tree \(\mathfrak{t}\), and \(U_{\leftarrow}(\mathfrak{t})\) is the index in reverse lexicographical order of that same vertex. Now, [13, Theorem 2] shows that \((U_{\to}(\mathbf{T}_{n,u}))_{n\geqslant 1}\) is a tight sequence. Since \(U_{\leftarrow}(\mathbf{T}_{n,u})\) has the same law as \(U_{\to}(\mathbf{T}_{n,u})\), the corresponding sequence is also tight. All in all, we get in probability \[r_{\mathrm{P}}=O(1/n)\xrightarrow[n\to\infty]{}0.\] **Bounding \(d_{\mathrm{P}}\).** Notice that \[d_{\mathrm{P}}=d_{\mathrm{P}}^{(V(\mathbf{Q}_{n,u}^{\circ}),\varepsilon_{n}d_{\mathbf{Q}_{n,u}})}\Big{(}\pi_{\mathbf{Q}_{n,u}^{\circ}}^{\mathbf{Q}_{n,u}^{+}},\nu_{\mathbf{Q}_{n,u}^{\circ}}\Big{)}=d_{\mathrm{P}}^{(V(\mathbf{Q}_{n,u}^{\circ}),\varepsilon_{n}d_{\mathbf{Q}_{n,u}})}\Big{(}\mu_{\mathbf{Q}_{n,u}^{\circ}}^{\mathbf{n}},\nu_{\mathbf{Q}_{n,u}^{\circ}}\Big{)},\] where \(\mathbf{n}=\mathbf{n}_{n,u}\) is the family of nonnegative numbers defined by \[\forall e\in E(\mathbf{Q}_{n,u}^{\circ}),\quad\mathbf{n}(e)=|V(\mathbf{Q}_{n,u}[e])|-2.\] Let us argue that conditionally on \(\mathbf{Q}_{n,u}^{\circ}\), this family \(\mathbf{n}\) is exchangeable. Recall that \(\mathbf{Q}_{n,u}\) has the law of \(\mathbf{Q}\) under \(\mathbb{P}_{u}\), conditioned on the event \(\{|\mathbf{Q}|=n\}\). By the symmetries of the Galton-Watson law and Proposition 7, the family \[(|V(\mathbf{Q}[e])|-2)_{e\in E(\mathbf{Q}^{\circ})}=(|E(\mathbf{T}[v_{e}])|-2)_{e\in E(\mathbf{Q}^{\circ})}\] is i.i.d. conditionally on \(\mathbf{Q}^{\circ}\), where \(v_{e}\) is the child of \(v^{\circ}\) that the block-tree decomposition associates to \(e\). In particular, this family is exchangeable. Since the event \(\{|\mathbf{Q}|=n\}\) is invariant under each permutation of the subtrees attached to the node \(v^{\circ}\) with their respective blocks, the above family stays exchangeable when conditioning on this event. Therefore \(\mathbf{n}\) is indeed exchangeable. Now, [13, Corollary 1] tells us that the subtrees \((\mathbf{T}[v_{e}])_{e\in E(\mathbf{Q}_{n,u}^{\circ})}\) have size \(O(n^{2/3})\) in probability, uniformly in the edge \(e\). We thus get that \[|\mathbf{n}|_{2}=O\left(\sqrt{n^{5/3}}\right).\] On the other hand, we have in probability \[|\mathbf{n}|_{1}=|V(\mathbf{Q}_{n,u}^{+})|-|V(\mathbf{Q}_{n,u}^{\circ})|\sim cn,\] for some constant \(c>0\).
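To spell out the bound on \(|\mathbf{n}|_{2}\) stated above (our one-line justification): since \(\mathbf{n}(e)=O(n^{2/3})\) uniformly in \(e\) and \(\sum_{e}\mathbf{n}(e)=|\mathbf{n}|_{1}=O(n)\), we have \[|\mathbf{n}|_{2}^{2}=\sum_{e}\mathbf{n}(e)^{2}\leqslant\Big{(}\max_{e}\mathbf{n}(e)\Big{)}\sum_{e}\mathbf{n}(e)=O(n^{2/3})\,O(n)=O(n^{5/3}).\]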
Hence, in probability \[\frac{|\mathbf{n}|_{2}}{|\mathbf{n}|_{1}}=O\left(n^{-1/6}\right)\xrightarrow[n\to\infty]{}0.\] All the hypotheses of Proposition 24 have been checked, so that we may apply it, after conditioning on the size of \(\mathbf{Q}_{n,u}^{\circ}\), since conditionally on its size \(k\) it is a uniform simple quadrangulation of size \(k\). We obtain in probability \[d_{\mathrm{P}}^{(V(\mathbf{Q}_{n,u}^{\circ}),\varepsilon_{n}d_{\mathbf{Q}_{n,u}})}\Big{(}\mu_{\mathbf{Q}_{n,u}^{\circ}}^{\mathbf{n}},\nu_{\mathbf{Q}_{n,u}^{\circ}}\Big{)}\xrightarrow[n\to\infty]{}0.\] Hence, \(d_{\mathrm{P}}\) also tends to \(0\) in probability and this concludes the proof. ## 6 Concluding remarks and perspectives We have exhibited a phase transition phenomenon for two closely related models of random maps with a weight \(u>0\) per block. The phase transition occurs at \(u=9/5\), and we have established the existence of three regimes, regarding the size of the largest block, and regarding the scaling limit (and the order of magnitude of distances). **Extension to other models.** Our method can be generalised to other models which can be decomposed into appropriate blocks with an underlying tree structure, for example the models described in [1, Table 3], which is partially reproduced in Table 2. A _triangulation_ is a map where all faces have degree \(3\). It is _irreducible_ if every \(3\)-cycle defines a face. In this section, we use the same notation for the various models as in the rest of the article. Models described in [1, Table 3], where maps are decomposed into blocks weighted with a weight \(u>0\), undergo a phase transition at the critical value \(u_{C}\) written down in Table 3. More precisely, Theorems 1 to 4 hold for these models with the constants of Table 3. Notice that for the decomposition of general maps into \(2\)-connected maps (_i.e._ the schema linking \(\mathcal{M}_{1}\) and \(\mathcal{M}_{4}\)) -- which is the case studied in this paper -- we get results consistent with Theorem 1. Moreover, the values of \(u_{C}\) and \(E(u)\) are consistent since it always holds that \(E(u_{C})=1\). Furthermore, for \(u=1\), we retrieve the results of [1, Table 4]: indeed, our \(1-E(1)\) equals their \(\alpha_{0}\) (see Footnote 3). Footnote 3: It is not obvious at first glance that this should be the case for the case of simple triangulations decomposed into irreducible cores, because each node of the Galton-Watson tree corresponds to a sequence of blocks. However, an extreme condensation phenomenon occurs and the mass is concentrated in only one element of the sequence, so the behaviour remains similar. Models from [1, Table 3] are amenable to computations similar to this article's in order to get the values above. We show in Table 3 the most obvious results; models requiring more care will be described in a separate note. In the cases of Table 2, there is \(d\in\mathbb{Z}_{>0}\) such that \(H(z)=z(1+M)^{d}\), and the corresponding law \(\mu^{u}\) (except for triangulations) comes naturally as: \[\mu^{u}(dm)=\frac{\mathbb{1}_{m\neq 0}ub_{m}y(u)^{m}+\mathbb{1}_{m=0}}{uB(y(u))+1-u},\qquad\mu^{u}(m)=0\quad\text{when}\quad d\nmid m.\] The cases dealing with triangulations require more care as the series are counted by vertices but the substitution is done on edges in one case, and on internal faces in the other; but keeping this in mind, the same methods can be applied.
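As a sanity check of the consistency claim \(E(u_{C})=1\) (our arithmetic, for the row \((\mathcal{M}_{1},\mathcal{M}_{4})\) of Table 3): \[E(9/5)=\frac{8\cdot\tfrac{9}{5}}{3\big{(}\tfrac{9}{5}+3\big{)}}=\frac{72/5}{72/5}=1,\qquad 1-E(1)=1-\frac{8}{12}=\frac{1}{3},\] in agreement with the displayed values.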
For all models, we expect to get regimes similar to those in Table 1 (assuming the convergence of the family of blocks is known, as well as diameter estimates). However, conditioning is more difficult for some models, as the size of the map is not always immediately deduced from the size of the Galton-Watson tree (_e.g._ for simple triangulations (\(\mathcal{T}_{2}\)) decomposed into irreducible triangulations (\(\mathcal{T}_{3}\)), the size is the number of leaves of the Galton-Watson tree). **Perspectives.** We plan to study similar models in the context of decorated planar maps (_e.g._ tree-rooted maps or Schnyder woods), where the generating series exhibit different singular behaviours. In future work, we also want to investigate more closely the rate of the phase transition at the critical value \(u=9/5\), in analogy to the study of the largest component for the Erdős-Rényi random graph [13]. Finally, as mentioned in the introduction, the model of maps with a weight \(u\) per \(2\)-connected block has been studied as encoding certain discrete spaces of dimension larger than \(2\), with motivations from theoretical physics [1, 2]. The metric properties are however modified via the correspondence, and it would be interesting to determine if the scaling limits remain the same. ## Acknowledgments The authors wish to thank Marie Albenque, Eric Fusy and Gregory Miermont for their supervision throughout this work, and for all the invaluable comments and discussions. \begin{table} \begin{tabular}{l l l} maps, \(M(z)\) & cores, \(C(z)\) & submaps, \(H(z)\) \\ \hline loopless, \(M_{2}(z)\) & simple, \(M_{3}(z)\) & \(z(1+M)\) \\ all, \(M_{1}(z)\) & 2-connected, \(M_{4}(z)\) & \(z(1+M)^{2}\) \\ 2-connected, \(M_{4}(z)-z\) & 2-connected simple, \(M_{5}(z)\) & \(z(1+M)\) \\ \hline bipartite, \(B_{1}(z)\) & bipartite simple, \(B_{2}(z)\) & \(z(1+M)\) \\ bipartite, \(B_{1}(z)\) & bipartite 2-connected, \(B_{4}(z)\) & \(z(1+M)^{2}\) \\ bipartite 2-connected, \(B_{4}(z)\) & bipartite 2-connected simple, \(B_{5}(z)\) & \(z(1+M)\) \\ \hline loopless triangulations, \(T_{1}(z)\) & simple triangulations, \(z+zT_{2}(z)\) & \(z(1+M)^{3}\) \\ simple triangulations, \(T_{2}(z)\) & irreducible triangulations, \(T_{3}(z)\) & \(z(1+M)^{2}\) \\ \end{tabular} \end{table} Table 2: Partial reproduction of [12, Table 3], which describes composition schemas of the form \(\mathcal{M}=\mathcal{C}\circ\mathcal{H}\) except the last one where \(\mathcal{M}=(1+\mathcal{M})\times\mathcal{C}\circ\mathcal{H}\). The parameter \(z\) counts vertices (up to a fixed shift) in the case of triangulations, edges otherwise. Some terms have been changed to correspond to the conventions used in this article.
\begin{table} \begin{tabular}{c c|c c c} Maps & Cores & \(u_{C}\) & \(E(u)\) & \(1-E(1)\) \\ \hline \(\mathcal{M}_{2}\) & \(\mathcal{M}_{3}\) & \(\frac{81}{17}\) & \(\frac{32u}{3(5u+27)}\) & \(\frac{2}{3}\) \\ \(\mathcal{M}_{1}\) & \(\mathcal{M}_{4}\) & \(\frac{9}{5}\) & \(\frac{8u}{3(u+3)}\) & \(\frac{1}{3}\) \\ \(\mathcal{M}_{4}-\mathcal{Z}\) & \(\mathcal{M}_{5}\) & \(\frac{135}{7}\) & \(\frac{32u}{5(5u+27)}\) & \(\frac{4}{5}\) \\ \hline \(\mathcal{B}_{1}\) & \(\mathcal{B}_{2}\) & \(\frac{36}{11}\) & \(\frac{20u}{9(u+4)}\) & \(\frac{5}{9}\) \\ \(\mathcal{B}_{1}\) & \(\mathcal{B}_{4}\) & \(\frac{52}{27}\) & \(\frac{40u}{13(u+4)}\) & \(\frac{5}{13}\) \\ \(\mathcal{B}_{4}\) & \(\mathcal{B}_{5}\) & \(\frac{68}{3}\) & \(\frac{20u}{17(u+4)}\) & \(\frac{13}{17}\) \\ \hline \(\mathcal{T}_{1}\) & \(\mathcal{Z}+\mathcal{Z}\times\mathcal{T}_{2}\) & \(\frac{16}{7}\) & \(\frac{9u}{2(u+8)}\) & \(\frac{1}{2}\) \\ \(\mathcal{T}_{2}\) & \(\mathcal{T}_{3}\) & \(\frac{64}{37}\) & \(\frac{27u}{2(32-5u)}\) & \(\frac{1}{2}\) \\ \end{tabular} \end{table} Table 3: Values of \(u_{C}\), \(E(u)\) when \(u\leqslant u_{C}\) and \(1-E(1)\) for all the decomposition schemes of Table 2.
2301.03464
Swampland Revisited
The transcendental expectation of string theory is that the nature of the fundamental forces, particle spectra and masses, together with coupling constants, is uniquely determined by mathematical and logical consistency, non-empirically, that is by pure reason. However pluralism triumphed with the explosive emergence of the multiverse. String theorists have extended a long-sought dream (their unique and final theory) to a landscape or a happy capharnaum. Proponents of string theory try to qualify their arguments via swampland conjectures while cosmologists retreat to their telescopes. We review the current status of the string theory swampland.
Michel Cassé, Joseph Silk
2023-01-03T19:00:01Z
http://arxiv.org/abs/2301.03464v1
# Swampland Revisited ###### Abstract The transcendental expectation of string theory is that the nature of the fundamental forces, particle spectra and masses, together with coupling constants, is uniquely determined by mathematical and logical consistency, non-empirically, that is by pure reason. However pluralism triumphed with the explosive emergence of the multiverse. String theorists have extended a long-sought dream (their unique and final theory) to a landscape or a happy capharnaum. Proponents of string theory try to qualify their arguments via swampland conjectures while cosmologists retreat to their telescopes. We review the current status of the string theory swampland. We live near a star called the Sun in a republic of stars called the Milky Way, one of the billions of galaxies composing the Universe, where the vacuum energy is positive. This should not be the case according to the superstring ansatz embodied by Cumrun Vafa and others [1, 2, 3, 4]. Why? Let us begin with the basics. We will critically review the swampland conjecture for the benefit of our astrophysics colleagues. ## 1 Swampland conjectures: out of the mud Instead of modeling particles as zero-dimensional points, string theory envisions them as tiny twigs undergoing diverse modes of vibrations corresponding to mass and charge. The String Theory _landscape_ or abstract panorama exhibits a huge number of semiclassical field theories. The number of possible theories has exploded. The problem is how to find, from this huge number, the most suitable theory, if any, that is consistent with our world. To anticipate our conclusion, this vision that _seems consistent but possibly not_ illustrates the fuzziness of the concepts held by landscape and swampland advocates. In the following, for the sake of clarity, we split the problem into digestible slices. Firstly, what is the Swampland? It is more of a program than a geographical topology. The Swampland program aims at distinguishing effective theories which can be extended into the realm of quantum gravity at high energy (in the ultraviolet) from those which cannot. Secondly, what is a semi-classical theory? In this restricted category, advocated by Stephen Hawking, matter is quantized, but not spacetime. In this uncertain, non-empirical and debatably oxymoronic framework, as emphasized elsewhere [5, 6], the impossible occurs: black holes shine. This is one of the most profound predictions in physics, hitherto unverified. Next, what is an effective theory? We all work in terms of effective theories. We find descriptions that match what we actually see, interact with, and measure. Newton's laws are approximations that work at relatively low velocities and for large macroscopic objects. The combination of General Relativity and Quantum Field Theory along with the \(\Lambda\)CDM cosmological model are effective theories or approximations of quantum gravitation and quantum cosmology, respectively. A good effective theory should tell us its limitations: the conditions and values of parameters for which the theory breaks down. This notion is practical and valuable. It is valuable up to a certain point, or more exactly up to a certain energy, a kind of Heisenberg cut but in cosmology. The laws of the effective theory succeed until we reach its limits, when its assumptions are no longer true or our measurements or requirements become increasingly precise. What lies beyond might be a more fundamental truth. And beyond, or within, lies an even more fundamental truth.
Then, what indeed is fundamental? This is a crucial issue but would take too long to discuss in this pedagogical perspective, where we adopt a reductionist stance rather than a holistic/collective one. As a quantum description, string theory has the potential to incorporate gravity, at the expense, however, of increasing the number of space dimensions. In its extended version, 11-dimensional M theory is still a vibrant area of research but it lacks experimental or observational support and therefore seeks justification in cosmological observations. Here success is not guaranteed, on the contrary. So what is right? String theory or cosmology? The greatest difficulty with this type of theory is being able to test it with observations. Then one could circumvent the problem, or at least avoid it, by studying the effective field theories which are semi-classical approximations. These theories can be tested experimentally, at least up to energy regimes that are so extreme as to require passage to a real String Theory. The Swampland Conjecture is based on the idea of finding general characteristics that a semi-classical effective field theory must satisfy to be considered a truly coherent theory. These are restrictions such as the finiteness of scalar field excursions, or lower limits that the derivatives of the potentials appearing in these theories must satisfy. ### First swampland criterion This first criterion serves to select those models at very small distances (extreme energies), compatible with early cosmological times, close to the Big Bang, which must be stable and convergent near the singularity. We are obviously dealing with effective field theories in which the variation of the field, in absolute value, must not exceed a certain value which in Planck units is of the order of unity. ### Second swampland criterion This serves to select models suitable for describing late times, our time, and it is a restriction on the slope of the potential with respect to the value of the potential itself. The criterion purports that this scenario does not derive from the presence of a positive cosmological constant, but from the existence of a potential dependent on a scalar field that is rolling along its profile, such as the inflationary potentials that we will consider in our analysis. Imposing that \(|\Delta V|/V\), when \(V>0\), has a lower bound of the order of 1 in Planck units means that the potential has a small slope when the contribution of Dark Energy is equally small. ### de Sitter denial A positive vacuum energy can be realized by a scalar field potential with a local minimum leading to a long-lived de Sitter universe with an entropy proportional to the area of the cosmological horizon. However it could be that the potential is positive but the scalar field does not rest at a minimum. This is the case in quintessence models, as long as the absolute value of the gradient of the potential is small and of the order of the potential itself [7]. Such an option is adopted by C. Vafa and others, to confront the difficulty in producing metastable de Sitter vacua in string theory. One consequence is that dS space, the attached cosmological constant \(\Lambda\), and the standard model of cosmology \(\Lambda\)CDM, all sink in the swampland of failed theories. Given this extraordinary claim, it is legitimate to ask if the condemnation of dS is definitive and irrevocable.
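To fix ideas before weighing that question, consider the textbook exponential potential often used in quintessence models (our illustration, not taken from the references): for \(V(\phi)=V_{0}\,e^{-\lambda\phi/M_{Pl}}\) with \(V_{0}>0\), \[\frac{|\Delta V|}{V}=\frac{\lambda}{M_{Pl}},\] so the second criterion is satisfied whenever \(\lambda\gtrsim 1\); such a field rolls indefinitely instead of settling into a de Sitter minimum.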
The main argument for anti-\(\Lambda\) is probably the strangeness of dS, which is maximally symmetric but not supersymmetric [8]. But one must admit that it is not strictly possible to construct SUSY theories in de Sitter space. Formally, the dS superalgebra cannot be justified unless the action is constructed with matter coupled to the wrong sign of the kinetic term. KKLT overcome this objection by uplifting AdS vacua into dS ones, and the proposal that there might be no dS in superstring theory is fervently debated [9, 10, 11]. ## 2 String tourism and ecology The eloquence of the String Theory Landscape masks the unease of the practitioners (not necessarily believers) when confronted with the proliferation of false vacua. The contents are not necessarily strings, but more general higher-dimensional dynamical objects called branes. The formalism is not yet a theory, rather a paradigm or a fashion, but rich in universes [12]. When we refer to a de Sitter vacuum of string theory, we mean a vacuum which is a local minimum of the scalar potential where the potential takes a positive value. The choices for reconciliation of the huge initial value of the cosmological constant required by inflation range from an abrupt phase transition to a slowly rolling quintessence field. The bulk of observations is consistent with a dark energy component of current energy density \(10^{-120}M_{Pl}^{4}\) where \(M_{Pl}=1/\sqrt{8\pi G}\) is the reduced Planck mass. The simplest version of dark energy is indeed that of a positive cosmological constant (CC), and so it is natural to ask if this can be incorporated within string theory. An alternative to the CC is the existence of a time-dependent scalar potential energy associated with a scalar field dubbed _Quintessence_, probably in honor of Aristotle. Although appealing, it appears highly problematic. Indeed it is plagued by two severe fine tunings: that of its energy density and that of its time variation. Why so small and why now? Nevertheless, the observed acceleration of the Universe is customarily associated with the value of the scalar potential at present due to an interesting coincidence between two fundamental mass scales: the scale of neutrino masses and the energy scale of dark energy. Neutrino oscillations and cosmological data give constraints on the sum of the neutrino masses and on the (squared) mass differences, \(\Sigma m_{\nu}\lesssim 0.1\) eV and \(\Delta m_{\nu}^{2}=10^{-3}-10^{-5}\) eV\({}^{2}\). This range of masses is remarkably close to the dark energy scale, while the origin of both quantities is very different. The landscape metaphor became particularly powerful in relation to the Cosmological Constant problem by offering a framework that can potentially realize anthropic selection of the cosmological constant. Indeed the multiverse equation is M = SS + EI + AP. Let us dissect this symbolic equation. The multiverse, M (pictured as a landscape), rests on three pillars: superstring (SS) theory, which proposes many different universes, eternal inflation (EI), which disposes (realizes) them and from which universes flow, and the Anthropic Principle (AP), which selects and stamps the universe as good for life as we know it. Those universes with a small positive cosmological constant allow the formation of galaxies [13, 14], the crucial precursor for our existence.
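A quick order-of-magnitude check of the coincidence invoked above (our arithmetic, using \(M_{Pl}\simeq 2.4\times 10^{18}\) GeV): \[\rho_{\Lambda}^{1/4}\simeq\big{(}10^{-120}M_{Pl}^{4}\big{)}^{1/4}=10^{-30}M_{Pl}\simeq 2.4\times 10^{-12}\ \mathrm{GeV}\simeq 2\ \mathrm{meV},\] indeed comparable to \(\sqrt{\Delta m_{\nu}^{2}}\sim 3\ \mathrm{meV}\) for \(\Delta m_{\nu}^{2}\sim 10^{-5}\ \mathrm{eV}^{2}\).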
### Dimensional reductions Once awakened from the dream world of anti-de Sitter vacua, we may ask how string theory in ten dimensions, M theory in eleven and supergravity in eleven, can have any prospect of describing the four-dimensional universe that we perceive, with three dimensions of space and one of time. A possible answer is that some of the dimensions are so small and compact that they escape detection even at CERN, by high energy experiments. This sleight of hand is known as Kaluza-Klein compactification [15]. Although compact dimensions can solve the problem of unseen ones, they also give rise to a dilemma, namely, that of choosing a compactification mode (typically a Calabi-Yau manifold) among a quasi-infinity of possibilities. When compactifying string/M-theory to four dimensions, one obtains a low-energy effective theory which depends on the specific mode of folding. The number of vacua is usually estimated to be of order \(10^{500}\), and possibly up to \(10^{272000}\) [16]. It is then wise to ask whether any compelling effective field theory coupled to gravity can be obtained as a low-energy limit of an M-theory vacuum [17]. In any case, the landscape of possible four-dimensional low-energy effective theories arising from reduction of string/M-theory is vast. This entertains the possibility that any attractive effective field theory married to gravity can be obtained as a low-energy limit of string theory. However, a growing herd of swampland conjectures suggests that this is not the case and that there is an even larger ensemble of low-energy field theories that cannot be obtained in this way. In particular, the AdS instability swampland conjecture asserts that non-supersymmetric anti-de Sitter vacua are unstable and, more dramatically, the (no)-dS conjecture claims, provocatively, that de Sitter space does not exist, thereby posing an existential threat to Einstein's cosmological constant. It has been shown that simple compactifications do not lead to any de Sitter vacua and that a highly restrictive inequality on the gradient of the 4-dimensional potential \(V(\phi)\) could be derived, namely that \(|\Delta V|>cV/M_{Pl}\) everywhere in field space. If true, this implies that ordinary models of early universe slow-roll inflation would be in the swampland. In terms of late-time acceleration, this precludes a positive cosmological constant, but does not forbid some form of quintessence in which acceleration is driven by a very light rolling scalar field. ### A de Sitter landscape? String theory opens up a _landscape_ of solutions (space-time vacua) with negative cosmological constant (anti-de Sitter spaces), while most observations show that our universe undergoes an accelerated expansion, and hence is conspicuous by the presence of a positive cosmological constant, corresponding to a de Sitter space. Obtaining such de Sitter spaces remains one of the major open problems in string theory. What seemed impossible for Vafa is not impossible for Akrami, Kallosh et al. [18]. They start from any solution in the landscape of anti-de Sitter vacua, and transform it into a long-lived metastable de Sitter space by inserting appropriate anti-branes, or extended dynamical objects that generalize particles and strings. This mechanism gives rise to a huge landscape of string theory de Sitter vacua. Astrophysicists should not remain agnostic but need only take account of the data.
If the cosmological constant imposes itself through observations, the model preferred by anti-de Sitter theorists will become just a faded memory. ### The de Sitter wars It has proven difficult to construct de Sitter vacua in superstring theory due to the fact that de Sitter space is non-supersymmetric. Paradoxically, the dS space which is maximally symmetric does not lend itself to supersymmetry. In practice, a de Sitter vacuum requires stabilization of all the dimension scales, or moduli; this is in general a difficult problem. Taking the technical difficulties as a hint that there is a deep resistance to constructing de Sitter vacua in string theory, Vafa and collaborators relegate de Sitter vacua to the waste basket. But the exclusion of dS solutions from the platonic string sky [9] is not appreciated by all members of the string tribe. The possibility that string theory does not allow for de Sitter vacua is not new and it would be equally dangerous to think that there is a theorem that string theory has no de Sitter vacua [19]. The swampland attack has been countered [20, 21, 18]. Indeed the war between dS supporters and dS deniers has not ceased since the release of Susskind's article _The anthropic landscape of String Theory_ in 2003. This ignited a controversy over KKLT and the landscape, which is not even close to being extinguished. The significance of the KKLT mechanism is that it produces not one dS solution, but a wide series, thereby feeding the credo that string theory offers an anthropic solution to the fine tuning of \(\Lambda\). The banishment of Einstein gravity and of the de Sitter space may at first appear absurd to most cosmologists since they have good evidence that the Universe is entering a phase of recent acceleration, most likely driven by a positive cosmological constant. However inflation is also a phase of acceleration that the Universe most likely has traversed, and it was not due to a cosmological constant. Indeed, it was most likely driven by a scalar field rolling down a potential. It is therefore not completely absurd to consider that the late-time acceleration is also due to such a mechanism. Its motor is termed quintessence [22, 23]. The possibility that de Sitter vacua are in the swampland is therefore not ruled out by observation. It also does not mean that the anthropic selection solution to the cosmological constant problem is out of the question. Fine tunings and adjustments are now concentrated on the scalar potential(s). ## 3 Marshy landscapes Contrary to the little warm pond dear to Darwinians, the swampland of string theorists extends beyond measure, with unavoidably negative connotations. It is indeed far from established that there exists a landscape of de Sitter vacua in string theory. On the contrary, there is mounting evidence that string theory abhors de Sitter space. There is not a single rigorous 4D de Sitter vacuum in string theory, let alone \(10^{500}\), which is quite embarrassing for the string advocates. There is an increasing fear that the majority of, and perhaps all, effective field theories are intolerant to gravity or, in the string jargon, _do not possess a sensible UV completion into a quantum theory of gravity_. Such theories are valid up to a given energy.
General Relativity and the standard model of particle physics are two of them, since they cease to be relevant above the Planck energy (\(\sim 10^{19}\) GeV). For the tribe of string theorists, such deficient effective theories (Quantum Field Theories in curved spacetime) are said to fall in the _Swampland_ of lost theories. An astute but still conjectural way of delineating the space of inconsistent theories is in the form of the famous _Swampland Conjectures_. For example, the no-global symmetry conjecture avoids overclosure of the Universe by black hole remnants of the Planck mass, the _distance conjecture_ holds that scalar fields cannot exhibit field excursions much larger than the Planck scale, while the _weak gravity conjecture_ states that the lightest particles of a theory cannot carry a mass larger than their charge in Planck units. The trans-Planckian Swampland conjecture forbids trans-Planckian quantum fluctuations from becoming classical. And above all, the de Sitter conjecture is especially restrictive: an issue of fundamental importance is whether effective theories that admit de Sitter vacua can be embedded within quantum gravity, or whether they sink into the Swampland. This is a topic of crucial importance since observations show that the expansion of space is accelerating under the influence of an operator resembling Einstein's cosmological constant with an equation of state very close to \(p=-\rho\). A major objection to quintessence is that the cosmological constant problem is not resolved, since it does not prevent large contributions to the cosmological constant from quantum effects much larger than the observed dark energy density. Additionally, it exacerbates the dark energy problem by requiring that the responsible field is slowly rolling. In short, this seems to replace a single fine-tuning by two: namely both \(V\sim 10^{-120}M_{Pl}^{4}\) and \(|\Delta V|\sim 10^{-120}M_{Pl}^{3}\) need to be extremely small. However, this effective field theory-based reasoning may be too naive. In particular, if \(|\Delta V|=cV/M_{Pl}\), this additional fine-tuning can be avoided [24]. This warns us that in general the speculative swampland conjectures are endangered by various loopholes and exceptions. ### Swampland cosmology The implications of swampland conjectures have been studied in the context of cosmology, based, for instance, on the Swampland Conjecture known as the refined de Sitter conjecture, which states that the effective low-energy potential \(V(\Phi)\) for scalar fields \(\Phi\) must satisfy \(|\Delta V|\gtrsim cV/M_{Pl}\), or \(\min(\Delta^{2}V)\lesssim-c^{\prime}V/M_{Pl}^{2}\), for universal positive constants \(c\) and \(c^{\prime}\) of order 1, in any consistent theory of quantum gravity. If true, this would imply that the state of the universe is unstable. The refined de Sitter conjecture has been applied to single-field inflation models [25]. In particular, the ratio between scalar and tensor modes in primordial fluctuations, \(r\), and the scalar spectral index, \(n_{s}\), parametrising the scale-dependence of density fluctuations, have been considered. For consistency between observational data and the model, it is found that \(c\sim 0.1\), in tension with the raw dS conjecture, and \(c^{\prime}\sim 0.01\), in tension with the refined version. The validity of this application has however been questioned.
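To see where a figure like \(c\sim 0.1\) comes from, here is a minimal sketch (our reconstruction, assuming the standard single-field slow-roll relations): the conjecture \(|\Delta V|\gtrsim cV/M_{Pl}\) forces the slow-roll parameter \(\epsilon_{V}=\frac{M_{Pl}^{2}}{2}\big{(}\frac{|\Delta V|}{V}\big{)}^{2}\) to satisfy \(\epsilon_{V}\geq c^{2}/2\), and since the tensor-to-scalar ratio obeys \(r=16\epsilon_{V}\), \[r\geq 8c^{2},\qquad\text{so}\qquad c\leq\sqrt{r/8}\lesssim\sqrt{0.06/8}\approx 0.09\] once the observational bound \(r\lesssim 0.06\) is imposed, an order of magnitude below the \(c=O(1)\) expected from the conjecture.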
### Observational tensions There are interesting tensions in cosmological data. The most significant is a roughly \(5\sigma\) difference between early and late epoch determinations of the Hubble constant. Early epoch refers to the distant Universe, most notably use of the temperature fluctuations in the cosmic microwave background as a distance calibrator, while late epoch refers to local distance calibrators, most notably a laddered combination of Cepheid variable stars, the most luminous red giant stars, and Type Ia supernovae. Of course, similar observational issues have a long history in observational cosmology, spanning the past half century or more. Following the triumph of Hubble in establishing the expansion of the Universe, the rate of recession of the distant galaxies remained uncertain by some fifty percent as Sandage and de Vaucouleurs, along with their collaborators, vigorously debated the distance scale throughout the 1980s. Only with the advent of large ground-based telescopes, and especially the Hubble Space Telescope, could one resolve Cepheid variable stars in distant galaxies and use Type Ia supernovae to establish a distance ladder to galaxies at redshifts of 0.1 or larger. However the distance scale controversy remains, admittedly now reduced to roughly ten percent uncertainty offsets between the rival teams that apply complementary distance calibrators. This can be distilled either into a case for seeking systematic errors, or for modifying the physics of the expansion at early or late epochs. Some authors have argued for swampland-motivated explanations [26] or use these tensions to set constraints on string models [27] in order to probe dark energy [28, 29] or the Hubble tension [30], or even to constrain the Swampland [33, 31] and to favor quintessence explanations of dark energy [32], to give selected examples. Here we simply note that Hubble constant determinations are themselves a sort of astrophysical bog where systematic and observational errors are uncertain. Although up to a 1% determination of \(H_{0}\) is claimed by Riess et al. [34] using Cepheid-based indicators, or by other competitive methods, most notably CMB-based [35] and those using red giant branch calibrators [36], the competing results differ by far more than the quoted errors. While new physics remains one possible explanation that most notably appeals to a difference between early and late dark energy, it seems premature to take any such interpretations as robust until the observational issues are clarified and resolved. A decisive argument would employ high precision geometrical distance measures, such as galaxy cluster lensing-induced quasar time delays or gravitational wave sources with or without optical counterparts, but these techniques currently remain under development. For example, expected improvements in standard siren cosmology should yield better than 1% determinations of the Hubble constant with a year of data from the Einstein telescope [37]. There are other cosmological tensions, most notably from cosmic shear data, but these are currently at the \(2\sigma\) level from the Dark Energy Survey [38] and KiDS-1000 [39]. Future surveys, especially by the Euclid, Rubin and Roman telescopes, should significantly improve this latter constraint by a factor of two or more [40].
## 4 Comments and Conclusions Recent studies show that it is difficult, if not impossible, to realize de Sitter space (with a positive cosmological constant, CC) in string theory, whereas observations converge towards the existence of something like a cosmological constant playing the role of cosmic accelerator. And the balance sheet does not lean towards strings: there are no hints of extra dimensions, nor of SUSY, no clear variation of physical constants, and above all, there is strong empirical dominance of the despised cosmological constant, contrary to the expectations of most string aficionados. Confronted with this situation, the more extremist string theorists practice denial of reality and throw the CC into their waste basket. The objective is to kill the CC to save a vision. But the CC is still alive. Decades of attempts to explain its tiny positive energy have left little hope for success. Observational prospects for establishing a fundamental challenge to \(\Lambda\)CDM cosmology are currently indecisive. This situation will certainly improve within a decade [41]. For the moment, however, there are certainly hints of late epoch tensions, in determinations of the Hubble constant, in the amplitude of density fluctuations, and even in the isotropy of the Universe. But there is so far no robust evidence that favors new physics in cosmology, especially at or even before the epoch of inflation. The question of fine-tuning remains more acute than ever. Transferring fine-tuning from the domain of the physical constants and initial cosmological conditions to the potentials of the scalar fields raises more questions than answers. One remaining loophole may be to relate the CC to the lightest neutrino mass, of meV scale, and through the see-saw mechanism to the electroweak symmetry breaking energy. Another may lie in compactifying the standard model on a circle. Generous but fuzzy, our understanding of string theory is still in a state of flux. It is not yet a mature field, lacking underlying principles from which the results are rigorously derived, while shortcomings, exceptions, counter-examples and loopholes abound. Any derivations, for instance, are sensitive to the UV structure of the theory under test and to the swampland criteria. Nonetheless, the interest in the Swampland has not diminished, since it not only maintains vigorous discussion, controversy and hope for string theory advocates, but also, fortunately, progressively reveals links between the different conjectures. String theorists may possibly reach a picture where there are observable consequences of an underlying new principle of order. This at least is the hope and the challenge. The swampland criteria have implications for particle physics, cosmology and astrophysics. This is why astroparticle physicists, at least out of curiosity, should be aware of such speculations, taking them however with a grain of salt, and not treating them, in the present situation, as truly discriminating. Indeed, the lack of a complete, non-perturbative definition of M-theory is a significant obstruction to any proof. Instead, the conjectures are motivated by idiosyncratic examples from string theory and black hole physics. So far so good, but what is more surprising to us is that swampland advocates, on the basis of what we perceive to be fragmented lines of thought, enact definitive judgement in and on cosmology, condemning, for instance, the cosmological constant to non-existence.
Cosmological data in the next decade may falsify their attempts, or even justify them, giving information that may bear not on the constraints on quantum gravity and string theory, but rather on the validity of certain swampland conjectures that exile the cosmological constant into an uninhabitable wasteland. Our preferred conjecture is that perhaps it is just another constant of nature, to join such worthy physics compatriots as the electron mass and charge, Planck's constant, the velocity of light in vacuum and the fine-structure constant. It is inevitable that the various swampland conjectures will be modified, some ruled out by counter-examples, others refined by epicycles, and some, just conceivably, confirmed by future observations.
2302.04883
Pure non-Markovian evolutions
Non-Markovian dynamics are characterized by information backflows, where the evolving open quantum system retrieves part of the information previously lost in the environment. Hence, the very definition of non-Markovianity implies an initial time interval when the evolution is noisy, otherwise no backflow could take place. We identify two types of initial noise, where the first has the only effect of degrading the information content of the system, while the latter is essential for the appearance of non-Markovian phenomena. Therefore, all non-Markovian evolutions can be divided into two classes: noisy non-Markovian (NNM), showing both types of noise, and pure non-Markovian (PNM), implementing solely essential noise. We make this distinction through a timing analysis of fundamental non-Markovian features. First, we prove that all NNM dynamics can be simulated through a Markovian pre-processing of a PNM core. We quantify the gains in terms of information backflows and non-Markovianity measures provided by PNM evolutions. Similarly, we study how the entanglement breaking property behaves in this framework and we discuss a technique to activate correlation backflows. Finally, we show the applicability of our results through the study of several well-known dynamical models.
Dario De Santis
2023-02-09T19:00:02Z
http://arxiv.org/abs/2302.04883v2
# Pure non-Markovian evolutions

###### Abstract

Non-Markovian dynamics are characterized by information backflows, where the evolving open quantum system retrieves part of the information previously lost in the environment. Hence, the very definition of non-Markovianity implies an initial time interval when the evolution is noisy, otherwise no backflow could take place. We identify two types of initial noise, where the first has the sole effect of degrading the information content of the system, while the second is essential for non-Markovian phenomena. Hence, all non-Markovian evolutions can be divided into two classes: noisy non-Markovian (NNM), showing both types of noise, and pure non-Markovian (PNM), implementing solely essential noise. We make this distinction through a timing analysis of fundamental non-Markovian phenomena. First, we prove that all NNM dynamics can be simulated through a Markovian pre-processing of a PNM core. We quantify the gains in terms of information backflows and non-Markovianity measures provided by PNM evolutions. Similarly, we study how the entanglement breaking property behaves in this framework and we discuss a technique to activate correlation backflows. Finally, we show the applicability of our results through the study of several well-known dynamical models.

Open quantum system dynamics describe the evolution of quantum systems interacting with an external system, typically represented by the surrounding environment. The unavoidable nature of this interaction has made this topic of central interest in the field of quantum information [1; 2]. This reciprocal action may lead to two different regimes for the information initially stored in our system. An evolution is called Markovian whenever there are no memory revivals and therefore the system shows a monotonic information degradation. On the contrary, non-Markovian evolutions are those showing information backflows, where part of the information stored in the system is first lost to the environment and then retrieved at later times (for reviews on this topic see [3; 4; 5; 6]). Hence, the very definition of these evolutions implies the existence of an initial time interval when the dynamics is noisy, otherwise no backflow from the environment would be possible. In this work, we address the question of whether all the initial noise applied by an evolution is necessary for the subsequent non-Markovian phenomena. We identify two noise types. While the first, which we call _useless_, is not necessary for information backflows, only the information lost through _essential_ noise takes part in the characteristic non-Markovian phenomena. Starting from this observation, we classify non-Markovian evolutions as _noisy_ or _pure_, where the former show both types of noise, while the latter implement essential noise only. Hence, the information initially lost with pure non-Markovian (PNM) evolutions always takes part in a later backflow, which can occur even in time intervals starting immediately after the beginning of the interaction with the environment. Instead, the useless noise of noisy non-Markovian (NNM) evolutions has the sole effect of damping the information content of the open system and therefore the backflows. This classification is in close analogy with the structure of quantum states, where mixed states can be obtained through noisy operations on pure states. Similarly, NNM evolutions can be obtained via a Markovian pre-processing, corresponding to the useless noise, of PNM evolutions, which we call PNM _cores_.
Moreover, just as pure states allow the best performance in several scenarios and protocols, PNM evolutions are characterized by the largest information revivals and non-Markovianity measures. The interest in considering PNM cores of known NNM evolutions resides in the possibility to isolate a dynamics with the same qualitative non-Markovian features and, at the same time, with the largest possible non-Markovian phenomena. For instance, in the case of an experimental setup where the visible non-Markovian phenomena generated by a target evolution are not significant due to various additional noise sources in the laboratory (preparation, measurements, thermal noise, etc.), the possibility to isolate and implement the corresponding PNM core may allow one to appreciate the same non-Markovian phenomena that we failed to detect with the noisy version. The first main goal of this work is to identify the initial useless noise of generic non-Markovian evolutions. While doing so, we propose a structure for the timing of the fundamental non-Markovian phenomena happening in finite and infinitesimal time intervals. This framework provides a straightforward and natural approach to discriminate Markovian, NNM and PNM evolutions. We continue by showing how to isolate the PNM core of a generic NNM evolution and, conversely, how any PNM evolution can generate a whole class of NNM evolutions. Later, we explain how and to what extent PNM evolutions are characterized by larger information backflows and non-Markovianity measures. In particular, we focus on backflows of state distinguishability. Finally, we show how the entanglement breaking property behaves in this scenario and we discuss a technique to activate correlation backflows that cannot be observed in the presence of useless noise. We then apply our results to several models, such as depolarizing and dephasing evolutions.

## I Quantum evolutions

We define \(S(\mathcal{H})\) to be the set of density matrices of a generic \(d\)-dimensional Hilbert space \(\mathcal{H}\). The time evolution of any open quantum system can be represented by a one-parameter family \(\mathbf{\Lambda}=\{\Lambda_{t}\}_{t\geq 0}\) of quantum maps, namely completely positive and trace preserving (CPTP) superoperators. We define \(\mathbf{\Lambda}\) to be the _evolution_ of the system, while \(\Lambda_{t}\) is the corresponding _dynamical map_ at time \(t\). Hence, the transformation of an initial state \(\rho(0)\in S(\mathcal{H})\) into the corresponding evolved state at time \(t\) is \(\rho(t)=\Lambda_{t}(\rho(0))\in S(\mathcal{H})\). We consider \(\mathbf{\Lambda}\) as a collection of dynamical maps _continuous_ in time. This is because any open quantum system evolution obtained through the physical interaction with an environment, even in the case of non-continuous Hamiltonians, is continuous [7]. Secondly, we assume _divisibility_, namely the existence of an _intermediate map_ for any time interval. More precisely, for all \(0\leq s\leq t\) we assume the existence of a linear map \(V_{t,s}\) such that \(\Lambda_{t}=V_{t,s}\circ\Lambda_{s}\). Invertible evolutions are an instance of divisible evolutions. We call an evolution invertible if, for all \(t\geq 0\), there exists the operator \(\Lambda_{t}^{-1}\) such that \(\Lambda_{t}^{-1}\circ\Lambda_{t}=I\), where \(I\) is the identity map on \(S(\mathcal{H})\). Indeed, in these cases \(V_{t,s}=\Lambda_{t}\circ\Lambda_{s}^{-1}\).
While divisibility makes all the steps of the following sections easier, in Section III.3 we show how to generalize our results to non-divisible evolutions. We say that an evolution is _CP-divisible_ if and only if between any two times it is represented by a quantum channel. Hence, this property corresponds to requiring that the intermediate maps \(V_{t,s}\) are CPTP for all \(0\leq s\leq t\). Remember that any implementable quantum operation is represented by a CPTP operator, which is the reason why dynamical maps \(\Lambda_{t}\) are required to be CPTP at all times. In the case of non-CP-divisible evolutions, there exist \(s\leq t\) such that \(V_{t,s}\) is not CPTP, while at the same time \(\Lambda_{t}=V_{t,s}\circ\Lambda_{s}\) must be CPTP. In this case the transformation acting in the time interval \([s,t]\) cannot be applied independently from the transformation applied in \([0,s]\). We define Markovian evolutions as those being CP-divisible. Thanks to the Stinespring-Kraus representation theorem [8; 9], such a definition is consistent with the impossibility for the system to recover any information that was previously lost. Indeed, as we explain better later, CPTP operators degrade the information content of the system. Therefore, an evolution is non-Markovian if and only if there exists at least one time interval \([s,t]\) when the evolution is not described by a CPTP intermediate map. Indeed, whenever this is the case, it is possible to obtain an information backflow during the same time interval [10; 11], even for non-invertible evolutions [12; 13]. Once we have an explicit formulation of \(\mathbf{\Lambda}\), the corresponding dynamical maps \(\Lambda_{t}\) and intermediate maps \(V_{t,s}\) for all \(0\leq s\leq t\), we can define \(\mathcal{P}^{\Lambda}\): the collection of those time pairs such that \(V_{t,s}\) is CPTP. This set can be obtained by considering the smallest eigenvalue \(\lambda_{t,s}\) of the operator obtained by applying the Choi-Jamiolkowski isomorphism [14; 15] to \(V_{t,s}\). Indeed, \(V_{t,s}\) is CPTP if and only if \(\lambda_{t,s}\geq 0\). \[\mathcal{P}^{\Lambda}=\{0\leq s\leq t\,|\,V_{t,s}\text{ CPTP}\}=\{0\leq s\leq t\,|\,\lambda_{t,s}\geq 0\}. \tag{1}\] Similarly, we define \(\mathcal{N}^{\Lambda}\) as the set of pairs of times for which \(V_{t,s}\) is non-CPTP (\(\lambda_{t,s}<0\)), which is therefore complementary to \(\mathcal{P}^{\Lambda}\). \[\mathcal{N}^{\Lambda}=\{0\leq s\leq t\,|\,V_{t,s}\text{ not CPTP}\}=\{0\leq s\leq t\,|\,\lambda_{t,s}<0\}. \tag{2}\] Notice that \(\mathbf{\Lambda}\) is Markovian if and only if \(\mathcal{P}^{\Lambda}=\{s,t\}_{0\leq s\leq t}\). We prove that \(\mathcal{P}^{\Lambda}\) is closed and \(\mathcal{N}^{\Lambda}\) is open in Appendix A. Moreover, the border of \(\{s,t\}_{0\leq s\leq t}\) always belongs to \(\mathcal{P}^{\Lambda}\): the vertical line \(\{0,t\}_{t\geq 0}\) corresponds to the (CPTP) dynamical maps \(\Lambda_{t}\) and the diagonal line \(\{t,t\}_{t\geq 0}\) corresponds to the trivial intermediate (identity) maps \(V_{t,t}=I\), for which \(\lambda_{t,t}=0\). Notice that pairs \(\{s,t\}\) infinitesimally close to \(\{t,t\}_{t\geq 0}\) correspond to the infinitesimal intermediate maps \(V_{t+\epsilon,t}\), whose CPTP/non-CPTP nature can be studied through the rates of the corresponding master equation [16; 17]. We show this feature in Section VIII. Instead, any point in the interior of \(\{s,t\}_{0\leq s\leq t}\) can either belong to \(\mathcal{P}^{\Lambda}\) or \(\mathcal{N}^{\Lambda}\).
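As a minimal numerical illustration, the CPTP test of Eqs. (1)-(2) can be sketched in a few lines of Python. The sketch below assumes qubit maps written as \(d^{2}\times d^{2}\) superoperator matrices acting on column-stacked density matrices; the helper names (`apply_map`, `choi_smallest_eigenvalue`, `depol_super`) and the depolarizing-type toy map, which anticipates Eq. (26), are purely illustrative.

```python
import numpy as np

def apply_map(S, rho):
    """Apply a superoperator S (column-stacking convention) to a density matrix."""
    d = int(np.sqrt(S.shape[0]))
    return (S @ rho.reshape(-1, order='F')).reshape(d, d, order='F')

def choi_smallest_eigenvalue(S):
    """λ of Eqs. (1)-(2): smallest eigenvalue of the normalized Choi matrix of S."""
    d = int(np.sqrt(S.shape[0]))
    C = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d), dtype=complex)
            E[i, j] = 1.0
            C += np.kron(E, apply_map(S, E))  # C = Σ_{ij} |i><j| ⊗ S(|i><j|)
    return np.linalg.eigvalsh((C + C.conj().T) / 2).min() / d

def depol_super(g, d=2):
    """Illustrative toy map: superoperator of ρ ↦ g ρ + (1 - g) Tr[ρ] I/d."""
    vec_id = np.eye(d).reshape(-1, order='F')
    return g * np.eye(d * d) + (1 - g) * np.outer(vec_id / d, vec_id)

# Intermediate map V_{t,s} = Λ_t ∘ Λ_s^{-1} of a divisible toy evolution:
V = depol_super(0.7) @ np.linalg.inv(depol_super(0.5))  # effective ratio 1.4 > 1
print(choi_smallest_eigenvalue(V))                       # negative → not CPTP
```

For this toy map the printed value is \((1-1.4)/4=-0.1\), so the pair of times belongs to \(\mathcal{N}^{\Lambda}\).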
At the same time, not every open set is allowed for \(\mathcal{N}^{\Lambda}\): these sets have to satisfy some reciprocal constraints deriving from fundamental map composition rules [18]. Below, we show several representations of \(\mathcal{P}^{\Lambda}\) and \(\mathcal{N}^{\Lambda}\).

## II Timing of information backflows

In this section we show how the timing of the main non-Markovian phenomena of an evolution is always ruled by three times: \(T^{\Lambda}\), \(\tau^{\Lambda}\) and \(t^{\Lambda}\). Before giving the corresponding mathematical definitions, we anticipate their intuitive meaning. It is possible to observe an information backflow during a time interval if and only if it starts later than \(T^{\Lambda}\). Hence, there exist intervals \([T^{\Lambda}+\epsilon,t]\) where the corresponding intermediate maps are not CPTP for infinitesimal \(\epsilon>0\) [19]. Among these time intervals, \([T^{\Lambda}+\epsilon,t^{\Lambda}]\) is the shortest, namely \(t=t^{\Lambda}\) is the earliest final time such that \(V_{t^{\Lambda},T^{\Lambda}+\epsilon}\) is not CPTP for infinitesimal \(\epsilon>0\). Instead, \(\tau^{\Lambda}\) is the earliest time when an instantaneous backflow can be observed, namely the earliest \(t\) such that \(V_{t+\epsilon,t}\) is not CPTP for infinitesimal \(\epsilon>0\). Hence, \([T^{\Lambda}+\epsilon,t^{\Lambda}]\) (\([\tau^{\Lambda},\tau^{\Lambda}+\epsilon]\)) is the shortest time interval with the earliest _initial (final)_ time such that the corresponding intermediate map is not CPTP. For these reasons, we call the noise applied by the evolution in \([0,T^{\Lambda}]\) useless for non-Markovianity, while the noise applied later than \(T^{\Lambda}\) is essential for non-Markovian phenomena. We represent the typical role of these three times in Fig. 1. Given a generic evolution \(\mathbf{\Lambda}\), we define: \[T^{\Lambda}=\max\left\{T\,\left|\begin{array}{lll}(\text{A})&V_{t,s}\text{ CPTP for all}&s\leq t\leq T\\ (\text{B})&V_{t,T}\text{ CPTP for all}&T\leq t\\ (\text{C})&\Lambda_{T}\text{ not unitary}&T>0\end{array}\right.\right\}. \tag{3}\] We briefly discuss conditions (A), (B) and (C). Condition (A) requires the evolution to be CP-divisible before \(T^{\Lambda}\): _no non-Markovian effects can take place in \([0,T^{\Lambda}]\)_. Condition (B) requires the evolution following \(T^{\Lambda}\) to be physical _on its own_, namely such that the composition with the initial noise \(\Lambda_{T^{\Lambda}}\) is not needed for the intermediate maps \(\{V_{t,T^{\Lambda}}\}_{t\geq T^{\Lambda}}\) to be CPTP [20]. Finally, condition (C) is imposed because a unitary transformation is not detrimental for the information content of our system and we cannot consider it useless noise: it is "useless" but not noisy. We remember that evolutions with dynamical maps that are unitary at all times can be simulated with closed quantum systems, and therefore we do not focus on cases where condition (C) is necessary. In Appendix A we show that Eq. (3) is indeed a maximum and not a supremum. An evolution \(\mathbf{\Lambda}\) is Markovian if and only if \(T^{\Lambda}=+\infty\). Indeed, in this case none of the noise applied by the evolution is necessary for the later evolution to be physical. Markovian evolutions can be interpreted as sequences of noisy independent operations. Indeed, between any two times the dynamics is represented by a (noisy) CPTP map.
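Conditions (A) and (B) of Eq. (3) lend themselves to a simple grid-based estimate once the eigenvalues \(\lambda_{t,s}\) of Eqs. (1)-(2) are available. In the illustrative Python sketch below, the characteristic function \(f\) is a hypothetical toy model and the eigenvalue formula \(\lambda_{t,s}=(1-f(t)/f(s))/d^{2}\) for qubit depolarizing dynamics is borrowed from Section VII; condition (C) and discretization effects are ignored.

```python
import numpy as np

# hypothetical characteristic function with a single revival (NNM toy model)
f = lambda t: np.exp(-t) + 0.15 * np.exp(-10 * (t - 2)**2) * (1 - np.exp(-5 * t))

t = np.linspace(0.0, 6.0, 601)
F = f(t)
# λ_{t_j, t_i} for the qubit depolarizing toy model (Sec. VII): (1 - f(t)/f(s))/d²
lam = (1.0 - F[None, :] / F[:, None]) / 4.0
upper = np.arange(len(t))[:, None] <= np.arange(len(t))[None, :]  # only s <= t matters
cptp = (lam >= -1e-9) | ~upper

cond_B = cptp.all(axis=1)                                    # V_{t,T} CPTP for all t >= T
cond_A = np.array([cptp[:k + 1, :k + 1].all() for k in range(len(t))])
T_idx = np.nonzero(cond_A & cond_B)[0].max()
print("estimated T^Λ ≈", t[T_idx])                           # ≈ 1.24 for this toy f
```

The largest grid time satisfying both conditions approximates \(T^{\Lambda}\); the grid horizon must, of course, cover the whole revival.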
We consider as trivial those evolutions that are unitary at all times, namely those that can be simulated with a closed quantum system. Below we show that a finite value of \(T^{\Lambda}\geq 0\) implies the evolution to have non-CPTP intermediate maps and therefore to be non-Markovian. From now on, by \(T^{\Lambda}\geq 0\) we mean \(T^{\Lambda}\in[0,\infty)\). Hence, the time \(T^{\Lambda}\) can be used to classify quantum evolutions: \(T^{\Lambda}=+\infty\): Markovian (M); \(T^{\Lambda}\in[0,+\infty)\): non-Markovian (NM); \(T^{\Lambda}\in(0,+\infty)\): noisy non-Markovian (NNM); and \(T^{\Lambda}=0\): pure non-Markovian (PNM). The last class is the main topic studied in this work. Finally, the following results clarify the role of \(T^{\Lambda}\) (proofs in Appendix B): **Lemma 1**.: _Conditions (A), (B) and (C) are simultaneously satisfied at time \(T\) if and only if \(T\in[0,T^{\Lambda}]\). If (A) is violated at time \(T\), (B) is violated at a strictly earlier time._ **Lemma 2**.: _Any non-Markovian evolution \(\mathbf{\Lambda}\) has non-CPTP intermediate maps \(V_{T,T^{\Lambda}+\epsilon}\) for one or more final times \(T>T^{\Lambda}\) and infinitesimal values of \(\epsilon>0\). If \(V_{t,s}\) is a non-CPTP intermediate map of \(\mathbf{\Lambda}\), then \(T^{\Lambda}<s\)._ We want \(\tau^{\Lambda}\) to represent the first time when the instantaneous information flow inverts its direction back to the system. Hence, we focus on the earliest time such that the intermediate map \(V_{T+\epsilon,T}\) is non-CPTP for infinitesimal \(\epsilon>0\): \[\tau^{\Lambda}=\lim_{\epsilon\to 0^{+}}\inf\left\{T\,|\,V_{T+\epsilon,T}\text{ not CPTP}\right\}. \tag{4}\] We can say that \(\tau^{\Lambda}\) defines when instantaneous non-Markovian phenomena begin, as it is the earliest time when an infinitesimal intermediate map is non-CPTP and information is instantaneously retrieved from the environment. The time intervals with the earliest initial time such that the corresponding intermediate maps are non-CPTP are of the form \([T^{\Lambda}+\epsilon,t]\) (see Lemma 2). We define \(t^{\Lambda}\) to be the earliest final time \(t\) such that this intermediate map is non-CPTP: \[t^{\Lambda}=\lim_{\epsilon\to 0^{+}}\inf\left\{T\,|\,V_{T,T^{\Lambda}+\epsilon}\text{ not CPTP}\right\}. \tag{5}\] The timing of the earliest information backflows is therefore dictated by the values of \(T^{\Lambda}\), \(\tau^{\Lambda}\) and \(t^{\Lambda}\), which have a definite value for all non-Markovian evolutions. These three characteristic times satisfy the following reciprocal relation (proof in Appendix C): \[0\leq T^{\Lambda}\leq\tau^{\Lambda}\leq t^{\Lambda}\leq\infty. \tag{6}\] We briefly discuss the possible equalities that can hold in the above equation and later propose exemplary models. PNM evolutions (\(T^{\Lambda}=0\)) are analysed at length throughout this work. A divergent \(t^{\Lambda}=\infty\) is allowed only if \(T^{\Lambda}<\tau^{\Lambda}<t^{\Lambda}\). For what concerns the possible equalities between \(T^{\Lambda}\), \(\tau^{\Lambda}\) and \(t^{\Lambda}\), we have that \(T^{\Lambda}=\tau^{\Lambda}\) implies \(T^{\Lambda}=\tau^{\Lambda}=t^{\Lambda}\) (proof in Appendix C), namely \(T^{\Lambda}=\tau^{\Lambda}<t^{\Lambda}\) is forbidden. On the contrary, \(T^{\Lambda}<\tau^{\Lambda}=t^{\Lambda}\) is allowed. We mention some examples for the above-mentioned patterns of \(T^{\Lambda}\), \(\tau^{\Lambda}\) and \(t^{\Lambda}\).
Concerning the difference between \(T^{\Lambda}=0\) and \(T^{\Lambda}>0\), we show how to obtain a PNM evolution (\(T^{\Lambda}=0\)) from any NNM evolution (\(T^{\Lambda}>0\)) and vice-versa in Section III. We propose an in-depth study of the case \(T^{\Lambda}<\tau^{\Lambda}<t^{\Lambda}<\infty\) for depolarizing evolutions in Section VII. A well-known instance of \(0=T^{\Lambda}=\tau^{\Lambda}=t^{\Lambda}\) is given by the eternal non-Markovian model [21, 22, 23], while \(T^{\Lambda}<\tau^{\Lambda}<t^{\Lambda}=\infty\) and \(T^{\Lambda}<\tau^{\Lambda}=t^{\Lambda}<\infty\) can be obtained with quasi-eternal non-Markovian evolutions [24]. These models are studied in Section VIII. The following generalizes Lemma 2: **Proposition 1**.: _Let \(\mathbf{\Lambda}\) be a generic non-Markovian evolution. If \(T^{\Lambda}<t^{\Lambda}\), then \(V_{t^{\Lambda},s}\) is non-CPTP for all \(s\in(T^{\Lambda},t^{\Lambda})\). If \(T^{\Lambda}=t^{\Lambda}\), for all \(T>T^{\Lambda}\) the infinitesimal intermediate map \(V_{t+\epsilon,t}\) is non-CPTP for an infinite number of times \(t\in(T^{\Lambda},T)\)._ Hence, not only must each non-Markovian evolution have a non-CPTP intermediate map for time intervals starting immediately after \(T^{\Lambda}\) (Lemma 2), but whenever \(T^{\Lambda}<t^{\Lambda}\) there is a whole continuum of non-CPTP intermediate maps \(V_{t^{\Lambda},s}\) for \(s\in(T^{\Lambda},t^{\Lambda})\). Additionally, if \(V_{t,T^{\Lambda}+\epsilon}\) is not CPTP for \(t>t^{\Lambda}\), all the intermediate maps \(V_{t,s}\) with \(s\in(T^{\Lambda},t^{\Lambda})\) are non-CPTP. In the case of \(T^{\Lambda}=t^{\Lambda}\), the infinitesimal intermediate maps \(V_{t+\epsilon,t}\) are non-CPTP either for all the times inside a time interval of the type \((T^{\Lambda},T)\) for some \(T>T^{\Lambda}\), or for infinitely many times that do not constitute an interval for any \(T>T^{\Lambda}\). We propose an example of the latter pathological case in Appendix B. A special case is given by the eternal NM model, which has non-CPTP intermediate maps \(V_{t,s}\) _for all_ \(0<s<t\) (see Section VIII): \(T^{\Lambda}=t^{\Lambda}\) and it enjoys both properties described in Proposition 1.

Figure 1: Typical information content of an open quantum system evolving under a NNM evolution. An increase, or backflow, of information is a typical sign of non-Markovianity, namely of non-CPTP intermediate maps. Blue/red regions represent times when the infinitesimal intermediate map \(V_{t+\epsilon,t}\) is CPTP/non-CPTP. The time \(T^{\Lambda}\) is the largest such that the preceding dynamics is CP-divisible and \(V_{t,T^{\Lambda}}\) is CPTP for all \(t\geq T^{\Lambda}\). Indeed, for \(t\geq T^{\Lambda}\), the information content of the system never exceeds the level at \(T^{\Lambda}\) (green area). The information lost in \([0,T^{\Lambda}]\) is never recovered (useless noise), while the noise applied in \([T^{\Lambda},\tau^{\Lambda}]\) is essential for the following backflows. \(\tau^{\Lambda}\) is the earliest time after which we have an instantaneous backflow. We have finite backflows in intervals \([s,t]\) with \(s>T^{\Lambda}\), and \(t^{\Lambda}\) is the earliest \(t\) such that we have a backflow in \([T^{\Lambda}+\epsilon,t]\).

## III Pure non-Markovian evolutions

We show that the initial noise that any non-Markovian evolution \(\mathbf{\Lambda}\) induces in the time interval \([0,T^{\Lambda}]\) is useless for the following non-Markovian effects to happen.
By doing so, we better explain the role of PNM evolutions, namely those one-parameter continuous families of CPTP maps \(\mathbf{\Lambda}=\{\Lambda_{t}\}_{t\geq 0}\) such that \(T^{\Lambda}=0\). We prove that any NNM evolution can be simulated by a Markovian pre-processing of the input states followed by a PNM evolution. Finally, we prove that, if an evolution perfectly retrieves the initial information of the system, then it must be PNM.

### Simulation of NNM evolutions with PNM evolutions

We start by simulating NNM evolutions \(\mathbf{\Lambda}\) through the sequential interaction of the system with two different environments. We consider the Stinespring-Kraus representation theorem [8; 9], which allows us to describe continuous families of CPTP maps through the interaction of the system with an initially uncorrelated environment. Hence, the system is in contact with a first environment \(E_{1}\) in the time interval \([0,T^{\Lambda}]\), while at later times \(t>T^{\Lambda}\) it is in contact with \(E_{2}\). Consider the following two-step scenario: * \(t\in[0,T^{\Lambda}]\) (**Markovian pre-processing**): the evolution is simulated by the interaction with \(E_{1}\). A unitary transformation \(U^{\prime}_{t}\) evolves the system-environment state, which at time \(0\) is in a product state (no initial system-environment correlations): \[\rho_{S}(0)\rightarrow\rho_{S}(t)=\mathrm{Tr}_{E_{1}}[U^{\prime}_{t}(\rho_{S}(0)\otimes\omega_{E_{1}})U^{\prime\dagger}_{t}].\] (7) This simulation is possible because \(\Lambda_{t}\) is CPTP for all \(t\in[0,T^{\Lambda}]\). The phenomenology during this time interval is Markovian, as Eq. (3) requires \(\mathbf{\Lambda}\) to be CP-divisible in \([0,T^{\Lambda}]\). This stage represents the useless noise of \(\mathbf{\Lambda}\). * \(t=T^{\Lambda}\): we discard \(E_{1}\) and let the system interact with a second environment \(E_{2}\). The system-environment state is given by \(\rho_{S\,E_{2}}(T^{\Lambda})=\rho_{S}(T^{\Lambda})\otimes\omega_{E_{2}}\) (no initial system-environment correlations); * \(t\geq T^{\Lambda}\) (**PNM core**): the evolution is simulated by the interaction with the environment \(E_{2}\). A unitary transformation \(U^{\prime\prime}_{\tau}\) evolves the system-environment state: \[\rho_{S}(T^{\Lambda})\rightarrow\rho_{S}(t)=\mathrm{Tr}_{E_{2}}[U^{\prime\prime}_{\tau}(\rho_{S}(T^{\Lambda})\otimes\omega_{E_{2}})U^{\prime\prime\dagger}_{\tau}]\,,\] (8) where \(\tau=t-T^{\Lambda}\geq 0\). This simulation is possible because \(V_{t,T^{\Lambda}}\) is CPTP for all \(t\geq T^{\Lambda}\). The phenomenology during this time interval is non-Markovian. As we already noticed, no information backflow can be observed in \([0,T^{\Lambda}]\): the phenomenology _in this time interval_ is Markovian. Now, thanks to this two-stage simulation, we can state that the information involved in the backflows was originally lost later than \(T^{\Lambda}\). Indeed, the non-Markovian effects of \(\mathbf{\Lambda}\) do not depend on the behaviour of the dynamics in the time interval \([0,T^{\Lambda}]\), when \(\mathbf{\Lambda}\) is CP-divisible. This is the reason why we call the first stage a _Markovian pre-processing_ and we say that \(\Lambda_{t}\), for \(t\in[0,T^{\Lambda}]\), generates the _useless noise_ of the evolution. Conversely, the (CPTP) intermediate maps \(V_{t,s}\) for \(T^{\Lambda}\leq s\leq t\) generate the _essential noise_ needed for non-Markovian phenomena.
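At the level of superoperators, the two-stage decomposition is just map composition: \(\Lambda_{t}=V_{t,T^{\Lambda}}\circ\Lambda_{T^{\Lambda}}\) for \(t\geq T^{\Lambda}\). The short Python check below verifies this identity for a depolarizing toy family with a hypothetical characteristic function \(f\); the value `T` merely stands in for \(T^{\Lambda}\) of this toy model, and `depol_super` is the illustrative superoperator used in the earlier sketch.

```python
import numpy as np

def depol_super(g, d=2):
    """Illustrative superoperator of ρ ↦ g ρ + (1 - g) Tr[ρ] I/d (column-stacked)."""
    vec_id = np.eye(d).reshape(-1, order='F')
    return g * np.eye(d * d) + (1 - g) * np.outer(vec_id / d, vec_id)

# hypothetical characteristic function; T plays the role of T^Λ for this toy f
f = lambda t: np.exp(-t) + 0.15 * np.exp(-10 * (t - 2)**2) * (1 - np.exp(-5 * t))
T, t = 1.24, 2.5

full = depol_super(f(t))              # dynamical map Λ_t
pre  = depol_super(f(T))              # Markovian pre-processing Λ_{T^Λ}
core = depol_super(f(t) / f(T))       # PNM-core map  \bar{Λ}_{t-T^Λ} = V_{t,T^Λ}
assert np.allclose(core @ pre, full)  # Λ_t = \bar{Λ}_{t-T^Λ} ∘ Λ_{T^Λ}
```

Composition of superoperators is ordinary matrix multiplication, so the pre-processing and the core factorize the full dynamical map exactly.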
We define \(\overline{\mathbf{\Lambda}}=\{\overline{\Lambda}_{\tau}\}_{\tau\geq 0}\) to be the evolution that represents the interaction with the second environment, where \[\overline{\Lambda}_{\tau}(\cdot)=\mathrm{Tr}_{E_{2}}[U^{\prime\prime}_{\tau}(\cdot\otimes\omega_{E_{2}})U^{\prime\prime\dagger}_{\tau}]\,. \tag{9}\] From the above definitions it is easy to see that the dynamical maps \(\overline{\Lambda}_{t}\) and intermediate maps \(\overline{V}_{t,s}\) of the evolution \(\overline{\mathbf{\Lambda}}\) are connected with the intermediate maps \(V_{t,s}\) of \(\mathbf{\Lambda}\) as follows: \[\begin{array}{l}\overline{\Lambda}_{t}=V_{t+T^{\Lambda},T^{\Lambda}}\\ \overline{V}_{t,s}=V_{t+T^{\Lambda},s+T^{\Lambda}}\end{array}\quad\text{for}\quad 0\leq s\leq t\,. \tag{10}\] The map \(\overline{\Lambda}_{t}\) is CPTP for all \(t\geq 0\) and therefore \(\overline{\mathbf{\Lambda}}=\{\overline{\Lambda}_{t}\}_{t\geq 0}\) is a valid evolution by itself (see Eq. (3)). It is immediate to check that \(T^{\overline{\mathbf{\Lambda}}}=0\): the evolution \(\overline{\mathbf{\Lambda}}\) is PNM and we call it the _PNM core of \(\mathbf{\Lambda}\)_.

Figure 2: Open quantum systems are physically represented by a system \(S\) interacting with a surrounding environment \(E\). This interaction leads to information losses (blue arrows) and, in the case of non-Markovian evolutions, backflows (red arrows). NNM evolutions \(\mathbf{\Lambda}\) can be simulated with the following two-stage scenario. First stage (\(t\in[0,T^{\Lambda}]\)): the system interacts with a first environment \(E_{1}\) and information is lost monotonically (Markovian pre-processing). The dynamics during this first stage corresponds to the useless noise of \(\mathbf{\Lambda}\). Second stage (\(t>T^{\Lambda}\)): \(E_{1}\) is discarded, the system evolves while interacting with \(E_{2}\) and we have information backflows. The dynamics during this second stage corresponds to the PNM core \(\overline{\mathbf{\Lambda}}\) of \(\mathbf{\Lambda}\).

Moreover, the relation between the characteristic times of \(\mathbf{\Lambda}\) and \(\overline{\mathbf{\Lambda}}\) is (see Eqs. (4), (5) and (10)): \[T^{\overline{\Lambda}}=0\,,\;\;\;\;\tau^{\overline{\Lambda}}=\tau^{\Lambda}-T^{\Lambda}\,,\;\;\;\;t^{\overline{\Lambda}}=t^{\Lambda}-T^{\Lambda}\,. \tag{11}\] We conclude that NNM evolutions can be simulated via a Markovian pre-processing (physically represented by Eq. (7)) followed by the action of the corresponding PNM core (physically represented by Eq.
(9)): \[\Lambda_{t}=\begin{cases}\Lambda_{t}&t<T^{\Lambda}\quad\text{(Markovian pre-processing)}\\ \overline{\Lambda}_{t-T^{\Lambda}}\circ\Lambda_{T^{\Lambda}}&t\geq T^{\Lambda}\quad\text{(PNM core)}\end{cases} \tag{12}\] Notably, the initial Markovian pre-processing (useless noise) of NNM evolutions may not affect some types of information contained in specific initializations, for which the information is instead completely restored. Now, we describe a NNM evolution \(\mathbf{\Lambda}\) that completely recovers the information needed to distinguish two states. Consider the elements of the previous example, where now \(\Lambda_{t_{1}}=\Lambda_{z}\), \(V_{t_{2},t_{1}}=\Lambda_{x}\) and \(V_{t_{3},t_{2}}=\Lambda_{x}^{-1}\). The initial states \(|0\rangle\!\langle 0|\) and \(|1\rangle\!\langle 1|\) get closer during the time interval \([t_{1},t_{2}]\) and later recover their (maximal) initial distance during the time interval \([t_{2},t_{3}]\). Nonetheless, this evolution is NNM because the initial noise \(\Lambda_{t}\) for \(t\in[0,t_{1}]\) is useless (\(T^{\Lambda}=t_{1}\)). Indeed, the initial Markovian pre-processing \(\Lambda_{T^{\Lambda}}=\Lambda_{t_{1}}=\Lambda_{z}\), although it reduces the distance between several pairs of states, e.g. \(|+\rangle\!\langle+|\) and \(|-\rangle\!\langle-|\), leaves \(|0\rangle\!\langle 0|\) and \(|1\rangle\!\langle 1|\) untouched. Hence, given a NNM evolution \(\mathbf{\Lambda}\), the corresponding Markovian pre-processing may not affect the information content of some initializations. Nonetheless, the corresponding PNM core typically provides larger backflows for generic initializations. For instance, the PNM core \(\overline{\mathbf{\Lambda}}\) of the previous example satisfies Proposition 3: it corresponds to the identity map \(\overline{\Lambda}_{t}=I_{S}\) at \(t=t_{3}-t_{1}\).

### Non-divisible evolutions

We briefly approach the case of non-divisible evolutions, namely those for which the intermediate map \(V_{t,s}\) cannot be written for all \(0<s<t\). A large part of the results presented in this work is connected with \(T^{\Lambda}\), which in turn is strictly connected with the properties of \(V_{t,s}\). Hence, studying the PNM core of a non-divisible NNM evolution may seem problematic. We start by recalling that we are considering continuous evolutions (see Section I).
Moreover, continuous non-divisible evolutions must have an initial time interval \([0,T^{NB})\) where an inverse \(\Lambda_{t}^{-1}\) exists [7]. Hence, we can consider intermediate maps of the form \(V_{t,s}=\Lambda_{t}\circ\Lambda_{s}^{-1}\) for all \(s<T^{NB}\). Notice that this is possible for all final times \(t\). Moreover, we remember that invertibility implies divisibility, but the converse is not true. Therefore, any non-divisible evolution is characterized by a time \(T^{ND}\), in general larger than \(T^{NB}\), such that \(V_{t,s}\) can be defined for all \(s<T^{ND}\). As a result, even for non-invertible evolutions there is always a finite time interval inside which we can look for \(T^{\Lambda}\). We can replace Eq. (3) with \[T^{\Lambda}\!=\!\max\left\{T<T^{ND}\left|\begin{array}{lll}\text{(A)}&V_{t,s}\text{ CPTP for all}&s\leq t\leq T\\ \text{(B)}&V_{t,T}\text{ CPTP for all}&T\leq t\\ \text{(C)}&\Lambda_{T}\text{ not unitary}&T>0\end{array}\right.\right\}, \tag{14}\] where we set \(T^{ND}=\infty\) for divisible evolutions. We study an example where we obtain the PNM core \(\overline{\mathbf{\Lambda}}\) of a non-invertible NNM depolarizing evolution \(\mathbf{\Lambda}\) in Appendix F. In this example we show that, although technically \(T^{\Lambda}\) should be defined by Eq. (14), in practice its difference from Eq. (3) does not play a fundamental role in many cases. In other words, the additional condition \(T<T^{ND}\) is not stringent for many relevant cases.

## IV Distinguishability backflows

In this section we study the relation between NNM and PNM evolutions from the point of view of information backflows, as measured by the distinguishability between pairs of evolving states. Hence, we analyse the potential of non-Markovian evolutions to make two states more distinguishable in a time interval when the corresponding intermediate map is not CPTP. Consider the scenario where we are given one state chosen randomly between \(\rho_{1}\) and \(\rho_{2}\) and, through measurements, we have to guess which state we received. The maximum probability to correctly distinguish the two states is called the _guessing probability_ and it corresponds to \(P_{g}(\rho_{1},\rho_{2})=(2+\|\rho_{1}-\rho_{2}\|_{1})/4\), where \(\|\cdot\|_{1}\) is the trace norm. The maximum value \(1\) is obtained for perfectly distinguishable, i.e. orthogonal, states. Instead, the minimum value \(1/2\) is obtained if and only if \(\rho_{1}\) and \(\rho_{2}\) are identical. For the sake of simplicity, in the following we study \(\|\rho_{1}-\rho_{2}\|_{1}\), which we call the _distinguishability_ of \(\rho_{1}\) and \(\rho_{2}\). Consider a composite system \(SA\) with state space \(S(\mathcal{H}_{S}\otimes\mathcal{H}_{A})\), where \(S\) is evolved by \(\mathbf{\Lambda}\) and \(A\) is an ancillary system. Hence, a generic initialization \(\rho_{SA}(0)\in S(\mathcal{H}_{S}\otimes\mathcal{H}_{A})\) evolves as \(\rho_{SA}(t)=\Lambda_{t}\otimes I_{A}(\rho_{SA}(0))\). Take two states \(\rho_{SA,1}(t)\) and \(\rho_{SA,2}(t)\) evolving under the same evolution. Any increase of \(\|\rho_{SA,1}(t)-\rho_{SA,2}(t)\|_{1}\) represents a recovery of the missing information needed to distinguish the two states and is a signature of non-Markovianity [10; 25].
Indeed, this quantity is contractive under quantum channels: CPTP intermediate maps \(V_{t,s}\) imply a distinguishability degradation \(\|\rho_{SA,1}(s)-\rho_{SA,2}(s)\|_{1}\geq\|\rho_{SA,1}(t)-\rho_{SA,2}(t)\|_{1}\), while a _distinguishability backflow_ \(\|\rho_{SA,1}(s)-\rho_{SA,2}(s)\|_{1}<\|\rho_{SA,1}(t)-\rho_{SA,2}(t)\|_{1}\) implies that \(V_{t,s}\) is not CPTP. For this reason, Markovian evolutions are characterized by monotonically decreasing distinguishabilities, while non-Markovian evolutions can provide distinguishability backflows. We remember that for all bijective (or almost-always bijective) evolutions there exists a constructive method for an initial pair \(\rho_{SA,1}(0),\rho_{SA,2}(0)\) that provides a distinguishability backflow in \([s,t]\) if and only if the corresponding intermediate map is not CPTP [10]. We proceed by studying in which cases and to what extent NNM evolutions damp distinguishability backflows compared with their corresponding PNM cores. We saw that the initial noise \(\Lambda_{T^{\Lambda}}\) of NNM evolutions is useless for non-Markovian phenomena. Now, we quantify how much \(\Lambda_{T^{\Lambda}}\) suppresses backflows for each specific initialization. **Proposition 4**.: _Consider a NNM evolution \(\mathbf{\Lambda}\) providing a distinguishability backflow in \([s,t]\) by evolving \(\rho_{SA,1}(0)\) and \(\rho_{SA,2}(0)\), where \(V_{t,s}\) is not CPTP. If \(\rho_{SA,1}(T^{\Lambda})\) and \(\rho_{SA,2}(T^{\Lambda})\) are not orthogonal, the corresponding PNM core \(\overline{\mathbf{\Lambda}}\) provides a larger backflow in \([s-T^{\Lambda},t-T^{\Lambda}]\) by evolving a pair of orthogonal states, where \(V_{t,s}=\overline{V}_{t-T^{\Lambda},s-T^{\Lambda}}\). The proportionality factor between the backflows is \(2/\|\rho_{SA,1}(T^{\Lambda})-\rho_{SA,2}(T^{\Lambda})\|_{1}>1\)._ Proof.: We define \(\Delta(T^{\Lambda})=\rho_{SA,1}(T^{\Lambda})-\rho_{SA,2}(T^{\Lambda})\), which is hermitian and traceless. This operator corresponds to a segment in the state space with direction and length (as measured by \(\|\cdot\|_{1}\)) defined by \(\rho_{SA,1}(T^{\Lambda})\) and \(\rho_{SA,2}(T^{\Lambda})\). Since \(\rho_{SA,1}(T^{\Lambda})\) and \(\rho_{SA,2}(T^{\Lambda})\) are not orthogonal, \(\|\Delta(T^{\Lambda})\|_{1}<2\). We define \(\overline{\Delta}(0)=2\Delta(T^{\Lambda})/\|\Delta(T^{\Lambda})\|_{1}\), namely a segment with the same direction as \(\Delta(T^{\Lambda})\) but with \(\|\overline{\Delta}(0)\|_{1}=2\). Since \(\overline{\Delta}(0)\) is hermitian and traceless, it can be diagonalized with a unitary \(U\), namely \(\overline{\Delta}(0)=U\overline{\Delta}_{D}(0)U^{\dagger}\), where \(\overline{\Delta}_{D}(0)=\mathrm{diag}(\delta_{1},\delta_{2},\cdots,\delta_{d})\), \(\delta_{i}\in[-1,1]\) and \(d=d_{S}d_{A}\) is the dimension of \(\mathcal{H}_{S}\otimes\mathcal{H}_{A}\). Moreover, we have that \(\|\overline{\Delta}(0)\|_{1}=\|\overline{\Delta}_{D}(0)\|_{1}=\sum_{i}|\delta_{i}|=2\) and \(\mathrm{Tr}[\overline{\Delta}(0)]=\mathrm{Tr}[\overline{\Delta}_{D}(0)]=\sum_{i}\delta_{i}=0\). Therefore, we can write \(\overline{\Delta}_{D}(0)=\sigma^{+}-\sigma^{-}\), where \(\sigma^{+}\) (\(\sigma^{-}\)) is a diagonal density matrix obtained by replacing the negative diagonal elements of \(\overline{\Delta}_{D}(0)\) (\(-\overline{\Delta}_{D}(0)\)) with a zero. We define \(\overline{\rho}_{SA,1}(0)=U\sigma^{+}U^{\dagger}\) and \(\overline{\rho}_{SA,2}(0)=U\sigma^{-}U^{\dagger}\).
Notice that \(\overline{\rho}_{SA,1}(0)-\overline{\rho}_{SA,2}(0)=\overline{\Delta}(0)\): the two states \(\overline{\rho}_{SA,1}(0)\) and \(\overline{\rho}_{SA,2}(0)\) are orthogonal and their difference \(\overline{\Delta}(0)\) is proportional to \(\Delta(T^{\Lambda})=\rho_{SA,1}(T^{\Lambda})-\rho_{SA,2}(T^{\Lambda})\). Hence, if \(\mathbf{\Lambda}\) provides a distinguishability backflow in \([s,t]\) by evolving \(\rho_{SA,1}(0)\) and \(\rho_{SA,2}(0)\), \(\overline{\mathbf{\Lambda}}\) provides a (larger) backflow in \([s-T^{\Lambda},t-T^{\Lambda}]\) by evolving \(\overline{\rho}_{SA,1}(0)\) and \(\overline{\rho}_{SA,2}(0)\), where the latter backflow is larger than the former by the factor \(2/\|\Delta(T^{\Lambda})\|_{1}>1\). Notice that in this proof we provide a constructive method to build the pairs of states \(\overline{\rho}_{SA,1}(0)\), \(\overline{\rho}_{SA,2}(0)\) which, if evolved with the corresponding PNM core, provide larger backflows. The meaning of this proposition matches the information-theoretic interpretation of PNM evolutions given above. Indeed, consider a NNM evolution \(\mathbf{\Lambda}\) and those pairs of states that provide the largest distinguishability backflows in a time interval \([s,t]\). It can be proven that the corresponding initial states \(\rho_{SA,1}(0)\), \(\rho_{SA,2}(0)\) are orthogonal. If the two states \(\rho_{SA,1}(T^{\Lambda})\) and \(\rho_{SA,2}(T^{\Lambda})\) are still orthogonal at time \(T^{\Lambda}\) when evolved by \(\mathbf{\Lambda}\), it means that the noise \(\Lambda_{T^{\Lambda}}\) did not dissipate the information needed to distinguish this particular pair of states, as discussed in Section III.2. On the contrary, if \(\rho_{SA,1}(T^{\Lambda})\) and \(\rho_{SA,2}(T^{\Lambda})\) are no longer orthogonal, \(\Lambda_{T^{\Lambda}}\) unnecessarily dissipated part of the information useful to distinguish the two states: this information does not take part in later backflows. Hence, in these cases the corresponding PNM cores provide larger backflows. The results of Proposition 4 are independent of \(s\), \(t\) and the magnitude of the corresponding distinguishability backflow. Indeed, they solely depend on the information lost after the Markovian pre-processing, namely the distinguishability at time \(T^{\Lambda}\). Hence, the results of Proposition 4 can be directly extended to all the backflows that the same pair shows, even without knowing their magnitude and when they take place. **Corollary 2**.: _Consider a NNM evolution \(\mathbf{\Lambda}\) and its corresponding PNM core \(\overline{\mathbf{\Lambda}}\). For any pair of states \(\rho_{SA,1}(0)\), \(\rho_{SA,2}(0)\) that are not orthogonal at time \(T^{\Lambda}\) and provide one or more distinguishability backflows when evolved by \(\mathbf{\Lambda}\), there exists a corresponding pair of orthogonal states such that, if evolved by \(\overline{\mathbf{\Lambda}}\), each backflow is larger by a factor \(2/\|\rho_{SA,1}(T^{\Lambda})-\rho_{SA,2}(T^{\Lambda})\|_{1}>1\). The intermediate maps generating the backflows of the two evolutions are the same and the corresponding time intervals differ by a shift of \(T^{\Lambda}\)._

## V Non-Markovianity Measures

A non-Markovianity measure \(M(\mathbf{\Lambda})\) quantifies the non-Markovian content of evolutions, where Markovianity implies \(M(\mathbf{\Lambda})=0\), while \(M(\mathbf{\Lambda})>0\) implies \(\mathbf{\Lambda}\) to be non-Markovian.
As we see below, those measures that are connected with the actual time evolution of one or more states are influenced by the initial noisy action that precedes non-Markovian phenomena. Hence, in these cases PNM cores provide higher non-Markovianity measures, \(M(\mathbf{\Lambda})\leq M(\overline{\mathbf{\Lambda}})\): the largest values that any non-Markovianity measure of this type can assume can be obtained with PNM evolutions, and any value assumed by a NNM evolution can be matched or outperformed by a PNM evolution. The main representatives of this class of measures are defined through the collection of the information backflows obtainable with \(\mathbf{\Lambda}\), where this quantity is maximized with respect to all the possible system initializations [13; 25; 26; 27]. In the following, we refer to these cases as flux measures. Note that there are non-Markovianity measures satisfying \(M(\mathbf{\Lambda})\leq M(\overline{\mathbf{\Lambda}})\) while not being flux measures, e.g. [28]. On the contrary, those measures \(M\) that solely depend on the features of intermediate maps, without considering the action of the preceding dynamics, imply \(M(\mathbf{\Lambda})=M(\overline{\mathbf{\Lambda}})\). Indeed, a NNM evolution \(\mathbf{\Lambda}\) and its corresponding PNM evolution \(\overline{\mathbf{\Lambda}}\) have the same non-CPTP intermediate maps. The main representatives of this second class are the Rivas-Huelga-Plenio [16] measure and the \(k\)-divisibility hierarchy [29]. We underline that, while flux measures represent the amplitude of phenomena that can be observed, this is not true for this second class.

### Flux measures

Flux measures quantify the non-Markovian content of evolutions as follows. Pick an information quantifier and maximize the sum of all the corresponding backflows that \(\mathbf{\Lambda}\) shows with respect to all the possible initializations. More precisely, consider a functional \(W(\rho_{SA}(t))=W(\Lambda_{t}\otimes I_{A}(\rho_{SA}))\geq 0\) which represents the amount of information, as measured by \(W\), contained in the evolving state. We can also consider quantifiers taking multiple states as input, e.g. state distinguishability. In order to consider \(W\) an information quantifier for \(S\), we require it to be contractive under quantum channels on \(S\), namely \(W(\rho_{SA}(0))\geq W(\Lambda\otimes I_{A}(\rho_{SA}))\) for all \(\rho_{SA}\) and CPTP \(\Lambda\). We define the _information flux_ as \[\sigma(\Lambda_{t}\otimes I_{A}(\rho_{SA}))=\frac{d}{dt}W(\Lambda_{t}\otimes I_{A}(\rho_{SA}))\,. \tag{15}\] Since Markovianity corresponds to CP-divisibility, Markovian evolutions imply non-positive fluxes. Instead, if \(\sigma(\Lambda_{t}\otimes I_{A}(\rho_{SA}))>0\), we say that the evolution of \(\rho_{SA}\) _witnesses_ the non-Markovian nature of \(\mathbf{\Lambda}\) through a backflow of \(W\). Flux measures consist of the greatest amount of \(W\) that an evolution can retrieve during the evolution with respect to any initialization, namely: \[M^{W}(\mathbf{\Lambda})=\max_{\rho_{SA}}\int_{t\geq 0,\sigma>0}\sigma(\Lambda_{t}\otimes I_{A}(\rho_{SA}))\,dt\,, \tag{16}\] where the maximization is performed over the whole system-ancilla state space [30].
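For a fixed initialization, the integral in Eq. (16) is simply the total upward variation of \(W\) along the trajectory, which is easy to approximate on a time grid. The Python fragment below illustrates this for the trace-distance quantifier of a depolarizing toy model, for which \(W(t)=2f(t)\) once the optimal, initially orthogonal pair is chosen; the characteristic function \(f\) is hypothetical, and the maximization over \(\rho_{SA}\) is assumed to be already solved by that pair (cf. Eq. (29) below).

```python
import numpy as np

def flux_measure(W, t):
    """Approximate ∫_{σ>0} σ dt of Eq. (16) for one fixed initialization:
    the sum of all positive increments of the sampled information curve W(t)."""
    return np.clip(np.diff(W(t)), 0.0, None).sum()

# hypothetical characteristic function of a NNM depolarizing toy model
f = lambda t: np.exp(-t) + 0.15 * np.exp(-10 * (t - 2)**2) * (1 - np.exp(-5 * t))
t = np.linspace(0.0, 8.0, 80001)

M_D = flux_measure(lambda u: 2 * f(u), t)   # D(t) = 2 f(t) for orthogonal states
print(M_D)                                  # ≈ 2Δ: twice the total revival of f
```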
Let \(\overline{\mathbf{\Lambda}}\) be the PNM core of \(\mathbf{\Lambda}\) and \(\mathrm{Im}(\Lambda_{t}\otimes I_{A})\) the image of the evolution at time \(t\), namely those system-ancilla states that can be obtained as output of the map \(\Lambda_{t}\otimes I_{A}\). We obtain: \[M^{W}(\mathbf{\Lambda}) =\max_{\rho_{SA}}\int_{t\geq T^{\Lambda},\sigma>0}\sigma(\Lambda_{t}\otimes I_{A}(\rho_{SA}))\,dt\] \[=\max_{\rho_{SA}\in\mathrm{Im}(\Lambda_{T^{\Lambda}}\otimes I_{A})}\int_{t\geq T^{\Lambda},\sigma>0}\sigma(V_{t,T^{\Lambda}}\otimes I_{A}(\rho_{SA}))\,dt\] \[=\max_{\rho_{SA}\in\mathrm{Im}(\Lambda_{T^{\Lambda}}\otimes I_{A})}\int_{t\geq 0,\sigma>0}\sigma(\overline{\Lambda}_{t}\otimes I_{A}(\rho_{SA}))\,dt\] \[\leq\max_{\rho_{SA}}\int_{t\geq 0,\sigma>0}\sigma(\overline{\Lambda}_{t}\otimes I_{A}(\rho_{SA}))\,dt=M^{W}(\overline{\mathbf{\Lambda}})\,, \tag{17}\] where the first equality is justified by the fact that backflows can only happen for \(t\geq T^{\Lambda}\) (any NNM evolution is CP-divisible in \([0,T^{\Lambda}]\)), the second equality is a simple consequence of \(\Lambda_{t}=V_{t,T^{\Lambda}}\circ\Lambda_{T^{\Lambda}}\), the third equality follows from Eq. (10) and the inequality follows from the enlargement of the maximization space. It is interesting to understand when \(M^{W}(\mathbf{\Lambda})<M^{W}(\overline{\mathbf{\Lambda}})\). Consider the information quantifier \(D(\rho_{SA,1}(t),\rho_{SA,2}(t))=\|\rho_{SA,1}(t)-\rho_{SA,2}(t)\|_{1}\) for a fixed ancilla \(A\), where in this case we consider the evolution \(\mathbf{\Lambda}\). We call \(\{\rho_{SA,1}^{i},\rho_{SA,2}^{i}\}_{i}\) those pairs that allow one to obtain the maximum of Eq. (16). Notice that these pairs of states are always initially orthogonal: \(D(\rho_{SA,1}^{i}(0),\rho_{SA,2}^{i}(0))=2\) for all \(i\). Hence, if \(\sigma^{D}(\Lambda_{t}\otimes I_{A}(\rho_{SA,1}),\Lambda_{t}\otimes I_{A}(\rho_{SA,2}))\) is the flux associated with \(D(\rho_{SA,1}(t),\rho_{SA,2}(t))\) as in Eq. (15), we have \[M^{D}(\mathbf{\Lambda}) =\max_{\rho_{SA,1},\rho_{SA,2}}\int_{t\geq 0,\sigma>0}\sigma^{D}(\Lambda_{t}\otimes I_{A}(\rho_{SA,1}),\Lambda_{t}\otimes I_{A}(\rho_{SA,2}))\,dt\] \[=\int_{t\geq 0,\sigma>0}\sigma^{D}(\Lambda_{t}\otimes I_{A}(\rho_{SA,1}^{i}),\Lambda_{t}\otimes I_{A}(\rho_{SA,2}^{i}))\,dt\quad\text{for all }i. \tag{18}\] Thanks to Corollary 2, we can prove that: \[M^{D}(\mathbf{\Lambda})\leq M^{D}(\overline{\mathbf{\Lambda}})=\max_{i}\frac{2M^{D}(\mathbf{\Lambda})}{\|\rho_{SA,1}^{i}(T^{\Lambda})-\rho_{SA,2}^{i}(T^{\Lambda})\|_{1}}\,, \tag{19}\] where \(\rho_{SA,1}^{i}(T^{\Lambda})=\Lambda_{T^{\Lambda}}\otimes I_{A}(\rho_{SA,1}^{i}(0))\) and \(\rho_{SA,2}^{i}(T^{\Lambda})=\Lambda_{T^{\Lambda}}\otimes I_{A}(\rho_{SA,2}^{i}(0))\). Hence, if the information content of the pairs \(\{\rho_{SA,1}^{i},\rho_{SA,2}^{i}\}_{i}\) at time \(T^{\Lambda}\) is lower than at the initial time when evolved by \(\mathbf{\Lambda}\), then \(M^{D}(\mathbf{\Lambda})<M^{D}(\overline{\mathbf{\Lambda}})\). Moreover, the proportionality factor between the two measures is given by the pair \(\{\rho_{SA,1}^{i},\rho_{SA,2}^{i}\}\) that gets the closest at time \(T^{\Lambda}\). In Section VII we explicitly evaluate \(M^{D}(\mathbf{\Lambda})\) and \(M^{D}(\overline{\mathbf{\Lambda}})\) in the case of depolarizing evolutions and we show that, even without ancillary systems, \(M^{D}(\mathbf{\Lambda})<M^{D}(\overline{\mathbf{\Lambda}})\) is always verified.
A second measure of non-Markovianity similar to \(M^{W}\) is given by [31] \[M^{W,max}(\mathbf{\Lambda})=\max_{s<t,\rho_{SA}}\{0,W(\rho_{SA}(t))-W(\rho_{SA}(s))\}, \tag{20}\] which corresponds to the largest backflow of \(W\) that the dynamics is able to show in a single time interval. Finally, a third measure, which we denote \(M^{W,avg}\), is [31] \[M^{W,avg}(\mathbf{\Lambda})=\max_{t>0,\rho_{SA}}\{0,W(\rho_{SA}(t))-\langle W(\rho_{SA}(t))\rangle\}, \tag{21}\] where \(\langle W(\rho_{SA}(t))\rangle=t^{-1}\int_{0}^{t}W(\rho_{SA}(s))\,ds\). This measure corresponds to the largest difference, with respect to \(t\), between the information \(W\) at time \(t\) and its average in the time interval \([0,t]\). Moreover, \(M^{W,avg}\) has a precise operational meaning connected with the probability to store and faithfully retrieve information, as measured by \(W\), by state preparation and measurement, where an attack performed by an eavesdropper may occur. It can be proven [31] that, for any evolution and information quantifier, we have \(M^{W,avg}(\mathbf{\Lambda})\leq M^{W,max}(\mathbf{\Lambda})\leq M^{W}(\mathbf{\Lambda})\). Similarly to Eq. (17), it is possible to demonstrate that \[M^{W,max}(\mathbf{\Lambda})\leq M^{W,max}(\overline{\mathbf{\Lambda}})\ \ \text{ and }\ \ M^{W,avg}(\mathbf{\Lambda})\leq M^{W,avg}(\overline{\mathbf{\Lambda}})\,. \tag{22}\]

### Incoherent mixing measure

A second type of non-Markovianity measure corresponds to the minimal incoherent Markovian noise needed to make a non-Markovian evolution \(\mathbf{\Lambda}\) Markovian [28]. In order to describe this measure, we first consider an evolution obtained as a convex combination of \(\mathbf{\Lambda}\) and a generic Markovian evolution \(\mathbf{\Lambda}^{M}\). We consider the mixed evolution \(\mathbf{\Lambda}_{p}^{mix}=(1-p)\mathbf{\Lambda}+p\mathbf{\Lambda}^{M}\), and define a non-Markovianity measure by looking for the minimal value of \(p\), hence the minimal amount of Markovian noise, such that \(\mathbf{\Lambda}_{p}^{mix}\) is Markovian, namely: \[M^{mix}(\mathbf{\Lambda})=\min\{p\,|\,\exists\mathbf{\Lambda}^{M}\text{ s.t. }\mathbf{\Lambda}_{p}^{mix}\text{ is Markovian}\}\,. \tag{23}\] In Appendix E we prove that: \[M^{mix}(\mathbf{\Lambda})\leq M^{mix}(\overline{\mathbf{\Lambda}})\,. \tag{24}\] In Section VII we show that \(M^{mix}(\mathbf{\Lambda})<M^{mix}(\overline{\mathbf{\Lambda}})\) for all NNM depolarizing evolutions \(\mathbf{\Lambda}\).

### RHP measure and \(k\)-divisibility

Consider a generic PNM evolution \(\overline{\mathbf{\Lambda}}\) and all the corresponding NNM evolutions \(\mathbf{\Lambda}\) that can be obtained from \(\overline{\mathbf{\Lambda}}\) with a Markovian pre-processing. As we saw, \(\overline{\mathbf{\Lambda}}\) and all its corresponding \(\mathbf{\Lambda}\) have the same non-CPTP intermediate maps. Therefore, the non-Markovianity measures that solely depend on the properties of non-CPTP intermediate maps, since they are not influenced by the particular (useless) noise that precedes their action, assume the same value for \(\overline{\mathbf{\Lambda}}\) and all its corresponding \(\mathbf{\Lambda}\). This is the case for the RHP measure \(\mathcal{I}(\mathbf{\Lambda})\) (see Eq. (4) from Ref. [16]) and the \(k\)-divisibility non-Markovian degree \(\mathrm{NMD}[\mathbf{\Lambda}]\) (see Ref. [29]): \[\mathcal{I}(\overline{\mathbf{\Lambda}})=\mathcal{I}(\mathbf{\Lambda})\ \ \text{ and }\ \ \mathrm{NMD}(\overline{\mathbf{\Lambda}})=\mathrm{NMD}(\mathbf{\Lambda})\,.
\tag{25}\]

## VI Entanglement breaking property

We call \(C(\rho_{AB})\) a correlation measure for the bipartite system \(AB\) if: (i) \(C(\rho_{AB})\geq 0\) for all \(\rho_{AB}\), (ii) \(C(\rho_{AB})=0\) for all product states \(\rho_{A}\otimes\rho_{B}\), (iii) \(C(\Lambda_{A}\otimes I_{B}(\rho_{AB}))\leq C(\rho_{AB})\) and \(C(I_{A}\otimes\Lambda_{B}(\rho_{AB}))\leq C(\rho_{AB})\) for all \(\rho_{AB}\) and CPTP maps \(\Lambda_{A}\) and \(\Lambda_{B}\). Entanglement measures, denoted here by \(E\), capture only non-classical correlations. Indeed, they satisfy the additional property of being non-increasing under local operations assisted by classical communication (LOCC), which strengthens (iii). As a consequence, \(E(\rho_{AB})=0\) for all separable states, namely those that can be written as statistical mixtures of product states: \(\rho_{AB}=\sum_{i}p_{i}\rho_{A}^{i}\otimes\rho_{B}^{i}\), where \(\{p_{i}\}_{i}\) is a probability distribution. We discuss how the link between NNM and PNM evolutions behaves with respect to the entanglement breaking (EB) property. A quantum channel \(\Lambda_{S}\) is EB if it destroys the entanglement of any input state, namely if \(\Lambda_{S}\otimes I_{A}(\rho_{SA})\) is separable for all \(\rho_{SA}\). Consider a generic \(\mathbf{\Lambda}\). We say that it is EB if there exists a time \(t^{EB,\Lambda}>0\) such that \(\Lambda_{t}\) is EB for all \(t\geq t^{EB,\Lambda}\). Take a PNM evolution \(\overline{\mathbf{\Lambda}}\) and one of the possible NNM evolutions \(\mathbf{\Lambda}\) that can be obtained with a Markovian pre-processing. This Markovian pre-processing cannot increase the amount of entanglement of any state. Hence, if \(\overline{\mathbf{\Lambda}}\) is EB, then \(\mathbf{\Lambda}\) is EB. Nonetheless, in the case where both \(\overline{\mathbf{\Lambda}}\) and \(\mathbf{\Lambda}\) are EB, there is no general order for the corresponding EB times: \(t^{EB,\Lambda}>t^{EB,\overline{\Lambda}}\) and \(t^{EB,\Lambda}<t^{EB,\overline{\Lambda}}\) are both possible. We have to keep in mind that, if a generic NNM evolution \(\mathbf{\Lambda}\) is EB, we cannot immediately say anything about the EB nature of \(\overline{\mathbf{\Lambda}}\), and we must study the particular dynamics in more detail. Indeed, it is easy to find NNM evolutions \(\mathbf{\Lambda}\) with EB useless noise \(\Lambda_{T^{\Lambda}}\) where the corresponding PNM core \(\overline{\mathbf{\Lambda}}\) is not EB. Also, there exist cases where the Markovian pre-processing \(\Lambda_{T^{\Lambda}}\) is not EB, the PNM core \(\overline{\mathbf{\Lambda}}\) is not EB, but the corresponding NNM evolution \(\mathbf{\Lambda}\) is EB.

### Activation of correlation backflows

We now discuss a technique focused on entanglement revivals which can be easily generalized to other correlation measures. The isolation of the PNM core of a NNM evolution may lead to the _activation_ of entanglement backflows. Take a bipartite system \(SA\), where \(S\) is evolved by \(\mathbf{\Lambda}\) and \(A\) is an ancilla. Whenever we have a backflow of \(E\), the same backflow can also be observed with \(\overline{\mathbf{\Lambda}}\), namely the corresponding PNM evolution. Moreover, as we saw for the corresponding flux non-Markovianity measure, \(M^{E}(\mathbf{\Lambda})\leq M^{E}(\overline{\mathbf{\Lambda}})\). What is interesting is the possibility to activate backflows of entanglement through the isolation of the PNM core, namely when \(M^{E}(\mathbf{\Lambda})=0\) and \(M^{E}(\overline{\mathbf{\Lambda}})>0\).
This scenario is made possible when \(\mathbf{\Lambda}\) is EB, the corresponding non-CPTP intermediate maps \(V_{t,s}\) take place only for \(t^{EB,\Lambda}\leq s<t\), and the corresponding PNM core \(\overline{\mathbf{\Lambda}}\) is not EB. In this case, when an entangled state is evolved by \(\mathbf{\Lambda}\) and a non-CPTP intermediate map takes place, all the entanglement has already been destroyed and no backflows are possible. Instead, for a system evolving under \(\overline{\mathbf{\Lambda}}\), when the (same) non-CPTP intermediate map takes place, entanglement can be non-zero and backflows are allowed. Whenever a non-Markovian evolution does not provide correlation backflows, additional ancillary degrees of freedom can activate the possibility to observe backflows. This phenomenon has already been studied for entanglement [32; 33] and Gaussian steering [33]. For instance, instead of evaluating entanglement between \(S\) and \(A\), we would need to evaluate it between \(SA^{\prime}\) and \(A\), where \(A^{\prime}\) is an additional ancilla. Hence, our construction allows a different strategy to obtain correlation backflows in those situations where an \(SA\) setup does not show any: instead of adding additional ancillary systems, which in some cases could be experimentally more demanding to handle, we could simply consider the PNM core of the studied evolution.

## VII Depolarizing model

We apply our results to a simple model, the depolarizing evolution. Starting from a generic NNM depolarizing evolution \(\mathbf{\Lambda}\), we show how to find \(T^{\Lambda}\), \(\tau^{\Lambda}\) and \(t^{\Lambda}\) and the corresponding PNM evolution \(\overline{\mathbf{\Lambda}}\), and we calculate the gains in terms of information backflows and non-Markovianity measures that \(\overline{\mathbf{\Lambda}}\) provides with respect to \(\mathbf{\Lambda}\). We conclude by applying our technique to an explicit toy model. Moreover, we show how our approach can be directly applied to non-bijective depolarizing evolutions in Appendix F. We define a generic depolarizing evolution \(\mathbf{\Lambda}\) for a \(d\)-dimensional system \(S\) through the corresponding dynamical map, namely \[\Lambda_{t}(\,\cdot\,)=f(t)I_{S}(\,\cdot\,)+(1-f(t))\mathrm{Tr}\left[\,\cdot\,\right]\frac{\mathbb{I}_{S}}{d}\,, \tag{26}\] where \(I_{S}\) is the identity map and \(\mathbb{I}_{S}/d\) is the maximally mixed state [28]. The behaviour of the evolution is determined by the _characteristic function_ \(f(t)\). The dynamical maps \(\Lambda_{t}\) are CPTP, continuous in time and such that \(\Lambda_{0}=I_{S}\) if and only if \(f(t)\in[-1/(d^{2}-1),1]\) is a continuous function such that \(f(0)=1\). For the sake of simplicity, from now on we restrict our attention to depolarizing evolutions with \(f(t)\in[0,1]\) for all \(t\geq 0\). Those cases where \(f(t)\) assumes negative values necessitate a simple generalization of the techniques used here. An in-depth analysis of depolarizing evolutions with \(f(t)\in[-1/(d^{2}-1),1]\) can be found in Ref. [28]. The evolution \(\mathbf{\Lambda}\) is invertible if and only if \(f(t)>0\) at all times. Indeed, \(f(t^{NB})=0\) implies that every initial state is mapped into the same (maximally mixed) state: \(\Lambda_{t^{NB}}(\rho_{S}(0))=\mathbb{I}_{S}/d\). In this case \(\Lambda_{t^{NB}}\) is non-invertible and we cannot define the intermediate maps \(V_{t,t^{NB}}\) with \(t>t^{NB}\).
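As a compact numerical check of the structure just introduced, the following illustrative Python sketch verifies that the map of Eq. (26) contracts the trace distance of a pair of states exactly by the factor \(f(t)\), as stated in Eq. (27) below, and that an intermediate ratio \(g=f(t)/f(s)>1\) fails the smallest-Choi-eigenvalue test derived in the text that follows; the helper names are ours.

```python
import numpy as np

def depol(rho, g):
    """Depolarizing-type action ρ ↦ g ρ + (1 - g) Tr[ρ] I/d  (cf. Eq. (26))."""
    d = rho.shape[0]
    return g * rho + (1.0 - g) * np.trace(rho) * np.eye(d) / d

def trace_dist(a, b):
    """||a - b||_1 for hermitian matrices: sum of absolute eigenvalues."""
    return np.abs(np.linalg.eigvalsh(a - b)).sum()

rho1 = np.diag([1.0, 0.0]).astype(complex)   # |0><0|
rho2 = np.diag([0.0, 1.0]).astype(complex)   # |1><1|

g = 0.6   # plays the role of f(t), or of the ratio f(t)/f(s) for V_{t,s}
assert np.isclose(trace_dist(depol(rho1, g), depol(rho2, g)),
                  g * trace_dist(rho1, rho2))   # contraction by exactly g
# CPTP test via the smallest Choi eigenvalue (1 - g)/d² for a qubit (d = 2):
for g in (0.6, 1.3):
    print(g, "CPTP" if (1 - g) / 4 >= 0 else "not CPTP")
```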
The interpretation of depolarizing evolutions is straightforward: at time \(t\) each state is mixed with the maximally mixed state \(\mathbb{I}_{S}/d\) with a ratio given by \(f(t)\). The larger is \(f(t)\), the closer is \(\rho_{S}(t)\) from its initial state \(\rho_{S}(0)\). Moreover, this contraction towards the maximally mixed state is symmetric in the state space. Indeed, for any two initial states \(\rho_{S,1}(0)\) and \(\rho_{S,2}(0)\) evolving under \(\mathbf{\Lambda}\) we have: \[\|\rho_{S,1}(t)-\rho_{S,2}(t)\|_{1}=f(t)\|\rho_{S,1}(0)-\rho_{S,2}(0)\|_{1}\,. \tag{27}\]

The intermediate map corresponding to the depolarizing evolution during a generic time interval \([s,t]\) assumes the same form of a depolarizing dynamical map \[V_{t,s}(\,\cdot\,)=\frac{f(t)}{f(s)}\,I_{S}(\,\cdot\,)+\left(1-\frac{f(t)}{f(s)}\right)\mathrm{Tr}\left[\,\cdot\,\right]\frac{\mathbb{I}_{S}}{d}\,. \tag{28}\] Hence, the CPTP condition [34] for \(V_{t,s}\) coincides with \(f(t)/f(s)\in[0,1]\), that is \(f(s)\geq f(t)\) for \(s\leq t\). Similarly, the infinitesimal intermediate map \(V_{t+\epsilon,t}\) is CPTP for infinitesimal \(\epsilon>0\) if and only if \(f^{\prime}(t)\leq 0\). Indeed, Markovian depolarizing evolutions are characterized by \(f^{\prime}(t)\leq 0\) at all times. The Choi state of \(V_{t,s}\) is \(V_{t,s}\otimes I_{S}(\phi_{SA}^{+})=f(t)/f(s)\,\phi_{SA}^{+}+(1-f(t)/f(s))\mathbb{I}_{SA}/d^{2}\) and the corresponding smallest eigenvalue is \(\lambda_{t,s}=(1-f(t)/f(s))/d^{2}\). Since \(V_{t,s}\) is CPTP if and only if \(\lambda_{t,s}\geq 0\), thanks to the evaluation of \(\lambda_{t,s}\) we are able to obtain \(\mathcal{P}^{\Lambda}\) and \(\mathcal{N}^{\Lambda}\), the collections of time pairs \(\{s,t\}\) such that \(V_{t,s}\) is respectively CPTP and non-CPTP (see Eqs. (1) and (2)).

Non-Markovian depolarizing evolutions have non-monotonic characteristic functions. An increase of \(f(t)\) in a given time interval corresponds to a non-CPTP intermediate map. Moreover, in the same time interval the trace distance between any two states increases, namely a distinguishability backflow. The largest distinguishability backflows are provided by initially orthogonal states, for which the trace distance is equal to \(2f(t)\) (see Eq. (27)). We consider the flux non-Markovianity measure \(M^{D}\) in case of no ancillary systems (see Eq. (18)): \[M^{D}(\mathbf{\Lambda})=\int_{\sigma^{D}>0}\sigma^{D}(\Lambda_{t}(\rho_{S,1}),\Lambda_{t}(\rho_{S,2}))dt=2\int_{f^{\prime}>0}f^{\prime}(t)dt=2\sum_{i}\left[f(t_{fin,i})-f(t_{in,i})\right]=2\Delta\,, \tag{29}\] where \(\rho_{S,1}\), \(\rho_{S,2}\) are any two orthogonal states, \((t_{in,i},t_{fin,i})\) is the \(i\)-th time interval when \(f^{\prime}(t)>0\) and \(\Delta>0\) is the sum of all the revivals of \(f(t)\). Finally, the non-Markovianity measure given in Eq. (23) is equal to \(M^{mix}(\mathbf{\Lambda})=\Delta/(1+\Delta)\) [28].

### Backflows timing and PNM core
We are ready to evaluate \(T^{\Lambda}\), \(\tau^{\Lambda}\) and \(t^{\Lambda}\). We can rewrite Eqs. (3), (4) and (5) in terms of \(f(t)\) and \(f^{\prime}(t)\) as follows: \[T^{\Lambda}=\max\left\{\,T\,\left|\begin{array}{lll}(\mathrm{A})&f^{\prime}(t)\leq 0&\text{for all }t\leq T,\\ (\mathrm{B})&f(T)\geq f(t)&\text{for all }T\leq t,\\ (\mathrm{C})&f(T)\neq 1&\text{if }T>0,\end{array}\right.\right\} \tag{30}\] \[\tau^{\Lambda}=\inf\left\{\,T\,\left|\,f^{\prime}(T)>0\right.\right\}, \tag{31}\] \[t^{\Lambda}=\min\left\{\,T\,\left|\,f(T)=f(T^{\Lambda})\text{ for }T>T^{\Lambda}\right.\right\}, \tag{32}\] where the last equality holds because Eq. (30) implies that \(f(T^{\Lambda})>f(T^{\Lambda}+\epsilon)\) for infinitesimal \(\epsilon>0\). As expected, condition (A) of Eq. (30) implies that \(\mathbf{\Lambda}\) behaves as a Markovian depolarizing evolution in the time interval \([0,T^{\Lambda}]\). Secondly, by considering (A) and (B) together, we can state that \(f(T^{\Lambda})\in(0,1)\). As discussed in Section III.3, in case \(\mathbf{\Lambda}\) is non-invertible and \(t^{NB}\) is the earliest time when \(f(t^{NB})=0\), we should add to Eq. (30) the constraint \(T^{\Lambda}<t^{NB}\). Anyway, as we show in Appendix F, even without imposing such a constraint, \(T^{\Lambda}<t^{NB}\).

We proved that for a generic evolution \(0\leq T^{\Lambda}\leq\tau^{\Lambda}\leq t^{\Lambda}\) (see Eq. (6)). Nonetheless, depolarizing evolutions are always characterized by \(0\leq T^{\Lambda}<\tau^{\Lambda}<t^{\Lambda}\leq\infty\). The first equality is obtained for PNM depolarizing evolutions, while the last in case \(f(T^{\Lambda})>f(t)\) for all \(t>T^{\Lambda}\) and \(\lim_{t\to\infty}f(t)=f(T^{\Lambda})\).

We obtain the PNM core of a NNM depolarizing evolution by exploiting the method presented in Section III.1. Hence, if we apply Eq. (10) to the intermediate maps of a NNM depolarizing evolution \(\mathbf{\Lambda}\) characterized by \(f(t)\), we obtain the PNM depolarizing evolution \(\overline{\mathbf{\Lambda}}\) characterized by \(\overline{f}(t)=f(t+T^{\Lambda})/f(T^{\Lambda})\). We can easily verify that it is a valid characteristic function (\(\overline{f}(t)\in[0,1]\) and \(\overline{f}(0)=1\)) and that \(T^{\overline{\Lambda}}=0\). Therefore, the corresponding dynamical maps are: \[\overline{\Lambda}_{t}(\,\cdot\,)=\overline{f}(t)I_{S}(\,\cdot\,)+\left(1-\overline{f}(t)\right)\mathrm{Tr}\left[\,\cdot\,\right]\frac{\mathbb{I}_{S}}{d}=\frac{f(t+T^{\Lambda})}{f(T^{\Lambda})}I_{S}(\,\cdot\,)+\left(1-\frac{f(t+T^{\Lambda})}{f(T^{\Lambda})}\right)\mathrm{Tr}\left[\,\cdot\,\right]\frac{\mathbb{I}_{S}}{d}\,. \tag{34}\]

The NNM evolution \(\mathbf{\Lambda}\) can be decomposed into an initial Markovian pre-processing stage, described by \(\Lambda_{t}\) for \(t\in[0,T^{\Lambda}]\), followed by the action of the PNM evolution \(\overline{\mathbf{\Lambda}}\) (see Eq. (12)). As we explained, \(\overline{\mathbf{\Lambda}}\) is nothing but \(\mathbf{\Lambda}\) without the resultant of its Markovian pre-processing \(\Lambda_{T^{\Lambda}}\), which not only is useless for the appearance of non-Markovian phenomena but also damps information backflows. Indeed, we can apply Corollary 2 and conclude that whenever we can obtain a distinguishability backflow with \(\mathbf{\Lambda}\) in a time interval \([s,t]\), we can observe a backflow with \(\overline{\mathbf{\Lambda}}\) in the time interval \([s-T^{\Lambda},t-T^{\Lambda}]\), where the proportionality factor between the two revivals is \(1/f(T^{\Lambda})>1\).
As expected, \(\overline{\mathbf{\Lambda}}\) is characterized by larger non-Markovianity measures than \(\mathbf{\Lambda}\): \[M^{D}(\overline{\mathbf{\Lambda}})=\frac{2\Delta}{f(T^{\Lambda})}>M^{D}(\mathbf{\Lambda})=2\Delta\,, \tag{35}\] \[M^{mix}(\overline{\mathbf{\Lambda}})=\frac{\Delta}{f(T^{\Lambda})+\Delta}>M^{mix}(\mathbf{\Lambda})=\frac{\Delta}{1+\Delta}\,. \tag{36}\] It can be proven that similar results hold true for the measures \(M^{W,max}\) and \(M^{W,art}\) (see Eqs. (20) and (21)).

We conclude by noticing that all PNM depolarizing evolutions \(\overline{\mathbf{\Lambda}}\) completely retrieve the initial information of the system at time \(t^{\overline{\Lambda}}\). In particular, all PNM depolarizing evolutions satisfy the conditions of Proposition 3, where \(\overline{\Lambda}_{t^{\overline{\Lambda}}}=I_{S}\). This result follows from the observation that \(f(t^{\Lambda})=f(T^{\Lambda})\), and therefore all PNM depolarizing evolutions are such that \(\overline{f}(t^{\overline{\Lambda}})=\overline{f}(0)=1\). Notice that \(t^{\overline{\Lambda}}\) may be divergent.

### Example

We show how to apply our results to a simple characteristic function \(f(t)\) representing a NNM depolarizing evolution \(\mathbf{\Lambda}\). The toy model considered here is given by \(f(t)=(1-3t+2t^{2}+2t^{3})/(1+t^{2}+t^{3}+3t^{5})\) (see Figure 3): a continuous function with a single time interval of increase and an infinitesimal asymptotic behaviour. We start by calculating the times \(T^{\Lambda}\), \(\tau^{\Lambda}\) and \(t^{\Lambda}\). Hence, we consider \(\mathcal{P}^{\Lambda}\) and \(\mathcal{N}^{\Lambda}\), the sets containing the pairs of times \(\{s,t\}\) such that \(V_{t,s}\) is respectively CPTP and non-CPTP (see Eqs. (1) and (2)). We can obtain these sets by noticing that the smallest eigenvalue \(\lambda_{t,s}=(1-f(t)/f(s))/d^{2}\) of the Choi state of \(V_{t,s}\) is non-negative if and only if \(V_{t,s}\) is CPTP. The same analysis is performed for the corresponding PNM core \(\overline{\mathbf{\Lambda}}\).

We start with a technical analysis of \(f(t)\). Standard numerical methods lead to \(T^{\Lambda}\simeq 0.275\), \(\tau^{\Lambda}\simeq 0.495\) and \(t^{\Lambda}\simeq 1.040\). It is possible to have increases of \(f(t)\) only in time intervals \([s,t]\) starting later than \(T^{\Lambda}\). Moreover, as explained by Proposition 1, these increases take place for a continuum of initial times: \(f(t^{\Lambda})-f(s)>0\) for all \(s\in(T^{\Lambda},t^{\Lambda})\). Instead, if we consider an initial time \(s\) earlier than \(T^{\Lambda}\), the characteristic function cannot increase: \(f(t)-f(s)<0\) for \(s<T^{\Lambda}\) and \(s<t\). The time \(\tau^{\Lambda}\) is the first time after which \(f^{\prime}(t)>0\). Moreover, \(f^{\prime}(t)>0\) only for \(t\in(t_{in},t_{fin})=(\tau^{\Lambda},t^{\Lambda})\), where the total revival is \(\Delta=f(t^{\Lambda})-f(\tau^{\Lambda})\simeq 0.164\).

We now analyse \(f(t)\) from the point of view of information backflows. The characteristic function \(f(t)\) is directly connected with the time-dependent distinguishability \(D(\rho_{S,1}(t),\rho_{S,2}(t))\) of two states evolving under \(\mathbf{\Lambda}\) (see Eq. (27)). In the first time interval \([0,T^{\Lambda}]\) information is lost and never recovered. Indeed, we called this noise _useless_ for non-Markovian phenomena and the resultant noise \(\Lambda_{T^{\Lambda}}\) represents a Markovian pre-processing.
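The quoted characteristic times are easy to reproduce numerically. The following Python sketch, a simple grid-based evaluation of Eqs. (30)-(32) (ours; the routine behind the paper's figures may differ), recovers \(T^{\Lambda}\), \(\tau^{\Lambda}\), \(t^{\Lambda}\), \(\Delta\) and the associated measures for this toy model:

```python
import numpy as np

def f(t):  # toy characteristic function of this Example
    return (1 - 3*t + 2*t**2 + 2*t**3) / (1 + t**2 + t**3 + 3*t**5)

t = np.linspace(0.0, 3.0, 300001)
ft = f(t)

tau = t[np.argmax(np.gradient(ft, t) > 0)]   # first time with f' > 0, Eq. (31)

# f has a single revival that peaks at t^Lambda with f(t^Lambda) = f(T^Lambda),
# so f(T^Lambda) equals the largest value of f after tau (Eqs. (30) and (32))
rev = ft[t >= tau]
f_T = rev.max()
t_L = t[t >= tau][np.argmax(rev)]            # t^Lambda ~ 1.040
T = t[np.argmax(ft <= f_T)]                  # T^Lambda ~ 0.275 (f decreases on [0, tau])
Delta = f_T - f(tau)                         # total revival ~ 0.164

print(T, tau, t_L, f_T)                      # ~0.275, ~0.495, ~1.040, ~0.334
print(2*Delta, Delta/(1 + Delta))            # M^D ~ 0.328, M^mix ~ 0.141
print(1/f_T, 2*Delta/f_T, Delta/(f_T + Delta))  # ~2.99, ~0.983, ~0.329 (PNM core)
```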
As discussed above, the damping of the initial Markovian pre-processing is quantified by \(f(T^{\Lambda})\simeq 0.334\). In the time interval \([T^{\Lambda},\tau^{\Lambda}]\) the system keeps losing information. Differently from the noise in \([0,T^{\Lambda}]\), this noise is _essential_ for the following non-Markovian phenomena. Indeed, we have increases \(f(t^{\Lambda})-f(s)>0\) for all the intervals \([s,t^{\Lambda}]\) with \(s\in(T^{\Lambda},\tau^{\Lambda})\). The maximum information backflow is obtained in \([\tau^{\Lambda},t^{\Lambda}]\), when the system recovers information from the environment at all times (\(f^{\prime}(t)>0\)). Moreover, at time \(t^{\Lambda}\), the system goes back to the state assumed at time \(T^{\Lambda}\) (\(f(t^{\Lambda})=f(T^{\Lambda})\)), namely when the useless noise ended and the essential noise started.

The characteristic function of the corresponding PNM core \(\overline{\mathbf{\Lambda}}\) is \(\overline{f}(t)=f(t+T^{\Lambda})/f(T^{\Lambda})\) (see Figure 3). We use Eq. (11) and get the characteristic times \(\tau^{\overline{\Lambda}}\simeq 0.220\) and \(t^{\overline{\Lambda}}\simeq 0.765\) (\(T^{\overline{\Lambda}}=0\) because \(\overline{\mathbf{\Lambda}}\) is PNM). The total increase of \(\overline{f}(t)\) is \(\overline{\Delta}=\Delta/f(T^{\Lambda})\simeq 0.491\). If we compare the non-Markovian effects of \(\mathbf{\Lambda}\) and \(\overline{\mathbf{\Lambda}}\), any distinguishability backflow is amplified by a factor \(1/f(T^{\Lambda})\simeq 2.990\) (see Corollary 2) and through Eqs. (35) and (36) we can evaluate the values of the corresponding non-Markovianity measures: \(M^{D}(\overline{\mathbf{\Lambda}})\simeq 0.983>M^{D}(\mathbf{\Lambda})\simeq 0.328\) and \(M^{mix}(\overline{\mathbf{\Lambda}})\simeq 0.329>M^{mix}(\mathbf{\Lambda})\simeq 0.141\).

The main qualitative difference between \(\mathbf{\Lambda}\) and the corresponding PNM core \(\overline{\mathbf{\Lambda}}\) is the presence of a time when all the initial information is recovered. If the system is evolved by \(\overline{\mathbf{\Lambda}}\), any possible type of information is _completely recovered_ to its original value at time \(t^{\overline{\Lambda}}\). Indeed, \(\overline{f}(t^{\overline{\Lambda}})=1\) and the dynamical map at this time is equal to the identity, namely \(\overline{\Lambda}_{t^{\overline{\Lambda}}}=I_{S}\). For instance, any pair of initially orthogonal states \(\{\overline{\Lambda}_{t}(\rho_{S,1}),\overline{\Lambda}_{t}(\rho_{S,2})\}\) goes from being perfectly distinguishable, to non-perfectly distinguishable for any \(t\in(0,t^{\overline{\Lambda}})\), and then back to perfectly distinguishable at time \(t^{\overline{\Lambda}}\). As noticed above, all PNM depolarizing evolutions completely restore the initial information content of the system at time \(t^{\overline{\Lambda}}\), namely \(\overline{f}(t^{\overline{\Lambda}})=1\) for all PNM depolarizing \(\overline{\mathbf{\Lambda}}\). Finally, we can see how the initial noise in this dynamics is essential for the following non-Markovian phenomena to happen. Indeed, as soon as we take a non-zero time \(s\in(0,t^{\overline{\Lambda}})\), we have a distinguishability backflow in the time interval \([s,t^{\overline{\Lambda}}]\).

## VIII Quasi-eternal non-Markovian model

We briefly introduce a qubit model to show the existence of evolutions with \(T^{\Lambda}<\tau^{\Lambda}<t^{\Lambda}=\infty\) and \(T^{\Lambda}=\tau^{\Lambda}=t^{\Lambda}\). The dynamics in question are called quasi-eternal non-Markovian [24], which generalize the well-known qubit eternal non-Markovian model [21; 22; 23].
First, we define Pauli evolutions as those having dynamical maps with the following form \[\Lambda_{t}(\,\cdot\,)=\sum_{i=0,x,y,z}p_{i}(t)\sigma_{i}(\,\cdot\,)\sigma_{i}\,, \tag{37}\] where \(\sigma_{x,y,z}\) are the Pauli operators, \(\sigma_{0}=\mathbb{1}\), and \(p_{0}(t)=1-p_{x}(t)-p_{y}(t)-p_{z}(t)\). The Pauli map is CPTP if and only if \(p_{0,x,y,z}(t)\geq 0\). The easiest way to appreciate the non-Markovian features of Pauli evolutions is given by studying the corresponding master equation, namely the first-order differential equation defining the evolution of the corresponding system density matrix: \[\frac{d}{dt}\rho_{S}(t)=\sum_{i=x,y,z}\gamma_{i}(t)(\sigma_{i}\rho_{S}(t)\sigma_{i}-\rho_{S}(t))\,, \tag{38}\] where \(\gamma_{i}(t)\) are time-dependent real functions. It can be proven that \(\gamma_{i}(t)\geq 0\) for all \(i=x,y,z\) and \(t\geq 0\) if and only if the corresponding evolution \(\mathbf{\Lambda}\) is Markovian [17]. Moreover, if \(\gamma_{i}(t)+\gamma_{j}(t)\geq 0\) for all \(i\neq j\) and \(t\geq 0\), the evolution is P-divisible, namely \(V_{t,s}\) is at least positive (but not necessarily completely positive) for all \(s\leq t\).

The probabilities and the rates that define the quasi-eternal model are: \[p_{x}(t)=p_{y}(t)=\frac{1-e^{-2\alpha t}}{4}\,,\qquad p_{z}(t)=\frac{1}{4}\left(1+e^{-2\alpha t}-\frac{2e^{-\alpha t}\cosh^{\alpha}(t-t_{0})}{\cosh^{\alpha}(t_{0})}\right)\,, \tag{39}\] \[\{\gamma_{x}(t),\gamma_{y}(t),\gamma_{z}(t)\}=\frac{\alpha}{2}\{1,1,-\tanh(t-t_{0})\}\,, \tag{40}\] where these time-dependent parameters generate maps \(\Lambda_{t}\) that are CPTP at all times if and only if \(\alpha>0\) and \[t_{0}\geq t_{0,\alpha}=\max\{0,(\log(2^{1/\alpha}-1))/2\}\,,\] where \(t_{0,\alpha}>0\) for \(\alpha\in(0,1)\) and \(t_{0,\alpha}=0\) for \(\alpha\geq 1\) [24]. We call quasi-eternal non-Markovian the Pauli evolution defined by the probabilities (39), or equivalently the solution of the master equation (38) with rates (40), where \(t_{0}\geq t_{0,\alpha}\). These evolutions are P-divisible and, since \(\gamma_{z}(t)<0\) for \(t>t_{0}\), the infinitesimal intermediate maps are non-CPTP for all \(t>t_{0}\).

The intermediate map of a Pauli evolution assumes the Pauli form, namely \(V_{t,s}(\,\cdot\,)=\sum_{i=0,x,y,z}p_{i}(s,t)\sigma_{i}(\,\cdot\,)\sigma_{i}\), where: \[p_{x}(s,t)=p_{y}(s,t)=\frac{1-e^{-2\alpha(t-s)}}{4}\,,\qquad p_{z}(s,t)=\frac{1}{4}\left(1+e^{-2\alpha(t-s)}-\frac{2e^{-\alpha(t-s)}\cosh^{\alpha}(t-t_{0})}{\cosh^{\alpha}(s-t_{0})}\right)\,. \tag{41}\] Notice that, as for any Pauli channel, the intermediate map \(V_{t,s}\) is CPTP if and only if \(p_{0,x,y,z}(s,t)\geq 0\). The lowest eigenvalue of the Choi state of \(V_{t,s}\) is \(\lambda_{t,s}=p_{z}(s,t)\). In Figure 4 we represent \(\mathcal{P}^{\Lambda}\) and \(\mathcal{N}^{\Lambda}\) for three PNM evolutions from this family, namely the collections of time pairs \(\{s,t\}\) such that \(V_{t,s}\) is respectively CPTP and non-CPTP. We see that for \(\alpha\in(0,1)\) we have \(T^{\Lambda}<\tau^{\Lambda}=t^{\Lambda}\), while for \(\alpha\geq 1\) we have \(T^{\Lambda}=\tau^{\Lambda}=t^{\Lambda}\). We prove that \(T^{\Lambda}=t_{0}-t_{0,\alpha}\) and \(\tau^{\Lambda}=t_{0}\) (see Appendix G). The latter result is a direct consequence of the form of the master equation, which has negative rates if and only if \(t>t_{0}\). Indeed, \(V_{t+\epsilon,t}\) is CPTP for infinitesimal \(\epsilon\) if and only if \(\gamma_{x,y,z}(t)\geq 0\).
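Since \(\lambda_{t,s}=p_{z}(s,t)\), the sets \(\mathcal{P}^{\Lambda}\) and \(\mathcal{N}^{\Lambda}\) can be mapped out by simply evaluating the sign of Eq. (41) on a grid; a minimal Python sketch (the value of \(\alpha\) and the grid are illustrative choices, with \(t_{0}=t_{0,\alpha}\) so that the evolution is a PNM member of the family):

```python
import numpy as np

alpha = 0.5
t0 = max(0.0, 0.5 * np.log(2**(1/alpha) - 1))   # t_{0,alpha}; here t0 = t_{0,alpha}

def p_z(s, t):
    """p_z(s,t) of Eq. (41), the smallest Choi eigenvalue of V_{t,s}."""
    return 0.25 * (1 + np.exp(-2*alpha*(t - s))
                   - 2*np.exp(-alpha*(t - s))
                     * np.cosh(t - t0)**alpha / np.cosh(s - t0)**alpha)

s, t = np.meshgrid(np.linspace(0, 4, 401), np.linspace(0, 4, 401))
future = t > s
in_P = p_z(s, t) >= 0                  # True -> {s,t} in P^Lambda, False -> N^Lambda
print((in_P & future).sum() / future.sum())   # CPTP fraction of intermediate maps
```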
Interestingly, we can appreciate a peculiar scenario for \(\alpha>1\), where we obtain a CPTP map through the composition of non-CPTP maps. Without loss of generality, we fix \(t_{0}=t_{0,\alpha}=0\). There exist initial times \(s>0\) and times \(t^{\prime}>s\) such that \(V_{t,s}\) is non-CPTP for all \(t\in(s,t^{\prime})\), while \(V_{t,s}\) is CPTP for all \(t\geq t^{\prime}\). Notice that, since \(\gamma_{z}(t)<0\) for all \(t>0\), \(V_{t+\epsilon,t}\) is non-CPTP for infinitesimal \(\epsilon>0\) and all \(t>0\). Therefore, if we consider \(t_{1}<t^{\prime}<t_{2}\), we have that \(V_{t_{1},s}\) is non-CPTP and \(V_{t_{2},s}\) is CPTP. The latter map can be obtained via the composition of \(V_{t_{1},s}\) with infinitesimal intermediate maps as follows: \(V_{t_{2},s}=V_{t_{2},t_{2}-\epsilon}\circ\cdots\circ V_{t_{1}+\epsilon,t_{1}}\circ V_{t_{1},s}\), namely the CPTP map is obtained by composing infinitesimal non-CPTP maps \(V_{t+\epsilon,t}\) with the non-CPTP intermediate map \(V_{t_{1},s}\). The composition of infinitesimal intermediate maps that we wrote corresponds to \(V_{t_{2},t_{1}}=V_{t_{2},t_{2}-\epsilon}\circ\cdots\circ V_{t_{1}+\epsilon,t_{1}}\), which, depending on \(t_{1}\) and \(t_{2}\), can be either CPTP or not.

Finally, a simple variation of this model leads to a trivial example of \(T^{\Lambda}<\tau^{\Lambda}=t^{\Lambda}\), where we exploit condition (C) of Eq. (3). Consider an evolution that is unitary in an initial time interval, namely \(\Lambda_{t}=U_{t}\) is unitary for \(t\in[0,t_{U}]\), and later behaves as an eternal PNM evolution with \(\alpha>1\) and \(t_{0}=0\). Such an evolution would, for instance, be given by integrating the master equation (38) with rates \(\{\gamma_{x}(t),\gamma_{y}(t),\gamma_{z}(t)\}=\theta(t-t_{U})\{1,1,-\tanh(t-t_{U})\}\), where \(\theta(x)=1\) for \(x\geq 0\) and it is zero-valued otherwise. Indeed, in \([0,t_{U}]\) the evolution would correspond to the identity and \(0=T^{\Lambda}<\tau^{\Lambda}=t^{\Lambda}=t_{U}\).

## IX Discussion

We studied the difference between two types of initial noise in non-Markovian evolutions: essential noise makes the system lose the very information that takes part in later backflows, while the information lost with useless noise is never recovered. Indeed, this last type of noise can be compared to a Markovian pre-processing of the system. We identified as PNM those evolutions showing only essential noise, while NNM evolutions have both types of noise. We proved that any NNM evolution can be simulated as a Markovian pre-processing, which generates the useless noise, followed by a PNM evolution, which represents the (pure) non-Markovian core of the evolution.

In order to distinguish between PNM and NNM, we introduced a temporal framework that aims to describe the timing of fundamental non-Markovian phenomena. We identified the most distinguishable classes arising from this framework, where PNM and NNM evolutions fit naturally. Then, we derived several mathematical features connected with this classification, generalized this approach to non-divisible evolutions and later focused on the phenomenological side of this topic. Indeed, we addressed the problem of finding which backflows and non-Markovianity measures are amplified when PNM evolutions are compared with their corresponding noisy versions, proposing constructive and measurable results within the context of state distinguishability.
We studied how the entanglement breaking property is lost or preserved when we compare PNM cores and their corresponding NNM evolutions. Moreover, we discussed the possibility of activating correlation backflows when we extract the PNM core out of NNM evolutions. Finally, we studied several examples in order to show how to extract PNM cores, clarify the possible scenarios concerning the timings of non-Markovian phenomena and explain why useless noise has the only role of suppressing the backflows generated by the PNM core.

We claimed that some classes of evolutions, such as dephasing and amplitude damping, satisfy the conditions of Proposition 3, namely the corresponding dynamical map goes from being non-unitary to unitary. Nonetheless, as we showed with an explicit example in Section III.2, not all PNM evolutions satisfy this property. It would be interesting to understand which are the minimal conditions under which a given class of NM evolutions has its PNM representative satisfying Proposition 3. A reasonable class could be given by the one-parameter evolutions, as described in Ref. [24], namely those with a single rate in the corresponding Lindblad master equation. More generally, concerning the possibility to lose and completely recover some type of information during the evolution, it would be interesting to understand whether PNM evolutions always enjoy this property for at least one quantifier, namely: _"If an evolution is PNM, then there exist an information quantifier and an initialization that during the dynamics loses and then recovers the initial information"_.

We analysed when and to what extent distinguishability backflows are amplified by PNM cores (see Section IV). Moreover, we gave a constructive method to build the states that provide the largest backflows. It would be interesting to generalize this approach to other information quantifiers, such as the distinguishability of state ensembles [12], the Fisher information [35; 36] or correlations [32; 24; 33]. In Section V we saw that PNM evolutions have non-Markovianity measures that cannot be smaller than those of the associated NNM evolutions. Moreover, we gave conditions under which PNM evolutions have strictly larger non-Markovianity measures connected with the flux of state distinguishability. Understanding in which other cases and to what extent this strict inequality can be obtained with other information quantifiers and other non-Markovianity measures would be interesting. Finally, another interesting topic would be to understand whether the extraction of the PNM core can lead to the activation of some non-Markovian phenomena, as we discussed in the context of correlation backflows.

## Acknowledgements

The author would like to thank Antonio Acin and Giulio De Santis for illuminating discussions. This work was supported by the research project "Dynamics and Information Research Institute - Quantum Information, Quantum Technologies" within the agreement between UniCredit Bank and Scuola Normale Superiore di Pisa (CI14_UNICREDIT_MARMI), the Spanish Government (FIS2020-TRANQI and Severo Ochoa CEX2019-000910-S), the ERC AdG CERQUTE, the AXA Chair in Quantum Information Science, Fundacio Cellex, Fundacio Mir-Puig and Generalitat de Catalunya (CERCA, AGAUR SGR 1381).
2306.16970
RF plugging of multi-mirror machines
One of the main challenges of fusion reactors based on magnetic mirrors is the axial particle loss through the loss cones. In multi-mirror (MM) systems, the particle loss is addressed by adding mirror cells on each end of the central fusion cell. Coulomb collisions in the MM sections serve as the retrapping mechanism for the escaping particles. Unfortunately, the confinement time in this system only scales linearly with the number of cells in the MM sections and requires an unreasonably large number of cells to satisfy the Lawson criterion. Here, it is suggested to reduce the outflow by applying a traveling RF electric field that mainly targets the particles in the outgoing loss cone. The Doppler shift compensates for the detuning of the RF frequency from the ion cyclotron resonance mainly for the escaping particles resulting in a selectivity effect. The transition rates between the different phase space populations are quantified via single-particle calculations and then incorporated into a semi-kinetic rate equations model for the MM system, including the RF effect. It is found that for optimized parameters, the confinement time can scale exponentially with the number of MM cells, orders of magnitude better than a similar MM system of the same length but without the RF plugging, and can satisfy the Lawson criterion for a reasonable system size.
Tal Miller, Ilan Be'ery, Eli Gudinetsky, Ido Barth
2023-06-29T14:23:40Z
http://arxiv.org/abs/2306.16970v2
# RF plugging of multi-mirror machines

###### Abstract

One of the main challenges of fusion reactors based on magnetic mirrors is the axial particle loss through the loss cones. In multi-mirror (MM) systems, the particle loss is addressed by adding mirror cells on each end of the central fusion cell. Coulomb collisions in the MM sections serve as the retrapping mechanism for the escaping particles. Unfortunately, the confinement time in this system only scales linearly with the number of cells in the MM sections and requires an unreasonably large number of cells to satisfy the Lawson criterion. Here, it is suggested to reduce the outflow by applying a traveling RF electric field that mainly targets the particles in the outgoing loss cone. The Doppler shift compensates for the detuning of the RF frequency from the ion cyclotron resonance mainly for the escaping particles, resulting in a selectivity effect. The transition rates between the different phase space populations are quantified via single-particle calculations and then incorporated into a semi-kinetic rate equations model for the MM system, including the RF effect. It is found that for optimized parameters, the confinement time can scale exponentially with the number of MM cells, orders of magnitude better than a similar MM system of the same length but without the RF plugging, and can satisfy the Lawson criterion for a reasonable system size.

## I Introduction

Axial confinement is one of the main challenges for sustainable fusion in linear magnetically confined systems based on the mirroring effect. These open trap systems are advantageous for their simplicity, high-\(\beta\) (the ratio of plasma to magnetic pressures), and continuous mode of operation, but suffer from radial instabilities and axial losses. Radial instabilities can be mitigated via passive, [1; 2; 3] radio frequency (RF), [3; 4; 5; 6] or active [7; 8] methods, while solutions for the outgoing flux through the loss-cone commonly involve geometrical modifications of the mirroring magnetic field. The variety of design concepts to reduce the loss-cone flux includes tandem plugs with thermal barriers, [9; 10; 11; 12; 13; 14; 15; 16; 17; 18] diamagnetic confinement, [19; 20] multi-mirror (MM) systems, [21; 22; 23; 24; 25; 26; 27; 28; 29; 30] moving multiple mirrors, [31; 32] helical mirrors with rotating plasma, [33; 34; 35; 36; 37] ponderomotive RF plugging, [38; 39; 40; 41; 42; 43; 44; 45; 46; 47] and plugging using a field reversed configuration (FRC) at the mirror throats. [48]

MM systems are characterized by the two sections of multiple magnetic mirrors attached to each side of the central mirror cell, where fusion occurs. As a particle escapes the central cell into the right or left MM sections, it can be collisionally scattered out of the loss cone of one of the MM cells and later be scattered again back into the central cell. This process can be modeled as a one-dimensional diffusion dynamics, where we expect the axial flux to scale inversely with the number of cells in the MM section. The collisionality, which depends on the temperature and density profiles along the system, determines the number of cells needed for reducing the loss-cone flux to a desired level. Moreover, the collisionality in this scheme is required to be high enough, i.e., a Coulomb mean free path of the order of the mirror cell length, resulting in a requirement for lower temperatures and higher densities, which are sub-optimal for fusion. [21; 30; 49]
MM machines are commonly considered isothermal systems due to fast electron thermalization across the system, [24] resulting in a linear scaling of the confinement time with the system length. In a recent study, we developed a semi-kinetic rate equations model for MM systems that divides the ions in each mirror cell into three populations (trapped, escaping, and returning) and includes the processes of Coulomb scattering within each cell and the transmission between neighboring cells. The steady-state solution of the rate equations yields the density profile within the MM section and the outgoing axial flux. It was found that the scaling of the outgoing flux with the system's size depends on the thermodynamic scenario. The best confinement was obtained for isentropic systems, where the plasma adiabatically cools down as the density decreases within the MM section, and the mean free path drops along the MM section. However, even in the most optimistic thermodynamic scenario, the scaling of the confinement time with the system's length requires an impractically large number of cells to satisfy the Lawson criterion. [49; 50; 51] Therefore, new confinement enhancement methods should be developed in the quest for sustainable fusion in MM machines.

RF fields are widely used in magnetically confined plasma systems, including axial plugging of mirror systems by the aforementioned ponderomotive effect, [38; 39; 40; 41; 42; 43; 44; 45; 46; 47] radial stabilization of magnetic mirror systems, [52; 53; 54; 55; 56; 57; 58; 59] stabilization of toroidal systems, [60; 61; 62] plasma heating, [63; 64; 65] and current drive. [66; 67] In this work, we suggest a new RF plugging method for MM systems. The idea is to apply an external RF electric field resonantly coupled with the ion cyclotron frequency in the moving frame of the outgoing particles. To this end, we employ a radially rotating electric field with a frequency slightly detuned from the ions' Larmor frequency and with a non-zero axial wave vector such that the Doppler shift compensates for the resonance mismatch only for outgoing particles. As a result, the RF field resonantly increases the perpendicular energy of the escaping particles only, so they can be recaptured in the magnetic mirror cell. In contrast, it mildly affects the returning particles, which have the opposite axial velocities. This selection effect yields a significant confinement enhancement, which we study via single particle simulations and the semi-kinetic rate equations model.

The structure of the paper is as follows. Sec. II introduces the considered static magnetic mirror field and the external RF field. Sec. III studies the effect of the RF fields on the time evolution of particles in phase space via single particle simulations. Sec. IV integrates the simulation results over the distribution of the fuel particles and evaluates the transition rates between different populations in phase space. Sec. V incorporates the effect of the RF fields into an extended rate equations model for MM systems and calculates the confinement enhancement resulting from the RF fields. Sec. VI summarizes the findings.
## II Field configurations

The full MM system comprises one central fusion cell and two MM sections.[21; 22; 23; 24; 25; 26; 27; 28; 29; 30] To study the confinement improvement of adding the MM sections, we consider only one side of the MM sections, say the right one, and model it by a periodic magnetic field with an axial component of the form[21] \[B_{z}=B_{0}\left[1+\left(R_{m}-1\right)\exp\left(-5.5\sin^{2}\frac{\pi z}{l}\right)\right] \tag{1}\] where \(B_{0}\) is the minimal magnetic field, \(R_{m}=B_{max}/B_{0}\) is the mirror ratio, and \(l\) is the length of each MM cell. The other magnetic field components in cylindrical coordinates, \(\left(r,\theta,z\right)\), are \(B_{\theta}=0\) and, to leading order in \(r\), \(B_{r}=-\frac{r}{2}\frac{\partial}{\partial z}B_{z}\), to satisfy \(\nabla\cdot\mathbf{B}=0\). Fig. 1 illustrates the MM system and the axial component of the magnetic field, \(B_{z}\).

The magnetic mirror trapping criterion is \(\left(v_{\perp}/v\right)^{2}>R_{m}^{-1}\), where \(v=\sqrt{v_{z}^{2}+v_{\perp}^{2}}\) is the total velocity with axial (\(v_{z}\)) and transverse (\(v_{\perp}\)) components that are measured at the minimum of \(B_{z}\), i.e., at the center of the magnetic mirror cell \(\left(z=l/2\right)\). Therefore, manipulating the velocity ratio, \(v_{\perp}/v\), for example by RF fields, is an appealing path to enhance trapping, especially in MM systems where the effect on the pitch angle can be accumulated along the MM section. The simplest approach is to apply an RF field near the ion cyclotron resonance in each MM cell. This approach acts symmetrically on both right- and left-going particles. However, in MM machines, an asymmetric drive would be preferable since one direction is inward (toward the fusion cell) while the other is outward (outside the system). Moreover, unlike in the central (fusion) cell, plasma heating in the MM sections does not contribute to the fusion rate. It may even be deleterious for confinement as it reduces the collision rate, which is the salient re-trapping mechanism in standard MM systems.[22; 25; 49] Therefore, it is favorable to find ways to deliver transverse energy to outgoing particles while minimizing plasma heating.

An avenue that, as far as we know, has not yet been explored in MM systems is to apply traveling RF fields that are resonant only with the outgoing particles due to the combination of frequency detuning and Doppler shift. For each MM cell, we consider an electric field induced by external electrodes of the form \[\mathbf{E}=E_{RF}\left[\cos\left(k_{RF}z-\omega_{RF}t\right)\hat{x}+\sin\left(k_{RF}z-\omega_{RF}t\right)\hat{y}\right] \tag{2}\] where \(E_{RF}\) is the electric field amplitude, \(\omega_{RF}\) is the field's frequency, and \(k_{RF}\) is the axial wave vector. Since the relevant parameters obey \(\left(\frac{\omega_{RF}z}{c}\right)^{2}\ll 1\) (see Sec. III), we employ the electrostatic approximation and neglect all higher-order corrections to the electric and magnetic fields. Nonetheless, we have verified this assumption numerically by including the first-order correction to the time-dependent magnetic field for some of the calculated trajectories, which resulted in negligible differences. The rotating electric field travels along the MM cell at velocity \(v_{RF}=\omega_{RF}/k_{RF}\). Therefore, particles with axial velocity \(v_{z}\) experience a Doppler-shifted RF frequency in their rest frame \[\omega_{rest}=\omega_{RF}-k_{RF}v_{z}. \tag{3}\]
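For concreteness, a minimal Python sketch of the field configuration of Eqs. (1)-(3) follows; the cell parameters are those quoted later in Sec. III, the RF values are one illustrative choice of the kind listed later in Table 1, and the finite-difference step in `Br` is an arbitrary numerical choice:

```python
import numpy as np

B0, Rm, l = 1.0, 3.0, 1.0        # T, mirror ratio, cell length [m] (Sec. III values)
E_RF = 50e3                      # V/m
w_RF = 0.80 * 4.75e7             # rad/s, 0.80 of the deuterium cyclotron frequency
k_RF = -3.0 * 2 * np.pi          # rad/m, i.e., -3.0 in units of 2*pi/m

def Bz(z):
    """Axial mirror field, Eq. (1)."""
    return B0 * (1 + (Rm - 1) * np.exp(-5.5 * np.sin(np.pi * z / l) ** 2))

def Br(r, z, h=1e-6):
    """Paraxial radial component, B_r = -(r/2) dB_z/dz, so that div(B) = 0."""
    return -0.5 * r * (Bz(z + h) - Bz(z - h)) / (2 * h)

def E_field(z, t):
    """Rotating RF field of Eq. (2), returned as (Ex, Ey, Ez)."""
    phase = k_RF * z - w_RF * t
    return np.array([E_RF * np.cos(phase), E_RF * np.sin(phase), 0.0])

def omega_rest(vz):
    """Doppler-shifted drive frequency in the particle frame, Eq. (3)."""
    return w_RF - k_RF * vz
```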
For \(k_{RF}=0\) (a spatially uniform RF field), particles of all velocities experience the same driving frequency. However, applying external fields with \(k_{RF}\neq 0\) allows differentiating between out-going and in-going particles with opposite \(v_{z}\), since the resonance condition, \(\omega_{rest}=\omega_{cyc}\), where \(\omega_{cyc}=qB/m\) is the ion cyclotron frequency, depends on the particle velocity, \(v_{z}\). For example, applying a driving field with \(k_{RF}<0\) and \(\omega_{RF}<\omega_{cyc}\) results in Doppler compensation for particles with positive axial velocity, \(v_{z}>0\), such that they can meet the resonance condition, \(\omega_{rest}=\omega_{cyc}\), for suitable values of \(v_{z}\). In contrast, the driving frequency in the rest frame of particles with \(v_{z}<0\) is down-shifted and therefore gets farther away from resonance for the same driving parameters. Similarly, if we pick \(k_{RF}>0\) and \(\omega_{RF}>\omega_{cyc}\), particles with \(v_{z}>0\) approach the resonance, while those with \(v_{z}<0\) get farther away.

Figure 1: An illustration of one side of a MM system (top) and the axial magnetic field of two MM cells used in this study (bottom).

Naively, both scenarios would exhibit a similar selectivity effect between ingoing and outgoing particles. Nonetheless, in the discussion above, we considered particles at a fixed location, e.g., in the mirror midplane where the magnetic field is minimal. But as a particle moves away from the center and approaches the mirror throat, the amplitude of the axial magnetic field increases, so \(\omega_{\rm cyc}\) increases while \(|v_{z}|\) decreases. Namely, for \(k_{RF}>0\) and \(\omega_{RF}>\omega_{\rm cyc}\), particles with \(v_{z}<0\) also approach resonance, resulting in a reduction in the desired selection effect. On the other hand, when \(k_{RF}<0\) and \(\omega_{RF}<\omega_{\rm cyc}\), only particles with \(v_{z}>0\) can experience Doppler compensation. Therefore, the latter scenario is more favorable for achieving selectivity. In Sec. III, we explore this selectivity effect analytically and numerically. Interestingly, a similar compensation between density gradient and frequency detuning for maintaining a selective and directional resonance condition while suppressing unwanted noise amplification was employed in Raman amplifiers.[68]

Finally, we note that the form of the rotating RF field defined in Eq. (2) is a simplified model for a real RF field configuration inside the plasma. In future work, we may study more complex yet more realistic electric and magnetic fields, for example, fields generated by the different types of Nagoya coils[41] or helicon antennas[69; 70], including the effects of plasma screening and field penetration.

## III Single particle simulations

To quantify the RF effect on particles in the mirror system, we employ the single-particle approximation and perform a Monte-Carlo analysis, where the initial velocities are sampled from a thermal distribution with an ion temperature of \(k_{B}T_{i}=10\)keV and a random direction. We consider one MM cell with the axial magnetic field given in Eq. (1), where \(l=1\)m, \(B_{0}=1\)T, and \(R_{m}=3\). Neglecting collisions and collective effects, we calculate the trajectories of deuterium (D) and tritium (T) ions under the influence of the Lorentz force \(\mathbf{F}=q\left(\mathbf{E}+\mathbf{v}\times\mathbf{B}\right)\). The static magnetic field and the time-dependent RF electric field are given in Eqs.
(1) and (2), respectively, and we used a symplectic integration scheme,[71] which preserves phase space volume. We calculate for both fusion species because a specific RF field might affect the two hydrogen isotopes differently due to the mass difference, yet both would exist in a D-T fusion reactor.

First, let us demonstrate the dynamics of a charged ion in velocity space under the influence of a traveling rotating electric field and the resulting selectivity effect for four sets of parameters, detailed in Table 1. For all the parameter sets, we picked the RF field amplitude \(E_{RF}=50\rm{kV/m}\), which is large but realistic and results in an appreciable effect, as will be shown below. The values of \(\omega_{RF}\) and \(k_{RF}\) in the different sets were chosen to represent typical scenarios of the diverse dynamical behavior of the system. In Table 1, the frequencies, \(\omega_{RF}\), are normalized by the ion cyclotron frequencies of deuterium and tritium at the mirror midplane, \(\omega_{0,D}=eB_{0}/m_{D}=4.75\cdot 10^{7}\rm{s^{-1}}\) and \(\omega_{0,T}=eB_{0}/m_{T}=3.17\cdot 10^{7}\rm{s^{-1}}\), respectively. The wave numbers, \(k_{RF}\), are given in units of \(2\pi m^{-1}\).

\begin{table} \begin{tabular}{||c||c c c||c c c c||c||} \hline **set** & \(\frac{\omega_{RF}}{\omega_{0,D}}\) & \(\frac{\omega_{RF}}{\omega_{0,T}}\) & \(\frac{k_{RF}}{2\pi m^{-1}}\) & \(\bar{N}_{rc}\) & \(\bar{N}_{lc}\) & \(\bar{N}_{cr}\) & \(\bar{N}_{cl}\) & \(s=\frac{\bar{N}_{rc}}{\bar{N}_{lc}}\) \\ \hline \hline 1 (D) & 1.12 & 1.68 & 3.0 & 0.31 & 0.39 & 0.02 & 0.02 & 0.8 \\ 1 (T) & & & & 0.63 & 0.16 & 0.02 & 0.03 & 3.8 \\ \hline 2 (D) & 0.80 & 1.20 & -3.0 & 0.64 & 0.12 & 0.01 & 0.02 & 5.3 \\ 2 (T) & & & & 0.38 & 0.40 & 0.03 & 0.02 & 0.9 \\ \hline 3 (D) & 0.56 & 0.84 & -3.0 & 0.68 & 0.08 & 0.02 & 0.02 & 9.0 \\ 3 (T) & & & & 0.59 & 0.14 & 0.02 & 0.03 & 4.2 \\ \hline 4 (D) & 0.44 & 0.66 & -7.0 & 0.50 & 0.05 & 0.02 & 0.02 & 10.7 \\ 4 (T) & & & & 0.36 & 0.06 & 0.01 & 0.02 & 6.2 \\ \hline \hline \end{tabular} \end{table} Table 1: The four sets of RF parameters (frequency and k-vector) used in this work. The rows denoted by D and T refer to the isotope-dependent saturation values of the transition probabilities, \(\bar{N}_{rc,lc,cr,cl}\), and to the selectivity parameter, \(s\), calculated in Sec. IV.

In the simulations, all particles were initialized at the center of a magnetic mirror cell (minimum axial magnetic field). The simulation time was twice the characteristic passage time of the slower isotope, tritium, across one mirror cell, \(2l/v_{th,T}\), where \(v_{th,T}=\sqrt{2k_{B}T/m_{T}}=7.97\cdot 10^{5}\rm{m/s}\) is the thermal velocity of tritium. Since in the absence of the external RF field the magnetic moment of both bouncing and passing particles is adiabatically conserved, the transverse and axial degrees of freedom exchange energy such that the total energy is conserved. Therefore, when the particle experiences a field \(B\) at a position \(z\), its perpendicular and axial velocity components, \(v_{\perp}\) and \(v_{z}\), respectively, can be mapped into those at the center of the mirror, \(\tilde{v}_{\perp}\) and \(\tilde{v}_{z}\), where the magnetic field is \(B_{0}\), via the transformation \[\tilde{v}_{\perp}=v_{\perp}\sqrt{B_{0}/B}\,,\qquad\tilde{v}_{z}=s\sqrt{v_{z}^{2}+v_{\perp}^{2}\left(1-B_{0}/B\right)}\,. \tag{4}\] Here, we defined \(s=\rm{sign}\left(v_{z}\right)\) to indicate the initial axial direction. Although in the presence of an external RF field this transformation is not precise, we use this approximated transformation in Fig. 2 to visualize the particle dynamics in velocity phase space under the influence of the RF. In each panel of the figure, we plot the dynamics in the projected mid-plane, \((\tilde{v}_{z},\tilde{v}_{\perp})\), for many different initial velocities. The different panels are associated with the different sets of RF parameters of Table 1 for both deuterium (D) and tritium (T). The dashed black lines indicate the approximated loss cone boundaries, \(\tilde{v}_{\perp}/\tilde{v}_{z}=(R_{m}-1)^{-1/2}\). The large dots represent the initial velocities \((v_{\perp,0},v_{z,0})\), and the faint lines connected to them depict the time evolution in the projected mid-plane according to the transformation in Eq. (4). The color indicates how much the RF fields affect each particle during the simulation time, according to the metric defined as the maximal displacement in the projected velocity plane \(\left(\tilde{v}_{z},\tilde{v}_{\perp}\right)\), \[\Delta v_{max}=\max\left(\sqrt{\left(\tilde{v}_{z}(t)-v_{z,0}\right)^{2}+\left(\tilde{v}_{\perp}(t)-v_{\perp,0}\right)^{2}}\right) \tag{5}\] where the maximum is taken over the simulation time.
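Before turning to the results, here is a minimal sketch of a volume-preserving trajectory integrator for \(\mathbf{F}=q(\mathbf{E}+\mathbf{v}\times\mathbf{B})\). It uses the textbook Boris rotation as a generic stand-in (the actual scheme used in this work is the symplectic integrator of Ref. [71]), and `E_func`/`B_func` are assumed to implement Eqs. (1)-(2):

```python
import numpy as np

def boris_step(x, v, t, dt, q_over_m, E_func, B_func):
    """Advance (x, v) by one step of the Boris scheme for
    dv/dt = (q/m)(E + v x B); the magnetic rotation preserves |v|
    and the update is volume-preserving in phase space."""
    # first half electric kick
    v_minus = v + 0.5 * dt * q_over_m * E_func(x, t)
    # magnetic rotation
    tv = 0.5 * dt * q_over_m * B_func(x)
    sv = 2.0 * tv / (1.0 + tv @ tv)
    v_plus = v_minus + np.cross(v_minus + np.cross(v_minus, tv), sv)
    # second half electric kick, then drift
    v_new = v_plus + 0.5 * dt * q_over_m * E_func(x, t)
    return x + dt * v_new, v_new, t + dt
```

The time step must resolve the gyration, i.e., \(dt\ll 2\pi/\omega_{cyc}\), and here also the RF period.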
The color indicates how much the RF fields affect each particle during the simulation time, according to the metric defined as the maximal displacement in \begin{table} \begin{tabular}{||c||c c c||c c c c||c||} \hline **set** & \(\frac{\omega_{RF}}{\omega_{0,D}}\) & \(\frac{\omega_{RF}}{\omega_{0,T}}\) & \(\frac{k_{RF}}{2\pi m^{-1}}\) & \(\tilde{N}_{rc}\) & \(\tilde{N}_{lc}\) & \(\tilde{N}_{cr}\) & \(\tilde{N}_{cl}\) & \(\tilde{N}_{cl}\) & \(s=\frac{N_{n}}{N_{c}}\) \\ \hline \hline 1 (D) & 1.12 & 1.68 & 3.0 & 0.31 & 0.39 & 0.02 & 0.02 & 0.8 \\ 1 (T) & & & & 0.63 & 0.16 & 0.02 & 0.03 & 3.8 \\ \hline 2 (D) & 0.80 & 1.20 & -3.0 & 0.64 & 0.12 & 0.01 & 0.02 & 5.3 \\ 2 (T) & & & & 0.38 & 0.40 & 0.03 & 0.02 & 0.9 \\ \hline 3 (D) & 0.56 & 0.84 & -3.0 & 0.68 & 0.08 & 0.02 & 0.02 & 9.0 \\ 3 (T) & & & & 0.59 & 0.14 & 0.02 & 0.03 & 4.2 \\ \hline 4 (D) & 0.44 & 0.66 & -7.0 & 0.50 & 0.05 & 0.02 & 0.02 & 10.7 \\ 4 (T) & & & & 0.36 & 0.06 & 0.01 & 0.02 & 6.2 \\ \hline \hline \end{tabular} \end{table} Table 1: The four sets of RF parameters (frequency and k-vector) in this work. The rows denoted by D and T refer to the isotope-dependent saturation values of the transition probabilities, \(\tilde{N}_{cr,lc,cr,cl}\), and to the selectivity parameter, \(s\), calculated in Sec. IV. the projected velocity plane \(\left(\tilde{v}_{z},\tilde{v}_{\perp}\right)\) \[\Delta v_{max}=\max\left(\sqrt{\left(\tilde{v}_{z}(t)-v_{z,0}\right)^{2}+\left( \tilde{v}_{\perp}(t)-v_{\perp,0}\right)^{2}}\right) \tag{5}\] where the maximum is taken over the simulation time. In the figure, one can see that the maximal effect (red dots) in all figures is focused mainly around vertical lines around a specific axial velocity \(v_{z}\), associated with the Doppler shifted ion-cyclotron resonance. Next, we analytically analyze the Doppler compensation mechanism and develop a simple model for the resonant region of initial conditions in the \(\left(v_{z,0},v_{\perp,0}\right)\) plane. The idea is that particles will be most affected by the RF if they happen to be at a location \(z\) with an axial velocity, \(v_{z}\) such that in their rest frame, the Doppler shifted RF field, \(\omega_{rest}\), which is defined in Eq. (3), meets the resonance condition, \[\omega_{rest}=\omega_{c}\left(B\right) \tag{6}\] where, \(\omega_{c}\left(B\right)=\omega_{c,0}B/B_{0}\) depends on \(z\) through Eq. (1). Here, we neglect the radial components of the magnetic fields that are small for particles near the mirror's axis. As a particle travels down the mirror, its Larmor frequency increases with the axial mirror field within the range \(1\leq B/B_{0}\leq R_{m}\), but the Doppler shifted RF frequency also changes since it depends on the particle velocity. Similarly to the transformation (4), from the conservation of energy and magnetic moment, one finds, \[v_{\perp}\left(B\right) = v_{\perp,0}\sqrt{B/B_{0}} \tag{7}\] \[v_{z}\left(B\right) = s_{0}\sqrt{v_{z,0}^{2}+v_{\perp,0}^{2}\left(1-B/B_{0}\right)} \tag{8}\] where, \(s_{0}=\text{sign}\left(v_{z,0}\right)\). Therefore, the approximated condition for resonance to take place at some field \(B\) reads \[\omega_{RF}-k_{RF}v_{z}\left(B\right)=\omega_{c}\left(B\right)=\omega_{c,0} \frac{B}{B_{0}}. \tag{9}\] Substituting \(v_{z}(B)\) from Eq. (8) and squaring the resonance condition yields a quadratic equation for \(B\). An acceptable solution exists if the determinant is non-negative and at least one of the roots falls in the range \(1\leq B/B_{0}\leq R_{m}\). 
It is noted that the interaction time with the RF is larger near the mirror center, where the magnetic field gradient is smaller. Therefore, we further constrain the effective resonant region to the range \(1\leq B/B_{0}\leq 1.25\), even though in our simulations \(R_{m}=3\). Note that for a particle with initial velocities in the loss cone, we also require that the resonant axial velocity is in the same direction as the initial axial velocity because such an escaping particle does not have the opportunity to reverse its direction. In other words, the frequency compensation depends on the direction of the escaping particles (inward or outward in the MM system), giving rise to selectivity. In Fig. 2, we colored (in grey) the acceptable resonant regions in mirror mid-plane velocity space. Notably, the theoretical resonant regions fit well with the single-particle simulation results, meaning all the most affected particles (colored red) are inside the theoretical (gray) regions.

Figure 2: Single particle simulations for 1000 different initial conditions sampled from the Maxwell-Boltzmann distribution, projected according to Eq. (4). The upper-left text in each panel indicates the set of RF parameters used (see Table 1) and the particle type, deuterium (D) or tritium (T). The color (rainbow colormap going from blue to green to red) of each initial condition point represents the magnitude of the RF effect as defined in Eq. (5). The grey areas are the resonant regions found in Sec. III.

However, the magnitude of the RF effect in velocity space, and even more importantly, the RF trapping probability, are not predictable by this simple model and thus will be addressed numerically in the next section.

## IV RF trapping probability

At any given time, each particle can be characterized as being inside or outside the loss cones by checking the local loss-cone condition \(\left(v_{\perp}/v\right)^{2}<B_{z}/B_{max}\), where \(B_{z}\) is the local axial mirror magnetic field at the particle's location. Here, we assume that the particles are in the vicinity of the mirror axis, so the radial field components are small. Of course, this condition is valid only when there is no external RF, so that energy and magnetic moment are conserved. In other words, it correctly determines the particle's final state if the RF were to be instantly turned off. We divide the particles into three populations according to their initial conditions: trapped particles, and right- or left-going non-trapped particles, where right means escaping outside the MM system, and left means the opposite, i.e., towards the central fusion cell (see Fig. 1). We define the number of particles in each population as \(N_{c}\) for captured, \(N_{r}\) for right-going, and \(N_{l}\) for left-going. Then we track each particle's identity as a function of time, i.e., whether or not it crosses one of the loss-cone lines, as depicted in Fig. 2 (dashed lines). The critical parameter required to estimate the overall efficiency of RF plugging in MM machines is the number of "converted" particles, say, those that originated in the right loss cone but ended up trapped. We define four transitions: \(N_{r\to c}\), \(N_{l\to c}\), \(N_{c\to r}\), and \(N_{c\to l}\). The other two possible transitions, left to right and right to left, were negligible under the influence of the external RF field, and therefore are omitted from the generalized rate equation model (see Sec. V).
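In code, the bookkeeping just described amounts to re-evaluating a small classifier along each trajectory and recording label changes; a minimal sketch (the function and label names are ours):

```python
def classify(v_perp, v_z, Bz_local, B_max):
    """Label a particle by the local loss-cone condition:
    trapped ('c') if (v_perp/v)^2 >= Bz_local/B_max, otherwise
    right-going ('r') or left-going ('l') by the sign of v_z."""
    v2 = v_perp**2 + v_z**2
    if v_perp**2 / v2 >= Bz_local / B_max:
        return 'c'
    return 'r' if v_z > 0 else 'l'

# A transition such as N_{r->c} counts particles whose initial label
# is 'r' and whose label at time t is 'c'.
```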
To get relative transition quantities, we normalize each converted-particle quantity by the initial number of particles in that population. These normalized transition quantities are plotted in Fig. 3 for the different RF parameter sets of Table 1. For example, the blue line represents the number of particles that originated in the right loss cone and at time \(t\) were in the capture region in velocity space, i.e., \(N_{r\to c}/N_{r,0}\), where \(N_{r,0}\) is the initial number of particles in the right loss cone in the simulation. It can be seen from the figure that, typically, the number of converted particles grows with time at early times but may fluctuate as particles can change their identity multiple times. In most cases, the number of converted particles saturates at some value after the transient increase. Therefore, for each case, we define and calculate the saturated value by averaging over the second half of the simulated time. It is noted that in some of the cases the saturation has not been reached yet, so the approximation is not as good as in the other cases. We will address this caveat in the next section when studying the overall RF plugging effect. We denote the saturation values by \(\bar{N}_{rc}\), \(\bar{N}_{lc}\), \(\bar{N}_{cr}\), \(\bar{N}_{cl}\) and plot them by dashed lines in Fig. 3, where the values are recorded in the legend of each panel.

Figure 3: The relative number of converted particles as a function of time (solid lines): \(N_{r\to c}/N_{r,0}\) (blue), \(N_{l\to c}/N_{l,0}\) (green), \(N_{c\to r}/N_{c,0}\) (red) and \(N_{c\to l}/N_{c,0}\) (orange). The dashed lines indicate the mean of each curve in the second half of the simulation time. These saturation values, \(\bar{N}_{rc}\), \(\bar{N}_{lc}\), \(\bar{N}_{cr}\), and \(\bar{N}_{cl}\), are recorded in the legend and summarized in Table 1. The upper-right text in each panel indicates the RF parameter set number and the particle type (see Table 1).

The next step is to study the RF effect over a wide \(k_{RF},\omega_{RF}\) parameter space. We calculate the saturation values of converted particles for the four relevant transitions and plot the results in Fig. 4 for both deuterium and tritium. It can be seen that the maximal transitions from the two loss-cone populations to the captured population, i.e., \(\bar{N}_{rc}\) and \(\bar{N}_{lc}\) (panels (a), (b), (e), and (f)), are concentrated along a straight line in \(k_{RF},\omega_{RF}\) space. These lines correspond to the cyclotron resonance condition that depends on the Doppler compensation between the frequency detuning and the RF wave velocity, as described in Sec. II. In dashed black lines we overlaid the resonance condition of Eq. (3), where \(v_{z}\) is replaced by the mean axial velocity in the loss cones \[\bar{v}_{z,LC}=\frac{\int_{LC}f_{MB}\left(\mathbf{v}\right)v_{z}d^{3}v}{\int_{LC}f_{MB}\left(\mathbf{v}\right)d^{3}v}. \tag{10}\] Here, \(f_{MB}\left(\mathbf{v}\right)=\pi^{-3/2}v_{th}^{-3}\exp(-\mathbf{v}^{2}/v_{th}^{2})\) is the Maxwell-Boltzmann distribution function and the integral is taken only over the loss-cone section of the velocity space. One finds \[\bar{v}_{z,LC}=\frac{1+\sqrt{1-\frac{1}{R_{m}}}}{\sqrt{\pi}}\,v_{th}, \tag{11}\] where for the considered mirror ratio, \(R_{m}=3\), the result is \(\bar{v}_{z,LC}=1.025\,v_{th}\). In panels (a,b) of Fig. 4, the thermal velocity, \(v_{th}=\sqrt{2k_{B}T/m}\), is calculated with the deuterium mass, and in panels (e,f) with the tritium mass.
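Eq. (11) is simple to verify by direct sampling; a short Monte-Carlo sketch (sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
Rm, v_th = 3.0, 1.0
# f_MB ~ exp(-v^2/v_th^2): each Cartesian component has std v_th/sqrt(2)
v = rng.normal(scale=v_th/np.sqrt(2), size=(2_000_000, 3))
vz = v[:, 2]
vperp = np.hypot(v[:, 0], v[:, 1])
right_lc = (vz > 0) & (vperp < vz / np.sqrt(Rm - 1))   # right loss cone
print(vz[right_lc].mean())                             # ~1.025 v_th for Rm = 3
print((1 + np.sqrt(1 - 1/Rm)) / np.sqrt(np.pi))        # analytic value, Eq. (11)
```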
Notably, the right and left loss cones have opposite mean axial velocity directions and therefore opposite slopes of the resonance lines in the figure. We can see that the theoretical lines match quite well with the peak transition rates found in the simulations. Also, one can see that the results for deuterium (a-d) and tritium (e-h) look very similar, except that the latter are downshifted toward lower values of \(\omega_{RF}\). This is because the resonance condition is associated with the cyclotron frequency, which is inversely related to the ion's mass, and tritium is heavier than deuterium.

Since RF plugging is based on the asymmetric effect on left- and right-going particles, it is elucidating to focus on the ratio \(\bar{N}_{rc}/\bar{N}_{lc}\). We call this ratio the right-left selectivity, and we draw it for both deuterium and tritium in Fig. 5 for the same parameter space as Fig. 4. One can again see that, due to the mass difference, the results for tritium (Fig. 5b) look similar but downshifted in frequency compared to those of deuterium (Fig. 5a). In the figure, we also indicated by numbered blue circles the locations (in the \(k_{RF},\omega_{RF}\) plane) of the parameter sets used in the examples of Table 1 and Figs. 2 and 3. Finally, we repeated the single particle simulations for a half-amplitude RF electric field, i.e., \(25\mathrm{kV}/\mathrm{m}\). We found that although the transition rates in the optimal parameter regions decrease by a factor of two, the selectivity, \(\bar{N}_{rc}/\bar{N}_{lc}\), changed only by a few percent. We conclude that the selectivity effect is not sensitive to the RF amplitude because of its resonant nature. However, the overall plugging efficiency might be more sensitive and should be studied and optimized for a specific system in future research.

Figure 5: Right-left selectivity, \(\bar{N}_{rc}/\bar{N}_{lc}\), as a function of the RF parameters \(k_{RF}\) and \(\omega_{RF}\) for deuterium (a) and tritium (b). The locations of the parameter sets of Table 1 are indicated by numbered (from one to four) blue dots.

Figure 4: The saturation values (as in Fig. 3) for a wide range of \(k_{RF},\omega_{RF}\) values for both deuterium and tritium: \(\bar{N}_{rc}\) in panels (a, e), \(\bar{N}_{lc}\) in panels (b, f), \(\bar{N}_{cr}\) in panels (c, g), and \(\bar{N}_{cl}\) in panels (d, h). Note the color scale is different for the saturation-metric pairs \(\bar{N}_{rc},\bar{N}_{lc}\) and \(\bar{N}_{cr},\bar{N}_{cl}\). The black dashed lines in panels (a,b,e,f) indicate the theoretical resonant lines described in Sec. IV.

## V A generalized rate equations model

In previous work, we developed a semi-kinetic rate equations model for the MM system, which includes Coulomb scattering within each cell and transmissions between neighboring cells through the loss cones.[49] Next, we review the rate equation model's definitions and assumptions and introduce new terms for the driven transport induced by the external RF fields. The ion-ion Coulomb scattering rate is denoted by \(\nu_{s}\) and roughly scales with density and temperature as \(\propto n/T^{3/2}\). The inter-cell transmission rate is approximated by \(\nu_{t}=v_{th}/l\), where \(v_{th}\) is the ion thermal velocity and \(l\) is the mirror cell length. The particles are divided into three populations per cell: captured particles due to the mirroring effect, and right- and left-going particles through the right and left loss cones, respectively.
The densities of the three populations in the \(i\)'th cell are denoted by \(n_{c}^{i}\), \(n_{r}^{i}\), and \(n_{l}^{i}\). The steady-state solution is defined by \(\dot{n}=0\) for all cell populations. The outgoing flux between two neighboring cells (e.g., \(i\to i+1\)) is proportional to \(\phi_{i,i+1}\propto v_{th}^{i}n_{r}^{i}-v_{th}^{i+1}n_{l}^{i+1}\). In a steady state, by definition, all inter-cell fluxes are the same and denoted by \(\phi_{ss}\). The overall system confinement time is inversely related to the steady-state flux, namely, \(\tau\propto 1/\phi_{ss}\), which we aim to maximize for satisfying the Lawson criterion in a fusion system.[50; 51] In the absence of RF fields, the confinement time is expected to scale linearly with the number of MM cells \(N\), in agreement with simple one-dimensional diffusion models.[22; 25; 49] However, even for optimized system parameters and optimistic thermodynamical regimes, fusion time scales require an impractical number of MM cells.[49] Therefore, as discussed above, we propose to apply an RF field that asymmetrically affects the particle transport in MM systems, i.e., recaptures more escaping (right-going) than returning (left-going) particles. Here, we study the accumulative plugging effect by including the RF-induced transition terms in an extended rate equations model.

In Sec. IV, we quantified the relative numbers of particles that transfer between the different populations of one MM cell under the influence of the traveling rotating electric field as \(\bar{N}_{rc},\bar{N}_{lc},\bar{N}_{cr},\bar{N}_{cl}\) for an extensive parameter range (see Fig. 4). The characteristic time scale for a non-trapped particle to pass across one mirror cell depends on its thermal velocity via \(\tau_{th}=l/v_{th}\). Therefore, we estimate the RF conversion rates by the ratio between the relative conversion amounts and the characteristic time scale, i.e., \(\nu_{RF,rc}=\bar{N}_{rc}/\tau_{th}\), \(\nu_{RF,lc}=\bar{N}_{lc}/\tau_{th}\), \(\nu_{RF,cr}=\bar{N}_{cr}/\tau_{th}\), and \(\nu_{RF,cl}=\bar{N}_{cl}/\tau_{th}\). We have generalized the rate equations model[49] to include these four induced transition rates such that the generalized model reads \[\dot{n}_{c}^{i}=\nu_{s}^{i}\left[(1-2\alpha)(n_{l}^{i}+n_{r}^{i})-2\alpha n_{c}^{i}\right]-(\nu_{RF,cl}+\nu_{RF,cr})n_{c}^{i}+\nu_{RF,lc}n_{l}^{i}+\nu_{RF,rc}n_{r}^{i} \tag{12}\] \[\dot{n}_{l}^{i}=\nu_{s}^{i}\left[\alpha(n_{r}^{i}+n_{c}^{i})-(1-\alpha)n_{l}^{i}\right]-\nu_{t}^{i}n_{l}^{i}+\nu_{t}^{i+1}n_{l}^{i+1}-\nu_{RF,lc}n_{l}^{i}+\nu_{RF,cl}n_{c}^{i} \tag{13}\] \[\dot{n}_{r}^{i}=\nu_{s}^{i}\left[\alpha(n_{l}^{i}+n_{c}^{i})-(1-\alpha)n_{r}^{i}\right]-\nu_{t}^{i}n_{r}^{i}+\nu_{t}^{i-1}n_{r}^{i-1}-\nu_{RF,rc}n_{r}^{i}+\nu_{RF,cr}n_{c}^{i} \tag{14}\] The normalized loss-cone solid angle is \(\alpha=\sin^{2}\left(\theta_{LC}/2\right)\), where the loss-cone angle satisfies \(\sin\theta_{LC}=v_{\perp}/v=R_{m}^{-1/2}\). Our convention is that the left side of the MM section is connected to the main cell, and the right side is the exit.
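For concreteness, a minimal Python sketch of the right-hand side of Eqs. (12)-(14) for the interior cells follows (array layout and names are ours; the two boundary cells are closed by the conditions described next):

```python
import numpy as np

def rhs(nc, nl, nr, nu_s, nu_t, alpha, nu_rc, nu_lc, nu_cr, nu_cl):
    """Time derivatives of Eqs. (12)-(14) for interior cells i = 1..N-2
    (0-based); boundary cells follow the conditions in the text."""
    N = len(nc)
    dnc, dnl, dnr = np.zeros(N), np.zeros(N), np.zeros(N)
    for i in range(1, N - 1):
        dnc[i] = (nu_s[i] * ((1 - 2*alpha)*(nl[i] + nr[i]) - 2*alpha*nc[i])
                  - (nu_cl + nu_cr)*nc[i] + nu_lc*nl[i] + nu_rc*nr[i])
        dnl[i] = (nu_s[i] * (alpha*(nr[i] + nc[i]) - (1 - alpha)*nl[i])
                  - nu_t[i]*nl[i] + nu_t[i+1]*nl[i+1]
                  - nu_lc*nl[i] + nu_cl*nc[i])
        dnr[i] = (nu_s[i] * (alpha*(nl[i] + nc[i]) - (1 - alpha)*nr[i])
                  - nu_t[i]*nr[i] + nu_t[i-1]*nr[i-1]
                  - nu_rc*nr[i] + nu_cr*nc[i])
    return dnc, dnl, dnr
```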
The boundary conditions that close the rate equations are a constant density in the left cell (\(n_{c}^{1}+n_{l}^{1}+n_{r}^{1}=n_{0}=\)const., which practically reads \(n_{r}^{1}=n_{0}-n_{c}^{1}-n_{l}^{1}\) since \(n_{l}^{1}\) and \(n_{c}^{1}\) are determined by the rate equations themselves), where \(n_{0}\) is the central fusion cell density, and a free flow boundary condition on the right cell (\(\nu_{t}^{N+1}n_{l}^{N+1}=0\)), i.e., no left-going flux from outside the system. The Coulomb scattering rate, \(\nu_{s}\), depends on the thermodynamic scenarios along the MM section.[49] Here, for simplicity, we focus on the common isothermal scenario, which can be justified by electron thermalization since electrons are much faster than ions. The classic MM operates best when the Coulomb scattering mean free path (MFP), \(\lambda=v_{th}/\nu_{s}\), is of the order of the mirror cell length, \(l\), where the assumption of one-dimensional diffusion between cells is valid. However, for plasma parameters commonly considered for D-T fusion machines, \(n=10^{21}\,\mathrm{m}^{-3}\) and \(k_{B}T=10\,\mathrm{keV}\), the MFP is of the order of kilometers, making MM systems impractical. It is possible to tweak the plasma parameters (increasing the density and reducing the temperature) to get a lower MFP for the same magnetic pressure. However, the price is a higher required confinement time to meet the Lawson criterion.[49; 50; 51] It is notable that for symmetric RF transition rates, i.e., when \(\nu_{RF,rc}=\nu_{RF,lc}\), the role of the RF terms in Eqs. (12)\(-\)(14) is equivalent to that of the Coulomb scattering. Therefore, by using RF, one can circumvent the difficulty of choosing suitable plasma parameters (i.e., temperature and density), as the RF transition rates depend on the externally applied field rather than on the plasma parameters. Furthermore, when the RF trapping rates are not symmetrical, we can get a right-left selectivity such that the trapping becomes more favorable for escaping particles than for returning particles. Exploiting this effect, the plugging of the MM section can be based on the RF effect rather than on collisions. As a result, one can choose plasma parameters that are preferable for fusion in the central cell without worrying about the collision rates in the MM sections. We demonstrate the RF plugging effect by solving Eqs. (12-14) for a steady-state flux. In Fig. 6, we present the density profiles and the steady state flux obtained in simulations for different numbers of cells \(N\) and different sets of RF parameters. Each set of RF parameters corresponds to four different values of the RF rates that also vary for deuterium and tritium (see Table 1). We note that in the rate equations model, we can solve only for a single particle type, which does not describe well a D-T system that comprises both deuterium and tritium. However, in the cases where the Coulomb scattering rates are negligible compared to the RF rates, we can assume both species do not interact. We therefore solve the rate equations for deuterium and tritium separately. Fig. 6(a) shows the steady state density profiles for MM systems with \(N=30\) cells. Fig. 6(b) shows the steady state flux as a function of the total number of cells, \(N\).
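As a concrete illustration of how Eqs. (12)-(14) and these boundary conditions determine \(\phi_{ss}\), the following minimal sketch (not the paper's code) relaxes the isothermal system to steady state by explicit time stepping; every rate value, the mirror ratio, and the time-stepping scheme are illustrative placeholders rather than values or methods taken from the paper.

```python
import numpy as np

# Minimal sketch: relax Eqs. (12)-(14) to steady state for an N-cell MM
# section under the boundary conditions described above, then read off the
# steady-state flux.  The isothermal scenario is assumed (nu_s, nu_t equal
# in every cell); all rate values below are illustrative placeholders.
N = 30                                   # number of mirror cells
R_m = 5.0                                # assumed mirror ratio
alpha = np.sin(0.5 * np.arcsin(R_m ** -0.5)) ** 2   # loss-cone solid angle

nu_s, nu_t = 0.01, 1.0                   # Coulomb and transit rates (arb. units)
# RF-induced rates; nu_rc > nu_lc encodes the right-left selectivity.
nu_rc, nu_lc, nu_cr, nu_cl = 0.5, 0.05, 0.02, 0.02
n0 = 1.0                                 # fixed total density in cell 1

def rhs(nc, nl, nr):
    nl_in = np.append(nl[1:], 0.0)       # right BC: no returning flux from outside
    nr_in = np.insert(nr[:-1], 0, 0.0)   # nr in cell 1 is set algebraically below
    dnc = (nu_s * ((1 - 2 * alpha) * (nl + nr) - 2 * alpha * nc)
           - (nu_cl + nu_cr) * nc + nu_lc * nl + nu_rc * nr)
    dnl = (nu_s * (alpha * (nr + nc) - (1 - alpha) * nl)
           - nu_t * nl + nu_t * nl_in - nu_lc * nl + nu_cl * nc)
    dnr = (nu_s * (alpha * (nl + nc) - (1 - alpha) * nr)
           - nu_t * nr + nu_t * nr_in - nu_rc * nr + nu_cr * nc)
    return dnc, dnl, dnr

nc, nl, nr = (np.full(N, n0 / 3.0) for _ in range(3))
dt = 0.05 / (nu_t + nu_s + nu_rc)        # conservative explicit-Euler step
for _ in range(200_000):
    dnc, dnl, dnr = rhs(nc, nl, nr)
    nc += dt * dnc; nl += dt * dnl; nr += dt * dnr
    nr[0] = n0 - nc[0] - nl[0]           # left BC: constant density in cell 1
    if max(np.abs(dnc).max(), np.abs(dnl).max(), np.abs(dnr[1:]).max()) < 1e-12:
        break

phi_ss = nu_t * nr[-1]                   # outgoing right-end flux (no return flux)
print(f"steady-state flux (arb. units): {phi_ss:.3e}")
```

Rerunning the sketch for increasing \(N\) with selective rates (\(\nu_{RF,rc}\gg\nu_{RF,lc}\)) qualitatively reproduces the rapid flux suppression discussed next.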
The flux in the figure is normalized by the Lawson flux, \[\phi_{\text{Lawson}}=\frac{nV}{2\tau_{\text{Lawson}}}\,, \tag{15}\] where \(V\) is the volume of the central fusion cell and \[\tau_{\text{Lawson}}=\frac{3k_{B}T/n}{\langle\sigma v\rangle E_{ch}/4-C_{B}T^{1/2}} \tag{16}\] is the minimal required confinement time due to Lawson's criterion for ignition. \(E_{ch}\) is the charged fusion products energy (e.g., 3.5 MeV for D-T reactions), \(\langle\sigma v\rangle\) is the Maxwell-Boltzmann averaged fusion reactivity, and \(C_{B}=5.34\cdot 10^{-37}\,\text{W}\,\text{m}^{3}\,\text{keV}^{-1/2}\) is the bremsstrahlung radiation power coefficient assuming fully ionized plasma and equal ion and electron temperatures.[49; 50; 51] The factor of \(1/2\) in Eq. (15) stands for the two sides of the system. Further higher-order adjustments to the Lawson criterion for RF-plugged MM systems, such as the RF energy deposited in the escaping particles, are left to a future study. For a practical example, if the main cell length is \(100\,\mathrm{m}\) and its diameter is \(D=0.5\,\mathrm{m}\) (plasma volume in the central cell of \(V\approx 20\,\mathrm{m}^{3}\)), then \(\phi_{\text{Lawson}}\approx 3\cdot 10^{22}\,\text{s}^{-1}\). One can see that the flux drops exponentially with \(N\) in the cases with high right-left selectivity. This exponential drop allows the system to reach the fusion-relevant regime \(\phi_{ss}\ll\phi_{\text{Lawson}}\) with a reasonable MM section size, i.e., tens of cells and less than \(100\,\mathrm{m}\) total system length. This should be contrasted with Fig. 5 in Ref. [49], which shows that without the RF terms, the steady-state flux drops only inversely with \(N\), leading to an impractical number of cells for reducing the flux to a required value. Finally, the mass difference between deuterium and tritium results in a frequency shift in the particles' response to the RF fields, as shown in Figs. 4, 5. Consequently, the optimal parameters for RF plugging are also different, as illustrated in the examples of Table 1 and in the rate equations results presented in Fig. 6. Importantly, there is an overlap region in parameter space, so a single RF field can plug both isotopes. For example, in sets 3 and 4, the selectivities for D and T are of the same order (they differ by less than a factor of two). The differences in the flux suppression between the two species (solid and dashed lines for D and T, respectively) are minor in these examples (green and red sets in Fig. 6). On the other hand, for set 1 (blue), the flux suppression effect is more dominant in tritium than in deuterium and vice versa for set 2 (black). Interestingly, the scenario of set 1 can be advantageous for fusion reactors fueled and heated by injecting a high-energy neutral deuterium beam. This is because most of the fusion power comes from the reactions between the high-energy, injected deuterium and the thermal tritium. Therefore, it might be desirable in this case to remove the slowed-down thermal deuterium particles while plugging the tritium solely.[72; 15; 18]

## VI Conclusions

Axial confinement in MM fusion machines can be significantly enhanced by applying external RF fields that are resonant mostly with the outgoing particles. The considered plugging field is a rotating electric field with a frequency slightly detuned from the ion cyclotron frequency and with a non-zero axial wave vector, such that in the particles' rest frame, the Doppler shift compensates the frequency detuning.
The resonant interaction would then increase the transverse energy of the escaping fuel particles and thus move them outside of the loss cone, i.e., recapture them. On the other hand, the incoming particles move in the opposite direction, from the MM section toward the central fusion cell. Therefore, they experience an opposite Doppler shift, i.e., away from resonance with the external field, resulting in an insignificant change in their transverse energy. We studied this asymmetric effect by sampling initial conditions from a thermal distribution and employing single particle simulations to find the trajectories under the mutual influence of the mirror magnetic field and the RF electric field. We developed theoretical criteria for the resonant region in velocity space, which agreed well with the single particle calculations. From the Monte Carlo simulation results, we extracted the net transport rate of particles between the three populations of the rate equations model: trapped, escaping, and returning particles. Because the re-trapping effect depends on the particle's mass and electric charge, we calculate the transition rates for each set of RF parameters for both species of a D-T plasma. We incorporated the RF transition rates into an extended semi-kinetic rate equations model for MM and solved it for steady state. The main result is that adding RF plugging significantly changes the scaling of the axial confinement time with the number of MM cells from linear (without RF) to exponential (with RF).

Figure 6: Steady-state simulation results of the MM rate equations models, including (a) the ion density profiles as a function of cell number for an MM section with \(N=30\) cells, and (b) flux normalized by the maximal Lawson flux \(\phi_{ss}/\phi_{\text{Lawson}}\) as a function of system size. Presented are different sets of RF parameters and different isotopes (D and T), as described in the legend.

Remarkably, this flux reduction could help satisfy the Lawson criterion for a reasonable system size. We also demonstrated that although the plugging effect is different for deuterium and tritium, there are RF parameters for which the results are similar for both species. However, it is noted that a confinement enhancement of tritium alone could be advantageous for fusion machines that include high-energy, neutral deuterium beams. Despite the promising results, a few serious caveats remain for us to address in future studies. First, the rotating electric field, which increases the kinetic energy of the particles, also heats the plasma to temperatures that might be deleterious for confinement and stability in some scenarios. Second, the degree of penetration of the external electric field into the plasma, the excitation of various collective modes and waves, and the issue of wave backscattering from the plasma should be carefully examined theoretically and experimentally. Third, the problem of radial stability, which is crucial for linear machines, is beyond the scope of this paper. Finally, the amount of plugging that can be induced by RF fields generated by realistic antennas like Nagoya coils [41] or helicon antennas [69; 70] and the influence of interactions between the particles on the RF plugging are also left for future study along with other types of RF fields, such as a rotating magnetic field.

## Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.
## Acknowledgments This work was supported by the PAZI Foundation, Grant No. 2020-191.
2308.00466
Could electromagnetism be envisaged as a form of gravity in a metric affine framework?
We revisit the relativistic coupling between gravity and electromagnetism, putting particularly into question the status of the latter; whether it behaves as a source or as a form of gravity on large scales. Considering a metric-affine framework and a simple action principle, we find out that a component of gravity, the so-called homothetic curvature field, satisfies both sets of Maxwell equations. Therefore, we arrive at a gravito-electromagnetic equivalence analogous to the mass-energy equivalence. We raise and discuss some crucial questions implied by the aforementioned finding, refreshing our viewpoint of electromagnetism in curved spacetime.
Panagiotis Mavrogiannis
2023-08-01T11:43:48Z
http://arxiv.org/abs/2308.00466v3
# Could electromagnetism be envisaged as a form of gravity in a metric affine framework? ###### Abstract We revisit the relativistic coupling between gravity and electromagnetism, putting particularly into question the status of the latter; whether it behaves as a source or as a form of gravity on large scales. Considering a metric-affine framework and a simple action principle, we find out that a component of gravity, the so-called homothetic curvature field, satisfies both sets of Maxwell equations. Therefore, we arrive at a gravito-electromagnetic equivalence analogous to the mass-energy equivalence. We raise and discuss some crucial questions implied by the aforementioned finding, refreshing our viewpoint of electromagnetism in curved spacetime. 1 Footnote 1: There are in fact two explicitly known forms of matter (taking into account the mass-energy equivalence), ‘ordinary’ matter and electromagnetic fields. These can be described by scalar and vector/tensor fields respectively. 2 Footnote 2: Apart from the so-called _Einstein-Maxwell coupling_, the _Weyl-Maxwell coupling_ (long-range curvature and electromagnetic field) has also been studied within the literature [2]. ## Introduction There are two kinds of well-known (fundamental) macroscopic field quantities introduced to causally describe the motion of matter on large (macroscopic) scales. These are the gravitational and electromagnetic fields, which are conventionally described by General Relativity and Maxwellian Electromagnetism. Due to the wide presence of electromagnetic fields in astrophysical and cosmological environments, we frequently need to consider the parallel presence, coupling or coexistence of gravity and electromagnetism on large scales. In practice, our conventional perspective consists of envisaging electromagnetic fields (in analogy with matter fields) as sources of gravitation, and therefore generalising the laws of electrodynamics to curved spacetimes (we talk about electrodynamics in curved spacetime). However, unlike (ordinary) matter fields1, electromagnetic ones possess a geometric nature which allows for their double coupling with spacetime curvature, not only (indirectly) via Einstein's equations but (directly) through the so-called Ricci identities as well2[1]. Let us have a closer look at the two aforementioned types of coupling. ### Einstein-Maxwell coupling Firstly, according to Einstein's equations of gravitation, electromagnetic fields, as a form of energy, along with (ordinary) matter contribute to the formation of spacetime geometry. Hence, in a sense, Maxwell's electromagnetism is incorporated into (or forms part of) General Relativity by providing a kind of source for the gravitational field. Let us write here for reference the relativistic equations for gravity3 Footnote 3: We adopt the geometrised units system (refer to the appendix F in [3] for details), i.e. \(8\pi G=1=c\) (\(G\) is the gravitational constant and \(c\) is the speed of light), in which all quantities (ordinarily measured in terms of the fundamental units of length \(L\), time \(T\) and mass \(M\)) have dimensions expressed as integer powers of length. In particular, we note that mass, time, electric charge and energy have dimensions of length; velocity, force, action and Maxwell potential are dimensionless; Faraday electromagnetic field has inverse length dimensions whilst energy density and electric current density are measured in inverse square length units.
\[R_{ab}-\frac{1}{2}Rg_{ab}=T_{ab}\,, \tag{1}\] where the (symmetric) Ricci tensor \(R_{ab}\) encodes the local-gravitational field (the Ricci scalar \(R=R^{a}{}_{a}\) provides a measure of the average local gravity), and the energy-momentum tensor \(T_{ab}\) (Noether conserved currents under translations and rotations) represents the energy sources of gravitation. For ordinary matter and electromagnetic fields, the aforementioned tensor reads: \(T_{ab}=T^{(\rm m)}{}_{ab}+T^{(\rm EM)}{}_{ab}=T^{(\rm m)}{}_{ab}+F_{ac}F^{c}{}_{b}+(1/4)F_{cd}F^{cd}g_{ab}\), where \(F_{ab}=2\partial_{[a}A_{b]}=\partial_{a}A_{b}-\partial_{b}A_{a}\) is the (antisymmetric) Faraday tensor, written in terms of the Maxwell 4-potential \(A_{a}\). Finally, the metric (also symmetric) tensor \(g_{ab}\) encodes the geometric properties of spacetime and is used to calculate lengths and angles in Riemannian manifolds. Through Einstein's equations \(R_{ab}\) and \(g_{ab}\) represent different aspects of the same entity. In particular, \(g_{ab}\) is an agent for forming the Ricci scalar, basically the Lagrangian of relativistic gravitation, and it does not refer to inherent properties of spacetime but is determined by the coordinate description of the physical system in question. ### Ricci-Maxwell coupling Secondly, due to their geometric (vector/tensor) nature, electromagnetic fields directly couple with spacetime curvature via the so-called Ricci identities. In particular, the latter, when applied to the Maxwell 4-potential and the Faraday tensor fields, read (in Riemannian geometry) \[2\nabla_{[a}\nabla_{b]}A_{c}=R_{abcd}A^{d}\qquad\qquad\mbox{and}\qquad\qquad 2\nabla_{[a}\nabla_{b]}F_{cd}=-2R_{ab[c}{}^{e}F_{d]e}\,, \tag{2}\] respectively, where \(R_{abcd}\) is the Riemann curvature tensor encoding the total spacetime curvature. The above relations differ from the gravitational field equations in that firstly, they permit the coupling of only geometric quantities with the spacetime curvature. Secondly, they associate geometric fields with the total spacetime curvature (involving the Weyl field, i.e., the long-range curvature, as well) apart from the local one (as encoded by the Ricci tensor). Overall, it seems to us that the aforementioned special coupling, described through (2), makes the status of electromagnetic fields essentially differ from that of a source of gravitation (e.g. ordinary matter). The aforementioned observation along with another, consisting of the mathematical similarity between the Faraday tensor \(F_{ab}=2\partial_{[a}A_{b]}\) and the so-called _homothetic curvature_ tensor field \(\hat{R}_{ab}=\partial_{[a}Q_{b]}\) (follow the subsequent discussion), motivated us to investigate whether electromagnetic fields could be envisaged as a form/component of spacetime curvature. ### Metric affine framework Let us briefly present the metric-affine framework [4], within which the above-mentioned field \(\hat{R}_{ab}\) exists. To begin with, the transition from relativistic to metric affine spacetime requires lifting two constraints of Riemannian geometry. On the one hand, we allow for an antisymmetric connection part, \(S_{ab}{}^{c}\equiv\Gamma^{c}{}_{[ab]}\) (i.e. the torsion tensor); on the other hand, for a non-vanishing covariant derivative of the metric tensor, \(Q_{abc}\equiv-\nabla_{a}g_{bc}\neq 0\) (note that \(Q_{a}=g^{bc}Q_{abc}=Q_{ac}{}^{c}\) and \(q_{a}=g^{bc}Q_{cba}=Q^{c}{}_{ca}\) are the non-metricity vectors).
The former is associated with the impossibility of forming infinitesimal parallelograms via parallel transport of a vector along the direction of another and vice versa; the latter implies the vector length change during parallel transport. Within a metric-affine geometry, the Ricci tensor also has an antisymmetric part, containing contributions from both torsion and non-metricity. Homothetic curvature \(\hat{R}_{ab}=\partial_{[a}Q_{b]}\) is just a component of that antisymmetric part. In the particular case of torsionless spacetime, one has \(R_{[ab]}=\hat{R}_{ab}\). While Riemann curvature (or _direction curvature_) is responsible for changes in the direction of parallelly transported vectors along a closed curve, homothetic curvature (or _length curvature_) is associated with changes in vectors' length. It is worth noting that within the literature, the spacetime property of vectors' length change has been argued to lead to the so-called _second clock effect_, the exclusion of the existence of sharp spectral lines, and therefore to a non-physical theory. In particular, the aforementioned problem dates back to Weyl's gauge theory of gravity and Einstein's associated objections (for some historical information refer to e.g. [5]; for a modern approach to Weyl's theory see e.g. [6]). Interestingly however, it has been recently shown [7], [8] that under appropriate redefinition of proper time and the covariant derivative, the second clock effect does not actually arise in gravity theories with non-metricity. Up to this point our aim may have already become clear. We will examine whether \(\hat{R}_{ab}\) satisfies Maxwell equations, and whether there is a correspondence between homothetic curvature and the Maxwell field. In particular, it is the goal of the present manuscript to bring to the attention of the scientific community the observation that there is indeed a (metric-affine) curvature component field which actually turns out to present an equivalence with the Maxwell field. In light of this finding we put into question our conventional perspective regarding the way we envisage macroscopic electromagnetic fields and their relation to gravity.

## Homogeneous (metric affine) Maxwell equations and the implication for gravito-electromagnetic equivalence

Let us start from the expression \(\nabla_{[a}F_{bc]}=(1/3)(\nabla_{a}F_{bc}+\nabla_{c}F_{ab}+\nabla_{b}F_{ca})\), within a Riemannian framework. According to the homogeneous Maxwell equations, it has to be equal to zero. Taking thus into account that the Faraday tensor comes from a potential 4-vector, we follow the operations: \[\nabla_{[a}F_{bc]} = \frac{1}{3}\left[2\nabla_{[a}\nabla_{b]}A_{c}+2\nabla_{[c}\nabla_{a]}A_{b}+2\nabla_{[b}\nabla_{c]}A_{a}\right]=\frac{1}{3}\left(R_{abcd}+R_{cabd}+R_{bcad}\right)A^{d} \tag{3}\] \[=R_{[abc]d}A^{d}=0\,.\] In other words, we have recalled that if a second-rank antisymmetric tensor field can be written as the gradient of a 4-vector field, then the homogeneous Maxwell equations are a consequence of two geometric properties of the Riemannian spacetime4; these are the Ricci identities in the form of (2) and the first Bianchi identities (i.e. \(R_{[abc]d}=0\)).5 Inversely, if the homogeneous Maxwell equations are satisfied, the second-rank antisymmetric tensor field can be written as the gradient of a 4-vector field in Riemannian spacetime. Therefore, it is clear that \(\nabla_{[a}\hat{R}_{bc]}=0\) within the geometry in question.
It is worth noting that the above well-known conclusion can be generalised to (non-Riemannian) geometries which possess non-metricity6 (e.g. see eqs. (1.152) and (1.158) in [4], corresponding to the metric-affine version of the Ricci identities and of the first Bianchi identities respectively). Nevertheless, in a general metric affine geometry, possessing torsion as well, the homogeneous Maxwell equations cease to be valid (once again see eqs. (1.152) and (1.158) of [4], in combination with (3)). In this case, homothetic curvature satisfies the following generalised version of the Bianchi identities (known as _Weitzenbock identities_; see eq. (1.169) in [4]) Footnote 4: Besides, the homogeneous Maxwell equations can be derived theoretically in Minkowski spacetime [9] through variation of the action \({\cal S}=\int\left(-\sum m_{i}\sqrt{\eta_{ab}\dot{x}_{(i)}^{a}\dot{x}_{(i)}^{b}}-\frac{1}{4}F_{cd}F^{cd}-\sum e_{i}A_{a}\dot{x}_{(i)}^{a}\right)\,d\tau\) with respect to the particles’ coordinates \(x^{a}(\tau)\) (\(\tau\) is the particle’s proper–time, its world–line parameter). Subsequently, the homogeneous Maxwell equations are generalised to curved (Riemannian) spacetime via the so–called minimal substitution rule. Footnote 5: For an arbitrary vector field \(A_{a}\) the aforementioned properties imply that \(\nabla_{[a}\nabla_{b}A_{c]}=0\). Footnote 6: It can be shown that both the Ricci and the first Bianchi identities maintain their Riemannian form when the relativistic background is modified by the additional non-metricity requirement. In fact, non-metricity is incorporated into the Riemann tensor. \[\nabla_{[a}\hat{R}_{bc]}=2\hat{R}_{d[a}S_{bc]}{}^{d}\,. \tag{4}\] Observe that in the absence of torsion, \(\hat{R}_{ab}\) satisfies the homogeneous set of Maxwell equations (i.e. \(\nabla_{[a}\hat{R}_{bc]}=0\)). Besides, it is known that the Einstein-Hilbert action7 implies that \(S_{ab}{}^{c}=-(2/3)S_{[b}\delta_{a]}{}^{c}\) (with \(S_{a}\equiv S_{ab}{}^{b}\) being one of the torsion vectors); see [4]. Given the aforementioned property, let us point out that homothetic curvature satisfies (recall eq. (4)) the following homogeneous set of Maxwell-like equations, namely Footnote 7: In fact, there is a generalised action (known under the name _quadratic theory_[10]), containing the Einstein-Hilbert, which has as a consequence the property \(S_{ab}{}^{c}=-(2/3)S_{[b}\delta_{a]}{}^{c}\). \[\hat{\nabla}_{[a}\hat{R}_{bc]}=0\,,\quad\mbox{ where }\quad\hat{\nabla}_{a}=\nabla_{a}-\frac{4}{3}S_{a}\,,\quad\mbox{ for }\quad S_{ab}{}^{c}=-\frac{2}{3}S_{[b}\delta_{a]}{}^{c}\,. \tag{5}\] We note once again that the above turns out to hold for a generalised action (quadratic theory [10]), a part of which is the Einstein-Hilbert. A correspondence of the Faraday tensor with the homothetic curvature, and of the Maxwell potential with the non-metricity vector, is apparent. In particular, let us focus on the correspondence \(A_{a}\to Q_{a}\) and \(F_{ab}\to\hat{R}_{ab}\). Taking into account that in geometrised units, \(A_{a}\) and \(g_{ab}\) are dimensionless, a coupling constant \(k\) of length dimension is needed so that dimensional equivalence is established, i.e. \[A_{a}=kQ_{a}\quad\mbox{ and }\quad F_{ab}=k\hat{R}_{ab}\,, \tag{6}\] where \(Q_{a}\) obviously has inverse length dimension. Thus, a potential equivalence between the homogeneous Maxwell equations and (5) is pointed out via the correspondence: \(F_{ab}\to k\hat{R}_{ab}\) and \(\nabla\to\hat{\nabla}\).
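Explicitly, in the geometrised units of footnote 3 (where \(A_{a}\) is dimensionless and \(F_{ab}\) carries inverse length dimensions), the dimensional bookkeeping behind (6) reads \[[Q_{a}]=[g^{bc}\nabla_{a}g_{bc}]=L^{-1}\,,\qquad[\hat{R}_{ab}]=[\partial_{[a}Q_{b]}]=L^{-2}\,,\] so a coupling constant with \([k]=L\) indeed gives \([kQ_{a}]=L^{0}=[A_{a}]\) and \([k\hat{R}_{ab}]=L^{-1}=[F_{ab}]\).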
The question is: _Is there an action reproducing both Einstein and Maxwell field equations, and satisfying the condition \(S_{ab}{}^{c}=-\frac{2}{3}S_{[b}\delta_{a]}{}^{c}\) (appearing in (5)) as well?_ On finding such an action, the above assumed equivalence will be established.

## Inhomogeneous Maxwell equations: From electrodynamics in curved spacetime to metric affine (gravitational) equivalent of Maxwellian electrodynamics

In contrast to the homogeneous set of Maxwell equations, which springs from a purely geometric principle, the inhomogeneous one is known to be a consequence of an action principle (involving the electromagnetic field's strength and its coupling with matter). ### Maxwellian action in curved (relativistic) spacetime Before answering the question stated at the end of the previous subsection, let us recall that the action for electrodynamics in curved (Riemannian-relativistic) spacetime reads (e.g. see [11] and [12]) \[{\cal S}_{CEM}=\int\left(R_{ab}g^{ab}+{\cal L}_{\rm m}-\frac{1}{4}F_{ac}F_{bd}g^{ab}g^{cd}-A_{a}J_{b}g^{ab}\right)\sqrt{-g}\,d^{4}x\,, \tag{7}\] where \(J^{a}\) is the current 4-vector, \({\cal L}_{\rm m}\) is the Lagrangian density of matter and \(g\) the determinant of the metric tensor. In the aforementioned combined action, the electromagnetic field couples with the metric tensor of the gravitational field to form the scalar (Lorentz invariant) inner products \(F_{ab}F^{ab}=F_{ac}F_{bd}g^{ab}g^{cd}\) and \(A^{a}J_{a}=A_{a}J_{b}g^{ab}\). Note that in the above there is actually only one fundamental field, the spacetime geometry or gravitation, while any other field (a carrier of energy) acts as a source of the former. In this context, the metric tensor acts as a mediator between fields (sources of gravity) with geometric nature (vectors, tensors) and their energy content (i.e. Lagrangian densities). On the one hand, variations with respect to the potential \(A_{a}\) lead to Maxwell equations of the form: \[{\rm D}_{b}F^{ba}\equiv\frac{1}{\sqrt{-g}}\nabla_{b}\left(\sqrt{-g}F^{ba}\right)=J^{a}\,,\quad\mbox{ where }\quad\frac{1}{\sqrt{-g}}\nabla_{a}\sqrt{-g}=-\frac{1}{2}Q_{a}\,. \tag{8}\] The above formula reduces to \(\nabla_{b}F^{ba}=J^{a}\) in Riemannian spacetime, where \(Q^{a}\) vanishes. Variations with respect to the metric field, on the other hand, lead to Einstein's equations (1) and the energy-momentum tensors for the matter and Maxwell fields. ### Metric affine (gravitational) equivalent of the Maxwellian action and field equations We have seen that General Relativity accommodates separate field equations for gravity and electromagnetism, which are derived by a common combined (or 'coupled') action. Let us now return to our question regarding the search for an action reproducing inhomogeneous Maxwell-like equations for \(\hat{R}_{ab}\), under the condition: \(S_{ab}{}^{c}=-(2/3)S_{[b}\delta_{a]}{}^{c}\) (so that the homogeneous set (5) is also satisfied). Motivated by (7), the simplest action we can imagine consists of the Einstein-Hilbert and a gravitational analogue of the Maxwellian-electromagnetic action, based on the correspondence (6). Hence, we consider the following \[{\cal S}_{GEM}=\int\left(R+{\cal L}_{(\rm m)}-\frac{k^{2}}{4}\hat{R}_{ab}\hat{R}^{ab}-\frac{k}{2}Q_{a}J^{a}\right)\sqrt{-g}\,d^{4}x\,, \tag{9}\] where \(Q_{a}J^{a}\) represents a coupling between charged currents and the non-metricity vector (in analogy with the coupling \(A_{a}J^{a}\) between matter and electromagnetic fields8).
In the spirit of our work, we do not envisage matter (and therefore the current \(J^{a}\)), unlike electromagnetic fields, as a geometric quantity9. Therefore, the term \(Q_{a}J^{a}\) expresses a coupling between charged matter and a component of spacetime curvature. It is worth noting that no new, unknown fields are introduced, just a gravitational analogue of the classical electromagnetic action. Moreover, all action terms are invariant under general coordinate transformations (in contrast to, e.g., [13]; note that in the aforementioned paper the homogeneous set of Maxwell equations is not satisfied). In eq. (9) both \(Q_{a}\) and therefore \(\hat{R}_{ab}\) depend on the metric tensor as well as on the connection (for details see [4]). Also, we shall keep in mind that the metric appears in the Lagrangian inner products and scalars (i.e. \(\hat{R}_{ab}\hat{R}^{ab}=\hat{R}_{ac}\hat{R}_{bd}\ g^{ab}g^{cd}\), \(Q_{a}J^{a}=Q_{a}J_{b}\ g^{ab}\) and \(R=R_{ab}\ g^{ab}\)). Footnote 8: The electric charge is a kind of coupling constant between matter and electromagnetic fields. Footnote 9: Our consideration, regarding the non-geometric origin of matter, differs from the historical effort for gravito-electromagnetic unification in a metric-affine framework, started by Eddington and developed by Einstein [5]. First of all, let us consider metric variations of (9). Taking into account the auxiliary relations in the appendix A, we arrive at Einstein field equations (of the form (1)) with stress-energy tensor \(T_{ab}=T_{ab}^{(\rm m)}-(k^{2}/4)\hat{R}_{cd}\hat{R}^{cd}g_{ab}-k^{2}\hat{R}_{ac}\hat{R}^{c}{}_{b}\). Note that \(R_{ab}\) and \(R\) now contain contributions from torsion and non-metricity, while \(T_{ab}^{(\rm m)}\) refers to the energy-momentum tensor for matter. Regarding variations with respect to the connection (see the appendix), we obtain the following field equations \[\frac{1}{2}Q_{c}g^{ab}-Q_{c}{}^{ab}-\frac{1}{2}Q^{a}\delta^{b}{}_{c}+q^{a}\delta^{b}{}_{c}+2S_{c}g^{ab}-S^{a}\delta^{b}{}_{c}+g^{ad}S_{dc}{}^{b}+k^{2}\delta^{b}{}_{c}{\rm D}_{d}\hat{R}^{da}-kJ^{a}\delta^{b}{}_{c}=0\,, \tag{10}\] where \({\rm D}_{a}\equiv(1/\sqrt{-g})\nabla_{a}(\sqrt{-g}...)\). Note that the first four terms represent the so-called Palatini tensor. Moreover, all the first seven terms originate from the Einstein-Hilbert action, allowing for non-vanishing torsion and non-metricity (see chapter 2 of [4]). Subsequently, taking the three traces of (10) leads to the relations: \[-\frac{3}{2}Q^{a}+3q^{a}+4k^{2}{\rm D}_{b}\hat{R}^{ba}-4kJ^{a}-4S^{a}=0\,,\qquad\frac{1}{2}Q_{a}+q_{a}+k^{2}{\rm D}_{b}\hat{R}^{b}{}_{a}-kJ_{a}+4S_{a}=0 \tag{11}\] \[\mbox{and}\ \ \ k{\rm D}_{b}\hat{R}^{ba}=J^{a}\ \ (\mbox{with ${\rm D}_{a}J^{a}=0$})\,.\] Note that eq. (11c) represents the inhomogeneous set of Maxwell equations. Within the same (metric-affine) framework, action (7) would lead to the same equations for the Faraday field, namely \({\bf D}_{b}\mathbf{F^{ba}}=\mathbf{J^{a}}\). Let us point out that eq. (11c) is essentially a consequence of two basic mathematical properties and one physical property. In detail, the two mathematical properties are: firstly, the similar mathematical construction of the homothetic curvature \(\hat{R}_{ab}\) and the Faraday \(F_{ab}\) tensor field (i.e. both are written as the gradient of a vector field); secondly, the linear dependence of the non-metricity vector \(Q_{a}\) on the connection, so that \(\delta_{\Gamma}Q_{a}=2\delta_{a}{}^{d}\delta_{c}{}^{b}\delta\Gamma^{c}{}_{bd}\).
The physical property is associated with the action (9) itself. ### Constraints and similarities with classical unified theories We observe that charge conservation is expressed in the form \(\mathrm{D}_{a}J^{a}=0\) (\(\Leftrightarrow\nabla_{a}J^{a}=(1/2)Q_{a}J^{a}\)). Moreover, taking the nabla divergence of (11c), we find out the constraint \[\nabla_{a}J^{a}=\frac{k}{2}\left(\hat{R}_{ab}\hat{R}^{ab}+Q_{a}\nabla_{b}\hat{R}^{ba}\right)\quad\text{ or }\quad k\hat{R}_{ab}\hat{R}^{ab}=Q_{a}\left(J^{a}-k\nabla_{b}\hat{R}^{ba}\right)\,. \tag{12}\] In other words, we have figured out that the last two terms in the action (9) are actually related to each other through the above expression. Subsequently, considering various combinations of the three traces in (11) with the initial field equations (10) (this involves some lengthy but straightforward algebra)10, we eventually arrive at the constraints \[Q^{a}=4q^{a}=-\frac{16}{3}S^{a}\,. \tag{13}\] Footnote 10: Note that due to the non-metricity requirement, raising indices is no longer a trivial operation. For instance, raising indices in (11b) leads to \[\frac{1}{2}Q^{a}+q^{a}-kQ_{b}{}^{ca}\hat{R}^{b}{}_{c}+\frac{k^{2}}{2}Q_{b}\hat{R}^{ba}+4S^{a}=0\,.\] Namely, within the framework of the action (9), the non-metricity and torsion vectors are linearly dependent, so that all together they correspond to only one degree of freedom. The same thing generally happens when considering only the Einstein-Hilbert action (e.g. see [4]). In particular, it is well-known that the Einstein-Hilbert action does not reproduce general relativity. Instead, it leads to Einstein's field equations along with an additional degree of freedom expressed by (13). As a consequence of the latter, relation (6) recasts into \[A_{a}=kQ_{a}=4kq_{a}=-\frac{16}{3}kS_{a}\quad\text{ and }\quad F_{ab}=k\hat{R}_{ab}=4kq_{ab}=-\frac{16}{3}kS_{ab}\,, \tag{14}\] where \(q_{ab}\equiv\partial_{[a}q_{b]}\) and \(S_{ab}\equiv\partial_{[a}S_{b]}\). In other words, the vectorial degree of freedom expressed by (13) and allowed by the Einstein-Hilbert action provides a gravitational equivalent for the Maxwell field. Furthermore, following some lengthy operations, involving eqs. (10) and (11) (see Ch. 2 of [4]), it can be shown that the torsion and non-metricity tensors are related to the associated vectors via \[S_{ab}{}^{c}=-\frac{2}{3}S_{[b}\delta_{a]}{}^{c}\quad\text{ and }\quad Q_{abc}=\frac{1}{4}Q_{a}g_{bc}\,. \tag{15}\] The above constraints hold in exactly the same form for action (9), given that eq. (10) reduces to the Einstein-Hilbert \(\Gamma\)-field equations under (11c). Therefore, the homogeneous set of Maxwell equations in the form of (5) is also satisfied by the \(\hat{R}_{ab}\) field in the case of the action we examine. ## Closing remarks: Questions for further research The present work was initially motivated by the problem of classical gravito-electromagnetic unification, and it has indeed contributed towards that direction (we have shown that a component of the Ricci tensor's antisymmetric part obeys all of Maxwell's equations). However, instead of a unification, our study points more toward an equivalence between the Maxwell field and a metric affine component of the gravitational field (i.e. homothetic or length curvature), analogous to the mass-energy equivalence. If someone would like to place the present effort within the unified theories context, then it would belong somewhere between the lines of Weyl and Eddington-Einstein.
It shares some similarities with both the aforementioned approaches but essentially differs from each. In particular, envisaging electromagnetism as a component of metric-affine gravity dates back to the efforts of Weyl, Eddington and Einstein [5] (refer to the aforementioned review article for any information concerning past efforts and failures of unification). Although unifying theories are widely regarded by the modern scientific community as a vain dream (presumably because of a long history of failures), the history of physics tends to favour the diametrically opposite point of view. Let us recall, for instance, the many new paths opened by the unification of electricity and magnetism, as well as of the electromagnetic and weak interactions, in the distant and recent past. Overall, we have shown that the antisymmetric part of the Ricci tensor, namely the homothetic curvature, satisfies all of Maxwell equations. This finding points out the fundamental question: _Is it possible for two different kinds of fields to exist, both satisfying Maxwell equations yet describing different things? If not, should electromagnetism be envisaged as a form, instead of a source, of gravity on large scales? Alternatively, are electromagnetic fields equivalent to gravitational fields and which is the equivalence relation?_ Our work shows that there must be such an equivalence, taking the form of (6), so that the Maxwell field can be calculated from a given metric. The aforementioned relation implies that a given electromagnetic field has a gravitational equivalent determined via the conversion constant \(k\). It is worth noting that there is a remarkable analogy between the gravito-electromagnetic (eq. (6)) and the mass-energy equivalence, i.e. \(E=mc^{2}\) (\(k\) is the counterpart of \(c^{2}\)). Showing the existence of a _gravito-electromagnetic equivalence_ is essentially the contribution of the present piece of work. Therefore, two crucial questions arise. Firstly, what is the nature of the conversion constant \(k\) and how can it be determined? Let us make a _conjecture_. We observe that the action term \(J_{a}Q^{a}\), introduced in (9), turns out to establish a coupling between charged matter on the one hand, and electromagnetic-gravitational fields on the other, according to the schema: charge \(\rightarrow\) electromagnetic field (i.e. \(J_{a}A^{a}\): the role of the electric charge in classical electrodynamics); charge \(\rightarrow\) non-metricity component of gravity (i.e. \(J_{a}Q^{a}\) in action (9)) \(\rightarrow\) electromagnetic \(\leftrightarrow\) gravitational-non-metricity component (i.e. \(A_{a}=kQ_{a}\)). Besides, we take into account that the electric charge has length dimension in geometrised units. Therefore, we state the following question: Could the coupling constant \(k\) (with length geometrised dimension) be identified as the total electric charge of a given charged distribution? If this is the case, it would appear that the electric charge behaves on large scales as a quantity which translates a given electromagnetic field into its gravitational equivalent. Furthermore, according to (6), with \(k\rightarrow{\cal Q}\), opposite charges correspond to homothetic curvature of opposite sign. Could the macroscopic interaction between a positive and a negative charge distribution be envisaged as a consequence of an 'interaction' between opposite kinds of homothetic curvature?
Secondly, how are the properties of the Maxwell field (\(A_{a}=kQ_{a}=-(16/3)kS_{a}\), via (13)) reconciled with the geometric significance/properties of non-metricity and torsion? The aforementioned properties are respectively the change to a vector's magnitude under its parallel transport along a given curve, and the impossibility of forming a closed (small) parallelogram under parallel transport of one vector along the direction of another [4]. Addressing the questions and problems exposed above will be the object of my future efforts. ## Appendix A Metric and connection variations In deriving the field equations within the main text we make use of the following relations for metric and connection variations [4], [10]. Concerning the former, we have: \[\delta_{g}Q_{a}=\partial_{a}\left(g_{bc}\delta g^{bc}\right)\,,\quad\delta_{g}\hat{R}_{ab}=\partial_{[a}\delta_{g}Q_{b]}=0\,,\quad\delta_{g}\sqrt{-g}=-(1/2)\sqrt{-g}g_{ab}\delta g^{ab}\,,\] \[\delta_{g}\left(\hat{R}_{ab}\hat{R}^{ab}\right)=\delta_{g}\left(\hat{R}_{ab}\hat{R}_{cd}g^{ac}g^{bd}\right)=-2\hat{R}_{ac}\hat{R}^{c}{}_{b}\delta g^{ab}\] and \[\delta_{g}\left(\hat{R}_{ab}\hat{R}^{ab}\sqrt{-g}\right)=\left(-2\hat{R}_{ac}\hat{R}^{c}{}_{b}-(1/2)\hat{R}_{cd}\hat{R}^{cd}g_{ab}\right)\sqrt{-g}\delta g^{ab}\,.\] As for the latter, we deploy: \[\delta_{\Gamma}Q_{a}=2\delta_{a}{}^{d}\delta_{c}{}^{b}\delta\Gamma^{c}{}_{bd}\quad\mbox{and}\quad\delta_{\Gamma}(\hat{R}_{ab}\hat{R}^{ab})=-4\nabla_{b}\hat{R}^{ba}\delta\Gamma^{c}{}_{ca}=-4\nabla_{d}\hat{R}^{da}\delta^{b}{}_{c}\delta\Gamma^{c}{}_{ab}\,,\] with \(\delta_{a}{}^{b}\) being the Kronecker symbol. **Acknowledgements:** To my beloved friend Evangelos Kipouridis, with whom I have been sharing my thoughts and interest in physics since I was at school. I am grateful to Damos Iosifidis for providing me with many useful technical details regarding the metric affine formulation; for some illuminating discussions and his overall great willingness to help. I also thank Prof. Tomi S. Koivisto for his insightful comments. The present research work was supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.), under the 'Third Call for H.F.R.I. PhD Fellowships' (Fellowship No. 74191).
2310.12228
Solution to the conflict between the resolved and unresolved galaxy stellar mass estimation from the perspective of JWST
By utilizing the spatially-resolved photometry of galaxies at $0.2<z<3.0$ in the CEERS field, we estimate the resolved and unresolved stellar mass via spectral energy distribution (SED) fitting to study the discrepancy between them. We first compare $M_{\ast}$ derived from photometry with and without the JWST wavelength coverage and find that $M_{\ast}$ can be overestimated by up to 0.2 dex when lacking rest-frame NIR data. The SED fitting process tends to overestimate both stellar age and dust attenuation in the absence of rest-frame NIR data, consequently leading to a larger observed mass-to-light ratio and hence an elevated $M_{\ast}$. With the inclusion of the JWST NIR photometry, we find no significant disparity between the resolved and unresolved stellar mass estimates, providing a plausible solution to the conflict between them out to $z\sim 3$. Further investigation demonstrates that reliable $M_{\ast}$ estimates can be obtained, regardless of whether they are derived from spatially resolved or spatially unresolved photometry, so long as the reddest filter included in the SED fitting has a rest-frame wavelength larger than 10000 \AA.
Jie Song, GuanWen Fang, Zesen Lin, Yizhou Gu, Xu Kong
2023-10-18T18:04:11Z
http://arxiv.org/abs/2310.12228v1
# Solution to the conflict between the resolved and unresolved galaxy stellar mass estimation from the perspective of JWST ###### Abstract By utilizing the spatially-resolved photometry of galaxies at \(0.2<z<3.0\) in the CEERS field, we estimate the resolved and unresolved stellar mass via spectral energy distribution (SED) fitting to study the discrepancy between them. We first compare \(M_{*}\) derived from photometry with and without the JWST wavelength coverage and find that \(M_{*}\) can be overestimated by up to 0.2 dex when lacking rest-frame NIR data. The SED fitting process tends to overestimate both stellar age and dust attenuation in the absence of rest-frame NIR data, consequently leading to a larger observed mass-to-light ratio and hence an elevated \(M_{*}\). With the inclusion of the JWST NIR photometry, we find no significant disparity between the resolved and unresolved stellar mass estimates, providing a plausible solution to the conflict between them out to \(z\sim 3\). Further investigation demonstrates that reliable \(M_{*}\) estimates can be obtained, regardless of whether they are derived from spatially resolved or spatially unresolved photometry, so long as the reddest filter included in the SED fitting has a rest-frame wavelength larger than 10000 A. Galaxy properties (615); High-redshift galaxies (734); Astronomy data analysis (1858) ## 1 Introduction The stellar mass (\(M_{*}\)) is considered a fundamental physical quantity for describing the properties of a galaxy. Many characteristics of a galaxy, such as its morphology, star formation rate, color, and metallicity, are closely related to \(M_{*}\)(e.g., Brinchmann et al., 2004; Tremonti et al., 2004; Rodighiero et al., 2010; Bassett et al., 2013; van der Wel et al., 2014). Therefore, accurately determining \(M_{*}\) plays a crucial role in our understanding of galaxy formation and evolution. Several methods exist for measuring the physical properties of galaxies, with spectral energy distribution (SED) fitting being a particularly important approach (Conroy, 2013). By fitting the SED constructed from multiwavelength photometry, a variety of galaxy properties can be obtained, including \(M_{*}\), stellar ages, and dust attenuation. However, the accuracy of these properties strongly relies on the available data and the models used in the analysis (e.g., Sawicki & Yee, 1998; Maraston et al., 2006, 2010; Pforr et al., 2012). For example, the estimation of \(M_{*}\) is influenced by the wavelength coverage of the available observations. Previous studies have consistently demonstrated the significance of rest-frame near-infrared (NIR) data in accurately determining \(M_{*}\) for galaxies (e.g., Maraston et al., 2006; Ilbert et al., 2010). Consequently, to achieve a trustworthy \(M_{*}\), it is essential to incorporate observed-frame NIR data for galaxies at low and intermediate redshifts and mid-infrared data for galaxies at higher redshifts. Recent advancements in high-resolution images (e.g., CANDELS; Grogin et al., 2011; Koekemoer et al., 2011) allow us to study the spatial distribution of stellar matter within galaxies by spatially resolved (e.g., pixel-by-pixel) SED fitting at high redshifts.
Interestingly, some previous works reported significant differences between the resolved (via pixel-by-pixel SED fitting, \(M_{*,resolved}\)) and unresolved (derived from the integrated flux of the galaxy, \(M_{*,\rm unresolved}\)) stellar mass estimations. With 67 nearby galaxies from the Sloan Digital Sky Survey (SDSS; Eisenstein et al., 2011), Sorba & Sawicki (2015) found that \(M_{*,\rm resolved}\) derived based on u, g, r, i, z, and NUV data was approximately 13% (0.06 dex) larger than \(M_{*,\rm unresolved}\). This deviation increases gradually with specific star formation rate (sSFR) and reaches a maximum of 25% (0.12 dex) at an sSFR of \(10^{-8}\,\mathrm{yr}^{-1}\). Similarly, Sorba & Sawicki (2018) studied a high-redshift galaxy sample from the Hubble eXtreme Deep Field (Illingworth et al., 2013) and found that the difference between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\) is small for galaxies with small sSFR (which predominantly consist of low-redshift galaxies) but increases rapidly when sSFR \(\gtrsim 10^{-9.5}\,\mathrm{yr}^{-1}\). Several studies have attributed the difference between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\) to the phenomenon of outshining. Young and massive stars possess lower \(M_{*}/L\) ratios but are orders of magnitude brighter compared to older stellar populations. When considering the integrated flux of a galaxy, the optical luminosity is dominated by these young stars. Therefore, in SED fitting, the model tends to prioritize fitting the \(M_{*}/L\) of the young stellar population, potentially ignoring the mass contribution of old stellar populations (e.g., Sawicki and Yee, 1998; Papovich et al., 2001; Maraston et al., 2010; Gimenez-Arteaga et al., 2022). But it is important to highlight that, as aforementioned, the availability of rest-frame NIR data significantly influences the results of SED fitting. Notably, the longest observed wavelength of HST is approximately 16000 A and falls short of capturing the rest-frame NIR emission for high-redshift galaxies. Consequently, some previous studies, particularly those involving high-redshift samples, did not have rest-frame NIR data in the pixel-by-pixel SED fitting when they used HST data to ensure high spatial resolution. This makes it difficult to accurately determine whether the disparity between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\) arises from the lack of rest-frame NIR data or the outshining effect. Fortunately, JWST offers a promising solution by providing high-resolution rest-frame NIR imaging data even up to \(z\sim 3\). Leveraging the capabilities of the JWST, in this work, we utilize images obtained from the Cosmic Evolution Early Release Science (CEERS) program (PI: Finkelstein) to conduct a comparative study between resolved and unresolved \(M_{*}\) estimated via SED fitting with a larger coverage of the rest-frame wavelength, notably encompassing rest-frame NIR observations at high redshift. Our result shows there is no significant difference between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\) after considering the data from JWST, which means that we can obtain a reliable \(M_{*}\) estimation with spatially unresolved photometry when the wavelength coverage is large enough, without the need to consider the outshining effect (if any). The layout of this paper is as follows. In Section 2, we introduce the data we used and the method to estimate \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\). The main results are shown in Section 3. We give a brief discussion in Section 4 and conclude in Section 5. Throughout the paper, we adopt a flat \(\Lambda\)CDM cosmology with \(H_{0}=70\ \rm km\,s^{-1}\,Mpc^{-1}\), \(\Omega_{\rm m}=0.3\), and \(\Omega_{\Lambda}=0.7\), and a Chabrier (2003) initial mass function (IMF). ## 2 Data and Method ### Sample selection The CEERS program (Finkelstein et al., in prep.) is one of the early release science surveys of JWST. The mosaicked images and weight maps utilized in this study were processed by Valentino et al. (2023), based on the public grizli software package (Brammer, 2022). The HST images in the same field were also processed by Valentino et al. (2023). For this investigation, we employ imaging mosaics obtained from JWST in six broad-band filters (F115W, F150W, F200W, F277W, F356W, and F444W). Due to the proximity of the central wavelengths of filters JWST/F115W and HST/F125W, and filters JWST/F150W and HST/F160W, we do not include these two HST filters in this study, namely, only the F435W, F606W, and F814W filters from HST are utilized. Our galaxy sample is constructed based on the 3D-HST catalog (Brammer et al., 2012; Skelton et al., 2014; Momcheva et al., 2016). They fit galaxy SEDs in the wavelength range of \(0.3\)-\(8.0\ \mu\)m to estimate photometric redshift and other galaxy physical properties. For a more detailed description, we refer readers to their papers.

Figure 1: Images from different bands and the estimated stellar mass maps of a few example galaxies. The stellar mass maps are derived using the pixel-by-pixel SED fitting method.
We give a brief discussion in Section 4 and conclude in Section 5. Throughout the paper, we adopt a flat \(\Lambda\)CDM cosmology with \(H_{0}=70\ \rm km\,s^{-1}\,Mpc^{-1}\), \(\Omega_{\rm m}=0.3\), and \(\Omega_{\Lambda}=0.7\), and a Chabrier (2003) initial mass function (IMF). ## 2 Data and Method ### Sample selection The CEERS program (Finkelstein et al., in prep.) is one of the early release science surveys of JWST. The mosaicked images and weight maps utilized in this study were processed by Valentino et al. (2023), based on the public grizil software package (Brammer, 2022). The HST images in the same field were also processed by Valentino et al. (2023). For this investigation, we employ imaging mosaics obtained from JWST in six broad-band filters (F115W, F150W, F200W, F277W, F356W, and F444W). Due to the proximity of the central wavelengths of filters JWST/F115W and HST/F125W, and filters JWST/F150W and HST/F160W, we do not include these two HST filters in this study, namely, only the F435W, F606W, and F814W filters from HST are utilized. Our galaxy sample is constructed based on the 3D-HST catalog (Brammer et al., 2012; Skelton et al., 2014; Momcheva et al., 2016). They fit galaxy SEDs in the wavelength range of \(0.3\)-\(8.0\ \mu\)m to estimate photometric redshift and other galaxy physical properties. For a more detailed description, we refer readers to their papers. Our sample com Figure 1: Images from different bands and the estimated stellar mass maps of a few example galaxies. The stellar mass maps are derived using the pixel-by-pixel SED fitting method. prises galaxies within the CEERS field that are bright and massive enough to facilitate reliable SED fitting. We require the result from the 3D-HST catalog to be reliable by setting \(\mathrm{use\_phot}=1\) and \(\mathrm{flags}\leq 2\) and restrict our sample to galaxies with \(\log(M_{*}/M_{\odot})>9\) at \(0.2<z<3.0\). Additionally, we only include galaxies that are detected (\(\mathrm{S/N}>3\)) in all six JWST filters and in at least two HST filters. Sources at the boundary of the field are removed. Finally, 1060 galaxies are contained in our sample. ### Analysising method In this section, we provide a brief overview of the method employed to construct the stellar mass maps in our study. To begin, we adopt the PSF models from grizil-psf-library provided by Brammer et al. (in prep) and create convolution kernels with the photutils package (Bradley et al., 2022) by optimizing the measurements of kernel performance proposed by Aniano et al. (2011). The shorter-wavelength images are then PSF-matched to the F444W band since the PSF FWHM of this band is the largest. Subsequently, the final images used to generate the spatially-resolved stellar mass maps are resampled to a common pixel scale of \(0\farcs 04\). The images used in this work are also corrected for the Milky Way extinction (Schlafly and Finkbeiner, 2011) assuming the Fitzpatrick (1999) extinction curve with \(R_{V}=3.1\). For each galaxy in our sample, we generate a cutout that is 1.5 times larger than the segmentation map provided by Valentino et al. (2023). Within this cutout, we perform SED fitting on all pixels to obtain the stellar mass map of the galaxy. We fix the redshift of all pixels to \(z_{\mathrm{best}}\) retrieved from the 3D-HST catalog and utilize the CIGALE program (Boquien et al., 2019) to estimate the \(M_{*}\) of each pixel. 
In brief, we adopt the delayed-\(\tau\) star formation history, Bruzual and Charlot (2003) stellar population models, Inoue (2011) nebular emission-line models, and Charlot and Fall (2000) dust attenuation law in the fitting. In Figure 1, we show the multi-band images and the corresponding stellar mass maps of a few example galaxies. The SED fitting result can be influenced by many factors, such as the treatment of the thermally pulsating asymptotic giant branch stage, the choice of IMF, and assumptions regarding the star formation history (e.g., Sawicki and Yee, 1998; Maraston et al., 2006, 2010; Pforr et al., 2012), leading to large model-dependent systematic uncertainties in estimating \(M_{*}\). However, the aim of this study is to study the difference between \(M_{*,\ \mathrm{resolved}}\) and \(M_{*,\ \mathrm{unresolved}}\) under the same model assumptions. Therefore, the systematic bias due to the model selection will not be considered in this study. Fortunately, once the redshift is well-determined, \(M_{*}\) is one of the most robust parameters estimated from the SED-fitting, while other parameters are generally more difficult to estimate because of the degeneracy (e.g., Caputi et al., 2015; Pacifici et al., 2023). Thanks to the richness of multi-band observations, the uncertainty of the photometric redshifts in the 3D-HST catalog is \(\sigma_{\Delta z/(1+z)}\sim 0.02\), which is small enough to ensure reliable \(M_{*}\) estimations. Additionally, within our sample, we have 82 sources with spectroscopic redshifts. In the subsequent analysis, we also independently verify the results using the subset of sources with spectroscopic redshifts and find that they align with the results obtained from the entire sample. ## 3 Results ### Comparisons between fittings with and without rest-frame NIR data Before investigating the disparity between \(M_{*,\mathrm{resolved}}\) and \(M_{*,\mathrm{unresolved}}\), we conduct an examination of the influence of the rest-frame NIR data on the \(M_{*}\) estimation. The total flux catalog extracted by Valentino et al. (2023) is used to estimate \(M_{*}\) here via the CIGALE program with the same configuration described in Section 2.2. For simplicity, \(M_{*}\) derived from the fittings using photometry with and without the long wavelength (LW) bands (i.e., F200W, F277W, F356W, and F444W) are denoted as \(M_{*,\mathrm{with\ LW}}\) and \(M_{*,\mathrm{without\ LW}}\), respectively. The comparison between them is presented in Figure 2. Figure 2: Upper panel: a direct comparison between \(M_{*}\) estimated with and without LW bands color-coded by redshift. The gray dashed line is the one-to-one relation. Lower panel: the difference between them (i.e., \(\log(M_{*,\mathrm{without\ LW}}/M_{*,\mathrm{with\ LW}})\)) as a function of redshift. The black solid line and red-shaded area represent the median and 1-\(\sigma\) uncertainty, respectively. In the upper panel of Figure 2, we present the distribution of \(M_{*,\rm without\ LW}\) and \(M_{*,\rm with\ LW}\) for our sample color-coded by redshift. It is evident that \(M_{*,\rm without\ LW}\) tends to be larger than \(M_{*,\rm with\ LW}\). To further investigate this issue, we examine the difference between them (i.e., \(\log(M_{*,\rm without\ LW}/M_{*,\rm with\ LW})\)) as a function of redshift in the lower panel. The solid black line represents the median difference, while the red-shaded region represents the 1-\(\sigma\) uncertainty estimated by the bootstrapping method. 
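A minimal sketch of the bootstrap estimate just mentioned, with mock stand-ins (`z`, `delta_logm`) for the per-galaxy redshifts and mass differences \(\Delta=\log(M_{*,\rm without\ LW}/M_{*,\rm with\ LW})\):

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_median(values, n_boot=1000):
    """Median of `values` and the 1-sigma spread of resampled medians."""
    meds = [np.median(rng.choice(values, size=values.size, replace=True))
            for _ in range(n_boot)]
    return np.median(values), np.std(meds)

# Mock per-galaxy measurements; real inputs come from the two SED fits.
z = rng.uniform(0.2, 3.0, 500)
delta_logm = rng.normal(0.05 * z, 0.15)

for lo, hi in zip(np.arange(0.2, 2.7, 0.4), np.arange(0.6, 3.1, 0.4)):
    sel = (z >= lo) & (z < hi)
    med, err = bootstrap_median(delta_logm[sel])
    print(f"{lo:.1f} <= z < {hi:.1f}:  median = {med:+.3f} +/- {err:.3f} dex")
```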
It can be seen that there is no significant difference between the two at \(z<0.5\). However, at higher redshift, a notable disparity emerges, reaching up to 0.2 dex. We attribute this redshift trend to the fact that the longest bands involved in the fitting without LW bands (i.e., F150W) can not trace the rest-frame NIR emission (\(\gtrsim 10000\) A) anymore at \(z\gtrsim 0.5\). Interestingly, a similar trend was reported by Ilbert et al. (2010) in which the authors compared \(M_{*}\) estimated with and without IRAC data, akin to our comparison with and without LW bands. In their work, at \(z<1.5\), the IRAC data have a negligible impact on the \(M_{*}\) estimation, since the K band (the longest band available except the IRAC data) can still cover the rest-frame NIR data. However, at \(z>1.5\), a lack of IRAC data could lead to an overestimation of \(M_{*}\) up to 0.1 dex since the K band only covers rest-frame \(\sim 9000\) A in this redshift range. Moreover, we have further validated this result using simulated data. Using the CIGALE program, we construct a sophisticated Stellar Population Synthesis (SPS) library and randomly select a sample of 1500 SEDs from our library. The model employed in this simulation is the same as the one mentioned in Section 2.2. By fixing the redshift to 2, we are able to scrutinize the ramifications of incorporating rest-frame NIR data, employing the same filters as the observations. Recognizing the inherent variations in flux error medians across different filters in real observations, the standard deviation (\(\sigma\)) of the Gaussian noise has been tailored to match the median flux error of each band. Subsequently, we have proceeded to derive stellar mass from the perturbed fluxes, leveraging the capabilities of the CIGALE software. With this simulated data, the median deviation between the estimated and true stellar mass is approximately 0.08 dex in the absence of rest-frame NIR data, while this discrepancy reduces to a mere 0.02 dex when incorporating rest-frame NIR data. Furthermore, the dispersion in mass estimates is diminished when rest-frame NIR data is utilized. Specifically, the standard deviation amounts to 0.04 dex with NIR inclusion, compared to 0.14 dex when NIR is absent in the fitting. This indicates that without the rest-frame NIR data, the measurement of \(M_{*}\) is indeed biased statistically. There are several potential factors that could contribute to an overestimation of \(M_{*}\) in the absence of the rest-frame NIR data. Since \(M_{*}\) is determined by multiplying the galaxy's luminosity by the \(M_{*}/L\) ratio, the inclusion or exclusion of rest-frame NIR data may impact the derived \(M_{*}/L\) ratio. Numerous studies have demonstrated that factors such as stellar age and dust attenuation can influence the observed \(M_{*}/L\) ratio (e.g., Leja et al., 2019; Miller et al., 2022). Using galaxies at \(z\sim 2\), we compare the distributions of stellar age and dust attenuation (\(A_{\rm V}\)) obtained from the fittings with and without LW bands and present them in Figure 3. Evidently, both the stellar age and \(A_{\rm V}\) tend to be overestimated when performing SED fitting without rest-frame NIR data.
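The mock test described above can be sketched as follows; the flux grids, the noise level, and especially the `fit_stellar_mass` stand-in (the real analysis refits each perturbed SED with CIGALE) are placeholders introduced here for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sed, n_band = 1500, 9              # 1500 mock SEDs; HST + JWST bands

# Noiseless model fluxes and a per-band sigma equal to the median flux error
# (both arrays are placeholders for the SPS-library photometry).
model_fluxes = rng.lognormal(0.0, 0.5, size=(n_sed, n_band))
sigma_band = 0.05 * np.median(model_fluxes, axis=0)

perturbed = model_fluxes + rng.normal(0.0, sigma_band, size=model_fluxes.shape)

def fit_stellar_mass(fluxes):
    """Toy M* proxy; the real analysis refits each SED with CIGALE."""
    return np.log10(fluxes.sum(axis=1))

dev = fit_stellar_mass(perturbed) - fit_stellar_mass(model_fluxes)
print(f"median deviation {np.median(dev):+.3f} dex, scatter {dev.std():.3f} dex")
```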
For the \(z\sim 2\) sample shown in Figure 3, the median stellar age and \(A_{\rm V}\) are 0.76 (0.88) Gyr and 0.37 (0.50) mag, respectively, for the results with (without) LW bands.

Figure 3: Distributions of stellar age (left panel) and \(A_{\rm V}\) (right panel) for galaxies at \(z\sim 2\), with blue and red colors representing the distributions from the fittings with and without LW bands, respectively. The median values are denoted as bars at the top of the figure. The stellar age and \(A_{\rm V}\) are indeed overestimated when rest-frame NIR data is not included in the SED fitting.

To assess the statistical significance of the differences between them, we apply the Kolmogorov-Smirnov test and find that the \(p\)-values for stellar age and \(A_{\rm V}\) are both smaller than \(10^{-3}\), suggesting substantial differences between the two distributions. We thus conclude that the stellar age and \(A_{\rm V}\) are indeed overestimated when the rest-frame NIR data is not included in the SED fitting. For a given luminosity, an older stellar population and a larger \(A_{\rm V}\) lead to a larger observed \(M_{*}/L\) and consequently an overestimation of \(M_{*}\). The physical reason for these differences when lacking rest-frame observations at wavelengths \(\gtrsim 10000\) Å deserves further investigation.

### Comparison between resolved and unresolved stellar mass

Many previous studies showed that \(M_{*,\rm unresolved}\) is underestimated compared to \(M_{*,\rm resolved}\), particularly at high redshift (e.g., Zibetti et al., 2009; Sorba & Sawicki, 2018; Mosleh et al., 2020). However, as aforementioned, obtaining \(M_{*,\rm resolved}\) measurements relies on high spatial resolution images, primarily obtained from HST. It is important to note that the reddest filter of HST is F160W, which can only trace the rest-frame optical (\(\lesssim 8000\) Å) emission at \(z>1\). Consequently, relying solely on these observations may lead to an overestimation of \(M_{*}\) for each pixel (see Section 3.1). To address this issue and mitigate the potential bias in \(M_{*}\) estimation, it is crucial to include rest-frame NIR observations, such as those provided by JWST. Here, we use PSF-matched HST and JWST images (from F435W to F444W) to further investigate the difference between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\). Before deriving \(M_{*,\rm resolved}\), we need to account for the potential bias introduced by the poor signal-to-noise ratio (S/N) of individual pixels during the SED fitting, as discussed in Gallazzi & Bell (2009). To this end, we only consider pixels with S/N larger than 3 in all six JWST bands. Regarding the HST data in the CEERS field, we find that the S/N, particularly in the F435W band, is generally much lower compared to the JWST images. Applying a similar S/N threshold to the HST images leads to a significant reduction of the sample size. To obtain a galaxy sample of reasonable size, we decided not to impose the S/N restrictions on the HST images; however, even if such restrictions are applied, the conclusions remain almost unchanged. Nonetheless, following Sorba & Sawicki (2018), the number of pixels included in the analysis for each galaxy is required to be \(\geq 64\), since the uncertainty in mass estimation becomes inordinately large below this threshold. Finally, 535 galaxies in total are included in the following analysis. To estimate \(M_{*,\rm resolved}\), we sum the stellar masses of all pixels within the segmentation map that satisfy the criteria above; this selection is sketched below.
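The pixel selection and summation just described amount to a simple mask-and-sum; a minimal sketch (the array layout and names are ours, assuming per-pixel masses are already available from the pixel-by-pixel fits):

```python
import numpy as np

def resolved_mass(pixel_mass, snr, segmap, min_pixels=64, snr_cut=3.0):
    """Sum pixel stellar masses inside a galaxy's segmentation region.

    pixel_mass : (ny, nx) stellar mass of each pixel from the pixel-by-pixel fits
    snr        : (nbands, ny, nx) S/N of each pixel in the six JWST bands
    segmap     : (ny, nx) boolean mask of the galaxy's segmentation region
    """
    good = segmap & np.all(snr > snr_cut, axis=0)  # S/N > 3 in all JWST bands
    if good.sum() < min_pixels:                    # drop galaxies with < 64 pixels
        return None
    return pixel_mass[good].sum()
```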
Then we estimate \(M_{*,\rm unresolved}\) by summing the fluxes from all the corresponding pixels and performing SED fitting with the same parameter configuration. The comparison between the resolved and unresolved stellar mass is depicted in Figure 4. Notably, after incorporating JWST data, there is no significant difference observed between the resolved and unresolved stellar masses. This result is further emphasized in the lower panel of Figure 4, in which we plot the difference between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\) (i.e., \(\log(M_{*,\rm resolved}/M_{*,\rm unresolved})\)) as a function of redshift. It is evident that, when accounting for the associated errors, \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\) agree well, while the median bias and its 1-\(\sigma\) uncertainty are both about 0.02 dex. We calculate the mass-weighted age by weighting the stellar ages of all selected pixels by their stellar masses within one galaxy and find that the median mass-weighted age of the sample from the resolved fitting (0.83 Gyr) is slightly smaller than the one obtained from the unresolved photometry (0.97 Gyr), which is consistent with the tiny difference that \(M_{*,\rm resolved}\) is slightly smaller than \(M_{*,\rm unresolved}\). Note that, if outshining were important, one would expect a larger resolved mass-weighted age compared to the unresolved one. Although outshining might have an effect on the \(M_{*}\) estimation when the rest-frame NIR data is not included, our result suggests that incorporating the rest-frame NIR data in the fitting could correct for the potential impact of outshining (if any) and return a reliable measurement of \(M_{*}\), regardless of whether spatially resolved or unresolved photometry is used.

Figure 4: Similar to Figure 2 but for the comparison between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\). It is evident that there is no significant difference between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\).

Different SED fitting codes may employ different data handling methods. To validate our findings, we conduct similar analyses using the FAST (Kriek et al., 2009) and BAGPIPES (Carnall et al., 2018) programs. Remarkably, the results obtained from these alternative programs align with those obtained from the CIGALE fitting: by incorporating the JWST data, we consistently find no significant difference between the resolved and unresolved stellar masses. Moreover, different stellar population models may also produce notable differences in stellar mass estimates. We have therefore also estimated both resolved and unresolved stellar masses for our sample with the CIGALE program, changing the stellar population model from Bruzual & Charlot (2003) to Maraston (2005). With this new model, there is again no significant difference between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\). Further efforts are still needed to examine the consistency between the \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\) estimates with other stellar population models.

## 4 Discussion

In many previous studies, \(M_{*,\rm resolved}\) is found to be larger than \(M_{*,\rm unresolved}\). However, with only HST images considered, Wuyts et al. (2012) found no significant difference between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\) at \(1.5<z<2.5\) when they used the Voronoi two-dimensional binning technique to ensure that the minimum S/N of each bin in the \(H_{160}\) image was 10.
Interestingly, the authors found that when directly measuring \(M_{*,\rm resolved}\) without binning, the difference between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\) for star-forming galaxies at \(z\sim 2.0\) could be as large as 0.2 dex. They attributed this discrepancy to the presence of pixels on the outskirts of galaxies that may lack sufficient color constraints, leading to an overestimation of stellar mass when pixel binning is not considered. In our study, pixels with low S/N on the outskirts of galaxies are ignored. Even so, our additional check still confirms that when the LW data are not considered, \(M_{*,\rm resolved}\) is larger than \(M_{*,\rm unresolved}\), at least at high redshift, which implies that our data do not support the high-S/N explanation for the consistency between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\). On the other hand, Sorba & Sawicki (2015) studied the effect of image resolution on the difference between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\) and found that the discrepancy diminishes when the physical scale of individual pixels is larger than 3 kpc, which is larger than the typical size of star-forming regions (Gusev et al., 2014). They speculated that the disappearance of the difference observed in Wuyts et al. (2012) may be attributed to the mixing of young and old stars when employing pixel binning, reducing the bias caused by the outshining effect. Since the inclusion of rest-frame NIR data is crucial for the accurate estimation of \(M_{*}\), determining the minimum requirement on the reddest filter that should be included in SED fitting is important. In Figure 2, we can see that without adding the LW data, we cannot obtain a reliable \(M_{*}\) estimate for galaxies at \(z>0.5\). Consequently, it seems that observations with a rest-frame wavelength longer than about 1\(\mu\)m (i.e., the regime covered by F160W at \(z\sim 0.5\)) are essential for obtaining accurate \(M_{*}\) estimates. Here we further investigate this issue by gradually decreasing the longest wavelength when estimating \(M_{*,\rm resolved}\) and study the difference between the corresponding \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\). Note that \(M_{*,\rm unresolved}\) used here is the one derived from the fittings with all available photometry (i.e., with the LW data). The results are shown in Figure 5, where curves in different colors (cyan, lime, orange, violet, and red) represent the difference between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\) when the reddest filters used are F150W, F200W, F277W, F356W, and F444W for the \(M_{*,\rm resolved}\) estimates, respectively. Panel (a) shows that we can obtain a reliable \(M_{*,\rm resolved}\) at \(z\lesssim 0.5\), \(z\lesssim 1.4\), \(z\lesssim 2.1\), \(z\lesssim 2.8\), and \(z\lesssim 2.8\) when the reddest filters used are F150W, F200W, F277W, F356W, and F444W, respectively. We also show the difference as a function of the reddest rest-frame wavelength included in the SED fitting in Panel (b).

Figure 5: Difference between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\) as a function of redshift (left panel) and the reddest rest-frame wavelength included in the SED fitting for the \(M_{*,\rm resolved}\) estimate (right panel). \(M_{*,\rm unresolved}\) is estimated using all available data (i.e., from F435W to F444W). The cyan, lime, orange, violet, and red curves represent the median differences between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\) when the reddest filters used are F150W, F200W, F277W, F356W, and F444W for the \(M_{*,\rm resolved}\) estimates, respectively, while the color-shaded regions are the corresponding 1-\(\sigma\) uncertainties. The vertical grey line in the left panel indicates \(z=0.5\), below which there is no significant difference between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\), while the similar line in the right panel locates the rest-frame wavelength of 10000 Å, beyond which the differences in \(M_{*}\) are negligible.
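The criterion examined in the right panel of Figure 5 reduces to simple arithmetic: the reddest rest-frame wavelength probed by a filter at redshift \(z\) is its pivot wavelength divided by \(1+z\). A short sketch (the pivot wavelengths below are approximate values we assume):

```python
# Rest-frame wavelength probed by the reddest filter at redshift z.
PIVOT_MICRON = {"F150W": 1.50, "F200W": 1.99, "F277W": 2.78,
                "F356W": 3.57, "F444W": 4.40}  # approximate pivot wavelengths

def reddest_restframe_angstrom(reddest_filter: str, z: float) -> float:
    return PIVOT_MICRON[reddest_filter] * 1e4 / (1.0 + z)

# e.g. F150W at z = 1: 1.50e4 / 2 = 7500 A < 10000 A -> biased-M* regime;
#      F444W at z = 2: 4.40e4 / 3 ~ 14700 A > 10000 A -> unbiased regime.
print(reddest_restframe_angstrom("F444W", 2.0))
```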
There is no significant difference between the resolved and unresolved \(M_{*}\) when the longest wavelength employed in SED fitting extends beyond the rest-frame 10000 Å, which is consistent with the results discussed in Section 3.1. Therefore, we argue that an unbiased \(M_{*}\) estimation requires the reddest filter included in the SED fitting to have a rest-frame wavelength larger than 10000 Å.

## 5 Summary

Utilizing the high-resolution data from JWST and HST, we estimate \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\) using the spatially resolved and spatially unresolved photometry of galaxies with \(\log(M_{*}/M_{\odot})>9\) at \(0.2<z<3.0\) in the CEERS field. Our main conclusions are as follows.

(1) The inclusion of rest-frame NIR data is crucial for accurate measurement of galaxy \(M_{*}\). In the absence of rest-frame NIR data, the SED fitting process tends to yield higher stellar ages and dust attenuations, leading to an overestimation (approximately \(0.1\sim 0.2\) dex) of \(M_{*}\).

(2) After incorporating the data from JWST, there is almost no difference between \(M_{*,\rm resolved}\) and \(M_{*,\rm unresolved}\), suggesting that we could correct for the potential impact of outshining (if any) and obtain a reliable measurement of \(M_{*}\).

(3) When the reddest filter included in the SED fitting has a rest-frame wavelength larger than 10000 Å, both resolved and unresolved photometry can yield consistent and nearly unbiased measurements of stellar mass.

Benefiting from the JWST photometry, we provide a plausible solution to the conflict between the resolved and unresolved \(M_{*}\) estimations. This solution works out to \(z\sim 3\) and will be further examined for galaxies at \(z\gtrsim 3\) using mid-infrared data. The physical reason for the role of emission at wavelengths \(\gtrsim 10000\) Å in determining \(M_{*}\) will also be studied once more data are available. This work is supported by the Strategic Priority Research Program of Chinese Academy of Sciences (Grant No. XDB 41000000), the National Science Foundation of China (NSFC, Grant No. 12233008, 11973038), the China Manned Space Project (No. CMS-CSST-2021-A07) and the Cyrus Chun Ying Tang Foundations. Z.S.L. acknowledges the support from the China Postdoctoral Science Foundation (2021M700137). Y.Z.G. acknowledges support from the China Postdoctoral Science Foundation funded project (2020M681281).
2302.13327
Log-Concavity of Infinite Product and Infinite Sum Generating Functions
We expand on the remark by Andrews on the importance of infinite sums and products in combinatorics. Let $\{g_d(n)\}_{d\geq 0,n \geq 1}$ be the double sequences $\sigma_d(n)= \sum_{\ell \mid n} \ell^d$ or $\psi_d(n)= n^d$. We associate double sequences $\left\{ p^{g_{d} }\left( n\right) \right\}$ and $\left\{ q^{g_{d} }\left( n\right) \right\} $, defined as the coefficients of \begin{eqnarray*} \sum_{n=0}^{\infty} p^{g_{d} }\left( n\right) \, t^{n} & := & \prod_{n=1}^{\infty} \left( 1 - t^{n} \right)^{-\frac{ \sum_{\ell \mid n} \mu(\ell) \, g_d(n/\ell) }{n} }, \\ \sum_{n=0}^{\infty} q^{g_{d} }\left( n\right) \, t^{n} & := & \frac{1}{1 - \sum_{n=1}^{\infty} g_d(n) \, t^{n} }. \end{eqnarray*} These coefficients are related to the number of partitions $\mathrm{p}\left( n\right) = p^{\sigma _{1 }}\left ( n\right) $, plane partitions $pp\left( n\right) = p^{\sigma _{2 }}\left( n\right) $ of $n$, and Fibonacci numbers $F_{2n} = q^{\psi _{1 }}\left( n\right) $. Let $n \geq 3$ and let $n \equiv 0 \pmod{3}$. Then the coefficients are log-concave at $n$ for almost all $d$ in the exponential and geometric cases. The coefficients are not log-concave for almost all $d$ in both cases, if $n \equiv 2 \pmod{3}$. Let $n\equiv 1 \pmod{3}$. Then the log-concave property flips for almost all $d$.
Bernhard Heim, Markus Neuhauser
2023-02-26T14:50:13Z
http://arxiv.org/abs/2302.13327v1
# Log-concavity of infinite product and infinite sum generating functions

###### Abstract.

We expand on the remark by Andrews on the importance of infinite sums and products in combinatorics. Let \(\{g_{d}(n)\}_{d\geq 0,n\geq 1}\) be the double sequences \(\sigma_{d}(n)=\sum_{\ell\mid n}\ell^{d}\) or \(\psi_{d}(n)=n^{d}\). We associate double sequences \(\{p^{g_{d}}\left(n\right)\}\) and \(\{q^{g_{d}}\left(n\right)\}\), defined as the coefficients of \[\sum_{n=0}^{\infty}p^{g_{d}}\left(n\right)\,t^{n} := \prod_{n=1}^{\infty}\left(1-t^{n}\right)^{-\frac{\sum_{\ell\mid n}\mu\left(\ell\right)\,g_{d}\left(n/\ell\right)}{n}}, \tag{0.1}\] \[\sum_{n=0}^{\infty}q^{g_{d}}\left(n\right)\,t^{n} := \frac{1}{1-\sum_{n=1}^{\infty}g_{d}(n)\,t^{n}}. \tag{0.2}\] These coefficients are related to the number of partitions \(\mathrm{p}\left(n\right)=p^{\sigma_{1}}\left(n\right)\), plane partitions \(\mathrm{pp}\left(n\right)=p^{\sigma_{2}}\left(n\right)\) of \(n\), and Fibonacci numbers \(F_{2n}=q^{\psi_{1}}\left(n\right)\). Let \(n\geq 3\) and let \(n\equiv 0\pmod{3}\). Then the coefficients are log-concave at \(n\) for almost all \(d\) in the exponential (0.1) and geometric (0.2) cases. The coefficients are not log-concave for almost all \(d\) in both cases, if \(n\equiv 2\pmod{3}\). Let \(n\equiv 1\pmod{3}\). Then the log-concave property flips for almost all \(d\).

Key words and phrases: Generating functions, Log-concavity, Partition numbers. 2020 Mathematics Subject Classification: Primary 05A17, 11P82; Secondary 05A20.

## 1. Introduction

In this paper, we study log-concave properties of families of sequences related to infinite product and infinite sum generating functions [1, 2, 3, 16]. Log-concavity is an important property: for polynomials with positive coefficients, real-rootedness entails log-concavity of all internal coefficients, which in turn implies unimodality. Recent breakthrough works by Huh and his collaborators, using methods from algebraic geometry, have proven the Mason and Heron-Rota-Welsh conjectures on the log-concavity of the chromatic polynomials of graphs and, finally, of the characteristic polynomials of matroids [1, 1, 2]. We refer to the survey by Kalai [14] on the work by Huh. Note that Zhang [15] proved that the coefficients of the Nekrasov-Okounkov polynomials are almost all unimodal, building on the work by Odlyzko and Richmond [12] and Hong and Zhang [13]. We offer an approach for sequences associated with generating functions where, in general, not all coefficients are log-concave. For example, it is well known that the partition numbers \(\mathrm{p}\left(n\right)\) are log-concave for \(n\geq 26\). We encounter \(\mathrm{p}\left(n\right)\), the plane partition numbers \(\mathrm{pp}\left(n\right)\) of \(n\) [1, 2], and Fibonacci numbers \(F_{n}\). A sequence \(\{a_{n}\}_{n}\) is called log-concave at \(n_{0}\) if \[a_{n_{0}}^{2}-a_{n_{0}-1}\,a_{n_{0}+1}\geq 0.\] Let \(\{g_{d}(n)\}_{d\geq 0,n\geq 1}\) be a double sequence of positive integers. We examine the coefficients of the associated generating functions of exponential (1.1) and geometric type (1.2): \[\sum_{n=0}^{\infty}p^{g_{d}}\left(n\right)\,t^{n} := \exp\left(\sum_{n=1}^{\infty}g_{d}(n)\,\frac{t^{n}}{n}\right)= \prod_{n=1}^{\infty}\left(1-t^{n}\right)^{-\frac{\alpha_{d}(n)}{n}}, \tag{1.1}\] \[\sum_{n=0}^{\infty}q^{g_{d}}\left(n\right)\,t^{n} := \frac{1}{1-\sum_{n=1}^{\infty}g_{d}\left(n\right)\,t^{n}}.
\tag{1.2}\] Here \(\alpha_{d}(n)=\sum_{\ell\mid n}\mu(\ell)\,g_{d}(n/\ell)\), where \(\mu\) is the Möbius function. The approach offered in this paper is inspired by Andrews' remark ([1], chapter 6, page 99) in the context of Meinardus' theorem: "Unfortunately not much is known about problems when a series rather than a product is involved". We call \(n\) an exception related to a sequence \(\left\{a_{n}\right\}_{n}\) if \[a_{n}^{2}-a_{n-1}\,a_{n+1}<0.\] The set of all exceptions is denoted by \(E^{a}\). To this point, only the exponential cases have been studied in the literature. Let \(g_{d}=\sigma_{d}\). For fixed \(d=1\), we have the number of partitions \(\mathrm{p}\left(n\right)=p^{\sigma_{1}}\left(n\right)\). Nicolas [14] proved in 1978 that the partition function \(\mathrm{p}\left(n\right)\) is log-concave if and only if \(n\) is not an element of the finite set \[E^{p^{\sigma_{1}}}=\{2k+1\,|\,0\leq k\leq 12\}.\] This was proved again by DeSalvo and Pak [1]. Both proofs utilize the Rademacher formula for \(\mathrm{p}\left(n\right)\). In [10], we have proven that the plane partition function \(\mathrm{pp}\left(n\right)=p^{\sigma_{2}}\left(n\right)\) is log-concave for almost all \(n\). Finally, based on numerical experiments, we conjectured that \[E^{p^{\sigma_{2}}}=\{2k+1\,|\,0\leq k\leq 5\}.\] Recently, the conjecture was proven by Ono, Pujahari, and Rolen [1]. In this paper, we study the similarities between the log-concavity properties of the coefficients obtained from the generating functions of exponential (1.1) and geometric type (1.2).

### Landscape of Exceptions in the Exponential Cases

We consider log-concavity for \(\{p^{g_{d}}\left(n\right)\}\). We recall the results obtained in [10] and [10]. Note that the information on \(d=0\) is new. Numerical investigations indicate that \[E^{p^{\sigma_{0}}}=\{2k+1\,|\,0\leq k\leq 371\}\setminus\{717,723,729,735,741\},\] tested up to \(n=2500\). Further, for \(0\leq d\leq 5\) the cardinality of \(E^{p^{\sigma_{d}}}\) seems to be decreasing: \(367>13>6>4>2\geq 2\). But \(\left|E^{p^{\sigma_{6}}}\right|\geq 3\). We refer to Table 1. The case \(g_{d}=\psi_{d}\) (see Table 2) reveals a similar pattern. Now, fixing \(n\) and studying log-concavity reveals a new phenomenon. Let \(n\geq 3\) and let \(g_{d}(n)=\sigma_{d}(n)\) or \(\psi_{d}(n)\). Then the set of all exceptions for all \(d\) is finite if and only if \(n\equiv 0\pmod{3}\). More generally [11], let \(\{g_{d}(n)\}_{d\geq 1,n\geq 1}\) be positive real numbers satisfying \(g_{d}(1)=1\) and \[0\leq g_{d}\left(n\right)-n^{d}\leq\left(g_{1}\left(n\right)-n\right)\,\left(n-1\right)^{d-1}.\] Let \(n\geq 3\). Then for almost all \(d\), \(p^{g_{d}}\left(n\right)\) is log-concave at \(n\) if and only if \(n\) is divisible by \(3\). Moreover, explicit bounds are given. It would be interesting to examine the results of this paper in the context of generalized Laguerre–Pólya functions and Jensen polynomials [12].

### Landscape of Exceptions in the Geometric Cases

At first glance, the _geometric_ case (see Table 3 and Table 4) seems not to reveal much structure. Nevertheless, we recall that \(q^{\psi_{1}}(n)=F_{2n}\) can be identified with the sequence of the \(2n\)th Fibonacci numbers, which is log-concave for \(n>1\). This follows from the fact that \(q^{\psi_{1}}(n)=U_{n-1}(\frac{3}{2})\), where \(U_{n}(x)\) is the \(n\)th Chebyshev polynomial of the second kind. Thus, we have some kind of analogue of Nicolas' result.
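Both kinds of exception tables are cheap to reproduce numerically: since \(Q(t)=1/(1-G(t))\) satisfies \(Q=1+GQ\), the coefficients obey the recurrence \(q^{g_{d}}(0)=1\) and \(q^{g_{d}}(n)=\sum_{m=1}^{n}g_{d}(m)\,q^{g_{d}}(n-m)\). A small Python sketch (function names are ours) that lists the exceptions of a geometric-type sequence:

```python
def q_coeffs(g, N):
    # q(0) = 1 and q(n) = sum_{m=1}^{n} g(m) q(n-m), from Q = 1 + G*Q
    q = [1]
    for n in range(1, N + 1):
        q.append(sum(g(m) * q[n - m] for m in range(1, n + 1)))
    return q

def exceptions(q):
    # n >= 1 with q(n)^2 - q(n-1) q(n+1) < 0
    return [n for n in range(1, len(q) - 1) if q[n] ** 2 - q[n - 1] * q[n + 1] < 0]

sigma = lambda d: (lambda n: sum(l ** d for l in range(1, n + 1) if n % l == 0))
psi = lambda d: (lambda n: n ** d)

print(q_coeffs(psi(1), 6))               # [1, 1, 3, 8, 21, 55, 144] = F_{2n}
print(exceptions(q_coeffs(psi(1), 40)))  # [1], matching E^{q^{psi_1}} = {1}
```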
Thus far, for \(d=2\) and \(\psi_{2}(n)\), we expect infinitely many exceptions. Nevertheless, by fixing \(n\geq 3\) we obtain the following new result. We have the _geometric_ cases for \(g_{d}(n)=\sigma_{d}(n)\) in Table 3 and \(g_{d}(n)=n^{d}\) in Table 4.

### Main Results

In this paper, we prove the following:

**Theorem 1.1**.: _Let \(\{g_{d}(n)\}_{d\geq 0,n\geq 1}\) be a double sequence of positive real numbers with \(g_{d}(1)=1\) for all \(d\) and_ \[0\leq g_{d}(n)-n^{d}\leq\left(g_{0}\left(n\right)-1\right)\,(n-1)^{d}. \tag{1.3}\] _Suppose there is an \(r\) with \(0<r\leq 1\) such that \(q^{g_{0}}\left(n\right)\leq r^{-n}\). Let \(n\geq 3\) and let \(D^{g}\left(n\right)\) be defined by_ \[D^{g}\left(n\right) := -2\log_{9/8}\left(r\right)\,n, \tag{1.4}\] \[D^{g}\left(n\right) := \log_{9/8}\left(3\right)-2n\log_{9/8}\left(r\right)-\log_{9/8}\left(n+2\right) \tag{1.5}\] _for \(n\equiv 0\pmod{3}\) in (1.4) and \(n\equiv 1\pmod{3}\) in (1.5). Further, let \(n\equiv 2\pmod{3}\) and \(n\neq 5\). Then \(D^{g}\left(n\right)\) is defined as_ \[\log_{9/8}\left(2\right)-(n+1)\log_{9/8}\left(r\right).\] _Moreover, \(D^{g}\left(5\right):=\log_{9/8}\left(2q^{g_{0}}\left(4\right)q^{g_{0}}\left(6\right)\right)\)._

_Let \(d>D^{g}\left(n\right)\). Then_ \[\frac{\left(q^{g_{d}}\left(n\right)\right)^{2}}{q^{g_{d}}\left(n-1\right)\ q^{g_{d}}\left(n+1\right)}<1\text{, if and only if }n\equiv 1\pmod{3}. \tag{1.6}\]

The double sequences \(\{g_{d}(n)\}\) given by \(\psi_{d}(n)=n^{d}\) and \(\sigma_{d}(n)=\sum_{\ell\mid n}\ell^{d}\) satisfy (1.3). In the case \(\psi_{0}\left(n\right)\), we have \(\frac{t}{1-t}=\sum_{n=1}^{\infty}t^{n}\) and \(\frac{1}{1-\frac{t}{1-t}}=1+\sum_{n=1}^{\infty}2^{n-1}t^{n}\). Therefore, \(q^{\psi_{0}}\left(n\right)\leq 2^{n}\) for \(n\geq 1\). We can apply Theorem 1.1 with \(r=\frac{1}{2}\). Let \[D^{\psi}\left(n\right):=\left\{\begin{array}{ll}2n\log_{9/8}\left(2\right),&n\equiv 0\pmod{3},\\ 2n\log_{9/8}\left(2\right)+\log_{9/8}\left(3\right)-\log_{9/8}\left(n+2\right),&n\equiv 1\pmod{3},\\ \left(n+2\right)\log_{9/8}\left(2\right),&n\equiv 2\pmod{3},\,n\neq 5,\\ \log_{9/8}\left(512\right),&n=5.\end{array}\right.\]

**Corollary 1.2**.: _Let \(n\geq 3\). Let \(d>D^{\psi}(n)\). Then_ \[\frac{\left(q^{\psi_{d}}\left(n\right)\right)^{2}}{q^{\psi_{d}}\left(n-1\right)\ q^{\psi_{d}}\left(n+1\right)}<1\text{, if and only if }n\equiv 1\pmod{3}. \tag{1.7}\]

For example, for \(n=6\) this gives \(D^{\psi}(6)=12\log_{9/8}(2)\approx 70.6\), so \(q^{\psi_{d}}\) is log-concave at \(n=6\) for every integer \(d\geq 71\). For \(g_{0}=\sigma_{0}\), we obtain \(\frac{1}{1-\sum_{n=1}^{\infty}\sigma_{0}(n)t^{n}}=1+t+3t^{2}+7t^{3}+18t^{4}+43t^{5}+108t^{6}+\ldots\). Obviously, \(\sigma_{0}\left(n\right)\leq n\). Then \(\sum_{n=1}^{\infty}nt^{n}=\frac{t}{\left(1-t\right)^{2}}\) and the radius of convergence of the series expansion of \(\frac{1}{1-\frac{t}{\left(1-t\right)^{2}}}\) is \(\left(3-\sqrt{5}\right)/2>1/3\). Analyzing the coefficients shows that we can choose any \(0<r<\left(3-\sqrt{5}\right)/2\). For simplicity, we take \(r=\frac{1}{3}\) and obtain \(q^{\sigma_{0}}\left(n\right)\leq 3^{n}\). We define \(D^{\sigma}(n)\) for \(n\geq 3\) and \(n\neq 5\) by \[\left\{\begin{array}{ll}2\log_{9/8}\left(3\right)n,&n\equiv 0\pmod{3},\\ \left(2n+1\right)\log_{9/8}\left(3\right)-\log_{9/8}\left(n+2\right),&n\equiv 1\pmod{3},\\ \log_{9/8}\left(2\right)+\left(n+1\right)\log_{9/8}\left(3\right),&n\equiv 2\pmod{3}.\end{array}\right.\] Further, \(D^{\sigma}\left(5\right):=\log_{9/8}\left(3888\right).\)

**Corollary 1.3**.: _Let \(n\geq 3\). Let \(d>D^{\sigma}(n)\).
Then_ \[\frac{\left(q^{\sigma_{d}}\left(n\right)\right)^{2}}{q^{\sigma_{d}}\left(n-1 \right)\ q^{\sigma_{d}}\left(n+1\right)}<1\text{, if and only if }n\equiv 1\pmod{3}. \tag{1.8}\] ## 2. Proof of Theorem 1.1 Let \(g_{d}\) be fixed satisfying (1.3). To simplify notation, we put \(q_{d}\left(n\right)=q^{g_{d}}\left(n\right)\). We have \[\frac{1}{1-\sum_{n=1}^{\infty}g_{d}\left(n\right)\,t^{n}}=1+\sum_{n=1}^{\infty }\,\sum_{k\leq n}\sum_{\begin{subarray}{c}m_{1},\ldots,m_{k}\geq 1\\ m_{1}+\ldots+m_{k}=n\end{subarray}}g_{d}\left(m_{1}\right)\cdots g_{d}\left(m_{ k}\right)t^{n}=\sum_{n=0}^{\infty}q_{d}\left(n\right)t^{n}.\] Therefore, \[q_{d}\left(n\right)=\sum_{k\leq n}\sum_{\begin{subarray}{c}m_{1},\ldots,m_{k }\geq 1\\ m_{1}+\ldots+m_{k}=n\end{subarray}}g_{d}\left(m_{1}\right)\cdots g_{d}\left(m_{ k}\right)\] for \(n\geq 1\). ### Two Lemmata It is known [11] that: **Lemma 2.1**.: _Let \(n\geq 2\). Then_ \[\max_{\begin{subarray}{c}m_{1},\ldots,m_{k}\geq 1\\ m_{1}+\ldots+m_{k}=n\end{subarray}}m_{1}\cdots m_{k}=\left\{\begin{array}{ll} 3^{n/3},&n\equiv 0\pmod{3},\\ 4\cdot 3^{(n-4)/3},&n\equiv 1\pmod{3},\\ 2\cdot 3^{(n-2)/3},&n\equiv 2\pmod{3}.\end{array}\right.\] _For \(n\geq 6\), \(n\not\equiv 2\pmod{3}\), the second largest products are_ \[8\cdot 3^{(n-6)/3},n \equiv 0\pmod{3},\] \[10\cdot 3^{(n-7)/3},n \equiv 1\pmod{3}\] _and \(3\) for \(n=4\)._ Further, we provide an extension of a result from [11, 12]. **Lemma 2.2**.: _For \(n\geq 2\)_ \[3^{dn/3} \leq q_{d}\left(n\right)\leq 3^{dn/3}q_{0}\left(n\right),n\equiv 0\pmod{ 3},\] \[\left(4\cdot 3^{\left(n-4\right)/3}\right)^{d}\frac{\left(n-1\right) \left(n+8\right)}{18} \leq q_{d}\left(n\right)\leq\left(4\cdot 3^{\left(n-4\right)/3}\right)^{d}q_{0} \left(n\right),n\equiv 1\pmod{3},\] \[\left(2\cdot 3^{\left(n-2\right)/3}\right)^{d}\frac{n+1}{3} \leq q_{d}\left(n\right)\leq\left(2\cdot 3^{\left(n-2\right)/3}\right)^{d}q_{0} \left(n\right),n\equiv 2\pmod{3}.\] _Additionally, for \(n\geq 6\), \(n\not\equiv 2\pmod{3}\), we have_ \[q_{d}\left(n\right) \leq 3^{nd/3}+\left(8\cdot 3^{n/3-2}\right)^{d}q_{0}\left(n\right),n \equiv 0\pmod{3},\] \[q_{d}\left(n\right) \leq \left(4\cdot 3^{\left(n-4\right)/3}\right)^{d}\frac{\left(n-1\right) \left(n+8\right)}{18}+\left(10\cdot 3^{\left(n-7\right)/3}\right)^{d}q_{0} \left(n\right),n\equiv 1\pmod{3},\] _and \(q_{d}\left(4\right)\leq 2\cdot 4^{d}+3^{d}q_{0}\left(4\right)\)._ Proof.: Since \(m_{1}\cdots m_{k}\leq\max\limits_{\begin{subarray}{c}m_{1},\ldots,m_{k}\geq 1 \\ m_{1}+\ldots+m_{k}=n\end{subarray}}m_{1}\cdots m_{k}\), the upper bounds should be obvious as \(g_{d}\left(n\right)\leq n^{d}+\left(g_{0}\left(n\right)-1\right)\left(n-1\right) ^{d}\leq g_{0}\left(n\right)n^{d}\). For the lower bounds, we obtain \[\sum_{k\leq n}\sum_{\begin{subarray}{c}m_{1},\ldots,m_{k}\geq 1\\ m_{1}+\ldots+m_{k}=n\end{subarray}}\left(m_{1}\cdots m_{k}\right)^{d}\geq S \left(n\right)\max_{k\leq n}\max_{\begin{subarray}{c}m_{1},\ldots,m_{k}\geq 1 \\ m_{1}+\ldots+m_{k}=n\end{subarray}}\left(m_{1}\cdots m_{k}\right)^{d}\] where \(S\left(n\right)\) is the number of \(m_{1}+\ldots+m_{k}=n\), which yield the maximal product. Therefore, \[S\left(n\right)=\left\{\begin{array}{ll}1,&n\equiv 0\pmod{3},\\ \frac{n-1}{3}+\binom{\left(n+2\right)/3}{2}=\frac{\left(n-1\right)\left(n+8 \right)}{18},&n\equiv 1\pmod{3},\\ \frac{n+1}{3},&n\equiv 2\pmod{3}.\end{array}\right.\] For the refined upper bounds, we consider \(g_{d}\left(n\right)\leq n^{d}+\left(g_{0}\left(n\right)-1\right)\left(n-1 \right)^{d}\). 
Then \(g_{d}\left(m_{1}\right)\cdots g_{d}\left(m_{k}\right)\leq g_{0}\left(m_{1} \right)\cdots g_{0}\left(m_{k}\right)\left(m_{1}\cdots m_{k}\right)^{d}\) and for the maximal values \[g_{d}\left(m_{1}\right)\cdots g_{d}\left(m_{k}\right)\] \[\leq \left(m_{1}^{d}+\left(g_{0}\left(m_{1}\right)-1\right)\left(m_{1} -1\right)^{d}\right)\cdots\left(m_{k}^{d}+\left(g_{0}\left(m_{k}\right)-1 \right)\left(m_{k}-1\right)^{d}\right).\] Therefore, \[q_{d}\left(n\right)\leq\left(\max_{k\leq n}\max_{\begin{subarray}{c}m_{1}, \ldots,m_{k}\geq 1\\ m_{1}+\ldots+m_{k}=n\end{subarray}}m_{1}\cdots m_{k}\right)^{d}S\left(n\right)+ \left(Z\left(n\right)\right)^{d}q_{0}\left(n\right)\] where \(Z\left(n\right)\) is the second largest product \(m_{1}\cdots m_{k}\) of all \(m_{1}+\ldots+m_{k}=n\). ### Proof of Theorem 1.1 We consider the cases \(n\equiv 0,1,2\pmod{3}\) and \(n=5\) separately. #### 2.2.1. Let \(n\equiv 0\pmod{3}\) Then \[\frac{\left(q_{d}\left(n\right)\right)^{2}}{q_{d}\left(n-1\right)q_{ d}\left(n+1\right)} \geq \frac{3^{2dn/3}}{\left(2\cdot 3^{\left(n-3\right)/3}\cdot 4\cdot 3^{\left(n-3 \right)/3}\right)^{d}q_{0}\left(n-1\right)q_{0}\left(n+1\right)}\] \[\geq \left(\frac{9}{8}\right)^{d}\frac{1}{r^{-2n}}\geq 1\] for \(d\geq-2\log_{9/8}\left(r\right)n\). #### 2.2.2. Let \(n\equiv 1\pmod{3}\) Then \[\frac{\left(q_{d}\left(n\right)\right)^{2}}{q_{d}\left(n-1\right) q_{d}\left(n+1\right)} \leq \frac{\left(\left(4\cdot 3^{\left(n-4\right)/3}\right)^{d}q_{0} \left(n\right)\right)^{2}}{3^{\left(n-1\right)d/3}\cdot\left(2\cdot 3^{\left(n-1 \right)/3}\right)^{d}\frac{n+2}{3}}\] \[\leq \left(\frac{8}{9}\right)^{d}\frac{3r^{-2n}}{n+2}<1\] for \(d>\log_{9/8}\left(3\right)-2n\log_{9/8}\left(r\right)-\log_{9/8}\left(n+2\right)\). #### 2.2.3. Let \(n\equiv 2\pmod{3}\) and \(n\neq 5\) Then \[\frac{\left(q_{d}\left(n\right)\right)^{2}}{q_{d}\left(n-1\right)q_{d}\left(n+ 1\right)}\geq\frac{\left(\left(2\cdot 3^{\left(n-2\right)/3}\right)^{d} \frac{n+1}{3}\right)^{2}}{A\left(d,n\right)\ B\left(d,n\right)},\] where \[A\left(d,n\right) = \left(4\cdot 3^{\left(n-5\right)/3}\right)^{d}\frac{\left(n-2 \right)\left(n+7\right)}{18}+\left(10\cdot 3^{\left(n-8\right)/3}\right)^{d}q_{0} \left(n-1\right),\] \[B\left(d,n\right) = 3^{\left(n+1\right)d/3}+\left(8\cdot 3^{\left(n-5\right)/3} \right)^{d}q_{0}\left(n+1\right).\] Then as a lower bound for the expression on the right hand side of (2.1) we obtain the following: \[\frac{\left(\frac{n+1}{3}\right)^{2}}{\left(\frac{\left(n-2 \right)\left(n+7\right)}{18}+\left(\frac{5}{6}\right)^{d}r^{1-n}\right)\left( 1+\left(\frac{8}{9}\right)^{d}r^{-n-1}\right)}\] \[\geq \frac{\left(\frac{n+1}{3}\right)^{2}}{\left(\frac{\left(n-2 \right)\left(n+7\right)}{18}+\frac{4}{9}\right)\frac{3}{2}}\geq\frac{4\left(n +1\right)^{2}}{3\left(n+6\right)\left(n-1\right)}=1+\frac{\left(n-\frac{7}{2} \right)^{2}+\frac{39}{4}}{3\left(n+6\right)\left(n-1\right)}\geq 1\] for \[d \geq \max\left\{\log_{6/5}\left(9/4\right)-\left(n-1\right)\log_{6/5} \left(r\right),\log_{9/8}\left(2\right)-\left(n+1\right)\log_{9/8}\left(r \right)\right\}\] \[= \log_{9/8}\left(2\right)-\left(n+1\right)\log_{9/8}\left(r\right)\] for \(0<r\leq 1\) as \(0\leq-\log_{6/5}\left(r\right)\leq-\log_{9/8}\left(r\right)\) and \(\log_{6/5}\left(9/4\right)<\log_{9/8}\left(2\right)\). #### 2.2.4. 
Let \(n=5\). We have \[\frac{\left(q_{d}\left(5\right)\right)^{2}}{q_{d}\left(4\right)q_{d}\left(6\right)} \geq \frac{4\cdot 36^{d}}{\left(2\cdot 4^{d}+3^{d}q_{0}\left(4\right)\right)\left(9^{d}+8^{d}q_{0}\left(6\right)\right)} \geq \frac{4\cdot 36^{d}}{2\cdot 36^{d}+4\cdot 32^{d}q_{0}\left(4\right)q_{0}\left(6\right)}\geq 1\] for \(d\geq\log_{9/8}\left(2q_{0}\left(4\right)q_{0}\left(6\right)\right)\).

## 3. Final Remarks

Let us examine \(\left\{q^{g_{d}}\left(n\right)\right\}\). There are no exceptions for \(g_{d}=\sigma_{d}\) or \(\psi_{d}\) for \(n=2\): since \(q^{g_{d}}\left(1\right)=1\), \(q^{g_{d}}\left(2\right)=g_{d}(2)+1\), and \(q^{g_{d}}\left(3\right)=g_{d}(3)+2g_{d}(2)+1\), we obtain \[\left(q^{g_{d}}\left(2\right)\right)^{2}-q^{g_{d}}\left(1\right)\,q^{g_{d}}\left(3\right)=\left(g_{d}\left(2\right)\right)^{2}-g_{d}(3).\]

_Challenge 1_.: We consider the exponential case for \(d=0\) and \(\sigma_{0}\). We expect \(E^{p^{\sigma_{0}}}\) to be finite. Moreover, numerical experiments (tested up to \(n=2500\)) suggest that \[E^{p^{\sigma_{0}}}=\left\{2k+1\,|\,0\leq k\leq 371\right\}\setminus\left\{717,723,729,735,741\right\}.\]

_Challenge 2_.: We consider the geometric case. We have \(E^{q^{\psi_{0}}}=E^{q^{\psi_{1}}}=\left\{1\right\}\), since \(q^{\psi_{0}}(n)=2^{n-1}\) and \(q^{\psi_{1}}(n)=F_{2n}\). For \(E^{q^{\psi_{2}}}\), we expect infinitely many exceptions and non-exceptions.

_Challenge 3_ (Geometric case).: Let \(\sigma_{d}\) for \(0\leq d\leq 4\) be given. Then all the odd numbers \(n\) up to \(10^{4}\) are exceptions. Note that for \(d=5\) some even numbers also appear as exceptions. For example, \[\left(q^{\sigma_{5}}(10)\right)^{2}-q^{\sigma_{5}}\left(9\right)\,q^{\sigma_{5}}(11)<0.\] Nevertheless, it seems that the set of exceptions for each \(\sigma_{d}\) is infinite.
2308.06017
Optimizing transformer-based machine translation model for single GPU training: a hyperparameter ablation study
In machine translation tasks, the relationship between model complexity and performance is often presumed to be linear, driving an increase in the number of parameters and consequent demands for computational resources like multiple GPUs. To explore this assumption, this study systematically investigates the effects of hyperparameters through ablation on a sequence-to-sequence machine translation pipeline, utilizing a single NVIDIA A100 GPU. Contrary to expectations, our experiments reveal that combinations with the most parameters were not necessarily the most effective. This unexpected insight prompted a careful reduction in parameter sizes, uncovering "sweet spots" that enable training sophisticated models on a single GPU without compromising translation quality. The findings demonstrate an intricate relationship between hyperparameter selection, model size, and computational resource needs. The insights from this study contribute to the ongoing efforts to make machine translation more accessible and cost-effective, emphasizing the importance of precise hyperparameter tuning over mere scaling.
Luv Verma, Ketaki N. Kolhatkar
2023-08-11T08:47:52Z
http://arxiv.org/abs/2308.06017v1
# Optimizing Transformer-based Machine Translation Model for Single GPU Training: A Hyperparameter Ablation Study

###### Abstract

In machine translation tasks, the relationship between model complexity and performance is often presumed to be linear, driving an increase in the number of parameters and consequent demands for computational resources like multiple GPUs. To explore this assumption, this study systematically investigates the effects of hyperparameters through ablation on a sequence-to-sequence machine translation pipeline, utilizing a single NVIDIA A100 GPU. Contrary to expectations, our experiments reveal that combinations with the most parameters were not necessarily the most effective. This unexpected insight prompted a careful reduction in parameter sizes, uncovering "sweet spots" that enable training sophisticated models on a single GPU without compromising translation quality. The findings demonstrate an intricate relationship between hyperparameter selection, model size, and computational resource needs. The insights from this study contribute to the ongoing efforts to make machine translation more accessible and cost-effective, emphasizing the importance of precise hyperparameter tuning over mere scaling.

Natural Language Processing (NLP), Transformers, Machine Translation, Single GPU, Ablation

## 1 Introduction

The advent of technology has propelled humankind into a new era where machine learning is progressing rapidly. With every passing day, models in fields such as data engineering, finance, computer vision, and natural language processing are growing both in size and capability. However, a pivotal question arises: is bigger always better? In our current studies, we approached this question from two perspectives. First, we delved into the intricacies of multiprocessing within Python using CUDA Institute (2019). Second, we selected a complex machine translation task in the domain of natural language processing, requiring a complete sequence-to-sequence pipeline Vaswani et al. (2017), to examine the relationship between model size and performance. Traditionally, the enhancement of model performance has been synonymous with an increase in parameters, layers, and complexity. Yet, we must confront the reality that the relationship between size and efficacy is not necessarily linear, especially when considering the need for efficiency, stability, and resource-conscious design. Our journey began with an ambitious plan to utilize both model parallelism and data parallelism in PyTorch, with a starting model comprising 500 million parameters. However, constraints such as the limited availability of multiple GPUs and a compute window of 8 hours on a single A100 GPU NVIDIA Corporation (2020) steered us toward a different path. Instead of continually expanding the model, we embarked on a gritty exploration to reduce the model's parameters. We conducted ablation studies, manipulating variables like model size, number of heads, layers, and dropouts, within a computation limit on a single A100 GPU. By capping the number of epochs at 100, we investigated the effects of varying parameter sizes on the accuracies of machine translation models. Our findings culminated in the fascinating observation that an increase in parameters does not always yield a better model, a notion aligned with recent research in the Chinchilla paper Hoffmann et al. (2022). This study underscores the idea that training a robust model does not necessarily require a multi-GPU setup.
Before scaling up hardware, it is vital to meticulously analyze how the model performs within existing resources and to understand how the model's size correlates with accuracy.

## 2 Experimental Details

In this section, we delve into the ablation studies conducted to unravel the intricate relationship between the model's size, number of parameters, running time, and accuracy. The study was guided by a key question: does continuously increasing the model size invariably lead to an increase in accuracy? Through meticulous experimentation and hyperparameter tuning, this investigation aimed to answer this question. As an example, a machine translation task (English to Spanish) was chosen, as it exercises the complete encoder-decoder pipeline.

### Data Preparation and Preprocessing

The experimental study focused on the machine translation task from English to Spanish. A dataset comprising 1 lakh (100,000) translations was utilized, sourced from Kaggle (it can be downloaded directly from the TensorFlow reference TensorFlow (2022)). The chosen problem size was designed to probe the relationship between the model's size, number of parameters, and its performance. The code is available on GitHub Verma (2023).

### Model Architecture

The Transformer model architecture was utilized in this study, renowned for its parallelization and its ability to capture long-range dependencies in sequence data. The model consists of an encoder and a decoder, each with multiple layers of attention and feed-forward neural networks. The following hyperparameters were explored:

* **Model Size**: Ranging from 16 to 512, reflecting the embedding dimensions.
* **Number of Heads**: Values varied from 4 to 16, influencing the model's attention mechanism.
* **Number of Layers**: Tested with 2 to 16 layers, impacting the depth of the network.
* **Dropout Sizes**: Ranging from 0.1 to 0.5, to prevent overfitting.
* **Number of Epochs**: Since this study was about evaluating the effect of an increase in the number of parameters on model accuracy, most of the cases were limited to 100 epochs. A few of the best cases were run for up to 400 epochs.

### Training Configuration

Training was conducted on a single NVIDIA A100 GPU. The model's loss was computed using the cross-entropy loss function for the machine translation task. A training-validation split of 70-30% was employed.

## 3 Results and Discussions

### Results with model size of 512

The results presented in Figure 1 reveal a disconcerting instability, particularly apparent when the model size is at its peak value of 512, with both the number of heads and layers set at 4. The instability in learning is suspected to be linked to a dropout value of 0.5, a feature typically employed to prevent overfitting through the introduction of noise. However, in this context, the chosen dropout value appears excessively high, inhibiting convergence and potentially suppressing essential features. This is exacerbated by the validation perplexity reaching into the millions, a stark and troubling indicator of the model's learning instability. With a reduction in the dropout value and the number of layers to 2, as depicted in Figure 2, the model begins to exhibit signs of learning, despite a persistently rugged learning surface. This intriguing pattern suggests an intricate interplay between the model's complexity (and thus parameter count) and the effectiveness of learning with a reduced dropout value. Figure 2 offers a closer examination of this phenomenon.
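For concreteness, the family of models swept in these ablations can be sketched in a few lines of PyTorch; the hyperparameter values below are one grid point from Section 2.2, while the vocabulary size and the feed-forward width (PyTorch's default) are our assumptions rather than the exact training setup:

```python
import torch.nn as nn

d_model, n_heads, n_layers, dropout, vocab = 128, 4, 4, 0.1, 20000  # one grid point

model = nn.Transformer(
    d_model=d_model, nhead=n_heads,
    num_encoder_layers=n_layers, num_decoder_layers=n_layers,
    dropout=dropout, batch_first=True,
)
src_emb = nn.Embedding(vocab, d_model)  # English token embeddings
tgt_emb = nn.Embedding(vocab, d_model)  # Spanish token embeddings
generator = nn.Linear(d_model, vocab)   # decoder states -> vocabulary logits

modules = [model, src_emb, tgt_emb, generator]
n_params = sum(p.numel() for m in modules for p in m.parameters())
print(f"parameter count: {n_params / 1e6:.1f}M")
```

With these assumed sizes this grid point lands near 13M parameters; the 26 million quoted in Table 1 for the same embedding/head/layer settings presumably reflects the vocabulary and feed-forward widths used in the authors' repository.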
Although learning begins to occur, convergence is not achieved. Validation losses and perplexity remain elevated even when the dropout is decreased to 0.3. An anomalous 'kink' in validation perplexity observed around the 100th epoch for a dropout of 0.4 (as illustrated in Figure 2(b)) may hint at the model's need for additional epochs to stabilize. This interpretation, though compelling, requires more rigorous testing or analysis to be validated.

### Results with model size of 256

Figure 3 presents a further reduction in model size to 256, exploring two specific combinations of heads and layers. For a configuration of 16 heads and 8 layers, the learning is minimal, denoted by a train accuracy improvement on the order of 1e-3. At the same time, the model demonstrates overfitting and becomes unstable, the ruggedness augmenting with increasing epochs. When heads are reduced to 4 and layers increased to 16, learning fails to occur, and perplexity surpasses 10 million for roughly 20 epochs. The enormity of the perplexity suggests that more epochs may not salvage this configuration. In another set of experiments for a model size of 256 (Figure 4), the combination of 4 heads and 8 layers (with a dropout of 0.5) led to the model becoming unstable, starting to overfit, and displaying erratic perplexity. Other experimental combinations, such as 4 heads with 16 layers and a dropout of 0.5, showed overfitting; the setup with 8 heads, 16 layers, and a dropout of 0.5 was halted midway due to the onset of enormous perplexity scores and evident overfitting.

Figure 1: Experiments: Maximum length = 512, Model Size = 512, number of heads = 4, 8, 32, number of layers = 4, 8, 16, and dropout sizes = 0.5 (combinations shown in legends). Ran for 400 epochs approx. (x-axis). **(a)** Train and Validation Accuracy: Comparison of training and validation accuracy across different configurations. **(b)** Train and Validation Loss: Plot of training and validation loss for different models. **(c)** Train and Validation Perplexity: Visualization of training and validation perplexity under various hyperparameter settings.

Figure 2: Experiments: Maximum length = 512, Model Size = 512, number of heads = 4, number of layers = 2, and dropout sizes = 0.3, 0.4, 0.5 (combinations shown in legends). Ran for 100 epochs (x-axis). **(a)** Train and Validation Accuracy. **(b)** Train and Validation Loss. **(c)** Train and Validation Perplexity.
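A note on the metric used throughout this section: the reported perplexity is the exponential of the mean validation cross-entropy, so a divergent loss of roughly 16 nats already corresponds to a perplexity near ten million, which is exactly the regime seen in the unstable runs above. A minimal sketch (the padding-index convention is our assumption):

```python
import torch
import torch.nn.functional as F

def validation_perplexity(logits, targets, pad_id=0):
    # Mean token-level cross-entropy over non-padding positions, exponentiated.
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1), ignore_index=pad_id)
    return torch.exp(loss).item()
```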
Figure 3: Experiments: Maximum length = 512, Model Size = 256, number of heads = 4, 16, number of layers = 8,16, and dropout sizes = 0.5 (combinations shown in legends). Ran for 100 epochs (x-axis). **(a)** Train and Validation Accuracy. **(b)** Train and Validation Loss. **(c)** Train and Validation Perplexity. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline model & number of & number of & dropout & Time (min) & Train & Validation & Validation & Parameter \\ size & heads & layers & & & Loss & Loss & Perplexity & Count (million) \\ \hline [MISSING_PAGE_POST] \hline \hline \end{tabular} \end{table} Table 1: An analysis of the impact of different hyperparameters on machine translation model performance, along with the corresponding parameter count for each configuration (**for 100 epochs**) ### Results with the model size of 64 In Figure 8, we further reduced the model size to 64 (from 128), keeping the number of heads, layers, and the dropout value the same as in Figure 7. We observed a peculiar trend where the model learned for 100 epochs, then regressed, and eventually started overfitting, as training loss reduced while validation loss increased. This curious trend prompted us to attempt training for more epochs in select cases. In Figure 9, with the model size kept at 64 and the number of heads increased to 4, 8, and 16 (and layers increased to 8 with a dropout of 0.5), learning became unstable. Interestingly, when the number of heads was 16 and layers were 4, the model performed better than other cases in Figure 9. This indicates that there might be a scope for improvement if trained for more epochs. From Figure 10, for a model size of 64, with 4 heads and either 2 or 4 layers (and dropouts of 0.3 and 0.4), the model started learning without any signs of overfitting. It is quite evident from this figure that increasing the number of encoder-decoder layers reduced the learning rate, even with a higher dropout value of 0.4. Figure 5: Experiments: Maximum length = 512, Model Size = 256, number of heads = 4, number of layers = 2, and dropout sizes = 0.3, 0.4, 0.5 (combinations shown in legends). Ran for 100 epochs (x-axis). **(a)** Train and Validation Accuracy. **(b)** Train and Validation Loss. **(c)** Train and Validation Perplexity. Figure 6: Experiments: Maximum length = 512, Model Size = 128, number of heads = 4, 8, number of layers = 4, 8, and dropout sizes = 0.5 (combinations shown in legends). Ran for 100 epochs (x-axis). **(a)** Train and Validation Accuracy. **(b)** Train and Validation Loss. **(c)** Train and Validation Perplexity. ### Results with the model size of 32 Next, we reduced the model size further to 32, keeping the layers at 2 and the dropout at 0.5, but varying the number of heads (4 and 8). With a model size of 32, 4 heads, 2 layers, and a dropout of 0.5, the model began overfitting within 100 epochs. When we increased the number of heads to 8 (for a model size of 32 and 2 layers), intriguingly, the model did not overfit. However, the validation accuracy was still higher, and the validation loss was lower than in the previous case. This could simply be attributed to the increase in heads, and it's possible that the model would have started overfitting after 100 epochs. Next, we wanted to observe the effect of reducing the dropout from 0.5 to 0.1 (Figure 12). To our amazement, the model performed better in learning compared to the cases discussed in Figure 11. 
We tested a model size of 32, head sizes of 4 and 8, and a number of layers of 2 and 4, and found that the learning was comparable in all these cases. This suggested that with lower dropout, changes in the model head or layers might not significantly affect learning. Figure 8: Experiments: Maximum length = 512, Model Size = 64, number of heads = 4, number of layers = 2, and dropout sizes = 0.5 (combinations shown in legends). Ran for 100 epochs (x-axis). **(a)** Train and Validation Accuracy. **(b)** Train and Validation Loss. **(c)** Train and Validation Perplexity. Figure 7: Experiments: Maximum length = 512, Model Size = 128, number of heads = 4, 8, number of layers = 2, and dropout sizes = 0.5 (combinations shown in legends). Ran for 100 epochs (x-axis). **(a)** Train and Validation Accuracy. **(b)** Train and Validation Loss. **(c)** Train and Validation Perplexity. ### Results with the model size of 16 In our subsequent experiment, we reduced the model size further to 16 (Figure 13), and again the model was tested with 4 and 8 heads, and 2 and 4 layers. In the case where the dropout was kept at 0.5, the learning was least effective, even with 8 heads for 2 layers. However, other cases with dropouts of 0.1 (and 4 or 8 heads with 2 layers) led us to rethink our previous approach of using higher dropouts like 0.5, 0.4, or 0.3. Overall, we observed that smaller dropout values yielded better results. ### Results with increasing reducing dropout value to 0.1 Next, to test our hypothesis that a reduction in dropout value yields more stable results (i.e., from 0.5 to 0.1), we experimented with larger model sizes (128, 256) but kept the number of heads fixed at 4 and used only smaller encoder-decoder layers (2, 4). We observed that the model's accuracy was highest within 100 epochs for the configuration with a model size of 128, heads and layers equal to 4, and it was still learning even after 100 epochs. Increasing the model size to 256 reduced the validation accuracy and increased the loss, and changing the dropout from 0.1 to 0.5 further Figure 10: Experiments: Maximum length = 512, Model Size = 64, number of heads = 4, number of layers = 2, 4, and dropout sizes = 0.3, 0.4 (combinations shown in legends). Ran for 100 epochs (x-axis). **(a)** Train and Validation Accuracy. **(b)** Train and Validation Loss. **(c)** Train and Validation Perplexity. Figure 9: Experiments: Maximum length = 512, Model Size = 64, number of heads = 4, 8, 16, number of layers = 4, 8, and dropout sizes = 0.5 (combinations shown in legends). Ran for 100 epochs (x-axis). **(a)** Train and Validation Accuracy. **(b)** Train and Validation Loss. **(c)** Train and Validation Perplexity. degraded learning. As evidenced by Table 1, it took just 26 million parameters (model size: 128, heads: 4, layers: 4) to achieve the best performance, and performance also degraded with an increase in the number of parameters. ### Best Results from Table 1 A comprehensive review of the table reveals an intriguing pattern that runs contrary to common expectations in model design; namely, that an increase in the number of parameters does not consistently correspond to a decrease in validation perplexity. This is vividly illustrated in the comparison between configurations with different model sizes, numbers of heads, layers, and dropout rates. Remarkably, the combination of a model size of 128, 4 heads, 4 layers, and a dropout rate of 0.1 achieved the best performance with just 26 million parameters. 
Even configurations with higher complexities and more parameters, such as those with 256 or 512 model sizes, failed to surpass this level of efficiency. Validation perplexity in some of these larger models escalated to alarming figures, as evidenced in specific rows of the table (Table 1).

Figure 11: Experiments: Maximum length = 512, Model Size = 32, 16, number of heads = 4, 8, number of layers = 2, and dropout sizes = 0.5 (combinations shown in legends). Ran for 100 epochs (x-axis). **(a)** Train and Validation Accuracy. **(b)** Train and Validation Loss. **(c)** Train and Validation Perplexity.

Figure 12: Experiments: Maximum length = 512, Model Size = 32, number of heads = 4, 8, number of layers = 2, 4 and dropout sizes = 0.1 (combinations shown in legends). Ran for 100 epochs (x-axis). **(a)** Train and Validation Accuracy. **(b)** Train and Validation Loss. **(c)** Train and Validation Perplexity.

### Investigating Best Results from Table 1 further by reducing dropout to 0

Lastly, we picked the best configurations according to Table 1 and increased the number of epochs (Figure 15). This time we extended the runs to between 350 and 400 epochs. In the best configuration (with model size of 128), we decreased the dropout to 0 to test its effect. We also reduced the model size to 64 with a dropout of 0, and we ran one more test with the model size of 512. These tests were done to capture the effects of lower/zero dropout, a higher number of epochs, and a range of parameter counts on model performance. We observed that decreasing the dropout to 0 for model sizes of 64 and 128 led to overfitting, showing that dropout is needed for better performance. Similarly, we observed that when the model size is bigger (model size of 512), it still overfitted even with a dropout of 0.1. Perhaps a higher dropout would have taken care of the overfitting; however, its accuracy was far lower than that achieved with a dropout of 0.1. The investigation into various configurations of model sizes, numbers of heads, layers, and dropout values has unveiled a complex and multifaceted relationship between model complexity and learning efficacy. Contrary to conventional wisdom, the results consistently demonstrate that increased model complexity, as characterized by larger model sizes and more heads and layers, does not necessarily yield improved learning performance. In fact, certain simpler configurations, notably a model size of 128 with 4 heads and 4 layers and a lower dropout rate of 0.1, outperformed more complex structures, achieving superior performance with just 26 million parameters. This unexpected outcome emphasizes that a naive escalation in complexity and parameter count can not only inhibit learning but may lead to instability and overfitting.
In a stark departure from the common pursuit of ever-larger models, these findings advocate for a more nuanced understanding of model architecture and hyperparameter tuning, balancing complexity against efficiency and stability. ## 4 Conclusion * **Model Complexity vs. Efficiency**: The study's findings highlight that increasing model size and complexity does not necessarily lead to better learning performance. Moderate complexity in configurations at times outperformed larger, more intricate setups. * **Avoidance of Excessive Compute Power**: Rather than simply throwing more compute power and multiple GPUs at the problem, the study emphasizes a nuanced relationship between complexity and efficiency. It revealed that this approach could lead to instability, overfitting, and counterproductive results. * **Role of Dropout**: The analysis unveiled the multifaceted role of dropout in learning. The fine-tuning of this hyperparameter was shown to be vital, with too high or too low values leading to hindrances in convergence and learning. * **Overfitting Trends**: A balanced choice of model size, heads, layers, and dropout was found to be crucial for optimal learning without overfitting. Thoughtful hyperparameter tuning is preferred over a brute force increase in complexity. * **Influence of Heads and Layers**: These parameters had a nuanced impact on the learning process, illustrating the importance of understanding the complex interplay between different hyperparameters. * **Evaluation Metrics and Perplexity**: The study made significant use of validation loss curves and perplexity as indicators of learning stability and efficiency, focusing on these over traditional metrics like BLEU scores to understand overfitting and the impact of parameter increases on accuracy. * **Advocacy for Thoughtful Tuning**: Some simpler configurations achieved outstanding performance with only 26 million parameters. This demonstrates the importance of an intelligent, balanced approach that considers various hyperparameters instead of merely increasing resources and model sizes. * **Counterintuitive Findings**: The results challenged conventional wisdom, notably achieving the best performance with a configuration that was neither the largest nor most complex. It further emphasizes that optimization can often achieve better outcomes without relying on significant computational assets. * **Need for Hyperparameter Optimization**: Overall, the study underscores the importance of meticulous hyperparameter tuning. It acts as a reminder that less can be more if paired with an understanding of the intricacies of model architecture and careful optimization. Figure 15: Experiments: Maximum length = 512, Model Size = 64,128, 512, number of heads = 4, number of layers = 2, 4 and dropout sizes = 0.1, 0 (combinations shown in legends). Ran for 100 epochs (x-axis). **(a)** Train and Validation Accuracy. **(b)** Train and Validation Loss. **(c)** Train and Validation Perplexity.
2303.04572
An Empirical reionization history model inferred from the low-redshift Lyman continuum survey and the star-forming galaxies at $z>8$
We present a new analysis of the rest-frame UV and optical spectra of a sample of three $z>8$ galaxies discovered behind the gravitational lensing cluster RX J2129.4+0009. We combine these observations with $z>7.5$ galaxies from the literature, for which similar measurements are available. As already pointed out in other studies, the high [O III]$\lambda$5007/[O II]$\lambda$3727 ratios ($O_{32}$) and steep UV continuum slopes ($\beta$) are consistent with the values observed for low redshift Lyman continuum emitters, suggesting that such galaxies contribute to the ionizing budget of the intergalactic medium. We construct a logistic regression model to estimate the probability of a galaxy being a Lyman continuum emitter based on the measured $M_{UV}$, $\beta$, and $O_{32}$. Using this probability and the UV luminosity function, we construct an empirical model that estimates the contribution of high redshift galaxies to reionization. The preferred scenario in our analysis shows that at $z\sim8$, the average escape fraction of the galaxy population (i.e., including both LyC emitters and non-emitters) varies with $M_{UV}$, with intermediate UV luminosity ($-19<M_{UV}<-16$) galaxies having larger escape fractions. Galaxies with faint UV luminosity ($-16<M_{UV}<-13.5$) contribute most of the ionizing photons. The relative contribution of faint versus bright galaxies depends on redshift, with the intermediate UV galaxies becoming more important over time. UV bright galaxies, although more likely to be LCEs at a given log($O_{32}$) and $\beta$, contribute the least to the total ionizing photon budget.
Yu-Heng Lin, Claudia Scarlata, Hayley Williams, Wenlei Chen, Patrick Kelly, Danial Langeroodi, Jens Hjorth, John Chisholm, Anton M. Koekemoer, Adi Zitrin, Jose M. Diego
2023-03-08T13:37:38Z
http://arxiv.org/abs/2303.04572v3
# An Empirical reionization history model inferred from star-forming galaxies at \(z>8\)

###### Abstract

We present a new analysis of the rest-frame UV and optical spectra of a sample of three \(z>8\) galaxies discovered behind the gravitational lensing cluster RX J2129.4+0009. We combine these observations with those of a sample of \(z>7.5\) galaxies from the literature, for which similar measurements are available. As already pointed out in other studies, the high [O iii]\(\lambda 5007\)/[O ii]\(\lambda 3727\) ratios (\(O_{32}\)) and steep UV continuum slopes (\(\beta\)) are consistent with the values observed for low redshift Lyman continuum emitters, suggesting that such galaxies contribute to the ionizing budget of the intergalactic medium. We construct a logistic regression model to estimate the probability of a galaxy being a Lyman continuum emitter based on the measured \(M_{UV}\), \(\beta\), and \(O_{32}\) values. Using this probability together with the UV luminosity function, we construct an empirical model that estimates the contribution of high redshift galaxies to reionization based on these observable quantities. Our analysis shows that at \(z\sim 8\), the average escape fraction of the galaxy population (i.e., including both LyC emitters and non-emitters) varies with \(M_{UV}\), with brighter galaxies having larger \(f_{esc}\). For \(M_{UV}<-19\) we find an average escape fraction of 20%, decreasing to almost zero for \(M_{UV}>-16\). Galaxies with intermediate UV luminosity (\(-19<M_{UV}<-16\)) contribute half of the ionizing photons. The relative contribution of faint versus bright galaxies depends on redshift, with UV bright galaxies (\(-23<M_{UV}<-19\)) becoming more important over time and reaching \(\approx 40\%\) at the end of reionization around \(z=6\).

reionization, galaxies: high-redshift, galaxies: clusters: general, gravitational lensing: strong

## 1 Introduction

The rest-frame ultraviolet (UV) and optical spectra of the newly observed \(z>7.5\) galaxies (Williams et al., 2022; Langeroodi et al., 2022; Mascia et al., 2023; Tang et al., 2023) are observed to have steep UV continuum slopes and large [O iii]\(\lambda\)5007/[O ii]\(\lambda\)3727 (=O\({}_{32}\)) emission line ratios, which are typical of hard ionizing sources and of sources with an escape fraction of ionizing radiation larger than zero (Izotov et al., 2021; Flury et al., 2022). The contribution of star-forming galaxies to the reionization of the intergalactic medium (IGM) is often parameterized as a function of three components, \(\dot{N}_{ion}=f_{esc}\,\xi_{ion}\,\rho_{UV}\), where \(\dot{N}_{ion}\) is the production rate of ionizing photons emitted into the IGM, \(f_{esc}\) is the escape fraction of Lyman continuum (LyC) photons, \(\xi_{ion}\) is the ionizing photon production efficiency, and \(\rho_{UV}\) is the UV luminosity density measured at rest-frame 1500 Å (Robertson, 2022). The UV luminosity density is the integral of the UV luminosity function down to a given magnitude limit. The luminosity function (LF), often described with a Schechter function (Schechter, 1976), can be directly measured at all redshifts. Before _JWST_, _Hubble_ and ground-based programs provided measurements of the redshift evolution of the LFs out to \(z\sim 8\) (Bouwens et al., 2015; Livermore et al., 2017; Atek et al., 2018; Bouwens et al., 2021).
Its shape, when observations are sufficiently deep to cover the knee of the function, does not deviate substantially from a Schechter LF, although a double power-law parameterization has been suggested for the highest redshift samples at \(z\sim 6\) (Bowler et al., 2015; Khusanova et al., 2020; Harikane et al., 2022; Donnan et al., 2023). Results obtained before the first _JWST_ data indicated that luminous galaxies are progressively less numerous toward high redshifts. For example, luminous galaxies (\(M_{UV}<-19\)) at \(z\sim 8\) are 25 times less numerous compared to those at \(z\sim 0\) (Bouwens et al., 2021). The early _JWST_ results have revealed more bright galaxies in the early universe at \(z>8\) than expected (Finkelstein et al., 2022). A double power-law LF (e.g., Harikane et al., 2022) has been proposed to include the newly discovered population of bright galaxies, since the double power-law (DP) LF is higher at the bright UV end (\(M_{UV}<-22\)) and decreases more slowly toward high redshift at \(z>8\) than the LFs proposed pre-_JWST_ (e.g., Bouwens et al., 2021).

\(\xi_{ion}\) is the ionizing production rate relative to the UV luminosity density at 1500 Å. \(\xi_{ion}\) depends on the shape of the galaxy's ionizing spectrum, which in turn depends on the initial mass function (IMF), the stellar metallicity, and the fraction of binary stars. During the epoch of reionization (EoR), galaxies are believed to have a "top-heavy" IMF (Davé, 2008; van Dokkum, 2008; Sharda & Krumholz, 2022), lower metallicity, and low dust extinction, and all these properties lead to a higher \(\xi_{ion}\) (Chisholm et al., 2019; Atek et al., 2022). Indeed, the average \(\xi_{ion}\) is observed to increase toward higher redshifts (Matthee et al., 2017; Shivaei et al., 2018; Atek et al., 2022; Matthee et al., 2022). Models of the reionization history of the universe (e.g., Robertson et al., 2013) typically assume a range of log(\(\xi_{ion}\)) between 25.2 and 25.3, depending on the specifics of the stellar population models that are used (e.g., Leitherer & Heckman, 1995). This range is often referred to as the "canonical" log(\(\xi_{ion}\)) (e.g., Shivaei et al., 2018). Some studies find that the average log(\(\xi_{ion}\)) evolves with redshift (Matthee et al., 2017). The cause of this evolution is typically ascribed to the evolution of the intrinsic properties of galaxies (e.g., the lower stellar metallicity). The mode of star formation at any given time, however, can also be important. This is because the definition of \(\xi_{ion}\) is linked to the lifetimes of massive stars (Kennicutt & Evans, 2012). Being produced by recombination in H ii regions, the H\(\beta\) luminosity traces the presence of massive stars with short lifetimes (tens of Myr), while the UV continuum can also be generated by intermediate-mass stars, which have longer lifetimes. Accordingly, if galaxies were, on average, characterized by shorter and more frequent bursts of star formation at higher redshifts (for example, because their dark-matter halo is still growing and feedback could have a stronger impact than in galaxies at low redshift), we would expect a different \(\xi_{ion}\) distribution, even for the same physical properties such as metallicity and IMF. In the calculation of \(\dot{N}_{ion}\), \(f_{esc}\) is the term that is least constrained by observations, and the only one that cannot be measured directly for galaxies seen during the EoR.
At redshifts \(z\gtrsim 4\), even if ionizing photons were escaping from galaxies in a proportion similar to their low-redshift counterparts, those photons would be absorbed by the neutral hydrogen in the IGM (e.g., Worseck et al., 2014). Therefore, during the EoR, \(f_{esc}\) can only be estimated using empirical indirect indicators calibrated on low-redshift LyC emitters (Flury et al., 2022). Direct observation of the intrinsically faint LyC is challenging. Until recently, only a few dozen galaxies at \(z<0.4\) were spectroscopically confirmed as LyC emitters (\(f_{esc}>0\)) (Bergvall et al., 2006; Leitet et al., 2011; Borthakur et al., 2014; Leitherer et al., 2016; Izotov et al., 2016, 2018; Flury et al., 2022), and even fewer LyC emitters at \(2<z<4\) have been identified (Vanzella et al., 2016; de Barros et al., 2016; Shapley et al., 2016; Bian et al., 2017; Rivera-Thorsen et al., 2019; Fletcher et al., 2019; Marques-Chaves et al., 2021; Saldana-Lopez et al., 2022). Using indirect probes of the LyC escape fraction, such as high values of O\({}_{32}\), steep UV continuum slopes, and high star formation rate surface densities, various authors have been successful in identifying a population of LyC emitting galaxies (Chisholm et al., 2018; Naidu et al., 2020; Flury et al., 2022; Saldana-Lopez et al., 2022; Chisholm et al., 2022). These indicators, however, may not directly provide the value of \(f_{esc}\), as the escape is a complex process that depends on the neutral gas density and covering fraction (Bassett et al., 2019).

In this paper, we present a new analysis of a sample of gravitationally lensed galaxies at \(z>8\) discovered in the RX J2129 galaxy-cluster field (Langeroodi et al., 2022; Williams et al., 2022). We refer to these three galaxies as the RXJ2129 high-\(z\) galaxies. Using these galaxies and inference based on the analysis of the low-redshift LyC survey by Flury et al. (2022), we present a new empirical model for the galaxy contribution to the reionization history. The structure of the paper is as follows. In Section 2, we describe the observations and our measurements. We compare our objects with the low-redshift Lyman continuum emitters in Section 3. In Section 4, we present our estimation of the reionization history, and we conclude in Section 5. Throughout the paper, we denote by \(f_{esc}\) both the singular and plural absolute Lyman continuum "escape fraction" and "escape fractions", and we assume a \(\Lambda\)CDM cosmology with \(H_{0}=67.66\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{b}=0.04897\), and \(\rho_{c}=8.5988\times 10^{-30}\) g cm\({}^{-3}\) (Planck Collaboration et al., 2020).

## 2 Observations and Analysis

The details of the observations and the data reduction are reported in the companion papers Williams et al. (2022) and Langeroodi et al. (2022). Briefly, we obtained imaging of the RXJ2129 cluster field with the _JWST_ NIRCam instrument in the F115W, F150W, F200W, F277W, F356W, and F444W filters as part of the Director's Discretionary program (DD-2767; PI: P. Kelly) to obtain follow-up spectroscopy of the strongly lensed SN 2022riv (Kelly et al., 2022). We identified three high-redshift galaxy candidates using the EAZY (Brammer et al., 2008) photometric redshift estimation algorithm. The follow-up spectroscopy of the RXJ2129 cluster field was obtained using the NIRSpec instrument on _JWST_ in Multi-Object Spectroscopy (MOS) mode.
The spectra cover wavelengths from 0.6 \(\mu\)m to 5.3 \(\mu\)m, and the spectral resolution ranges from \(R\approx 50\) on the blue end to \(R\approx 400\) on the red end. The flux calibration for the NIRSpec spectroscopy was performed in the PHOTOM step of the _JWST_ Spec2Pipeline1. Aperture corrections were applied to the NIRSpec data in the PATHLOSS step of the Spec2Pipeline. This step calculates the expected slit losses for a point source at a given position within the shutter. Since our sources are very small, with half-light radius \(R_{e}\simeq 0.04\pm 0.01\) arcseconds (Williams et al., 2022), we did not apply any additional aperture correction. Footnote 1: [https://jwst-pipeline.readthedocs.io/](https://jwst-pipeline.readthedocs.io/)

In Figure 1, we show the rest-frame UV spectra, with associated errors, of RXJ2129-ID 11027 at redshift \(z=9.51\) and RXJ2129-ID 11002 at \(z=8.16\). The rest-frame UV spectrum of RXJ2129-ID 11022 falls outside of the spectral range covered by the detector. Note that in the RXJ2129-ID 11002 spectrum, the masked peak at 1125 Å is caused by a cosmic-ray hit rather than Ly\(\alpha\) emission.

### Ultraviolet Properties

In this section we describe how we derive the galaxy properties which will be needed in Section 4 to compute the average ionizing background (i.e., to compute \(\dot{N}_{ion}\)), namely the 1500 Å absolute UV magnitude (\(M_{UV}\)), the slope of the spectral continuum, \(\beta\), the escape fraction of ionizing radiation, \(f_{esc}\), and the ionizing photon production efficiency, \(\xi_{ion}\). Other properties, such as measurements of rest-frame optical emission lines, are presented in the companion paper on the mass-metallicity relation (Langeroodi et al., 2022).

#### 2.1.1 \(M_{UV}\) and the slope of the stellar continuum, \(\beta\)

To measure \(M_{UV}\) we use the rest-frame UV spectrum, when available. Specifically, we measure \(M_{UV}\) by averaging the flux density between 1400 Å and 1600 Å. This is a spectral region typically free from strong emission lines. For galaxy ID 11022, we calculate \(M_{UV}\) using the flux density computed in the F150W filter, which corresponds to a rest-frame wavelength of 1650 Å. The analysis of the spectral energy distributions presented in Williams et al. (2022) and Langeroodi et al. (2022) suggests that they suffer a low level of dust attenuation. To calculate the observed UV continuum slopes we do not apply any dust correction to the observed magnitudes. The color excess E(B\(-\)V) is then estimated with the observed UV continuum slopes (Chisholm et al., 2022). Since the spectrum of galaxy ID 11027 suffers from contamination (Figure 1), we measure the UV continuum slopes, \(\beta\), defined as \(f_{\lambda}\propto\lambda^{\beta}\), by fitting a power-law function to the observed flux densities in the photometric bands F150W, F200W, and F277W. We used Markov Chain Monte Carlo sampling to sample the posterior on the parameter \(\beta\). The observed UV continuum slopes \(\beta\simeq-2\) suggest a modest level of dust attenuation, with E(B\(-\)V)\({}_{SMC}\simeq 0.03\) (see Chisholm et al., 2022, Section 4 and Appendix A). We adopt the Small Magellanic Cloud-like dust attenuation law from Gordon et al. (2016) with R\({}_{V}\equiv\) A\({}_{V}\)/E(B\(-\)V) = 2.74 to calculate the dust extinction.
#### 2.1.2 Ionizing Photon Production Efficiency \(\xi_{ion}\)

The ionizing photon production efficiency \(\xi_{ion}\) is defined as the ratio between the ionizing photon production rate \(Q_{0}\) and the non-ionizing UV luminosity density at 1500 Å, \(L_{\nu}(1500)\): \(\xi_{ion}=Q_{0}/L_{\nu}(1500)\). Assuming Case B recombination theory (Osterbrock, 1989; Leitherer and Heckman, 1995), we can write \(Q_{0}\) in terms of the observed H\(\beta\) luminosity, L(H\(\beta\)), and the escape fraction of ionizing radiation, \(f_{esc}\) (i.e., \(Q_{0}\propto L(H\beta)/(1-f_{esc})\)). Then, \(\xi_{ion}\) can be estimated as:

\[\xi_{ion}=\frac{L(\mathrm{H}\beta)}{4.76\times 10^{-13}(1-f_{esc})L_{\nu}(1500)}. \tag{1}\]

There are two sources of uncertainty in this estimation of \(\xi_{ion}\): \(f_{esc}\) and the dust extinction. The dust extinction (\(A_{V}\simeq 0.1\)) decreases log(\(\xi_{ion}\)) by \(\sim 0.13\) dex. In order to quantify the uncertainties introduced by the unknown \(f_{esc}\), we introduce the photon production efficiency computed for \(f_{esc}=0\), \(\xi_{ion}^{0}=\xi_{ion}(f_{esc}=0)\).

#### 2.1.3 Escape Fraction \(f_{esc}\)

The escape fraction of ionizing radiation cannot be directly measured at \(z\gtrsim 4\) because the optical depth for ionizing photons is high (e.g., Worseck et al., 2014). Accordingly, we need to use indirect estimators of \(f_{esc}\), calibrated at lower redshifts using galaxies as close as possible to those that we observe at \(z\gtrsim 6\), during the EoR. Schaerer et al. (2022) demonstrated that local extreme emission line galaxies have properties similar to those of the galaxies _JWST_ has been uncovering. A number of studies have measured \(f_{esc}\) in the local universe and found relations between \(f_{esc}\) and galaxy properties. Although a number of \(f_{esc}\) indicators (e.g., O32, \(\beta\)) have been successfully identified using low-redshift LyC emitters, the scatter between the value of \(f_{esc}\) and the value of its indirect estimator is observed to be substantial (Flury et al., 2022). These indicators, therefore, may not be sufficient for an accurate estimate of \(f_{esc}\) due to the complicated escape process involving gas density and covering fraction (Bassett et al., 2019). Here, we use the relation between \(f_{esc}\) and \(\beta_{1500}\), the UV slope between 1300 Å and 1800 Å, discussed in Chisholm et al. (2022). The relation is given as follows:

\[f_{esc}(\beta_{1500})=(1.3\pm 0.6)\times 10^{-4}~{}10^{(-1.22\pm 0.10)\beta_{1500}}. \tag{2}\]

Figure 1: The rest-frame UV spectra of RXJ2129-ID 11027 at \(z=9.51\) (top) and RXJ2129-ID 11002 at \(z=8.16\) (bottom). The black lines are the spectra and the gray shaded areas are the uncertainty. In the RXJ2129-ID 11002 spectrum, the masked peak at 1125 Å is caused by a cosmic-ray hit.

We report the UV properties of the RXJ2129 high-\(z\) galaxies in Table 1. For the three galaxies in the SMACS0723 sample, we adopt the magnification, UV absolute magnitudes, and photometric UV continuum slopes reported in Schaerer et al. (2022). Since the H\(\beta\) flux is not reported in Schaerer et al. (2022), we adopt the metallicity, emission line fluxes, and ratios from Curti et al. (2023). For completeness, we report the properties of the SMACS0723 high-\(z\) galaxies in Table 2.
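As a quick numerical companion to equations (1) and (2), the minimal sketch below evaluates both relations. Only the numerical coefficients come from the text; the function names and the example input are illustrative.

```python
def xi_ion(L_Hbeta, L_nu_1500, f_esc=0.0):
    """Equation (1): xi_ion in Hz erg^-1, from the H-beta luminosity
    (erg s^-1), the UV luminosity density at 1500 A (erg s^-1 Hz^-1),
    and the LyC escape fraction; f_esc = 0 gives xi_ion^0."""
    return L_Hbeta / (4.76e-13 * (1.0 - f_esc) * L_nu_1500)

def f_esc_beta(beta_1500):
    """Equation (2), using the central values of the coefficients only."""
    return 1.3e-4 * 10.0 ** (-1.22 * beta_1500)

# For beta_1500 = -2 this returns f_esc ~ 0.04, of the same order as the
# few-percent values listed in Tables 1 and 2.
print(f_esc_beta(-2.0))
```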
## 3 Comparison with low redshift analogs

We compare the properties of the \(z>7.5\) galaxies, including the RXJ2129 high-\(z\) galaxies, the SMACS0723 high-\(z\) galaxies, and the galaxies recently reported in the _CLASS-JWST_ program (Mascia et al., 2023) and the _CEERS_ survey (Tang et al., 2023), to those of the low-redshift galaxies studied in the Low-\(z\) Lyman Continuum survey (LzLCs; \(z\sim 0.3\); Flury et al. 2022). The LzLCs includes 66 newly observed low-redshift (\(z<0.4\)) galaxies and 23 galaxies (\(z<0.46\)) from the literature (Izotov et al., 2016, 2018, 2018, 2021; Wang et al., 2019), for which LyC emitters (LCEs) are defined as galaxies with Lyman continuum detected with 97.725% confidence. We refer to these 89 galaxies as the LzLCs sample. In this sample, 50 galaxies are confirmed as LyC emitters (LCE), and 39 galaxies are not detected in the LyC (non-LCE). The LzLC objects were selected to span a broad range in physical properties associated with a large probability of a high escape fraction of ionizing radiation. Specifically, galaxies were selected to have high O\({}_{32}\) ratios, steep UV continuum slopes, and high star formation rate surface densities, \(\Sigma_{SFR}\).

### The probability of a galaxy being a LyC emitter

We combine the O\({}_{32}\) ratio and the slope of the UV continuum into a combined indicator, \(\beta O_{32}=\log(\mathrm{O}_{32})-\beta\). We choose \(\beta\), O\({}_{32}\), and \(M_{UV}\) in our model for their easy accessibility in the observations. In Figure 2, we show how, in a diagram of \(\beta O_{32}\) as a function of the absolute UV magnitude, LyC emitters (LCE, open circles) are efficiently separated from non-LyC emitters (non-LCE, filled black circles). We also explored whether a correlation exists between \(\beta O_{32}\) and \(f_{esc}\), but we do not observe any simple relationship between these quantities, strengthening the conclusion that indirect estimators are mostly useful to identify LyC emitting galaxies, rather than to estimate the value of \(f_{esc}\). Accordingly, we apply a logistic regression to the LzLCs sample, to estimate the probability that a galaxy is a LCE based on \(M_{UV}\) and \(\beta O_{32}\). The logistic discriminator can be written as:

\[P_{LCE}=\frac{1}{1+e^{-(b_{0}+b_{1}M_{UV}+b_{2}\beta O_{32})}}, \tag{3}\]

where \(P_{LCE}\) is the probability of the galaxy being a LyC emitter, and the best-fit values for (\(b_{0}\), \(b_{1}\), \(b_{2}\)) are (\(-25.82\), \(-1.09\), \(1.72\)).

\begin{table} \begin{tabular}{c c c c} \hline \hline RXJ2129 & 11027 & 11002 & 11022 \\ \hline redshift \(z\) & 9.51 & 8.16 & 8.15 \\ magnification \(\mu\) & 19.2\(\pm\)3.6 & 2.23\(\pm\)0.15 & 3.29\(\pm\)0.33 \\ 12+log(O/H)\({}^{(a)}\) & 7.47\(\pm\)0.10 & 7.65\(\pm\)0.09 & \(<\)7.72 \\ O\({}_{32}\) & 13.28\(\pm\)3.75 & 13.51\(\pm\)5.08 & \(>\)6.16 \\ H\(\beta\) flux\({}^{(b)}\) & 2.17\(\pm\)0.16 & 0.88\(\pm\)0.12 & 0.16\(\pm\)0.08 \\ \(M_{UV}\)\({}^{(c)}\) & \(-\)20.78\(\pm\)0.26 & \(-\)20.69\(\pm\)0.17 & \(-\)18.74\(\pm\)0.18 \\ \(M_{UV}\)\({}^{(d)}\) & \(-\)18.20\(\pm\)0.28 & \(-\)20.31\(\pm\)0.19 & \(-\)17.91\(\pm\)0.20 \\ \(\beta\) & \(-\)1.92\(\pm\)0.19 & \(-\)2.29\(\pm\)0.21 & \(-\)2.12\(\pm\)0.44 \\ E(B\(-\)V)\({}_{SMC}\) & 0.04 & 0.02 & 0.03 \\ log(\(\xi_{ion}^{0}\)) & 25.60\(\pm\)0.11 & 25.18\(\pm\)0.09 & 25.28\(\pm\)0.17 \\ f\({}_{esc}(\beta)\) & 0.06\(\pm\)0.03 & 0.08\(\pm\)0.04 & 0.05\(\pm\)0.02 \\ \hline \end{tabular} \end{table}

Table 1: The derived properties for the RXJ2129 high-\(z\) galaxies. \({}^{(a)}\) Measured with the strong-line calibration of Izotov et al. (2019); the measurements for RXJ2129 ID 11022 are reported as 1-\(\sigma\) limits (Langeroodi et al., 2022). \({}^{(b)}\) Observed, in units of 10\({}^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\) (Williams et al., 2022; Langeroodi et al., 2022). \({}^{(c)}\) Observed, without correction for magnification. For RXJ2129 ID 11022, we derive the UV magnitude from the photometry at rest-frame wavelength \(\sim\)1650 Å. \({}^{(d)}\) Corrected for lensing and dust extinction.
\begin{table} \begin{tabular}{c c c c} \hline \hline SMACS & 04590 & 06355 & 10612 \\ \hline redshift \(z\) & 8.495 & 7.664 & 7.660 \\ magnification \(\mu\) & 7.9 & 1.7 & 1.7 \\ 12+log(O/H) & 6.99\(\pm\)0.11 & 8.24\(\pm\)0.07 & 7.73\(\pm\)0.12 \\ O\({}_{32}\) & 12.8\(\pm\)1.4 & 7.4\(\pm\)0.3 & 23.7\(\pm\)6.4 \\ H\(\beta\) flux\({}^{(a)}\) & 1.54\(\pm\)0.06 & 2.11\(\pm\)0.05 & 1.19\(\pm\)0.04 \\ \(M_{UV}\)\({}^{(b)}\) & \(-\)20.29 & \(-\)21.09 & \(-\)20.38 \\ \(M_{UV}\)\({}^{(c)}\) & \(-\)18.06 & \(-\)20.51 & \(-\)19.81 \\ \(\beta\) & \(-\)2.20\(\pm\)0.15 & \(-\)1.96\(\pm\)0.22 & \(-\)2.31\(\pm\)0.11 \\ f\({}_{esc}(\beta)\) & 0.06\(\pm\)0.03 & 0.03\(\pm\)0.01 & 0.08\(\pm\)0.04 \\ log(\(\xi_{ion}^{0}\)) & 25.72\(\pm\)0.04 & 25.44\(\pm\)0.04 & 25.47\(\pm\)0.04 \\ \hline \end{tabular} \end{table}

Table 2: The derived properties for the SMACS0723 high-\(z\) galaxies. \({}^{(a)}\) Observed, in units of 10\({}^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\) (Curti et al., 2023). \({}^{(b)}\) Observed, including lensing (Schaerer et al., 2022). \({}^{(c)}\) Corrected for lensing (Schaerer et al., 2022); we adopted a 0.2 magnitude uncertainty throughout the calculation.

Assuming the discriminator does not evolve with redshift, we find that RXJ2129 ID 11002, SMACS 06355, and SMACS 10612 have P\({}_{LCE}>0.8\), suggesting that these galaxies are likely LyC emitters. SMACS 04590 has a higher O\({}_{32}\) and a steeper \(\beta_{1500}\) than SMACS 06355, but its position in Figure 2 (rightmost yellow point at \(M_{UV}\approx-18\)) suggests that SMACS 04590 is less likely to be a LCE due to its fainter UV magnitude, with \(P_{LCE}=0.39\). We note, however, that the local LzLC sample does not extend to magnitudes fainter than \(\approx-18.5\), so it is not clear where the boundary between LyC and non-LyC emitters lies at the faint end of the UV magnitude range (\(M_{UV}>-19\)), and our logistic discriminator may not be applicable on the faint UV side. The physical connection between the \(f_{esc}\) indicators and \(M_{UV}\) is likely driven by changes in metallicity, dust, and star formation. Faint star-forming galaxies (associated with low-metallicity galaxies) have noticeably different properties compared with their bright counterparts. For instance, the intrinsic UV continuum slope does not get appreciably bluer below 10% solar metallicity (Bouwens et al., 2010; Topping et al., 2022). The changes in O\({}_{32}\) are also less obvious below 10% solar metallicity (Curti et al., 2017; Sanders et al., 2021). Therefore, \(\beta\) and \(O_{32}\) may become less effective as indicators of LyC leakage at the faint UV end. In this paper we simply assume that the logistic discriminator can be extrapolated to the faint end, and we will explore this assumption in a future work.
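Equation (3), with the best-fit coefficients quoted above, can be evaluated directly. The short sketch below does so (the variable names are ours); using the lensing-corrected \(M_{UV}\) values from Tables 1 and 2, it reproduces the probabilities quoted in the text, e.g. \(P_{LCE}\approx 0.9\) for RXJ2129 ID 11002 and \(P_{LCE}\approx 0.39\) for SMACS 04590.

```python
import numpy as np

B0, B1, B2 = -25.82, -1.09, 1.72   # best-fit (b0, b1, b2) from the LzLCs sample

def p_lce(m_uv, beta, o32):
    """Equation (3): probability that a galaxy is a LyC emitter."""
    beta_o32 = np.log10(o32) - beta          # combined indicator log(O32) - beta
    return 1.0 / (1.0 + np.exp(-(B0 + B1 * m_uv + B2 * beta_o32)))

print(p_lce(-20.31, -2.29, 13.51))   # RXJ2129 ID 11002 -> ~0.90
print(p_lce(-18.06, -2.20, 12.8))    # SMACS 04590      -> ~0.39
```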
## 4 Reionization history

In this section, we construct an empirical model for the reionization history. Specifically, in the calculation of the redshift evolution of the neutral fraction (\(x_{\rm H\,i}\)) we consider (1) the empirical constraints on the probability that a galaxy is a LCE, (2) the dependency of \(f_{esc}\) on \(\beta\), and (3) the dependency of \(\log(\xi_{ion})\) on \(M_{UV}\) or redshift \(z\).

### The empirical model

We calculate the neutral fraction \(x_{\rm H\,i}\) by solving

\[\frac{d(1-x_{\rm H\,i})}{dt}=\frac{\dot{N}_{ion}}{n_{\rm H}}-\frac{(1-x_{\rm H\,i})}{t_{\rm rec}}, \tag{4}\]

where \(\dot{N}_{ion}\) is the ionizing photon production rate, \(n_{\rm H}\) is the comoving gas number density, and \(t_{\rm rec}\) is the recombination time scale (Madau et al., 1999; Robertson et al., 2013; Ishigaki et al., 2018). \(n_{\rm H}\) and \(t_{\rm rec}\) are defined as

\[n_{\rm H}=\frac{X_{p}\Omega_{b}\rho_{c}}{m_{\rm H}}, \tag{5}\]

\[t_{\rm rec}=[C_{\rm H\,ii}\;\alpha_{B}(T)(1+Y_{p}/4X_{p})n_{\rm H}(1+z)^{3}]^{-1}, \tag{6}\]

where \(X_{p}=0.76\) and \(Y_{p}=0.24\) are the primordial mass fractions of hydrogen and helium (Planck Collaboration et al., 2020), \(\Omega_{b}\) is the baryon energy density fraction, \(\rho_{c}\) is the critical density, \(C_{\rm H\,ii}\equiv\langle n_{\rm H\,ii}^{2}\rangle/\langle n_{\rm H\,ii}\rangle^{2}\) is the clumping factor, and \(\alpha_{B}(T)\) is the case B recombination coefficient. Here we assume \(C_{\rm H\,ii}=3\) and \(\alpha_{B}=2.6\times 10^{-13}\) cm\({}^{3}\) s\({}^{-1}\), for an electron temperature of \(10^{4}\) K. We solve equation 4 iteratively, assuming the boundary condition that \(x_{\rm H\,i}=1\) at \(z=15\), i.e., we assume that the first sources of ionizing photons appear at this redshift.2 Footnote 2: We note that starting at \(z=15\) or \(z=20\) introduces a negligible difference in our model.

The ionizing photon production rate depends on the escape fraction of ionizing radiation, the ionizing photon production efficiency, and the volume density of UV luminosity (\(\dot{N}_{ion}=f_{esc}\,\xi_{ion}\,\rho_{UV}\)). In what follows we explain how we include the empirical constraints derived in the previous sections in the calculation of \(\dot{N}_{ion}\) at each time step. First, we derive the population-average escape fraction \(\langle f_{esc}\rangle\) as a function of the UV absolute magnitude, accounting for the dependency of \(f_{esc}\) on \(\beta\) and for the probability of each galaxy being a LCE. In each bin3 of \(M_{UV}\), we simulate 100 galaxies, each with a different value of \(\beta\) and \(\log(\rm O_{32})\), drawn from the distributions described below. These values are used, together with \(M_{UV}\), to compute the probability \(\rm P_{LCE}(M_{UV},\,\beta O_{32})\), using Equation 3. We then draw a Boolean value based on \(\rm P_{LCE}\) to determine whether or not the galaxy is a LCE. If a galaxy is a non-LCE, we set its \(f_{esc}=0\). If a galaxy is a LCE, we draw its \(f_{esc}\) value from the multivariate normal distribution of the coefficients described by Equation 2. The \(\beta\) values are drawn from the observations of \(z>8\) galaxies with the following multivariate normal distribution: \(\beta=(-0.17\pm 0.05)\times M_{UV}-5.4\pm 1.2\), truncated at \(\beta=-3.5\) (Cullen et al., 2022). The \(\log(\rm O_{32})\) values are generated from a normal distribution of \(0.5\pm 0.1\) (Sanders et al., 2023). Finally, we compute \(\langle f_{esc}\rangle=\frac{1}{100}\sum_{i=1}^{100}f_{esc}^{i}\) as the average \(f_{esc}\) of the 100 galaxies in each \(M_{UV}\) bin, and we repeat this calculation 500 times. Footnote 3: We consider 45 bins of 0.2 magnitude between \(-13.5\) and \(-23\).
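The Monte Carlo procedure just described is compact enough to sketch in full. The version below handles one \(M_{UV}\) bin in one of the 500 repetitions, reusing the `p_lce` helper from the previous sketch; the distribution parameters are those quoted in the text, while treating the coefficient pairs of Equation 2 and of the \(\beta\)-\(M_{UV}\) relation as independent Gaussians (rather than the quoted multivariate normals) is our simplification.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_fesc_in_bin(m_uv, n_gal=100):
    # beta = (-0.17 +/- 0.05) M_UV - 5.4 +/- 1.2, truncated at -3.5 (Cullen et al. 2022)
    beta = rng.normal(-0.17, 0.05, n_gal) * m_uv + rng.normal(-5.4, 1.2, n_gal)
    beta = np.maximum(beta, -3.5)
    log_o32 = rng.normal(0.5, 0.1, n_gal)        # Sanders et al. (2023)
    prob = p_lce(m_uv, beta, 10.0 ** log_o32)    # Equation (3), vectorized
    is_lce = rng.random(n_gal) < prob            # Boolean draw from P_LCE
    a = rng.normal(1.3e-4, 0.6e-4, n_gal)        # coefficients of Equation (2)
    k = rng.normal(-1.22, 0.10, n_gal)
    f_esc = np.where(is_lce, a * 10.0 ** (k * beta), 0.0)  # f_esc = 0 for non-LCEs
    return f_esc.mean()                          # <f_esc> for this M_UV bin
```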
The results of our simulations are shown in Figure 5. In the top panel, we show in red the average \(\rm P_{LCE}\) and in green the average \(f_{esc}\) computed without accounting for \(\rm P_{LCE}\), as a function of \(M_{UV}\). We find that brighter galaxies are more likely to be LCEs than fainter galaxies, and have lower \(f_{esc}\) when \(\rm P_{LCE}\) is not accounted for. These trends explain the behavior we observe in the bottom panel, where we show the average escape fraction of the population, which now includes \(\rm P_{LCE}\), as a function of \(M_{UV}\). For galaxies brighter than \(M_{UV}\approx-19\), \(\langle f_{esc}\rangle\) does not depend strongly on \(M_{UV}\), and we find \(\langle f_{esc}\rangle=0.22\pm 0.05\). For fainter galaxies, we see a trend, with fainter galaxies having lower \(\langle f_{esc}\rangle\). This \(\langle f_{esc}\rangle\) vs. \(M_{UV}\) evolution in our model behaves similarly to the \(\langle f_{esc}\rangle\) inferred from the Ly\(\alpha\) emitters in Matthee et al. (2022). To demonstrate the redshift evolution of \(f_{esc}\), we also model the \(P_{LCE}\), \(f_{esc}\), and \(\langle f_{esc}\rangle\) of the galaxies at redshift \(z=4\) (dashed lines). The \(\beta\) vs. \(M_{UV}\) dependency is adopted from Bouwens et al. (2014) (see also Chisholm et al. 2022), with an intrinsic scatter \(\Delta\beta\sim 0.4\) in order to model the diverse \(\beta\) of individual galaxies. In our model, while \(P_{LCE}\) does not evolve much with redshift, \(f_{esc}\) and \(\langle f_{esc}\rangle\) decrease at lower redshift as the UV continuum slopes \(\beta\) become flatter/redder. The \(\langle f_{esc}\rangle\simeq 0.02\) at \(z=4\) is lower than that of the Lyman break galaxies at \(z\sim 3\) in Pahl et al. (2021), which have an average \(f_{esc}=0.06\pm 0.01\).

Figure 4: Ionizing photon production efficiency measured over a wide range of redshifts. The blue line and blue shaded area are the redshift evolution derived in Matthee et al. (2017) and its 1-\(\sigma\) limit.

For log(\(\xi_{ion}\)), we propose two models: the \(\xi_{ion}\)(UV) model and the \(\xi_{ion}(z)\) model. In the \(\xi_{ion}\)(UV) model, we assume \(\xi_{ion}^{0}\) is linearly dependent on \(M_{UV}\). The linear relation is obtained with a minimum-\(\chi^{2}\) fit to all galaxies at \(z>7.5\) in Figure 3: log(\(\xi_{ion}^{0}\))(UV) = \(0.11(M_{UV}+20.0)+25.46\), flattened at log(\(\xi_{ion}^{0}\)) = 24.5 and 26. We exclude the brightest object from our fitting, as it may potentially be powered by an AGN (Mainali et al., 2018; Tang et al., 2023). In the second model, we adopt \(\xi_{ion}(z)\) as a function of redshift. We use the 1-\(\sigma\) upper limit of the redshift dependence in Matthee et al. (2017): log(\(\xi_{ion}\))(\(z\)) = 24.493 + 1.180 log(1+\(z\)), which is consistent with more \(z>6\) objects in Figure 4.
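The two \(\xi_{ion}\) parameterizations above reduce to one-line functions; a sketch (ours, with illustrative names) follows.

```python
import numpy as np

def log_xi_ion_uv(m_uv):
    """UV-dependent model: log(xi_ion^0), clipped to the quoted floor/ceiling."""
    return np.clip(0.11 * (m_uv + 20.0) + 25.46, 24.5, 26.0)

def log_xi_ion_z(z):
    """Redshift-dependent model: 1-sigma upper limit of Matthee et al. (2017)."""
    return 24.493 + 1.180 * np.log10(1.0 + z)

print(log_xi_ion_uv(-20.0), log_xi_ion_z(8.0))   # 25.46, ~25.62
```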
To explore the impact of the newly discovered bright galaxy population at \(z>8\) on reionization, we incorporate two luminosity functions at \(z>8\) into our models: a Schechter LF (Bouwens et al., 2021) and a double power-law (DP) LF (Harikane et al., 2022). We find that the IGM neutral fractions at \(z>8\) derived with the DP LF are only 1% lower than those of the Schechter LF (see Figure 6). Therefore, throughout this paper, we adopt the Schechter luminosity function described in Bouwens et al. (2021), with

\[\alpha=-1.94-0.11(z-6), \tag{7}\]

\[\Psi=0.40\times 10^{-3}\ 10^{(-0.33(z-6)+(-0.024(z-6)^{2}))}, \tag{8}\]

\[M_{*}=-21.03-0.04(z-6). \tag{9}\]

We integrate over UV magnitudes from \(-23\) to \(-13.5\). The UV magnitude is truncated at \(-13.5\) to match previous studies (Livermore et al., 2017; Ishigaki et al., 2018; Atek et al., 2018; Naidu et al., 2020; Trebitsch et al., 2022). The required global \(f_{esc}\) may be lower if the integration is truncated at a fainter UV magnitude limit (\(M_{UV}>-13.5\)), since we would then include a numerous population of "ultra-faint" galaxies. However, these "ultra-faint" galaxies play little role in reionization in our model, given our assumption on \(f_{esc}\) in Section 3.1 (see Figure 5).
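Equations (4)-(9) can be integrated with a simple explicit scheme once the emissivity \(\dot{N}_{ion}(z)\) is assembled from \(\langle f_{esc}\rangle\), \(\xi_{ion}\), and the luminosity function. The sketch below is our construction, not the authors' code: the Euler stepping is our choice, \(\Omega_{m}=0.31\) is assumed (the text quotes only \(H_{0}\), \(\Omega_{b}\), and \(\rho_{c}\)), and the emissivity is left as a user-supplied function.

```python
import numpy as np

m_H = 1.6726e-24                          # g
X_p, Y_p = 0.76, 0.24                     # primordial H and He mass fractions
Omega_b, rho_c = 0.04897, 8.5988e-30      # baryon fraction; critical density, g cm^-3
n_H = X_p * Omega_b * rho_c / m_H         # comoving hydrogen density, ~1.9e-7 cm^-3
C_HII, alpha_B = 3.0, 2.6e-13             # clumping factor; cm^3 s^-1 at 10^4 K

H0 = 67.66 * 3.2408e-20                   # km s^-1 Mpc^-1 -> s^-1
Omega_m = 0.31                            # assumed (not quoted in the text)

def hubble(z):
    return H0 * np.sqrt(Omega_m * (1 + z) ** 3 + 1 - Omega_m)

def t_rec(z):                             # equation (6), in seconds
    return 1.0 / (C_HII * alpha_B * (1 + Y_p / (4 * X_p)) * n_H * (1 + z) ** 3)

def schechter(M, z):                      # equations (7)-(9), Mpc^-3 mag^-1
    alpha = -1.94 - 0.11 * (z - 6)
    psi = 0.40e-3 * 10 ** (-0.33 * (z - 6) - 0.024 * (z - 6) ** 2)
    x = 10 ** (0.4 * (-21.03 - 0.04 * (z - 6) - M))
    return np.log(10) / 2.5 * psi * x ** (alpha + 1) * np.exp(-x)

def ionized_fraction(N_ion_dot, z_start=15.0, z_end=6.0, dz=0.01):
    """Euler-step equation (4); N_ion_dot(z) is the emissivity in s^-1 cm^-3."""
    q = 0.0                               # q = 1 - x_HI; fully neutral at z_start
    for z in np.arange(z_start, z_end, -dz):
        dt = dz / ((1 + z) * hubble(z))   # |dt/dz| in an FLRW universe
        q = min(q + (N_ion_dot(z) / n_H - q / t_rec(z)) * dt, 1.0)
    return q
```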
### The results of the empirical model

We present the reionization history of the two models in Figure 6, together with the constraints derived from observations: the Ly\(\alpha\) equivalent width (EW(Ly\(\alpha\))) of galaxies (Mason et al., 2018, 2019; Hoag et al., 2019); the clustering of Ly\(\alpha\) emitter galaxies (Ouchi et al., 2010; Greig and Mesinger, 2017); the Ly\(\alpha\) and Ly\(\beta\) dark fraction (McGreer et al., 2015); and QSO damping wings (Davies et al., 2018). As a reference, we show two simple models where all galaxies have a uniform constant \(f_{esc}=0.2\) and log(\(\xi_{ion}^{0}\)) = 25.3 (log(\(\xi_{ion}\)) = 25.4), adopting either the Schechter luminosity function or the double power-law luminosity function. Both of our models show similar histories, and proceed slightly earlier than the simple models. For our models, we define the beginning redshift of reionization as \(z_{90}\), when the IGM neutral fraction \(x_{\rm H\,i}=0.9\). For both the \(\xi_{ion}\)(UV) model and the \(\xi_{ion}\)(\(z\)) model, \(z_{90}\sim 9.0\). We define the ending redshift of reionization as \(z_{10}\), with \(x_{\rm H\,i}=0.1\). For the \(\xi_{ion}\)(UV) model, \(z_{10}\sim 6.4\), and for the \(\xi_{ion}\)(\(z\)) model, \(z_{10}\sim 6.2\).

In Figure 7 we show, for both models, the relative contribution to reionization from galaxies with different UV luminosities. We group the galaxies by their \(M_{UV}\) into UV faint (\(-16<M_{UV}<-13.5\)), UV intermediate (\(-19<M_{UV}<-16\)), and UV bright (\(-23<M_{UV}<-19\)) galaxies. In both models, the UV intermediate galaxies are the major contributors to reionization, producing \(\sim 50\%\) of the ionizing photons at all redshifts. The UV bright and faint galaxies show opposite evolutions. The relative contribution of bright galaxies increases as time goes by, similar to the results in Naidu et al. (2020), reaching 30% to 40% at the end of reionization, while the relative contribution of faint galaxies decreases over time. The evolution in Figure 7 mainly results from our assumption in Section 3.1 that UV faint galaxies require a higher \(\beta O_{32}\) to be LCEs. We model \(\beta\) based on studies that found \(\beta\) to be steeper/bluer (lower values) in UV-fainter galaxies at high redshift (e.g., Cullen et al., 2022), while assuming that log(O\({}_{32}\)) does not evolve with \(M_{UV}\). Therefore, as we sample log(O\({}_{32}\)) from the same normal distribution at different \(M_{UV}\), the \(\langle f_{esc}\rangle\) of the UV faint galaxies will be lower (Figure 5). If the UV faint galaxies have higher log(O\({}_{32}\)), or if our logistic discriminator is not applicable on the faint UV side (\(M_{UV}>-18\)), then we underestimate the contribution of ionizing photons from the faint galaxies. In this case, our model would produce an even earlier reionization history, and to match the observational constraints, we would have to lower our estimates of \(f_{esc}\), log(\(\xi_{ion}\)), or the luminosity function.

Figure 5: Top panel: \(P_{LCE}\) (red) and \(f_{esc}(\beta)\) (green) as a function of absolute UV magnitude. \(P_{LCE}\) is the probability that a galaxy is a LCE. \(f_{esc}(\beta)\) is the average \(f_{esc}\) value computed without accounting for \(P_{LCE}\). Bottom panel: the blue line shows \(\langle f_{esc}\rangle\), the average of \(f_{esc}\). The shaded areas are the 90% and 10% percentiles of the 500 runs.

Figure 6: Reionization history. The blue and green dashed lines are the \(\xi_{ion}\)(UV) model and the \(\xi_{ion}(z)\) model, respectively. The black line and red dotted line are simple models with constant \(f_{esc}=0.2\), constant \(\xi_{ion}^{0}=25.3\), and the single Schechter LF from Bouwens et al. (2021) and the double power-law LF from Harikane et al. (2022), respectively. The shaded areas are the 90% and 10% percentiles of the 500 runs. The observational constraints are shown as white markers.

Figure 7: The \(\dot{N}_{ion}\) fraction of galaxies grouped by \(M_{UV}\) as a function of redshift. The blue, green, and red regions show the relative contributions from the UV faint (\(-16<M_{UV}<-13.5\)), intermediate (\(-19<M_{UV}<-16\)), and bright (\(-23<M_{UV}<-19\)) galaxies, respectively. The shaded areas show the 90% and 10% percentiles of the 500 runs. The vertical dashed lines mark the redshifts \(z_{90}\) and \(z_{10}\), when the IGM neutral fraction \(x_{\rm H\,i}\) is 0.9 and 0.1, respectively.

## 5 Conclusions

In this paper we present a new analysis of the rest-frame UV and optical spectra of a new sample of \(z>8\) galaxies discovered behind the gravitational lensing cluster RX J2129.4+0009 (Williams et al., 2022; Langeroodi et al., 2022). We combine these observations with those of the \(z>7.5\) galaxies for which similar data are available (Pontoppidan et al., 2022; Arellano-Cordova et al., 2022; Schaerer et al., 2022; Trump et al., 2022; Carnall et al., 2023; Curti et al., 2023; Rhoads et al., 2023; Mascia et al., 2023; Tang et al., 2023). We compare the properties of these galaxies with those observed as part of the low-redshift Lyman continuum survey (Flury et al., 2022). The high [O iii]\(\lambda 5007\)/[O ii]\(\lambda 3727\) emission line ratios (O\({}_{32}\)) and steep UV continuum slopes, \(\beta<-2\), of our sample are consistent with the values observed for low-redshift Lyman continuum emitters, suggesting that these galaxies potentially contribute to the ionizing budget of the intergalactic medium. We use the H\(\beta\) and UV luminosities to estimate the average ionizing photon production efficiency of our sample. We apply a logistic regression (equation 3) to estimate the probability of a galaxy being a Lyman continuum emitter based on the measured \(M_{UV}\) and \(\beta O_{32}\) values. Using this probability, we construct an empirical model that estimates the galaxy contribution to the reionization budget based on the observable quantities (\(M_{UV}\), \(\beta\), \(O_{32}\)).
Our analysis shows that at \(z=8\), the average escape fraction of the galaxy population (i.e., including both LyC emitters and non-emitters) varies with \(M_{UV}\): \(f_{esc}\) is approximately 20% for bright galaxies (\(M_{UV}<-19\)) and decreases toward fainter magnitudes. Galaxies with intermediate UV luminosity (\(-19<M_{UV}<-16\)) contribute half of the ionizing photons throughout the epoch of reionization. The relative contribution of faint versus bright galaxies depends on redshift, with UV bright galaxies (\(-23<M_{UV}<-19\)) becoming more important over time and reaching \(\approx 40\%\) at the end of reionization.

P.L.K. is supported by NSF grant AST-1908823 and anticipated funding from _JWST_ DD-2767. D.L. and J.H. were supported by a VILLUM FONDEN Investigator grant (project number 16599). A.Z. acknowledges support by Grant No. 2020750 from the United States-Israel Binational Science Foundation (BSF) and Grant No. 2109066 from the United States National Science Foundation (NSF), and by the Ministry of Science & Technology, Israel.
2304.11866
Some results on Continuous dependence of fractal functions on the Sierpiński gasket
In this article, we show that $\alpha$-fractal functions defined on the Sierpiński gasket (denoted by $\triangle$) depend continuously on the parameters involved in the construction. In the latter part of this article, the continuous dependence of $\alpha$-fractal functions defined on $\triangle$ on these parameters is shown graphically.
Vishal Agrawal, Ajay Prajapati, Abhilash Sahu, Tanmoy Som
2023-04-24T07:23:07Z
http://arxiv.org/abs/2304.11866v1
# Some results on continuous dependence of fractal functions on the Sierpinski Gasket

###### Abstract.

In this article, we show that \(\alpha\)-fractal functions defined on the Sierpinski gasket (denoted by \(\triangle\)) depend continuously on the parameters involved in the construction. In the latter part of this article, the continuous dependence of \(\alpha\)-fractal functions defined on \(\triangle\) on these parameters is shown graphically.

Key words and phrases: fractal dimension, fractal interpolation, Sierpinski Gasket, continuous dependence. 2010 Mathematics Subject Classification: Primary 28A80; Secondary 41A10.

## 1. Introduction

In the field of Numerical Analysis, the computational implications of interpolation have been a major concern for many years. The theory of interpolation has evolved throughout the development of classical approximation theory. Various interpolation techniques are used in Numerical Analysis and Classical Approximation Theory, and they are based on polynomial, trigonometric, spline, and rational functions. Based on the underlying idea of the model under investigation, these techniques can be applied to a specific data set. It should be emphasized that the nonrecursive interpolation techniques in the literature almost always produce smooth interpolants. However, a great number of real-world phenomena and experimental signals are irregular, and their traces rarely appear smooth. Due to their non-differentiability and complexity, a simple mathematical framework may not adequately describe their smallest geometric complexity. In order to produce an interpolant with a more complex geometric structure, it is necessary to develop a novel interpolation approach. Univariate real-valued interpolation functions constructed on a compact interval in \(\mathbb{R}\) were first proposed by Barnsley [3]. These are referred to as Fractal Interpolation Functions (FIFs for short), and their construction is based on the Iterated Function System (IFS) theory [10]. The groundbreaking research on fractal interpolation in [3] has attracted a lot of interest in the literature and is still going strong. The author of [16] demonstrated that the FIF theory can be used to construct a family of continuous functions with fractal properties from a given continuous function. A number of studies have exposed several significant characteristics of FIFs, including their smoothness, stability, one-sided approximation property and constrained approximation property, as well as the box dimension and Hausdorff dimension of their graphs. There are several research articles available on various types of FIFs; see, for instance, [12, 13, 17, 5, 19, 8]. Celik et al. [4] expanded the notion of FIF to incorporate interpolation of a data set on \(\triangle\). Following this article, Ruan [24] developed FIFs on post critically finite self-similar sets. Kigami [14] is credited with having introduced and analysed these sets. On \(\triangle\), Ri and Ruan [23] established several fundamental characteristics of a space of FIFs. The fractal dimensions of FIFs defined on various domains have been thoroughly investigated in numerous articles; see, for instance, [6, 7, 9, 11, 18, 21, 25, 27, 28, 31, 32]. Recently, Mohapatra et al.
[15] introduced a concept that generalises the notions of the Kannan map and contraction. In [20], Prasad and Verma constructed FIFs on the product of two Sierpinski gaskets. Throughout this paper, we denote the space of all real-valued continuous functions defined on \(\triangle\) by \(C(\triangle,\mathbb{R})\) and the graph of \(f\) by \(graph(f)\).

## 2. Fractal interpolation function on the Sierpinski gasket

We begin by providing a brief overview of the relevant concepts and an introduction to \(\triangle\). The reader may refer to [2, 26, 30] for further information. We begin by recalling an established construction of \(\triangle\) based on an IFS. Consider a set \(V_{0}=\left\{x_{1},x_{2},x_{3}\right\}\) such that the points in \(V_{0}\) are equidistant from each other. Corresponding to each point of \(V_{0}\), define the contraction map \(u_{i}\) on \(\mathbb{R}^{2}\) as follows:

\[u_{i}(t)=\frac{1}{2}\left(t+x_{i}\right),\]

where \(i=1,2,3\). Then, the three contraction maps together with the plane constitute an IFS, which produces \(\triangle\) as its attractor, i.e.,

\[\triangle=\bigcup_{i=1}^{3}u_{i}(\triangle).\]

Define \(V_{1}\) by \(V_{1}=\left\{x_{1},x_{2},x_{3},u_{1}\left(x_{2}\right),u_{2}\left(x_{3}\right),u_{3}\left(x_{1}\right)\right\}\). Let us consider a continuous function \(f:\triangle\rightarrow\mathbb{R}\). Let \(U=\triangle\times\mathbb{R}\) and define maps \(H_{i}:U\to U\) by

\[H_{i}(t,x)=\left(u_{i}(t),M_{i}(t,x)\right),\quad i=1,2,3,\]

where \(M_{i}(t,x):\triangle\times\mathbb{R}\rightarrow\mathbb{R}\) is a contraction map in the last variable, that is,

\[\left|M_{i}(.,x)-M_{i}\left(.,x^{\prime}\right)\right|\leq c\left|x-x^{\prime}\right|,\]

with \(M_{i}\left(p_{j},f\left(p_{j}\right)\right)=f\left(u_{i}\left(p_{j}\right)\right)\) for \(p_{j}\in V_{0}\). In particular, we take

\[M_{i}(t,x)=\alpha_{i}x+f\left(u_{i}(t)\right)-\alpha_{i}b(t),\]

where \(b\in C(\triangle,\mathbb{R})\) is the base function and \(f\in C(\triangle,\mathbb{R})\) the original function, such that \(b\left(p\right)=f\left(p\right)\ \forall\ p\in V_{0}\), and the scale vector \(\alpha=(\alpha_{1},\alpha_{2},\alpha_{3})\in\mathbb{R}^{3}\) satisfies \(\left|\alpha\right|_{\infty}<1\). Thus, we have an IFS \(\mathcal{J}:=\left\{U,H_{i}:i=1,2,3\right\}\).

**Theorem 2.1**.: _[_1_]_ _Let \(f\in C(\triangle,\mathbb{R})\) and \(b\in C(\triangle,\mathbb{R})\). Then, the IFS \(\mathcal{J}\) defined above has a unique attractor \(graph(f^{\alpha})\). The set \(graph(f^{\alpha})\) is the graph of a continuous function \(f^{\alpha}:\triangle\rightarrow\mathbb{R}\), which satisfies \(\left.f^{\alpha}\right|_{V_{1}}=\left.f\right|_{V_{1}}\). Furthermore, we have the following functional equation_

\[f^{\alpha}(t)=f(t)+\alpha_{i}\left(f^{\alpha}-b\right)\left(u_{i}^{-1}(t)\right)\ \forall\ t\in u_{i}(\triangle),\ i\in\{1,2,3\}. \tag{1}\]
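Theorem 2.1 suggests a direct way to approximate \(f^{\alpha}\) numerically: sample \(\triangle\) with the chaos game and unroll the functional equation (1) along each point's address, since the contribution at depth \(n\) is damped by a factor of order \(\alpha^{n}\). The sketch below is our own construction, not code from the paper: the choice of \(V_{0}\) as a unit equilateral triangle and the nearest-vertex cell test are assumptions, while \(f\) and \(b\) are the pair used in Figure 2 (which indeed agree on this \(V_{0}\)).

```python
import numpy as np

V0 = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])  # equidistant points

def chaos_game(n=20000, seed=0):
    """Sample the gasket: repeatedly apply t <- u_i(t) = (t + x_i)/2, i random."""
    rng = np.random.default_rng(seed)
    t, pts = V0[0].copy(), np.empty((n, 2))
    for k in range(n):
        t = 0.5 * (t + V0[rng.integers(3)])
        pts[k] = t
    return pts

def f_alpha(t, f, b, alpha, depth=40):
    """Evaluate f^alpha(t) by unrolling equation (1); the truncation error is
    O(|alpha|_inf^depth).  The cell u_i(triangle) containing t is the one whose
    corner x_i is nearest to t (ties on cell overlaps are harmless here)."""
    val, scale = 0.0, 1.0
    for _ in range(depth):
        i = int(np.argmin(np.linalg.norm(V0 - t, axis=1)))
        pre = 2.0 * t - V0[i]                    # u_i^{-1}(t) = 2t - x_i
        val += scale * (f(t) - alpha[i] * b(pre))
        scale *= alpha[i]
        t = pre
    return val

f = lambda p: np.sin(p[0] + 3.7) + 1.3 * p[0]
b = lambda p: f(p) - p[0] ** 2 * p[1] + 0.866 * p[0] ** 2 + p[0] * p[1] - 0.866 * p[0]
alpha = np.array([0.3, 0.3, 0.3])                # constant scale vector, |alpha| < 1
pts = chaos_game()
z = np.array([f_alpha(t, f, b, alpha) for t in pts])  # samples of graph(f_b^alpha)
```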
## 3. Continuous dependence on the parameters \(\alpha\) and \(b\)

Let \(f\in C(\triangle,\mathbb{R})\). Define a map \(W\) from \(S\) to \(C(\triangle,\mathbb{R})\) by

\[W(\alpha)=f_{b}^{\alpha},\]

where \(S=\left\{\alpha\in\mathbb{R}^{3}:|\alpha|_{\infty}\leq r<1\text{ and }r\text{ is a fixed number}\right\}\) and \(f_{b}^{\alpha}\) is the \(\alpha\)-fractal function associated with \(f\) with respect to \(b\) and the scale vector \(\alpha\).

**Theorem 3.1**.: _For fixed \(f\in C(\triangle,\mathbb{R})\) and for suitable \(b\in C(\triangle,\mathbb{R})\), the map \(W:S\to C(\triangle,\mathbb{R})\) is continuous._

Proof.: Fixed point theory guarantees that, for a fixed scale vector \(\alpha\) and base function \(b\), the function \(f_{b}^{\alpha}\) is unique. Further, being the fixed point of the RB-operator, \(f_{b}^{\alpha}\) satisfies the functional equation:

\[f_{b}^{\alpha}(t)=f(t)+\alpha_{i}\left(f_{b}^{\alpha}-b\right)\circ u_{i}^{-1}(t),\ \forall\ t\in u_{i}(\triangle),\ i\in\{1,2,3\}.\]

It is obvious that \(W\) is well defined. Let \(\alpha\in S\); then from the above functional equation, we have

\[W(\alpha)(t)=f_{b}^{\alpha}(t)=f(t)+\alpha_{i}\left(f_{b}^{\alpha}-b\right)\circ u_{i}^{-1}(t),\ \forall\ t\in u_{i}(\triangle),\ i\in\{1,2,3\},\]

and for \(\beta\in S\),

\[W(\beta)(t)=f_{b}^{\beta}(t)=f(t)+\beta_{i}\left(f_{b}^{\beta}-b\right)\circ u_{i}^{-1}(t),\ \forall\ t\in u_{i}(\triangle),\ i\in\{1,2,3\}.\]

We shall show that \(W\) is continuous at \(\alpha\). To this end, subtracting one of the above two equations from the other, for \(t\in u_{i}(\triangle)\) we have

\[\begin{split} f_{b}^{\alpha}(t)-f_{b}^{\beta}(t)&=\left[f(t)+\alpha_{i}\left(f_{b}^{\alpha}-b\right)\circ u_{i}^{-1}(t)\right]-\left[f(t)+\beta_{i}\left(f_{b}^{\beta}-b\right)\circ u_{i}^{-1}(t)\right]\\ &=\left[\alpha_{i}f_{b}^{\alpha}\left(u_{i}^{-1}(t)\right)-\beta_{i}f_{b}^{\beta}\left(u_{i}^{-1}(t)\right)\right]+\left(\beta_{i}-\alpha_{i}\right)b\circ u_{i}^{-1}(t)\\ &=\left[\alpha_{i}f_{b}^{\alpha}-\beta_{i}f_{b}^{\alpha}+\beta_{i}f_{b}^{\alpha}-\beta_{i}f_{b}^{\beta}\right]\circ u_{i}^{-1}(t)+\left(\beta_{i}-\alpha_{i}\right)b\circ u_{i}^{-1}(t)\\ &=\left[\left(\alpha_{i}-\beta_{i}\right)f_{b}^{\alpha}+\beta_{i}\left(f_{b}^{\alpha}-f_{b}^{\beta}\right)\right]\circ u_{i}^{-1}(t)+\left(\beta_{i}-\alpha_{i}\right)b\circ u_{i}^{-1}(t).\end{split} \tag{2}\]

Now, using the triangle inequality and the definition of the uniform norm, we have

\[\begin{split}\left|f_{b}^{\alpha}(t)-f_{b}^{\beta}(t)\right|&\leq\left|\left[\left(\alpha_{i}-\beta_{i}\right)f_{b}^{\alpha}+\beta_{i}\left(f_{b}^{\alpha}-f_{b}^{\beta}\right)\right]\circ u_{i}^{-1}(t)\right|+\left|\left(\beta_{i}-\alpha_{i}\right)b\circ u_{i}^{-1}(t)\right|\\ &\leq|\alpha-\beta|_{\infty}\left\|f_{b}^{\alpha}\right\|_{\infty}+|\beta|_{\infty}\left\|f_{b}^{\alpha}-f_{b}^{\beta}\right\|_{\infty}+|\beta-\alpha|_{\infty}\|b\|_{\infty}\\ &=|\alpha-\beta|_{\infty}\left(\left\|f_{b}^{\alpha}\right\|_{\infty}+\|b\|_{\infty}\right)+|\beta|_{\infty}\left\|f_{b}^{\alpha}-f_{b}^{\beta}\right\|_{\infty}.\end{split} \tag{3}\]

It follows that, for all \(t\in\triangle\), we get

\[\left|f_{b}^{\alpha}(t)-f_{b}^{\beta}(t)\right|\leq|\alpha-\beta|_{\infty}\left(\left\|f_{b}^{\alpha}\right\|_{\infty}+\|b\|_{\infty}\right)+|\beta|_{\infty}\left\|f_{b}^{\alpha}-f_{b}^{\beta}\right\|_{\infty}. \tag{4}\]

The above implies that

\[\left\|f_{b}^{\alpha}-f_{b}^{\beta}\right\|_{\infty}\leq|\alpha-\beta|_{\infty}\left(\left\|f_{b}^{\alpha}\right\|_{\infty}+\|b\|_{\infty}\right)+|\beta|_{\infty}\left\|f_{b}^{\alpha}-f_{b}^{\beta}\right\|_{\infty}. \tag{5}\]

Using \(1-|\beta|_{\infty}\geq 1-r\), we finally have

\[\|W(\alpha)-W(\beta)\|_{\infty}=\|f_{b}^{\alpha}-f_{b}^{\beta}\|_{\infty}\leq\frac{|\alpha-\beta|_{\infty}}{1-r}\left(\|f_{b}^{\alpha}\|_{\infty}+\|b\|_{\infty}\right). \tag{6}\]

Since \(\alpha\) is fixed and \(\|f_{b}^{\alpha}\|_{\infty}\) is bounded, \(W\) is continuous at \(\alpha\). Since \(\alpha\) was taken arbitrarily, \(W\) is continuous on \(S\).

**Theorem 3.2**.: _Let \(f\in C(\triangle,\mathbb{R})\), let the scale vector \(\alpha\in\mathbb{R}^{3}\) satisfy \(|\alpha|_{\infty}<1\), and let \(X_{f}=\left\{b\in C(\triangle,\mathbb{R}):b\big{|}_{V_{0}}=f\big{|}_{V_{0}}\right\}\). Then the map \(T:X_{f}\to C(\triangle,\mathbb{R})\) defined by \(T(b)=f_{b}^{\alpha}\) is Lipschitz continuous._
Proof.: We know that for a scale vector \(\alpha\) and a suitable function \(b:\triangle\rightarrow\mathbb{R}\), the function \(f_{b}^{\alpha}\) is unique. Further, being the fixed point of the RB-operator, \(f_{b}^{\alpha}\) satisfies the functional equation:

\[f_{b}^{\alpha}(t)=f(t)+\alpha_{i}\left(f_{b}^{\alpha}-b\right)\circ u_{i}^{-1}(t),\ \forall\ t\in u_{i}(\triangle),\ i\in\{1,2,3\}.\]

It is obvious that \(T\) is well defined. Let \(b,c\in X_{f}\); then from the above functional equation, we have

\[T(b)(t)=f_{b}^{\alpha}(t)=f(t)+\alpha_{i}\left(f_{b}^{\alpha}-b\right)\circ u_{i}^{-1}(t),\ \forall\ t\in u_{i}(\triangle),\ i\in\{1,2,3\},\]

and

\[T(c)(t)=f_{c}^{\alpha}(t)=f(t)+\alpha_{i}\left(f_{c}^{\alpha}-c\right)\circ u_{i}^{-1}(t),\ \forall\ t\in u_{i}(\triangle),\ i\in\{1,2,3\}.\]

Subtracting one of the above two equations from the other, we get, for \(t\in u_{i}(\triangle)\),

\[f_{b}^{\alpha}(t)-f_{c}^{\alpha}(t)=\left[f(t)+\alpha_{i}\left(f_{b}^{\alpha}-b\right)\circ u_{i}^{-1}(t)\right]-\left[f(t)+\alpha_{i}\left(f_{c}^{\alpha}-c\right)\circ u_{i}^{-1}(t)\right]=\alpha_{i}\left(f_{b}^{\alpha}-f_{c}^{\alpha}\right)\circ u_{i}^{-1}(t)+\alpha_{i}(c-b)\circ u_{i}^{-1}(t).\]

Now, using the triangle inequality and the definition of the uniform norm, we have

\[|f_{b}^{\alpha}(t)-f_{c}^{\alpha}(t)|\leq\left|\alpha_{i}\left(f_{b}^{\alpha}-f_{c}^{\alpha}\right)\circ u_{i}^{-1}(t)\right|+\left|\alpha_{i}(c-b)\circ u_{i}^{-1}(t)\right|\leq|\alpha|_{\infty}\left\|f_{b}^{\alpha}-f_{c}^{\alpha}\right\|_{\infty}+|\alpha|_{\infty}\|c-b\|_{\infty}.\]

The above inequality holds for all \(t\in\triangle\); therefore, we write

\[\|f_{b}^{\alpha}-f_{c}^{\alpha}\|_{\infty}\leq|\alpha|_{\infty}\left\|f_{b}^{\alpha}-f_{c}^{\alpha}\right\|_{\infty}+|\alpha|_{\infty}\|c-b\|_{\infty}.\]

This can be recast as \(\|f_{b}^{\alpha}-f_{c}^{\alpha}\|_{\infty}\leq\frac{|\alpha|_{\infty}}{1-|\alpha|_{\infty}}\|b-c\|_{\infty}\). It follows that

\[\|T(b)-T(c)\|_{\infty}\leq\frac{|\alpha|_{\infty}}{1-|\alpha|_{\infty}}\|b-c\|_{\infty},\]

which shows that \(T\) is a Lipschitz continuous map with Lipschitz constant \(\frac{|\alpha|_{\infty}}{1-|\alpha|_{\infty}}\).

Next, we plot \(graph(f_{b}^{\alpha})\) for different values of the parameters, that is, the original function, the base function, and the scale vector. One can easily identify the variation in these graphs by changing the values of the parameters. Hence, \(graph(f_{b}^{\alpha})\) depends on these parameters.

Figure 2: \(graph(f_{b}^{\alpha})\) for various values of \(\alpha\) (panels at \(\alpha=0.1,0.3,0.6,0.9\)), where \(b(x,y)=\sin(x+3.7)+1.3x-x^{2}y+0.866x^{2}+xy-0.866x\) and \(f(x,y)=\sin(x+3.7)+1.3x\).

## 4. Declaration

**Funding.** Not applicable.

**Conflicts of interest.** We do not have any conflict of interest.

**Availability of data and material.** Not applicable.
**Code availability.** Not applicable.

**Authors' contributions.** Each author contributed equally to this manuscript.
2310.14659
Predicting Accurate Lagrangian Multipliers for Mixed Integer Linear Programs
Lagrangian relaxation stands among the most efficient approaches for solving Mixed Integer Linear Programs (MILPs) with difficult constraints. Given any duals for these constraints, called Lagrangian Multipliers (LMs), it returns a bound on the optimal value of the MILP, and Lagrangian methods seek the LMs giving the best such bound. But these methods generally rely on iterative algorithms resembling gradient descent to maximize the concave piecewise linear dual function: the computational burden grows quickly with the number of relaxed constraints. We introduce a deep learning approach that bypasses the descent, effectively amortizing the local, per-instance, optimization. A probabilistic encoder based on a graph convolutional network computes high-dimensional representations of relaxed constraints in MILP instances. A decoder then turns these representations into LMs. We train the encoder and decoder jointly by directly optimizing the bound obtained from the predicted multipliers. Numerical experiments show that our approach closes up to 85% of the gap between the continuous relaxation and the best Lagrangian bound, and provides a high-quality warm-start for descent-based Lagrangian methods.
Francesco Demelas, Joseph Le Roux, Mathieu Lacroix, Axel Parmentier
2023-10-23T07:53:47Z
http://arxiv.org/abs/2310.14659v1
# Predicting Accurate Lagrangian Multipliers for Mixed Integer Linear Programs

###### Abstract

Lagrangian relaxation stands among the most efficient approaches for solving Mixed Integer Linear Programs (MILPs) with difficult constraints. Given any duals for these constraints, called Lagrangian Multipliers (LMs), it returns a bound on the optimal value of the MILP, and Lagrangian methods seek the LMs giving the best such bound. But these methods generally rely on iterative algorithms resembling gradient descent to maximize the concave piecewise linear dual function: the computational burden grows quickly with the number of relaxed constraints. We introduce a deep learning approach that bypasses the descent, effectively amortizing the local, per-instance, optimization. A probabilistic encoder based on a graph convolutional network computes high-dimensional representations of relaxed constraints in MILP instances. A decoder then turns these representations into LMs. We train the encoder and decoder jointly by directly optimizing the bound obtained from the predicted multipliers. Numerical experiments show that our approach closes up to 85% of the gap between the continuous relaxation and the best Lagrangian bound, and provides a high-quality warm-start for descent-based Lagrangian methods.

## 1 Introduction

Mixed Integer Linear Programs (MILPs) (Wolsey, 2021) have two main strengths that make them ubiquitous in combinatorial optimization (Korte & Vygen, 2012). First, they can model a wide variety of combinatorial optimization problems. Second, extremely efficient solvers can now handle MILPs with millions of constraints and variables. They therefore have a wide variety of applications. MILP algorithms are exact: they return an optimal solution, or an optimality gap between the returned solution and an optimal one. MILPs are sometimes hard to solve due to a collection of difficult constraints. Typically, a small number of constraints may link otherwise independent subproblems. For instance, in vehicle routing problems (Golden et al., 2008), there is one independent problem for each vehicle, except for the linking constraints that ensure that exactly one vehicle operates each task of interest. Lagrangian relaxation approaches are popular in such settings as they enable to decouple the different subproblems. More formally (Conforti et al., 2014, Chap. 8), let \((P)\) be a MILP of the form:

\[(P)\qquad v_{P}=\min_{\mathbf{x}}\ \mathbf{w}^{\top}\mathbf{x}\]
\[s.t.\qquad\mathbf{Ax}=\mathbf{b} \tag{1a}\]
\[\mathbf{Cx}=\mathbf{d} \tag{1b}\]
\[\mathbf{x}\in\mathbb{R}^{m}\times\mathbb{N}^{n} \tag{1c}\]

The relaxed Lagrangian problem obtained by dualizing (the difficult) constraints (1a) and penalizing their violation with Lagrangian multipliers (LMs) \(\mathbf{\pi}\) is:

\[\begin{split}(LR(\mathbf{\pi}))\qquad\mathcal{G}(\mathbf{\pi})=\,\min_{\mathbf{x}}&\ \mathbf{w}^{\top}\mathbf{x}+\mathbf{\pi}^{\top}(\mathbf{b}-\mathbf{A}\mathbf{x})\\ s.t.&\ \mathbf{Cx}=\mathbf{d}\\ &\ \mathbf{x}\in\mathbb{R}^{m}\times\mathbb{N}^{n}\end{split} \tag{2}\]

Standard weak Lagrangian duality ensures that \(\mathcal{G}(\mathbf{\pi})\) is a lower bound on \(v_{P}\). The Lagrangian dual problem aims at finding the best such bound:

\[(LD)\qquad v_{D}=\max_{\mathbf{\pi}}\mathcal{G}(\mathbf{\pi}). \tag{3}\]

Geoffrion's theorem (Geoffrion, 1974) ensures that \(v_{D}\) is a lower bound at least as good as the continuous relaxation. It is strictly better in most applications.
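To make this machinery concrete, here is a self-contained toy (ours, not from the paper): the coupling constraint \(\mathbf{Ax}=\mathbf{b}\) of a tiny binary program is dualized, the inner minimization of equation (2) is solved by enumeration, and a few subgradient steps \(\mathbf{\pi}\leftarrow\mathbf{\pi}+t\,(\mathbf{b}-\mathbf{Ax}^{*})\) climb the concave dual of equation (3).

```python
import itertools
import numpy as np

w = np.array([3.0, 2.0, 4.0, 1.0])
A = np.array([[1.0, 1.0, 1.0, 1.0]])
b = np.array([2.0])                       # difficult coupling constraint Ax = b
X = np.array(list(itertools.product([0, 1], repeat=4)))  # the "easy" feasible set

def G(pi):
    """Lagrangian bound of equation (2): min_x w.x + pi.(b - Ax) by enumeration."""
    vals = X @ w + (b - X @ A.T) @ pi
    k = int(np.argmin(vals))
    return vals[k], X[k]

pi = np.zeros(1)
for it in range(100):
    _, x_star = G(pi)
    pi += (1.0 / (1 + it)) * (b - A @ x_star)   # a subgradient of the concave G
print(G(pi)[0])   # approaches v_D = max_pi G(pi) = 3 for this instance
```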
Beyond these bounds, Lagrangian approaches are also useful to find good solutions of the primal. Indeed, Lagrangian heuristics exploit the dual solution \(\mathbf{\pi}\) and the primal (possibly infeasible) solution of the relaxed Lagrangian problem \(LR(\mathbf{\pi})\) to compute good quality solutions of (1). Remark that both the bound and the heuristic work as soon as we have "good" but not necessarily optimal duals \(\mathbf{\pi}\). By good, we mean Lagrangian duals \(\mathbf{\pi}\) that lead to a bound \(\mathcal{G}(\mathbf{\pi})\) which is better than the continuous relaxation, and not too far from \(v_{D}\). Since \(\mathbf{\pi}\mapsto\mathcal{G}(\mathbf{\pi})\) is piecewise linear and concave, it is generally optimized using a subgradient algorithm. Unfortunately, the number of iterations required to obtain good duals quickly increases with the dimension of \(\mathbf{\pi}\), which makes the approach extremely computationally intensive. In this paper, we introduce an encoder-decoder neural network that computes "good" duals \(\mathbf{\pi}\). The neural network uses a state-of-the-art encoder-decoder architecture. The probabilistic encoder \(q_{\mathbf{\phi}}(\mathbf{z}|\iota)\) takes as input a MILP instance \(\iota\) as well as the primal and dual solutions of its continuous relaxation, and returns an embedding of the instance as a labelled graph. As is classical when using learning algorithms, we represent a MILP instance as a graph whose vertices are the variables and constraints, and whose edges are the non-zero coefficients of the constraint matrix. The vector representation of each vertex lives in a high-dimensional space. The deterministic decoder \(f_{\mathbf{\theta}}(\mathbf{\pi}|\mathbf{z})\) then reconstructs one-dimensional duals from these constraint labels. We show that the Lagrangian dual function \(\mathcal{G}(\mathbf{\pi})\) provides a natural loss function. Numerical experiments on two problems from the literature show that the predicted duals close three fourths of the gap between the continuous relaxation and the Lagrangian dual bound. These results are even improved when we make use of the probabilistic nature of the encoder to sample several duals. Finally, when the optimal duals are the target, we show that the predicted duals provide an excellent warm-start for state-of-the-art algorithms for (3). ## 2 Learning Framework ### Overall Architecture Iterative algorithms for setting LMs to optimality, such as the subgradient method or the Bundle method (BM), start by setting the initial values for LMs. They can be initialized to zero, but a solution considered better in practice by the Combinatorial Optimization community is to take advantage of the continuous relaxation (CR) bound, often cheap to compute. Specifically, optimal values of the CR dual variables identified with the constraints dualized in the Lagrangian relaxation can be understood as LMs. In many problems of interest these LMs are not optimal and can be improved by the subgradient method or BM. We leverage this observation by trying to predict a deviation from the LMs obtained by reinterpreting the CR dual solution as Lagrangian multipliers. The architecture is depicted in Figure 1. We start from an input instance \(\iota\) of MILP \((P)\) with a set of constraints whose dualization makes the relaxed Lagrangian problem easy to solve; we solve \((CR)\) and obtain the corresponding primal and dual solutions. 
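The subgradient loop that our approach seeks to amortize or warm-start can be sketched as follows (ours, reusing `lagrangian_bound` and the toy instance above; the diminishing step-size schedule is a standard textbook choice, not necessarily the one used by any particular solver). Since \(\mathbf{b}-\mathbf{A}\bar{\mathbf{x}}\) is a supergradient of the concave dual function at \(\mathbf{\pi}\), repeated ascent steps move towards \(v_{D}\).

```python
def subgradient_ascent(w, A, b, pi0, iters=200, step0=1.0):
    """Maximize the concave piecewise-linear dual G(pi) by subgradient
    ascent; at pi, b - A @ x_bar is a supergradient of G. The predictor
    in this paper amortizes (or warm-starts) exactly this kind of loop."""
    pi, best = pi0.copy(), -np.inf
    for k in range(iters):
        bound, x_bar = lagrangian_bound(w, A, b, pi)
        best = max(best, bound)
        g = b - A @ x_bar                  # supergradient of G at pi
        pi = pi + (step0 / (k + 1)) * g    # diminishing step size
    return best, pi

best, pi = subgradient_ascent(w, A, b, np.zeros(1))
```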
This input is then passed through a probabilistic encoder, composed of three parts: _(i)_ the input is represented by a bipartite graph in a similar fashion as in (Gasse et al., 2019) and initial features for the graph nodes are extracted, _(ii)_ this graph is passed through a graph neural network in charge of refining the node features by taking into account the structure of the MILP, _(iii)_ based on the last layer of the graph neural network, we parametrize a distribution from which we are able to sample a vector \(\mathbf{z}_{c}\) for each dualized constraint \(c\). The decoder then translates \(\mathbf{z}_{c}\) to an LM \(\pi_{c}=\lambda_{c}+\delta_{c}\) by predicting a deviation \(\delta_{c}\) from the CR dual solution variable \(\lambda_{c}\). Finally, the predicted LMs can be used in several ways, in particular to compute a Lagrangian bound or to warmstart an iterative solver. ### Objective We train the network's parameters in an end-to-end fashion by maximizing the average Lagrangian dual bound \(LR\) defined in (2), obtained from the predicted LMs over a training set. This can be cast as an empirical risk optimization, or an Energy-Based Model (Le Cun et al., 2006) with latent variables, where the Lagrangian bound is the (negative) energy corresponding to the coupling of the instance with the subproblem solutions, and the LMs (or more precisely their high-dimensional representations) are the latent variables. For our problem, a natural measure of the quality of the prediction is provided by the value of the Lagrangian bound that we want to maximize to tighten the duality gap. Given an instance \(\iota\) we want to learn how to predict the latent representations of the LMs from which the Lagrangian bound is the highest: \[\max_{\mathbf{\phi},\mathbf{\theta}}\mathbb{E}_{\mathbf{z}\sim q_{\mathbf{\phi}}(\cdot|\iota) }\left[LR(\mathbf{\lambda}+f_{\mathbf{\theta}}(\mathbf{z});\iota)\right]\] where \(q_{\mathbf{\phi}}\) is the probabilistic encoder, mapping each dualized constraint \(c\) in \(\iota\) to a latent vector \(\mathbf{z}_{c}\) computed by independent Gaussian distributions, \(f_{\mathbf{\theta}}\) is the decoder mapping each \(\mathbf{z}_{c}\) to the corresponding LM deviation \(\delta_{c}\) from the CR dual value \(\lambda_{c}\), and \(LR\) is the Lagrangian bound.1 We can observe that this objective has the following properties amenable to gradient-based learning: Footnote 1: With a slight abuse of notation, we use function \(f:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) on _batches_ to become \(\mathbb{R}^{m\times p}\rightarrow\mathbb{R}^{n\times p}\). 1. \(LR\) is bounded from above: optimal LMs \(\mathbf{\pi}^{*}\) maximize \(LR(\cdot)\) over all possible LMs, that is \(LR(\mathbf{\pi}^{*})\geq LR(\mathbf{\pi})\) for any \(\mathbf{\pi}=\mathbf{\lambda}+f_{\mathbf{\theta}}(\mathbf{z})\). Moreover, \(LR\) is a concave piecewise linear function; in other words, all optimal solutions will give the same bound. 2. It is straightforward to compute a subgradient w.r.t. parameters \(\mathbf{\theta}\). We have that: \[\nabla_{\mathbf{\theta}}LR(\mathbf{\lambda}+f_{\mathbf{\theta}}(\mathbf{z});\iota)=\left( \frac{\partial f_{\mathbf{\theta}}(\mathbf{z})}{\partial\mathbf{\theta}}\right)^{\top} \nabla_{\mathbf{\pi}}LR(\mathbf{\pi};\iota)\] The Jacobian of \(f_{\mathbf{\theta}}\) is computed via backpropagation, while \(LR\) is simple enough for a subgradient to be given analytically. 
Provided that \(\bar{\mathbf{x}}\) is an optimal solution of \(LR(\mathbf{\pi})\), we derive: \[\nabla_{\mathbf{\pi}}LR(\mathbf{\pi};\iota)=\mathbf{b}-\mathbf{A}\bar{\mathbf{x}}\] 3. For parameters \(\mathbf{\phi}\), we again leverage function composition and the fact that \(q_{\mathbf{\phi}}\) is a Gaussian distribution, so we can approximate the expectation by sampling and use the reparametrization trick (Kingma & Welling, 2014; Schulman et al., 2015) to perform standard backpropagation. We implement \(q_{\mathbf{\phi}}\) as a neural network, described in detail in the following section, returning a mean vector and a variance vector for each dualized constraint \(c\), from which a sampler returns a representation vector \(\mathbf{z}_{c}\). For numerical stability, the variance is clipped to a safe interval following (Rybkin et al., 2021). Figure 1: Overall Architecture. From the bipartite graph representation of a MILP and its CR solution, the model computes a Lagrangian dual solution. First the MILP is encoded by a GNN, from which we parametrize a sampler for constraint representations. These representations are then passed through a decoder to compute Lagrangian Multipliers. ### Encoding and Decoding Instances **Encoder.** One of the challenges in Machine Learning applications to Combinatorial Optimization is that instances have different input sizes, and so the encoder must be able to cope with these variations to produce high-quality features. Of course this is also the case in many other applications, for instance NLP where texts may differ in size, but there is no general consensus as to what a good feature extractor for MILP instances looks like, contrary to other domains where variants of RNNs or Transformers have become the de facto standard encoders. We depart from previous approaches to Lagrangian prediction (Sugishita et al., 2021), which are restricted to instances of the same size, and follow more generic approaches to MILP encoding such as (Gasse et al., 2019; Nair et al., 2020; Khalil et al., 2017) where each instance is converted into a bipartite graph and further encoded by GCNs to compute meaningful feature vectors associated with dualized constraints. Each MILP is converted to a bipartite graph composed of one node for each variable and one node for each constraint. There is an arc between a variable node \(n_{v}\) and a constraint node \(n_{c}\) if and only if \(v\) appears in \(c\). We differ from Gasse et al. (2019) who add to each arc \((n_{v},n_{c})\) a weight equal to the coefficient of variable \(v\) in constraint \(c\). We found these coefficients not useful on the two datasets that we experimented with during preliminary testing, and thus omitted them. Each node (variable or constraint) is represented by an initial feature vector \(\mathbf{e}_{n}\). We use features similar to (Gasse et al., 2019), see Appendix C for more details. Following (Nair et al., 2020), variables and constraints are encoded as the concatenation of variable features followed by constraint features, only one of which is non-zero, depending on the node type. To design our stack of GCNs, we take inspiration from structured prediction models for images and texts, where Transformers (Vaswani et al., 2017) are ubiquitous. However, since our input has a bipartite graph structure, we replace the multihead self-attention layers with simple linear graph convolutions2 (Kipf and Welling, 2016). Closer to our work, we follow Nair et al. 
(2020) which showed that residual connections (He et al., 2016), dropout (Srivastava et al., 2014) and layer normalization (Ba et al., 2016) are important for the successful implementation of feature extractors for MILP bipartite graphs. Footnote 2: Alternatively, this can be seen as a masked attention, where the mask is derived from the input graph adjacency. Before the actual GCNs, initial feature vectors \(\left\{\mathbf{e}_{n}\right\}_{n}\) are passed through an MLP \(F\) to find feature combinations and extend node representations to high-dimensional spaces: \(\mathbf{h}_{n}=F(\mathbf{e}_{n}),\forall n\). Then interactions between nodes are taken into account by passing vectors through blocks, represented in Figure 2, consisting of two sublayers. * The first sublayer connects its input via a residual connection to a layer normalization \(LN\) followed by a linear graph convolution \(CONV\) of length 1, followed by a dropout regularization \(DO\): \[\mathbf{h}_{n}^{\prime}=\mathbf{h}_{n}+DO(CONV(LN(\mathbf{h}_{n})))\] The graph convolution passes messages between nodes. In our context, it passes information from variables to constraints, and vice versa. * The second sublayer takes as input the result of the first one, and connects it with a residual connection to a sequence made of a layer normalization \(LN\), an MLP transformation and a dropout regularization \(DO\): \[\mathbf{h}_{n}=\mathbf{h}_{n}^{\prime}+DO(MLP(LN(\mathbf{h}_{n}^{\prime})))\] This MLP is in charge of finding non-linear interactions between the information collected in the previous sublayer. This block structure, depicted in Figure 2, is repeated several times, typically 5 times in our experiments, in order to extend the domain of locality. The learnable parameters of a block are the parameters of the convolution in the first sublayer and the parameters of the MLP in the second one. Remark that we start each sublayer with normalization, as has recently become the standard approach in Transformers (Chen et al., 2018). We note in passing that this has also been experimented with by (Gasse et al., 2019) in the context of MILP, although only once, before the GCN input, whereas we normalize twice per block. Finally, we retrieve the final vectors associated with dualized constraints \(\{\mathbf{h}_{c}\}_{c}\). Each vector \(\mathbf{h}_{c}\) is interpreted as the concatenation of two vectors \([\mathbf{z}_{\mu};\mathbf{z}_{\sigma}]\) from which we compute \(\mathbf{z}_{c}=\mathbf{z}_{\mu}+\exp(\mathbf{z}_{\sigma})\cdot\mathbf{\epsilon}\) where the elements of \(\mathbf{\epsilon}\) are sampled from the standard normal distribution. This concludes the implementation of the probabilistic encoder \(q_{\phi}\). **Decoder and Lagrangian Subproblem.** Recall that, in our architecture, from each latent vector representation \(\mathbf{z}_{c}\) of dualized constraint \(c\) we want to compute the scalar deviation \(\delta_{c}\) to the CR dual value \(\lambda_{c}\) so that the sum of the two improves the Lagrangian bound given by the CR dual solution. In other words, we want to compute \(\mathbf{\delta}\) such that \(\mathbf{\pi}=\mathbf{\lambda}+\mathbf{\delta}\) gives a _good_ Lagrangian bound. For each independent Lagrangian subproblem we want to find its optimal variable assignment, usually under local combinatorial constraints, for its objective reparametrized by \(\mathbf{\pi}\). 
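A hedged sketch (ours) of one such block and of the sampling step, in PyTorch: a dense normalized adjacency matrix stands in for a sparse message-passing library, the feature width `d` is assumed even so that \(\mathbf{h}_{c}\) can be split into \([\mathbf{z}_{\mu};\mathbf{z}_{\sigma}]\), and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class GCNBlock(nn.Module):
    """One encoder block: (LN -> graph conv -> dropout) + residual,
    then (LN -> MLP -> dropout) + residual, as described above."""
    def __init__(self, d, p_drop=0.1):
        super().__init__()
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.conv = nn.Linear(d, d)   # linear map inside the convolution
        self.mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.do = nn.Dropout(p_drop)

    def forward(self, h, adj):
        # adj: normalized bipartite adjacency, shape (N, N); h: (N, d).
        h = h + self.do(adj @ self.conv(self.ln1(h)))  # message passing
        h = h + self.do(self.mlp(self.ln2(h)))         # node-wise MLP
        return h

def sample_constraint_repr(h_c):
    """Split h_c into [z_mu; z_sigma] and draw z = mu + exp(sigma) * eps
    (the reparametrization trick, so gradients flow through mu, sigma)."""
    z_mu, z_log_sigma = h_c.chunk(2, dim=-1)
    eps = torch.randn_like(z_mu)
    return z_mu + torch.exp(z_log_sigma) * eps
```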
This approach is typical of structured prediction: we leverage neural networks to extract features in order to compute local energies (scalars), which are utilized by a combinatorial algorithm outputting a structure whose objective value can be interpreted as a global energy. For instance, this is reminiscent of how graph-based syntactic parsing models in NLP compute parse scores (global energies) as sums of arc scores (local energies) computed by RNNs followed by MLPs, where the choice of arcs is guided by well-formedness constraints enforced by a maximum spanning tree solver, see for instance (Kiperwasser and Goldberg, 2016). Thus, the decoder is local to each dualized constraint, and we leverage subproblems to interconnect predictions: 1. We compute LMs (local energies) \(\pi_{c}=\lambda_{c}+f_{\mathbf{\theta}}(\mathbf{z}_{c})\) for all dualized constraints \(c\), where \(f_{\mathbf{\theta}}\) is implemented as a feed-forward network computing the deviation. 2. For parameter learning or if the subproblems or the Lagrangian bound are the desired output, vector \(\mathbf{\pi}\) is then passed to the Lagrangian subproblems3 which compute independently and in parallel their local solutions \(\mathbf{x}\) and the corresponding values that are summed to give (global energy) \(LR(\mathbf{\pi})\). The exact computation of \(LR\) is of combinatorial nature, problem specific, and is described in Appendix A and in Appendix B. Footnote 3: We use the plural form, but it might be the case that there is only one such problem, depending on the type of problem or instance. Figure 2: The Graph Neural Network block. The first part is graph message-passing: we apply layer normalization to node features, then convolution over the instance’s bipartite graph representation and finally dropout. The second phase consists of normalization, a Multi-Layer Perceptron applied in parallel over all the nodes of the bipartite graph, then dropout. Both sublayers use residual connections between input and output. We apply this block several times to improve feature representations. ## 3 Related Work In this work, we define a Machine Learning model to predict a dual bound for MILP instances sharing common features, which can in turn be used to improve solvers. There is a growing interest in leveraging ML alongside optimization algorithms (Bengio et al., 2018), and particularly with the aim of improving MILP solvers (Zhang et al., 2023). Indeed, even though MILP solvers solve problems in an exact way, they make a lot of heuristic decisions, for which ML can be used to base these decisions on non-trivial patterns. For instance, classifiers have been designed for Branch and Bound (B&B) algorithms (Lodi and Zarpellon, 2017) in order to choose which variables to branch on (Alvarez et al., 2016; Khalil et al., 2016; Etheve et al., 2020), which B&B node to process (Yilmaz and Yorke-Smith, 2021; Labassi et al., 2022), to decide when to perform heuristics (Hottung et al., 2017; Khalil et al., 2017) or how to schedule them (Chmiela et al., 2021). More closely related to this contribution, several works have tackled the prediction of high quality primal and dual bounds. For instance, Nair et al. (2020) predict optimal values for large subsets of variables, resulting in small MILPs that can be solved to optimality. 
Another way to provide good primal solutions is to learn to transform an input MILP into an easier problem, solve it and apply a post-optimization procedure to recover primal feasibility of the computed solution (Parmentier, 2023; Dalle et al., 2022). For dual bounds, ML has been employed for cut selection in cutting planes algorithms. Indeed, efficient MILP solvers contain cut generators for strengthening the linear relaxation, but a key point is to find a trade-off between the improvement of the dual bound and the increasing solving time of the linear relaxation due to the added cuts (Dey and Molinaro, 2018). Baltean-Lugojan et al. (2018); Babaki and Jena (2022); Wang et al. (2023); Balcan et al. (2021); Berthold et al. (2022); Tang et al. (2020); Huang et al. (2022); Afia and Kabbaj (2017) have trained different models for this cut selection. Similarly, Morabit et al. (2021) learn to select good columns at each iteration of a column generation. Regarding prediction for Lagrangian dual solutions, Nair et al. (2018) consider specifically 2-stage stochastic MILPs, approached by a Lagrangian decomposition for which they learn to predict LMs that would comply with any second-stage scenario to give a good bound on average. Abbas and Swoboda (2022) propose a new approach to solve linear optimization problems based on Binary Decision Diagrams (Lange and Swoboda, 2021) and a Lagrangian decomposition by constraint. This algorithm can be run on GPU and is trainable. In (Abbas and Swoboda, 2022), they improve their algorithm by learning an initial dual solution as well as the step in their subgradient method. Other works predict Lagrangian dual solutions for deterministic MILPs, as we do, but for specific combinatorial optimization problems, in contrast to our generic method. Kraul et al. (2023) consider the cutting stock problem and propose an MLP to predict the dual Lagrangian value for each constraint (_i.e._ stock) separately4. They use the Lagrangian dual solution to stabilize a column generation using the du Merle stabilization method (du Merle et al., 1999). Sugishita et al. (2021) predict a Lagrangian dual solution for the unit commitment problem. In their context, the same problem is solved daily but with different demand forecasts. The prediction is made by an MLP or a random forest and the dual solution is used to warmstart a proximal bundle method, similarly to our evaluation. Footnote 4: Another MLP is proposed predicting all dual Lagrangian values at once but it is limited to instances containing no more than a fixed number of stocks. In our work, we assume that the decomposition into subproblems is given, but the prediction of good decompositions is also an active avenue of research, to find a good compromise between the quality of the Lagrangian dual bound and the complexity of the relaxed Lagrangian problem. Kruber et al. (2017) train an ML model to determine whether to apply a decomposition and which constraints to dualize, while Basso et al. (2020) use ML to classify decompositions from features of the MIP instances. ## 4 Evaluation We evaluate our approach on the Multi-Commodity Fixed-Charge Network Design problem and on the Capacitated Facility Location problem. This section describes the datasets considered for the two problems and the numerical results for a series of experiments. 
### Benchmarks **Multi-Commodity Fixed-Charge Network Design (MCND).** Given a network with arc capacities and a set of commodities, MCND consists in activating a subset of arcs and routing each commodity from its origin to its destination, possibly split over several paths, using only the activated arcs. The objective is to minimize the total cost induced by the activation of the arcs and the routing of the commodities. This problem is NP-hard and the continuous relaxation provides poor bounds when arc capacities are high. Hence, it is usually tackled with Lagrangian relaxation-based methods (Akhavan Kazemzadeh et al., 2022). We describe the MILP formulation of this problem, its Lagrangian relaxation and the subproblems in Appendix A. Since there is no publicly available dataset for this problem adapted to Machine Learning, with large collections of instances sharing common features, we designed four small datasets on which we can run experiments. We generate four MCND datasets from a subset of instances of the Canad dataset (Crainic et al., 2001), well known and commonly used by the optimization community to benchmark solvers. The first two datasets, Mcnd-Small-Com40 and Mcnd-Small-ComVar, contain instances which all share the same network (\(20\) nodes and \(230\) edges) and the same arc capacities and fixed costs, but with different values for origins, destinations, volumes and routing costs. Instances of the former all involve the same number of commodities (\(40\)), while for the latter the number of commodities can also vary from \(40\) to \(200\). Dataset Mcnd-Big-Com40 is generated similarly to Mcnd-Small-Com40 but on a bigger graph containing \(30\) nodes and \(500\) arcs. Finally, Mcnd-Big-ComVar contains examples generated using either the network of Mcnd-Small-Com40 or the one of Mcnd-Big-Com40, with the number of commodities varying between \(40\) and \(200\). More details can be found in Appendix D. **Capacitated Facility Location (CFL).** CFL consists, given a set of customers and a set of facilities, in deciding which facilities to open in order to serve the customers at minimum cost, defined as the sum of the fixed costs associated with the opening of the facilities plus the sum of the service costs between facilities and customers. We generate one dataset CFL. Each example has either 16, 25, or 50 facilities, and 50 customers. For a given number of facilities, the fixed costs and capacities are the same but customer demands and service costs vary. ### Numerical Results We want to evaluate how our Lagrangian bound prediction compares to an iterative model based on subgradient, and how useful the former is as an initial point to warmstart the latter. For that purpose, we choose a state-of-the-art proximal bundle solver provided by SMS++ (Frangioni et al., 2023) which allows writing a MILP in a block-structured fashion and using decomposition techniques to solve subproblems efficiently. We also compare our approach with the CR bound computed using the CPLEX5 optimiser. Footnote 5: [https://www.ibm.com/fr-fr/analytics/cplex-optimizer](https://www.ibm.com/fr-fr/analytics/cplex-optimizer) All MILP instances on which we want to evaluate our model are first solved by SMS++. For an instance \(\iota\) we denote \(\mathbf{\pi}_{\iota}^{*}\) the LMs returned by SMS++, and \(\mathbf{\widehat{\pi}}_{\iota}\) the LMs returned by our model. We drop the index when the instance is clear from the context. 
Recall from Section 1 that given an instance \(\iota\) and a Lagrangian multiplier vector \(\mathbf{\pi}\) we denote by \(LR(\mathbf{\pi};\iota)\) the objective value of the Lagrangian bound. We write \(CR(\iota)\) for the value returned by the continuous relaxation of \(\iota\). Finally, when evaluating we do not sample constraint representations but rather take the modes of their distributions. In practice, following the notations from Section 2.3 we set \(\mathbf{z}_{c}=\mathbf{z}_{\mu}\) for each dualized constraint \(c\). **Metrics.** We want to measure how close our prediction is to the solution returned by BM, considered as a proxy to the optimal solution, and how it compares, as a starting point for BM, with the all-zeros vector and with the CR dual solution interpreted as LMs. Hence, we use two metrics which average these measures over a dataset \(\mathcal{I}\): * the mean gap percentage (GAP): \[100\frac{1}{|\mathcal{I}|}\sum_{\iota\in\mathcal{I}}\frac{LR(\mathbf{\pi}^{*}; \iota)-LR(\mathbf{\widehat{\pi}};\iota)}{LR(\mathbf{\pi}^{*};\iota)}\] GAP measures the optimality of our prediction. The GAP is equal to zero when we predict exactly a vector of optimal Lagrangian multipliers. * the mean gap closure percentage w.r.t. the continuous relaxation (GAP-CR): \[100\frac{1}{|\mathcal{I}|}\sum_{\iota\in\mathcal{I}}(1-\frac{LR(\mathbf{\pi}^{*}; \iota)-LR(\widehat{\mathbf{\pi}};\iota)}{LR(\mathbf{\pi}^{*};\iota)-CR(\iota)})\] GAP-CR measures how our prediction compares to CR. It is negative if the prediction provides a bound worse than the continuous relaxation, and positive if it is better. Moreover, it is equal to 100 if the bound is the same as the optimal Lagrangian bound. **Data for Evaluation.** We use cross-validation to evaluate our model. Each dataset is partitioned into 10 subsets, or folds. Each element of the partition is tested with a model trained on its complement, where we divide the complement into 90% train, 10% validation. Results are averaged over folds. **Bound Accuracy.** In Table 1 we report how our model behaves compared to the optimal Lagrangian bound given by our BM solver, and how it compares with the CR bound. Our model can reach a \(2\%\) difference with BM on Mcnd-Small-Com40, the easiest corpus with a small fixed network and a fixed number of commodities. This means that one pass through our network can save numerous iterations if we can accept about \(2\%\) bound error on average. When the number of commodities also varies, as in Mcnd-Small-ComVar, we see that our model GAP is twice as high, reaching about \(4\%\). From the GAP-CR results on these two datasets, we can see that our model can effectively predict a solution different from CR and is able to close almost \(85\%\) of the gap between the CR bound and the Lagrangian bound on Mcnd-Small-Com40. On Mcnd-Small-ComVar the results are analogous: when the number of commodities varies, our model is less accurate. We can see a similar trend on Mcnd-Big-Com40, Mcnd-Big-ComVar and Cfl, with results slightly lower. This might be due to the fact that the datasets are more difficult, or simply because we explored hyper-parameters on the small MCND datasets, and they might be suboptimal for bigger graphs and different MILPs. **Bundle Method Warmstart.** In Table 2 we compare different initial Lagrangian multiplier vectors on the validation set of Mcnd-Big-ComVar. We run our Bundle Method solver until the difference between \(LR(\mathbf{\pi}^{*})\) and the current Lagrangian bound falls below the threshold \(\epsilon\). 
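As an illustration of this stopping rule, the sketch below (ours; it reuses the toy subgradient loop above as a crude stand-in for the bundle method, and `lambda_cr` / `pi_hat` are hypothetical placeholders for the CR duals and the predicted LMs) counts iterations until the gap to \(LR(\mathbf{\pi}^{*})\) drops below \(\epsilon\).

```python
def iterations_to_threshold(w, A, b, pi0, bound_star, eps, step0=1.0):
    """Count subgradient steps (a stand-in for the bundle method) until
    bound_star - best bound < eps, mirroring the stopping rule above."""
    pi, best, k = pi0.copy(), -np.inf, 0
    while bound_star - best >= eps and k < 10_000:   # guard against stalls
        bound, x_bar = lagrangian_bound(w, A, b, pi)
        best = max(best, bound)
        pi = pi + (step0 / (k + 1)) * (b - A @ x_bar)
        k += 1
    return k

# Compare the three initializations discussed above, e.g.:
# for pi0 in (np.zeros(m), lambda_cr, pi_hat):
#     print(iterations_to_threshold(w, A, b, pi0, bound_star, eps=1e-3))
```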
We average resolution time and number of iterations over instances, and compute standard deviations. Three initialization methods are compared: initializing LMs with zero, using the CR dual solution, and using the prediction of our model. We can see that CR is really not competitive with the null vector initialization, since the small gain in number of iterations is absorbed by the supplementary computation. On the other hand, our method based on prediction shows a significant improvement over the other two initialization methods. Resolution time is roughly halved for the coarse threshold, and more than one third faster for the fine one. This is expected, as gradient-based methods naturally slow down as they approach convergence. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multicolumn{3}{c}{GAP \% (\(\downarrow\))} & \multicolumn{3}{c}{GAP-CR \% (\(\uparrow\))} \\ \cline{2-7} & train & validation & test & train & validation & test \\ \hline Mcnd-Small-Com40 & 1.75 & 2.09 & 2.01 & 85.82 & 84.44 & 84.99 \\ Mcnd-Small-ComVar & 4.53 & 4.22 & 4.02 & 82.09 & 81.66 & 82.39 \\ \hline Mcnd-Big-Com40 & 3.45 & 3.61 & 3.70 & 76.37 & 76.11 & 75.84 \\ Mcnd-Big-ComVar & 5.01 & 4.58 & 4.52 & 78.06 & 78.38 & 78.38 \\ \hline Cfl & 16.57 & 16.87 & 16.93 & 46.84 & 47.65 & 48.02 \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation results for both metrics are averaged over the different folds of the dataset. **Ablation Study.** In Table 3 we compare our original model, denoted full, with three ablated variants on the first fold of Mcnd-Big-ComVar. In the first variant, -sum, the dual solution values are passed as constraint node features but are not added to the output of the decoder to produce LMs. The network must transport these values from its input layer to its output. In the second variant, -cr, the CR solution is not given as input features to the network. This is challenging because the network does not have access to a good starting point. The last variant, -sample_latent, uses CR as full does, but does not sample representations \(\mathbf{z}_{c}\) in the latent domain during training; it rather samples an LM deviation \(\delta_{c}\) directly. We can see that the performance of -sum is on par with full for the GAP metric, and a little lower for GAP-CR, while -cr cannot return acceptable bounds. This indicates that the CR solution passed as input features is essential for our architecture, whereas computing the deviation instead of the full LM directly is not an essential trait. However, we found that training showed more stability when the objective was to predict the deviation; see Appendix F for more details. Learning with sampling in the latent space (full) is slightly better than learning with sampling deviations directly (-sample_latent), and performance is also more stable along the training process (cf. Appendix F). ## 5 Conclusion We have presented a novel method to compute good Lagrangian dual solutions for MILPs sharing common attributes, by predicting Lagrangian multipliers. We cast this problem as an encoder-decoder prediction, where the probabilistic encoder outputs one distribution per dualized constraint, from which we can sample a constraint vector representation. A decoder then transforms these representations into Lagrangian multipliers. 
We showed experimentally that this method gives bounds significantly better than the commonly used heuristics, _e.g._ it is able to reduce the gap to the optimal bound by \(85\%\), and that, when used to warmstart an iterative solver to compute the optimal Lagrangian lower bound, the predicted point can reduce the solving times by a large margin. Our predictions could be exploited in primal heuristics, possibly with auxiliary losses predicting values from variable nodes, or to efficiently guide a Branch-and-Bound exact search.
2309.01451
On Translation Hyperovals in Semifield Planes
In this paper we demonstrate the first example of a finite translation plane which does not contain a translation hyperoval, disproving a conjecture of Cherowitzo. The counterexample is a semifield plane, specifically a Generalised Twisted Field plane, of order $64$. We also relate this non-existence to the covering radius of two associated rank-metric codes, and the non-existence of scattered subspaces of maximum dimension with respect to the associated spread.
Kevin Allen, John Sheekey
2023-09-04T09:01:30Z
http://arxiv.org/abs/2309.01451v1
# On Translation Hyperovals in Semifield Planes ###### Abstract In this paper we demonstrate the first example of a finite translation plane which does not contain a translation hyperoval, disproving a conjecture of Cherowitzo. The counterexample is a semifield plane, specifically a Generalised Twisted Field plane, of order 64. We also relate this non-existence to the covering radius of two associated rank-metric codes, and the non-existence of scattered subspaces of maximum dimension with respect to the associated spread. ## 1 Introduction Hyperovals are extremal combinatorial objects in projective planes; namely a hyperoval is a set of \(q+2\) points in which no three lie on a common line. We refer to Section 2 for formal definitions. Hyperovals have attracted much attention over the years, particularly in the case of Desarguesian planes. Papers regarding existence, construction, and classification abound. While full classifications appear out of reach, the addition of extra assumptions on the symmetry of the plane, the hyperoval, or both, leads to interesting questions and more potential for classification. In particular, the most well-studied non-Desarguesian planes are the _translation planes_, while the best understood hyperovals in Desarguesian planes are the _translation hyperovals_. Therefore it is natural to consider the question of existence and classification for translation hyperovals in translation planes. It is known that translation hyperovals exist in certain translation planes; for example Desarguesian planes [18], André planes [8], Hall planes [14], and Knuth's binary semifield planes [10]. Cherowitzo [6] computationally classified all hyperovals (translation and otherwise) in each of the nine translation planes of order 16. In particular he showed that every translation plane of order 16 contains a translation hyperoval, which led him to make the following statement. **Conjecture 1**.: _These results... lead one to the natural conjecture that translation hyperovals exist in all translation planes of even order._ In this paper we will disprove this conjecture, by exhibiting a projective plane of order 64 containing no translation hyperoval. Specifically, we show that the _twisted field plane_ of order 64, which is a _semifield plane_, contains no translation hyperovals. We will also relate this problem to the (non-)existence of so-called _scattered subspaces_ with respect to a spread associated to the translation plane, as well as the covering radius of the rank-metric code (spread set) associated to the translation plane. ## 2 Definitions and Background In this section we collect the necessary definitions and background for this article. We refer to [7, 12, 6, 13, 15] for further details on these largely well-known topics. ### Projective Planes A _projective plane_\(\pi\) is an incidence structure \((\mathcal{P},\mathcal{L},\mathcal{I})\) consisting of a set of _points_\(\mathcal{P}\), a set of _lines_\(\mathcal{L}\), and an incidence relation \(\mathcal{I}\subset\mathcal{P}\times\mathcal{L}\) such that every two distinct points are both incident with precisely one common line, and every two distinct lines are incident with precisely one common point. The _dual_ of \(\pi\), denoted \(\pi^{d}\), is the incidence structure \((\mathcal{L},\mathcal{P},\mathcal{I}^{d})\), where \(\mathcal{I}^{d}\) is the reverse relation of \(\mathcal{I}\). 
Since the terms points and lines are interchangeable in the definition of a projective plane, \(\pi^{d}\) is again a projective plane. It is well known that for any finite projective plane there exists a positive integer \(q\), called the _order_ of \(\pi\), such that the plane contains \(q^{2}+q+1\) points and \(q^{2}+q+1\) lines, every line is incident with \(q+1\) points, and every point is incident with \(q+1\) lines. If \(q\) is a prime power, there exists a projective plane of order \(q\); the converse is a famous open problem. The classical examples of projective planes are the _Desarguesian planes_\(\mathrm{PG}(2,F)\) where \(F\) is some (skew) field, in which points are one-dimensional vector subspaces of \(V(3,F)\), lines are two-dimensional vector subspaces of \(V(3,F)\), and incidence is given naturally by inclusion. This plane can also be realised as the _completion_ of the _affine plane_\(\mathrm{AG}(2,F)\), in which points are vectors in \(V(2,F)\), and lines are translations of one-dimensional subspaces; that is, cosets \(u+\langle v\rangle\) for some \(u,v\in V(2,F),v\neq 0\). Then the addition of a _line at infinity_ consisting of the _directions_\(\langle v\rangle\) returns a projective plane isomorphic to the previous description. ### Translation Planes Translation planes are projective planes with extra symmetry. They arise from affine planes sharing some natural properties of \(\mathrm{AG}(2,F)\). In order to introduce them formally, we need to recall some technical terminology. An _isomorphism_ from a plane \(\pi_{1}=(\mathcal{P}_{1},\mathcal{L}_{1},\mathcal{I}_{1})\) to another \(\pi_{2}=(\mathcal{P}_{2},\mathcal{L}_{2},\mathcal{I}_{2})\) is a bijection \(\phi\) from \(\mathcal{P}_{1}\cup\mathcal{L}_{1}\) to \(\mathcal{P}_{2}\cup\mathcal{L}_{2}\) which preserves type and incidence; that is, \(\phi(\mathcal{P}_{1})=\mathcal{P}_{2}\), \(\phi(\mathcal{L}_{1})=\mathcal{L}_{2}\), and \(p\in\ell\Leftrightarrow\phi(p)\in\phi(\ell)\). A _collineation_ of a plane \(\pi\) is an isomorphism from \(\pi\) to itself; a _correlation_ is an isomorphism from \(\pi\) to its dual \(\pi^{d}\). An _elation_ of \(\pi\) with centre \(p\) and axis \(\ell\ni p\) is a collineation of \(\pi\) fixing every point on \(\ell\) and every line containing \(p\). A projective plane \(\pi\) is said to be a _translation plane_ if there exists a line \(\ell\) such that the group of elations with axis \(\ell\) acts transitively on the points of \(\pi\backslash\ell\). If the plane is not Desarguesian, then the line \(\ell\) is unique, and is called the _translation line_ of \(\pi\). We will usually denote the translation line by \(\ell_{\infty}\). Any collineation of \(\pi\) must then fix \(\ell_{\infty}\). The name _translation plane_ can be more easily understood in the affine setting; in the affine plane \(\mathrm{AG}(2,F)\), a translation \(\tau_{u}:v\mapsto v+u\) clearly maps lines to lines, preserves direction, and fixes all lines with direction \(\langle u\rangle\). The natural extension of this map to \(\mathrm{PG}(2,F)\) then satisfies the definition of an elation given above, with \(\ell\) the line at infinity, and \(p\) the point at infinity corresponding to \(\langle u\rangle\). Furthermore, the group of translations clearly acts transitively on the points of \(\mathrm{AG}(2,F)\). Thus the concept of an elation and a translation plane is a natural generalisation of this example. The dual of a translation plane is not necessarily a translation plane. 
If both a plane \(\pi\) and its dual \(\pi^{d}\) are translation planes, then we call it a _semifield plane_. If a semifield plane \(\pi\) is not Desarguesian, then there is a unique point \(p_{\infty}\in\pi\) such that the dual of \(p_{\infty}\) is the translation line of \(\pi^{d}\). We call this point the _shears point_. Any collineation of \(\pi\) must then fix \(p_{\infty}\). It is known that we must have \(p_{\infty}\in\ell_{\infty}\), that is, the shears point must lie on the translation line. Furthermore, the collineation group of a semifield plane has precisely three orbits on points: the shears point, the points of the translation line other than the shears point, and the remaining points of the plane. ### Hyperovals A _hyperoval_ in a finite projective plane \(\pi\) (of even order \(q\)) is a set \(\mathcal{H}\) of \(q+2\) points such that no three points of \(\mathcal{H}\) are incident with a common line. Hyperovals can only exist in planes of even order; in planes of odd order, the maximum size of a set with this property is \(q+1\), and a famous result of Segre tells us that, in Desarguesian planes, all sets attaining this bound are equivalent to the set of points of a _conic_. The study of hyperovals in planes \(\mathrm{PG}(2,\mathbb{F}_{q})\), \(q\) even, has a long history, with connections to important objects in coding theory, namely _MDS codes_ with certain parameters and properties. We refer to [22] for an up-to-date list of the known constructions for Desarguesian planes, as well as the largest plane with a complete computer classification. Hyperovals in general planes have also received much attention. It was conjectured that every projective plane of even order contained a hyperoval; this was disproved by the computer classification of [19], where a projective plane of order 16 containing no hyperovals was exhibited. We are now ready to formally introduce translation hyperovals. **Definition 1**.: A _translation hyperoval_ is a hyperoval \(\mathcal{H}\) such that there exists a line \(\ell\) for which the group of elations with axis \(\ell\) acts transitively on \(\mathcal{H}\backslash\ell\). Payne [18] showed that every translation hyperoval in \(\mathrm{PG}(2,2^{n})\) is equivalent to one defined by the set of vectors \(\{(0,1,0),(0,0,1),(1,x,x^{2^{i}}):x\in\mathbb{F}_{2^{n}}\}\) for some positive integer \(i\) relatively prime to \(n\). These translation hyperovals were first constructed by Segre [21]. It is known (see e.g. [6]) that for a translation hyperoval in a (non-Desarguesian) translation plane, the line \(\ell\) must be the translation line \(\ell_{\infty}\), and \(|\mathcal{H}\cap\ell_{\infty}|=2\). Since a semifield plane possesses a distinguished point at infinity, namely the shears point, it makes sense to consider whether or not a translation hyperoval contains the shears point. **Definition 2**.: A translation hyperoval in a semifield plane is said to be of _shears type_ if it contains the shears point, and of _non-shears type_ if it does not. Translation hyperovals in translation planes were studied by Cherowitzo in [6] where he computationally classified all hyperovals (translation and otherwise) in each of the nine translation planes of order 16. In particular he showed that every translation plane of order 16 contains a translation hyperoval, which led him to Conjecture 1. ## 3 Quasifields, Spreads, Spread Sets and Translation Planes In this section we outline various well-known correspondences between quasifields, spreads, spread sets and translation planes. 
We refer to [13] for details, proofs, and further references. ### Quasifields and Semifields A _quasifield_ is an algebraic structure similar to a finite field, without the requirement that multiplication be associative, and assuming only one distributive law. In the finite case, a quasifield must have order \(q^{n}\) for some prime power \(q\) and some positive integer \(n\). In this case we may take the additive structure to be \((\mathbb{F}_{q^{n}},+)\), with multiplication \(\circ:\mathbb{F}_{q^{n}}\times\mathbb{F}_{q^{n}}\to\mathbb{F}_{q^{n}}\) satisfying * \((x+x^{\prime})\circ y=x\circ y+x^{\prime}\circ y\) for all \(x,x^{\prime},y\in\mathbb{F}_{q^{n}}\); * For every \(a,b\in\mathbb{F}_{q^{n}}\), \(a\neq 0\), there exist unique \(x,y\in\mathbb{F}_{q^{n}}\) such that \(x\circ a=a\circ y=b\). A _semifield_ is a quasifield in which the second distributive law also holds: * \(x\circ(y+y^{\prime})=x\circ y+x\circ y^{\prime}\) for all \(x,y,y^{\prime}\in\mathbb{F}_{q^{n}}\). Note that in the literature quasifields and semifields are assumed to contain a multiplicative identity, with the terms _prequasifield_ and _presemifield_ used to describe the case where an identity is not assumed. Since this distinction does not have any relevance for this paper, we will abuse terminology and drop the prefix. There are many known constructions for quasifields and semifields. We refer to [13, 16] for examples of constructions, and [20] for the classification of semifields of order 64. One of the most well-studied families, and the example most relevant to this paper, are the _generalised twisted fields_ of Albert [1]. These are semifields with multiplication \[x\circ y:=xy-jx^{q^{i}}y^{q^{k}},\] where \(j\) is a fixed element of \(\mathbb{F}_{q^{n}}\) satisfying \(N_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q^{(n,i,k)}}}(j)\neq 1\), with \((n,i,k)\) denoting the greatest common divisor of these three integers. For example, for \(q=2,n=6,i=2,k=4\), the multiplication \[x\circ y:=xy-jx^{2^{2}}y^{2^{4}},\] defines a semifield if and only if \(N_{\mathbb{F}_{2^{6}}/\mathbb{F}_{2^{2}}}(j)\neq 1\). Such elements certainly exist, for example any \(j\) satisfying \(j^{6}+j+1=0\). Two semifields with respective multiplications \(\circ\) and \(\star\) are _isotopic_ if there exist invertible additive maps \(A,B,C\) from \(\mathbb{F}_{q^{n}}\) to itself such that \(A(x\circ y)=B(x)\star C(y)\) for all \(x,y\in\mathbb{F}_{q^{n}}\). Every presemifield is isotopic to a semifield via _Kaplansky's trick_[16]. ### Spreads and Spread Sets A _spread_ (or _\(n\)-spread_) in \(V=V(2n,F)\) is a set \(\mathcal{D}\) of \(n\)-dimensional vector subspaces of \(V\) such that every nonzero element of \(V\) is contained in precisely one element of \(\mathcal{D}\). Let us identify the elements of \(V(2n,q)\) with elements of \(V(2,q^{n})\), and let \(S_{\infty}=\{(0,x):x\in\mathbb{F}_{q^{n}}\}\). The \(n\)-dimensional spaces meeting \(S_{\infty}\) trivially are precisely those of the form \[S_{f}:=\{(x,f(x)):x\in\mathbb{F}_{q^{n}}\},\] where \(f(x)\in\mathbb{F}_{q^{n}}[x]\) is a _linearised polynomial_, i.e. a polynomial of the form \(f(x)=\sum_{i=0}^{n-1}f_{i}x^{q^{i}}\). These are the polynomials which define \(\mathbb{F}_{q}\)-linear maps from \(\mathbb{F}_{q^{n}}\) into itself. We denote the set of linearised polynomials of degree at most \(q^{n-1}\) as \(\mathcal{L}\). 
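For concreteness, a minimal sketch (ours, not from the paper) of the order-64 twisted field example above: \(\mathbb{F}_{2^{6}}\) is represented as \(\mathbb{F}_{2}[j]/(j^{6}+j+1)\) with elements encoded as 6-bit integers, so the generator \(j\) itself satisfies \(j^{6}+j+1=0\), and both the norm condition and the absence of zero divisors can be checked directly. These helpers are reused in later sketches.

```python
IRRED = 0b1000011  # j^6 + j + 1; the class of j is a root of x^6 + x + 1

def gf_mul(a, b):
    """Multiply two elements of GF(2^6), encoded as 6-bit integers."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000000:        # degree-6 overflow: reduce mod j^6 + j + 1
            a ^= IRRED
    return r

def gf_pow(a, e):
    """Square-and-multiply exponentiation in GF(2^6)."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

J = 0b10  # the element j

def twisted_mul(x, y):
    """x o y = xy - j x^{2^2} y^{2^4}  (minus is plus in characteristic 2)."""
    return gf_mul(x, y) ^ gf_mul(J, gf_mul(gf_pow(x, 4), gf_pow(y, 16)))

# Norm condition: N_{F_64/F_4}(j) = j^(1+4+16) = j^21 must differ from 1.
assert gf_pow(J, 21) != 1
# Presemifield check: no zero divisors.
assert all(twisted_mul(x, y) for x in range(1, 64) for y in range(1, 64))
```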
To a spread \(\mathcal{D}\) containing \(S_{\infty}\) we can associate a unique set of linear maps (or linearised polynomials) \(C(\mathcal{D})\) by \[C(\mathcal{D})=\{f:S_{f}\in\mathcal{D}\}.\] These satisfy the property that \(|C(\mathcal{D})|=q^{n}\) and for any \(f,g\in C(\mathcal{D}),f\neq g\), we have \(\operatorname{rank}(f-g)=n\), where rank denotes the usual linear algebra rank of an \(\mathbb{F}_{q}\)-linear map. This is called the _spread set_ associated to \(\mathcal{D}\). The definition of a spread set coincides with that of a (not necessarily linear) _maximum rank distance (MRD) code_. Conversely, any set \(C\) satisfying this property defines a spread by \[\mathcal{D}(C):=\{S_{\infty}\}\cup\{S_{f}:f\in C\}.\] For a linear map \(f\), we can define \[\phi_{f}:(x,y)\mapsto(x,f(x)+y).\] If \(C\) is additively closed, then each of the maps in the set \(\phi_{C}:=\{\phi_{f}:f\in C\}\) fixes the spread \(\mathcal{D}(C)\). Moreover, each \(\phi_{f}\) fixes \(S_{\infty}\) pointwise, and \(\phi_{C}\) is an abelian group acting transitively on \(\mathcal{D}(C)\backslash\{S_{\infty}\}\). The spread is then called a _semifield spread_, and \(C\) a _semifield spread set_, for reasons that will shortly become apparent. We refer to the distinguished element \(S_{\infty}\) as the _shears element_ of \(\mathcal{D}\). Given a quasifield \(Q\), we can define a spread set and a spread as follows. Define \[R_{y}(x):=x\circ y.\] Then each \(R_{y}\) is an additive map. Moreover \(R_{y}-R_{y^{\prime}}\) is invertible for all \(y\neq y^{\prime}\), since otherwise there would exist some nonzero \(a\) such that \(a\circ y=a\circ y^{\prime}\), contradicting one of the axioms of a quasifield. Thus \[C(Q):=\{R_{y}:y\in\mathbb{F}_{q^{n}}\}\] defines a spread set, and \(\mathcal{D}(Q):=\mathcal{D}(C(Q))\) is a spread. If \(Q\) is a semifield, then \(R_{y+y^{\prime}}=R_{y}+R_{y^{\prime}}\), and so \(C(Q)\) is additively closed. Conversely, from a spread or spread set we can define a quasifield; note however that the quasifield is not uniquely determined by the spread or spread set, though it is uniquely determined up to isotopy. ### Translation Planes from Spreads From a spread \(\mathcal{D}\) we can define an affine plane as follows: * Points: elements of \(V(2n,q)\); * Lines: translations of elements of \(\mathcal{D}\), i.e. \(u+S\) for \(u\in V(2n,q),S\in\mathcal{D}\). We can complete this to a projective plane by adding a line at infinity \(\ell_{\infty}\), whose points are the elements of \(\mathcal{D}\); to a line \(u+S\) we add the point at infinity \(S\). We denote this plane as \(\pi(\mathcal{D})\). It is straightforward to check that this is indeed a translation plane. Moreover, André showed that every translation plane arises from a spread [2]. By the discussion in the previous subsections, we can define a translation plane from a quasifield, and from a spread set. We may denote these naturally as \(\pi(Q)\), \(\pi(C)\) respectively. The dual of a translation plane defined by a semifield is again a translation plane, and so the plane is a semifield plane. The shears point corresponds to the shears element of the spread. As mentioned in Section 2.2, the collineation group of a semifield plane has precisely three orbits on points: the shears point, the points of the translation line other than the shears point, and the remaining points of the plane. 
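To make the spread set correspondence concrete, the following sketch (ours, reusing the \(\mathbb{F}_{2^{6}}\) helpers above) realises each \(R_{y}\) of the twisted field as an \(\mathbb{F}_{2}\)-matrix and verifies the defining spread set (MRD) property \(\operatorname{rank}(R_{y}-R_{y^{\prime}})=6\) for all \(y\neq y^{\prime}\).

```python
import itertools

def linmap_cols(F):
    """Columns (as 6-bit ints) of the F_2-matrix of a linear map on GF(2^6),
    with respect to the basis 1, j, ..., j^5."""
    return [F(1 << i) for i in range(6)]

def f2_rank(cols):
    """Rank over F_2 by Gaussian elimination on 6-bit column vectors."""
    pivot = [0] * 6
    rank = 0
    for c in cols:
        for i in reversed(range(6)):
            if not (c >> i) & 1:
                continue
            if pivot[i]:
                c ^= pivot[i]   # eliminate the leading bit and continue
            else:
                pivot[i] = c
                rank += 1
                break
    return rank

# Spread set of the twisted field: C = { R_y : y in GF(64) }.
R = {y: linmap_cols(lambda x, y=y: twisted_mul(x, y)) for y in range(64)}
assert all(
    f2_rank([a ^ b for a, b in zip(R[y], R[yp])]) == 6
    for y, yp in itertools.combinations(range(64), 2)
)
```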
The transitivity on the points of the translation line other than the shears point is demonstrated by the maps \(\phi_{f}\) defined in the previous section. It is well known that two semifields define isomorphic planes if and only if they are isotopic. ### Scattered Subspaces It is known (see e.g. [15, 9]) that translation hyperovals correspond to _scattered subspaces_ with respect to spreads. We recall the relevant notions and demonstrate this fact here. A subspace \(U\) is said to be _scattered_ with respect to a spread \(\mathcal{D}\) if \[\dim(U\cap S)\leq 1\quad\text{for all }S\in\mathcal{D}.\] The study of scattered subspaces with respect to a spread originates from [4]; we refer to [15] for a recent survey of the various applications of scattered subspaces, including that of translation hyperovals. We note that the interest in this notion is not restricted to spreads in \(V(2n,q)\); the more general case of spreads of \(n\)-dimensional subspaces of \(V(kn,q)\) is also of interest. However, here we deal only with the case \(k=2\). In this case it is straightforward to obtain an upper bound on the dimension of a scattered subspace. **Proposition 1**.: _[_4_, Lemma 3.1 and Theorem 4.1]_ _Let \(\mathcal{D}\) be a spread in \(V(2n,q)\). Then the dimension of a scattered subspace is at most \(n\). Moreover, if \(\mathcal{D}\) is Desarguesian, then there exists a scattered subspace of dimension \(n\)._ Suppose \(U\) is a scattered subspace of dimension \(n\) with respect to a spread in \(V(2n,2)\). Then \(U\) must intersect \(2^{n}-1\) distinct elements of \(\mathcal{D}\) nontrivially, and hence since \(|\mathcal{D}|=2^{n}+1\), there exist precisely two elements of \(\mathcal{D}\) which intersect \(U\) trivially, say \(S_{1}\) and \(S_{2}\). We claim that the set of points of the translation plane \(\pi(\mathcal{D})\) defined by the affine points \(U\) and the two points at infinity \(S_{1},S_{2}\) forms a translation hyperoval, which we will denote by \(\mathcal{H}_{U}\). Consider a line \(v+S\) of \(\pi(\mathcal{D})\) with \(S\notin\{S_{1},S_{2}\}\). Then the points of intersection of \(v+S\) and \(\mathcal{H}_{U}\) are affine, and so their number is equal to \(|(v+S)\cap U|\), which is either \(0\) or \(2\), since \(|U\cap S|=2\). Next consider a line \(v+S_{i}\), \(i\in\{1,2\}\). Then this line meets \(\mathcal{H}_{U}\) in the point at infinity corresponding to \(S_{i}\), as well as a unique affine point \(v+s_{i}\), where \(s_{i}+u_{i}=v\) for a unique \(u_{i}\in U\). The uniqueness here follows from the fact that \(V(2n,2)=S_{i}\oplus U\). Finally the line at infinity clearly meets \(\mathcal{H}_{U}\) in the two points \(\{S_{1},S_{2}\}\), proving that \(\mathcal{H}_{U}\) is a hyperoval. Then the group of translations \(\tau_{u}:u\in U\) clearly acts transitively on \(U=\mathcal{H}_{U}\backslash\ell_{\infty}\), showing that \(\mathcal{H}_{U}\) is indeed a translation hyperoval. Conversely, it has been shown that (up to equivalence) every translation hyperoval arises in this way from a scattered subspace. **Proposition 2**.: _[_15_, Theorem 1.7]_ _Let \(\pi\) be a translation plane of order \(2^{n}\), and suppose \(\mathcal{D}\) is a spread in \(V(2n,2)\) such that \(\pi\) is isomorphic to \(\pi(\mathcal{D})\). 
Then there exists a translation hyperoval in \(\pi\) if and only if there exists an \(n\)-dimensional scattered subspace with respect to \(\mathcal{D}\)._ Hence the existence of translation hyperovals corresponds to the existence of scattered subspaces. Recently in [11] the following lower bound on the largest dimension of a scattered subspace was shown. **Proposition 3**.: _[_11_, Proposition 4.15]_ _Let \(\mathcal{D}\) be a spread in \(V(2n,q)\), \(q\geq 8\). Then there exists a scattered subspace of dimension at least \(n/2-1\)._ Clearly this does not guarantee the existence of translation hyperovals, though it does indicate the dimension at which scattered subspaces become hard to find, and impossible to guarantee by combinatorial methods. Various generalisations of the notion of scattered subspaces have been put forward in recent years, such as _evasive subspaces_[3]. One which is of particular relevance to this paper is that of \((\mathcal{D},h)\)-scattered subspaces: A subspace \(U\) is said to be \((\mathcal{D},h)\)_-scattered_ if \[\dim(U\cap S)\leq h\quad\text{for all }S\in\mathcal{D}.\] Clearly the property of a subspace being \((\mathcal{D},1)\)-scattered coincides with it being scattered with respect to \(\mathcal{D}\). The following specialisation of [11, Theorem 6.7] guarantees the existence of \((\mathcal{D},h)\)-scattered subspaces in certain circumstances. **Proposition 4**.: _Let \(\mathcal{D}\) be a spread in \(V(2n,q)\). Then there exists a \((\mathcal{D},h)\)-scattered subspace of dimension \(n\) for any \(h\geq\lceil\sqrt{n+1}-1\rceil\) if \(q\geq 4\), and for any \(h\geq\lceil\sqrt{n+2}-1\rceil\) if \(q<4\)._ For the case \(n=6\), which is the case for spreads arising from semifields of order \(q^{6}\), we obtain that there exists a \((\mathcal{D},2)\)-scattered subspace of dimension \(6\) with respect to any \(6\)-spread \(\mathcal{D}\) in \(V(12,q)\). In particular, we note that the existence of a \((\mathcal{D},1)\)-scattered subspace of dimension \(n\), and hence a translation hyperoval, is not guaranteed. Indeed, we will demonstrate a counterexample. ### Covering Radius Let us assume that \(\mathcal{D}=\mathcal{D}(\mathbb{S})\), where \(\mathbb{S}\) is a semifield. Suppose \(U\) is an \(n\)-dimensional scattered subspace with respect to \(\mathcal{D}\). Then without loss of generality we may assume that one of the following two cases occurs. (Shears Type) \(U\cap S_{\infty}=U\cap S_{0}=0\); (Non-Shears Type) \(U\cap S_{\infty}\neq 0,U\cap S_{0}=U\cap S_{y}=0\) for some \(y\neq 0\). These two cases correspond to whether or not the translation hyperoval \(\mathcal{H}_{U}\) contains the shears point \(S_{\infty}\). The _covering radius_ of a set or subspace \(C\) of the space of linear maps \(M=\operatorname{End}_{\mathbb{F}_{q}}(\mathbb{F}_{q^{n}})\) is a notion arising naturally from coding theory in the rank metric; we refer to [5] for background. We denote it by \(\rho(C)\) and define it as \[\rho(C)=\min\{i:\forall g\in M,\exists f\in C\text{ s.t. }\text{rank}(f-g)\leq i\}.\] For a spread set \(C\subset M\), which has cardinality \(q^{n}\) and is such that every nonzero element of \(C\) is invertible, we define \(C^{-1}=\{f^{-1}:f\in C,f\neq 0\}\cup\{0\}\). If \(C\) is a spread set, then \(\rho(C)\leq n-1\), since otherwise there would exist \(g\notin C\) such that \(\text{rank}(g-f)=n\) for all \(f\in C\). 
But for any nonzero \(a\in\mathbb{F}_{q^{n}}\) there exists a unique \(y\in\mathbb{F}_{q^{n}}\) such that \(a\circ y=g(a)\), and so there exists \(R_{y}\in C\) such that \(\text{rank}(g-R_{y})<n\). **Theorem 1**.: _Let \(\mathbb{S}\) be a semifield of order \(2^{n}\), \(\pi(\mathbb{S})\) the translation plane it defines, and \(C=C(\mathbb{S})\) the spread set it defines in \(V(2n,2)\). Then there exists a translation hyperoval of shears type in \(\pi(\mathbb{S})\) if and only if \(\rho(C)=n-1\), and there exists a translation hyperoval of non-shears type in \(\pi(\mathbb{S})\) if and only if \(\rho(C^{-1})=n-1\)._ Proof.: The plane \(\pi(\mathbb{S})\) contains a translation hyperoval of shears type if and only if there exists an \(n\)-dimensional subspace \(U\) which is scattered with respect to \(\mathcal{D}(\mathbb{S})\) such that \(U\cap S_{\infty}=0\). For any \(n\)-dimensional subspace such that \(U\cap S_{\infty}=0\) there exists an \(\mathbb{F}_{q}\)-linear map \(f\) from \(\mathbb{F}_{q^{n}}\) to itself such that \[U=\{(x,f(x)):x\in\mathbb{F}_{q^{n}}\}.\] Let \(y\in\mathbb{F}_{q^{n}}\). Then \(U\cap S_{y}=\{(x,f(x))|f(x)=R_{y}(x)\}\), and so \(\dim(U\cap S_{y})=n-\text{rank}(f-R_{y})\). Hence there exists an \(n\)-dimensional subspace which is scattered with respect to \(\mathcal{D}(\mathbb{S})\) if and only if there exists \(f\) such that \(\text{rank}(f-R_{y})\geq n-1\) for all \(y\in\mathbb{F}_{q^{n}}\), if and only if \(\rho(C)\geq n-1\), if and only if \(\rho(C)=n-1\), proving the first claim. Similarly, \(\pi(\mathbb{S})\) contains a translation hyperoval of non-shears type if and only if there exists an \(n\)-dimensional subspace \(U\) which is scattered with respect to \(\mathcal{D}(\mathbb{S})\) such that \(U\cap S_{0}=0\). For any such subspace there exists an \(\mathbb{F}_{q}\)-linear map \(f\) from \(\mathbb{F}_{q^{n}}\) to itself such that \(U=\{(f(x),x):x\in\mathbb{F}_{q^{n}}\}\). Then for any nonzero \(y\in\mathbb{F}_{q^{n}}\) we have \(\dim(U\cap S_{y})=n-\text{rank}(f-R_{y}^{-1})\), while \(\dim(U\cap S_{\infty})=n-\text{rank}(f)\), and so arguing as before we obtain the second claim. Note that we do not have such a result for general quasifields, since the lack of a distinguished element at infinity, and of transitivity on the remaining points at infinity, means that there is no _canonical_ choice for the spread set. Note also that the connection between the existence of scattered subspaces with respect to semifield spreads and the covering radius of \(C\) and \(C^{-1}\) is also valid for \(q>2\); however in this case we do not obtain translation hyperovals. ### Linearised Polynomials and Dickson Matrices In order to explicitly determine whether or not translation hyperovals exist, we need a practical method for determining the existence of a linear map \(f\) such that \(\text{rank}(f-R_{y})\geq n-1\) for all \(y\in\mathbb{F}_{q^{n}}\). We do this by utilising linearised polynomials and Dickson matrices; this is the approach used by Payne to classify translation hyperovals in Desarguesian planes, and also used productively in recent years in the construction of MRD codes. 
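Before turning to Dickson matrices, note that the rank condition of Theorem 1 can be tested for any single candidate \(f\) by direct computation over \(\mathbb{F}_{2}\). A sketch (ours, reusing the helpers from the earlier sketches), for \(n=6\) and the twisted field of order 64:

```python
import functools, operator

def linpoly(coeffs):
    """The linearised polynomial f(x) = sum_i coeffs[i] * x^(2^i) on GF(2^6)."""
    def f(x):
        return functools.reduce(
            operator.xor,
            (gf_mul(c, gf_pow(x, 1 << i)) for i, c in enumerate(coeffs)))
    return f

def gives_shears_hyperoval(coeffs):
    """True iff U_f = {(x, f(x))} is scattered w.r.t. the twisted field
    spread, i.e. rank(f - R_y) >= 5 for every y (Theorem 1 with n = 6).
    Any f in the spread set fails automatically, since then one of the
    differences f - R_y is the zero map."""
    f = linpoly(coeffs)
    return all(
        f2_rank(linmap_cols(lambda x, y=y: f(x) ^ twisted_mul(x, y))) >= 5
        for y in range(64))
```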
To a linearised polynomial \(f(x)=\sum_{i=0}^{n-1}f_{i}x^{q^{i}}\) we associate the _Dickson (or autocirculant) matrix_ \(D_{f}\) defined as follows:

\[D_{f}:=\begin{pmatrix}f_{0}&f_{1}&\cdots&f_{n-1}\\ f_{n-1}^{q}&f_{0}^{q}&\cdots&f_{n-2}^{q}\\ \vdots&\ddots&\ddots&\vdots\\ f_{1}^{q^{n-1}}&f_{2}^{q^{n-1}}&\cdots&f_{0}^{q^{n-1}}\end{pmatrix}\]

It is well known that the assignment \(f\mapsto D_{f}\) is linear, and \(\operatorname{rank}(f)=\operatorname{rank}(D_{f})\) [17]. Hence we can translate Theorem 1 to this setting.

**Lemma 1**.: _The plane \(\pi(\mathbb{S})\) contains a translation hyperoval of shears type if and only if there exists \(f\in\mathcal{L}\backslash C(\mathbb{S})\) such that_

\[\operatorname{rank}(D_{R_{y}}-D_{f})\geq n-1\]

_for all \(y\in\mathbb{F}_{q^{n}}\). The plane \(\pi(\mathbb{S})\) contains a translation hyperoval of non-shears type if and only if there exists \(g\in\mathcal{L}\backslash C(\mathbb{S})\) such that_

\[\operatorname{rank}(D_{R_{y}}^{-1}-D_{g})\geq n-1\]

_for all \(y\in\mathbb{F}_{q^{n}}^{\times}\) and \(\operatorname{rank}(D_{g})=n-1\)._

Now the entries of \(D_{R_{y}}\) are polynomials in \(y\), and since \(\det(D_{R_{y}})=1\) for all non-zero \(y\), we can regard \(D_{R_{y}}^{-1}\) as a matrix whose entries are polynomials in \(y\). Hence for \(f,g\in\mathcal{L}\), the functions \(d_{f}(y):=\det(D_{R_{y}}-D_{f})\) and \(d_{g}^{\operatorname{inv}}(y):=\det(D_{R_{y}}^{-1}-D_{g})\) are both polynomials in \(y\) (whose coefficients are expressions in the unknown coefficients of \(f\) and \(g\) respectively). Note that \(d_{f}(0)=\det(D_{f})\). If necessary we can replace \(d_{f}(y)\) and \(d_{g}^{\operatorname{inv}}(y)\) with their reduction modulo \(y^{2^{n}}-y\).

**Lemma 2**.: _The plane \(\pi(\mathbb{S})\) of order \(2^{n}\) contains a translation hyperoval of shears type if and only if there exists \(f\in\mathcal{L}\backslash C(\mathbb{S})\) such that \(d_{f}(y)=y^{2^{n}-1}+1\), and contains a translation hyperoval of non-shears type if and only if there exists \(g\in\mathcal{L}\backslash C(\mathbb{S})\) such that \(\operatorname{rank}(g)=n-1\) and \(d_{g}^{\operatorname{inv}}(y)=\frac{y^{2^{n}}+y}{y+a}\) for some \(a\in\mathbb{F}_{q^{n}}^{\times}\)._

Proof.: Let \(U=\{(x,f(x)):x\in\mathbb{F}_{q^{n}}\}\), and suppose \(U\) defines a translation hyperoval of shears type. Then we may assume without loss of generality that \(U\cap S_{0}=0\), and so \(U\cap S_{y}\neq 0\) for all \(y\neq 0\). Then \(\operatorname{rank}(D_{R_{y}}-D_{f})=n-1\) for all nonzero \(y\in\mathbb{F}_{q^{n}}\), and so \(d_{f}\) is zero at all nonzero elements of \(\mathbb{F}_{q^{n}}\) and nonzero at \(y=0\). Clearly this implies that \(d_{f}(y)=y^{2^{n}-1}+1\).

Now let \(W=\{(g(x),x):x\in\mathbb{F}_{q^{n}}\}\), and suppose \(W\) defines a translation hyperoval of non-shears type. Then we may assume without loss of generality that \(W\cap S_{0}=0\), and there exists a unique \(a\in\mathbb{F}_{q^{n}}^{\times}\) such that \(W\cap S_{a}=0\) and \(W\cap S_{y}\neq 0\) for all nonzero \(y\neq a\). Furthermore \(W\cap S_{\infty}\neq 0\), and so \(d_{g}^{\operatorname{inv}}(0)=0\). Hence \(d_{g}^{\operatorname{inv}}(y)\neq 0\) if and only if \(y=a\), and so \(d_{g}^{\operatorname{inv}}(y)=\frac{y^{2^{n}}+y}{y+a}\) as claimed.

Note that this is the approach used by Payne in his classification of translation hyperovals in \(\operatorname{PG}(2,q^{n})\).
He showed that for the case \(R_{y}(x)=xy\), the requirement that \(d_{f}(y)=y^{2^{n}-1}+1\) implies that \(f(x)\) is a monomial, that is, \(f(x)=f_{i}x^{2^{i}}\) for some \(i\). The classification of translation hyperovals in \(\operatorname{PG}(2,q^{n})\) then follows easily.

## 4 Translation Hyperovals in the Generalised Twisted Field plane of order \(64\)

In this section we analyse the conditions from Lemma 1 and Lemma 2 for the case of the Generalised Twisted Field plane of order \(64\). We choose this plane due to the fact that the Dickson matrices \(D_{R_{y}}\) and \(D_{R_{y}^{-1}}\) are sparse, making the equations manageable. Furthermore the known symmetries of these semifields allow us to further reduce the necessary computation.

The multiplication in this presemifield, which we will denote by \(\mathbb{T}\), is given by

\[x\circ y=xy-jx^{2^{2}}y^{2^{4}}=:R_{y}(x),\]

where \(j\) is a solution to \(j^{6}+j+1=0\). We choose this representation to match that in [20]. Any semifield isotopic to this presemifield has centre isomorphic to \(\mathbb{F}_{4}\). In particular, each of the maps \(R_{y}\) is \(\mathbb{F}_{4}\)-linear.

Note furthermore that we have the following identity, which will prove useful in the subsequent calculations:

\[\alpha R_{y}(\beta x)=R_{\alpha\beta y}(x)\]

for all \(\alpha,\beta\in\mathbb{F}_{2^{6}}\) such that \(\alpha\beta^{2^{2}}=\alpha^{2^{4}}\beta\neq 0\). Hence for any \(\alpha,\beta\) satisfying this condition, the map \(\phi_{\alpha,\beta}:(x,y)\mapsto(\beta^{-1}x,\alpha y)\) fixes \(\mathcal{D}(\mathbb{T})\); in particular, it fixes \(S_{\infty}\) and \(S_{0}\), and maps \(S_{y}\) to \(S_{\alpha\beta y}\). Note that any such \(\phi_{\alpha,\beta}\) fixes one further element of \(\mathcal{D}(\mathbb{T})\) if and only if it fixes every element of \(\mathcal{D}(\mathbb{T})\), and this occurs precisely if \(\alpha^{9}=1\) and \(\alpha\beta=1\). Furthermore, letting \(U_{f}=\{(x,f(x)):x\in\mathbb{F}_{q^{n}}\}\) and \(W_{g}=\{(g(x),x):x\in\mathbb{F}_{q^{n}}\}\), we get that \(\phi_{\alpha,\beta}(U_{f})=U_{h}\) where \(h(x)=\alpha f(\beta x)\), and \(\phi_{\alpha,\beta}(W_{g})=W_{k}\) where \(k(x)=\beta^{-1}g(\alpha^{-1}x)\).

### Shears Type

Suppose \(\pi(\mathbb{T})\) contains a translation hyperoval of shears type. By Lemma 2, we require the existence of some \(f\in\mathcal{L}\backslash C(\mathbb{S})\) such that \(d_{f}(y):=\det(D_{R_{y}}-D_{f})=y^{2^{6}-1}+1\). This leads to a system of equations in six unknowns \(f_{0},\ldots,f_{5}\) over \(\mathbb{F}_{2^{6}}\). Note furthermore that for any \(\alpha,\beta\in\mathbb{F}_{2^{6}}\) such that \(\alpha\beta^{2^{2}}=\alpha^{2^{4}}\beta\neq 0\), if \(h(x)=\alpha f(\beta x)\), then

\[d_{f}(y)=d_{h}(\alpha\beta y)=d_{h}(y),\]

and so \(f\) defines a translation hyperoval if and only if \(h\) defines a translation hyperoval. Note that \(h_{0}=\alpha\beta f_{0}\), and the set \(\{\alpha\beta:\alpha,\beta\in\mathbb{F}_{2^{6}},\alpha\beta^{2^{2}}=\alpha^{2^{4}}\beta\neq 0\}\) is precisely the set of solutions to \(x^{21}=1\). Thus we may assume without loss of generality that \(f_{0}\in\{0,1,j,j^{2}\}\). Furthermore if \(\alpha\beta=1\) then \(\beta^{9}=1\) and \(h_{1}=\beta f_{1}\), and so we can assume without loss of generality that \(f_{1}^{8}=f_{1}\), i.e. \(f_{1}\in\mathbb{F}_{8}\).
From the coefficients of \(y^{62}\) and \(y^{58}\) respectively, we get that

\[0 =j^{21}f_{0}+j^{38}f_{2}^{4},\]
\[0 =j^{22}f_{0}^{16}f_{4}^{4}+j^{21}f_{0}^{5}+j^{22}f_{2}^{20}+j^{21}f_{2}f_{4}^{4}.\]

Thus we have that either \(f_{0}=f_{2}=0\), or \(f_{0}=j^{17}f_{2}^{4}\) and \(f_{4}=j^{16}f_{2}^{52}\). We plug these expressions into the coefficients of \(y^{57}\) and \(y^{54}\) and set them equal to zero. It turns out that we get the same pair of equations regardless of whether or not \(f_{2}=0\), and we also observe that \(f_{2}\) does not appear in either of the resulting equations:

\[0 =j^{39}f_{1}^{16}f_{3}^{8}+f_{1}^{2}f_{5}^{4}+j^{5}f_{3}^{16}f_{5}^{2}+j^{34}f_{3}^{4}f_{5}^{8},\]
\[0 =j^{10}f_{1}^{33}+j^{17}f_{1}^{12}+f_{3}^{9}+j^{27}f_{5}^{36}.\]

Taking into account that \(f_{1}\in\mathbb{F}_{8}\), and raising to an appropriate power of \(2\), we get that

\[0 =f_{1}(j^{51}f_{3}^{4}+f_{5}^{2})+j^{34}f_{3}^{8}f_{5}+j^{17}f_{3}^{2}f_{5}^{4}, \tag{1}\]
\[0 =j^{36}f_{1}^{5}+f_{3}^{9}+j^{27}f_{5}^{36}.\]

This leads to the following.

**Theorem 2**.: _The Generalised Twisted Field plane of order \(64\) does not contain a translation hyperoval of shears type._

Proof.: The following MAGMA code verifies that the system (1) has no nontrivial solutions.

q := 2;
F := GF(q);
P<x> := PolynomialRing(F);
L<j> := ext<F|x^6+x+1>;
F8 := {x:x in L|x^8 eq x};
S<f1,f3,f5> := PolynomialRing(L,3);
g := f1*(j^51*f3^4 + f5^2) + j^34*f3^8*f5 + j^17*f3^2*f5^4;
h := j^36*f1^5 + f3^9 + j^27*f5^36;
s1 := {[a,b,c]:a in F8,b,c in L|Evaluate(g,[a,b,c]) eq 0};
s2 := {[a,b,c]:a in F8,b,c in L|Evaluate(h,[a,b,c]) eq 0};
s1 meet s2 eq {[L|0,0,0]};

Hence we have that \(f_{1}=f_{3}=f_{5}=0\), implying \(f(x)\) is in fact an \(\mathbb{F}_{4}\)-linear map. But since each \(R_{y}\) is also \(\mathbb{F}_{4}\)-linear, then the rank of \(f-R_{y}\) as an \(\mathbb{F}_{2}\)-linear map must be even; in particular it cannot be \(n-1=5\), contradicting Theorem 1. Hence this plane does not contain a translation hyperoval of shears type.

The MAGMA code used in this proof runs in less than one second.

### Non-shears Type

Suppose \(\pi(\mathbb{T})\) contains a translation hyperoval of non-shears type. By Lemma 1, we require the existence of some \(g\in\mathcal{L}\backslash C(\mathbb{S})\) such that \(\mathrm{rank}(D_{R_{y}}^{-1}-D_{g})\geq n-1\) for all \(y\in\mathbb{F}_{q^{n}}^{\times}\) and \(\mathrm{rank}(D_{g})=n-1\). Similarly to the shears case, we may assume without loss of generality that \(g_{0}\in\{0,1,j,j^{2}\}\) and \(g_{1}\in\mathbb{F}_{8}\).

Note if \(R_{y}(x)=yx+jy^{2^{4}}x^{2^{2}}\) for \(y\neq 0\), then \(R_{y}\) is \(\mathbb{F}_{4}\)-linear and so \(R_{y}^{-1}\) must also be \(\mathbb{F}_{4}\)-linear. It is straightforward then to calculate \(R_{y}^{-1}\), which we find to be

\[R_{y}^{-1}(x)=y^{62}j^{21}x+y^{11}j^{22}x^{4}+y^{59}j^{26}x^{16}.\]

Due to the complexity of the coefficients of \(d_{g}^{\mathrm{inv}}(y)\) and the unknown element \(a\) such that \(d_{g}^{\mathrm{inv}}(a)\neq 0\), there is little that can be done from a theoretical point of view utilising Lemma 2, beyond the above restrictions on the coefficients of \(g\). Hence we must rely on a long computation using Lemma 1.
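Before embarking on that computation, the closed form for \(R_{y}^{-1}\) just obtained can at least be sanity-checked by machine; the following minimal MAGMA sketch is our own illustration (the function names R and Rinv are ours):

q := 2; n := 6;
P<x> := PolynomialRing(GF(q));
L<j> := ext<GF(q)|x^6+x+1>;
// twisted field multiplication R_y(x) = xy - j*x^4*y^16 (char 2, so - is +)
R := func< y, z | z*y + j*z^4*y^16 >;
// the claimed closed form for the inverse of R_y
Rinv := func< y, z | j^21*y^62*z + j^22*y^11*z^4 + j^26*y^59*z^16 >;
// verify that Rinv(y,-) inverts R(y,-) for every nonzero y
forall{y : y in L | y eq 0 or forall{z : z in L | R(y,Rinv(y,z)) eq z}};

This instant check guards against transcription errors in the exponents before committing to the long search that follows.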
**Theorem 3**.: _The Generalised Twisted Field plane of order \(64\) does not contain a translation hyperoval of non-shears type._

Proof.: The following MAGMA code verifies that there are no tuples \((g_{0},g_{1},g_{2},g_{3},g_{4},g_{5})\) with \(g_{0}\in\{0,1,j,j^{2}\}\), \(g_{1}\in\mathbb{F}_{8}\), and \(g_{i}\in\mathbb{F}_{64}\) for \(i=2,3,4,5\) such that \(\mathrm{rank}(D_{R_{y}}^{-1}-D_{g})\geq n-1\) for all \(y\in\mathbb{F}_{q^{n}}\) and \(\mathrm{rank}(D_{g})=n-1\).

q := 2; n := 6;
F := GF(q);
P<x> := PolynomialRing(F);
L<j> := ext<F|x^6+x+1>;
F8 := {x:x in L|x^8 eq x};
DicksonMatrix := function(v,n,q);
    return Matrix([Rotate([a^(q^i):a in v],i):i in [0..n-1]]);
end function;
Cinv := {DicksonMatrix([j^21*y^62,0,j^22*y^11,0,j^26*y^59,0],n,q):y in L};
time nonshears := {<g0,g1,g2,g3,g4,g5>:g0 in {0,1,j,j^2},g1 in F8,g2,g3,g4,g5 in L|
    forall{z:z in Cinv|Rank(z-f) ge n-1}
    where f is DicksonMatrix([g0,g1,g2,g3,g4,g5],n,q)};
#nonshears eq 0;

Hence by Lemma 1, there does not exist a translation hyperoval of non-shears type in this plane.

The calculation used in this proof takes approximately 8.5 hours on a single CPU. We note that this computation could clearly be parallelised and optimised further, but we do not attempt any improvements beyond the above restrictions on \(g_{0}\) and \(g_{1}\). Without these restrictions, the computation would take approximately three weeks.

## 5 Conclusion and Remarks

This culminates in the following theorem, disproving Cherowitzo's conjecture.

**Theorem 4**.: _There does not exist a translation hyperoval in the Twisted Field Plane of order \(64\)._

**Remark 1**.: Due to the previously described equivalences, we have also demonstrated the existence of a \(6\)-spread in \(V(12,2)\) not admitting a scattered subspace of dimension \(6\), and an MRD code (semifield spread set) in \(M_{6}(\mathbb{F}_{2})\) with minimum distance \(6\) and covering radius less than \(5\).

**Remark 2**.: The situation for the remaining semifield planes of order \(64\) is more difficult to analyse theoretically. Instead we would need to rely on exhaustive computer searches. For translation hyperovals of shears type, this can be done relatively efficiently by exploiting the additivity of \(C(\mathbb{S})\); a naive implementation can perform an exhaustive search in about \(8\) hours (as opposed to less than a second for the generalised twisted field). In fact, it turns out that many semifield planes of order \(64\) do not contain a translation hyperoval of shears type. However, for the non-shears case we do not have additivity, and for the majority of semifields we do not have enough symmetries to constrain the coefficients \(g_{i}\) as in Section 4.2, and so exhaustive computation takes much longer. Hence further theoretical reductions, or a more significant parallelised computation, would be necessary in order to determine the existence or non-existence of translation hyperovals for these planes.

**Remark 3**.: Although hyperovals cannot exist in planes of odd order, scattered subspaces of maximum dimension with respect to spreads can still exist. The corresponding point set in the associated projective plane is a set of \(q^{n}\) points, not contained in the translation line, meeting each line in \(0\), \(1\) or \(q\) points, upon which a group of translations acts transitively. We can repeat the arguments from this paper in part; however, since \(\frac{q^{n}-1}{q-1}<q^{n}-1\) for \(q>2\), we cannot conclude much about \(d_{f}(y)\).
It remains an open question whether or not spreads defined by generalised twisted fields possess a scattered subspace of dimension \(n\) for general \(q\).
2308.16204
Four interacting spins: addition of angular momenta, spin-spin correlation functions, and entanglement
We study four spins on a ring coupled through competing Heisenberg interactions between nearest neighbors, $J$, and next-nearest neighbors, $J_2\equiv\alpha J>0$. The spectrum is obtained in a simple way by using the rules for addition of 4 angular momenta. This allows us to follow the evolution of the ground state with $\alpha$, characterized by level crossings and by analyses of spin-spin correlation functions. Further insight is obtained by examining the entanglement between different parts of the system: we observe that the entanglement entropy is strongly dependent on how the system is partitioned.
Raimundo R. dos Santos, Lucas Alves Oliveira, Natanael C. Costa
2023-08-29T21:34:22Z
http://arxiv.org/abs/2308.16204v1
# Four interacting spins: addition of angular momenta, spin-spin correlation functions, and entanglement

###### Abstract

We study four spins on a ring coupled through competing Heisenberg interactions between nearest neighbors, \(J\), and next-nearest neighbors, \(J_{2}\equiv\alpha J>0\). The spectrum is obtained in a simple way by using the rules for addition of 4 angular momenta. This allows us to follow the evolution of the ground state with \(\alpha\), characterized by level crossings and by analyses of spin-spin correlation functions. Further insight is obtained by examining the entanglement between different parts of the system: we observe that the entanglement entropy is strongly dependent on how the system is partitioned.

## I Introduction

Some many-body systems are characterized by the presence of a quantum critical point (QCP) occurring at zero absolute temperature, which generically separates an ordered (or quasi-ordered) phase from a disordered one. This phase transition is driven by some control parameter [1; 2], which for magnetic systems may be an external transverse field, competing interactions, doping fraction, pressure, and so forth. Despite being a zero temperature (that is, ground state) phenomenon, the presence of a QCP influences the behavior of measurable quantities at finite temperatures [1; 2]. Given that the singularities appearing in second order phase transitions only set in in the thermodynamic limit, a widely used theoretical strategy to study these phenomena is to extract information from small-sized systems and use finite-size scaling ideas [3; 4] to predict the large system behavior. To this end, at zero temperature we traditionally have at our disposal properties such as spectral gaps, response functions, and correlation functions; further, in recent years entanglement measures, which have been at the heart of proposals for quantum computation [5; 6], have been used as signatures of quantum critical behavior [7].

While one usually resorts to numerical techniques to calculate these properties for systems ranging from typically tens to hundreds of spins, the possibility of obtaining them analytically for just a few spins proves extremely useful. Indeed, it leads to crucial insights by exploring symmetries which render the calculations remarkably simple, while it may provide data to check the numerical codes devised for larger system sizes. It also sheds light on less familiar probes, such as entanglement. From the pedagogical point of view, dealing with a few spins provides examples of how to add more than two angular momenta in a systematic way.

Within this context, a particularly interesting example of such a system consists of four spins-\(1/2\) fixed in position on a ring; see Fig. 1. We assume they are coupled through competing exchange interactions \(J\) and \(J_{2}=\alpha J\) between nearest- and next-nearest neighbors, respectively; quantum effects are brought about by considering scalar interactions involving the three cartesian spin components. The Hamiltonian may therefore be expressed as

\[\mathcal{H}=J\left[\sum_{i=1}^{4}\mathbf{S}_{i}\cdot\mathbf{S}_{i+1}+\alpha\sum_{i=1}^{4}\mathbf{S}_{i}\cdot\mathbf{S}_{i+2}\right], \tag{1}\]

with \(\mathbf{S}_{5}\equiv\mathbf{S}_{1}\) and \(\mathbf{S}_{6}\equiv\mathbf{S}_{2}\), thus setting up periodic boundary conditions, as in Fig. 1; for reasons which will become apparent soon, we always consider \(J_{2}\geq 0\), so that the sign of \(\alpha\) is the same as that of \(J\).
A semiclassical analysis for \(\alpha=0\) immediately reveals that if \(J<0\) the ground state corresponds to all spins aligned (ferromagnetic state), e.g. \(|\uparrow\uparrow\uparrow\uparrow\rangle\), while if \(J>0\) it corresponds to a Néel-like antiferromagnetic arrangement, \(|\uparrow\downarrow\uparrow\downarrow\rangle\); the next-nearest-neighbor coupling \(J_{2}>0\) then frustrates these tendencies, for both \(\alpha>0\) and \(\alpha<0\). Here we will use addition of four angular momenta to obtain simple expressions for both the eigenstates and energies of \(\mathcal{H}\), from which we discuss the different 'phase transitions' based on level crossings, correlation functions, and entanglement entropy.

The layout of the paper is as follows. In Sec. II we present a diagonalization of \(\mathcal{H}\) based on the addition of four spins-1/2, and discuss its spectrum. This solution allows us to obtain in Sec. III the spin-spin correlation functions for the different ground states, where we also highlight their physical content. In Sec. IV we obtain the entanglement entropies for the different ways we can split the system into two parts; the dependence on \(\alpha\) also reveals their ability to pinpoint QCP's. In Section V we briefly discuss the main features of this model, as unveiled by calculations on larger systems. And, finally, Sec. VI presents our conclusions.

## II The spectrum

We start by recalling the notation relative to eigenstates of the spin operator for particle \(i\), \(i=1,...4\),

\[\mathbf{S}_{i}^{2}|s_{i}m_{i}\rangle = s_{i}(s_{i}+1)\hbar^{2}|s_{i}m_{i}\rangle, \tag{2}\]
\[S_{i}^{z}|s_{i}m_{i}\rangle = m_{i}\hbar|s_{i}m_{i}\rangle. \tag{3}\]

Since we only consider here spin-1/2 particles, \(s_{i}=s=1/2,\forall i\), and \(m_{i}=\pm 1/2\) (or \(\uparrow\), \(\downarrow\), for short), \(\forall i\). A possible basis for this 16-dimensional state space is provided by \(\{|s_{1}m_{1}s_{2}m_{2}s_{3}m_{3}s_{4}m_{4}\rangle\}\). Further, since the labels \(s_{i}\) reflect an intrinsic attribute of the particles, they remain fixed and may be omitted to simplify the notation; that is, we refer to this basis as \(\{|m_{1}m_{2}m_{3}m_{4}\rangle\}\). In this basis, the Hamiltonian appears in block diagonal form, with each block characterized by the value of \(M=\sum_{i=1}^{4}m_{i}\); this block structure reflects the built-in axial symmetry of this basis due to the conservation of the \(z\)-component of the total spin, \(S_{T}^{z}\equiv S_{1}^{z}+S_{2}^{z}+S_{3}^{z}+S_{4}^{z}\).
However, the Hamiltonian symmetry is actually higher than this: it is invariant under a simultaneous rotation of _all_ spins by any angle around _any_ axis. That is, the total angular momentum is conserved; recall that, classically, the absence of an external torque leads to the conservation of the total angular momentum vector. Nonetheless, quantum-mechanically the non-commutation of the components of angular momentum operators does not allow their simultaneous determination. At any rate, further simplicity should arise by changing to a higher-symmetry basis, e.g. to one labelled by the total angular momentum quantum number, \(S_{T}\), in addition to \(M\), which is the quantum number associated with \(S_{T}^{z}\).

The total spin operator is

\[\mathbf{S}_{T}=\mathbf{S}_{1}+\mathbf{S}_{2}+\mathbf{S}_{3}+\mathbf{S}_{4}, \tag{4}\]

whose square may be written as

\[\mathbf{S}_{T}^{2}=K_{0}+2K_{1}+K_{2}, \tag{5}\]

where we have introduced

\[K_{0} \equiv \sum_{i=1}^{4}\mathbf{S}_{i}^{2}, \tag{6}\]
\[K_{1} \equiv \sum_{i=1}^{4}\mathbf{S}_{i}\cdot\mathbf{S}_{i+1}, \tag{7}\]
\[K_{2} \equiv \sum_{i=1}^{4}\mathbf{S}_{i}\cdot\mathbf{S}_{i+2}. \tag{8}\]

Thus, the Hamiltonian, Eq. (1), may be expressed as

\[\mathcal{H}=J[K_{1}+\alpha K_{2}], \tag{9}\]

where we note that periodic boundary conditions for 4 sites imply that the coupling between any two second-neighbor spins appears twice in the Hamiltonian.

Our aim now is to set up a basis in which \(\mathcal{H}\) is expressed solely in terms of eigenvalues of the operators \(K_{r}\), \(r=0,1,2\). According to the rules for addition of more than two angular momenta, one adds two spins at a time, and subsequently adds the results; see, e.g. Ref. [8]. We may then evaluate \(K_{2}\) by taking the square of the partial sums \(\mathbf{S}_{13}\equiv\mathbf{S}_{1}+\mathbf{S}_{3}\) and \(\mathbf{S}_{24}\equiv\mathbf{S}_{2}+\mathbf{S}_{4}\),

\[\mathbf{S}_{13}^{2} = \mathbf{S}_{1}^{2}+\mathbf{S}_{3}^{2}+2\,\mathbf{S}_{1}\cdot\mathbf{S}_{3}, \tag{10}\]
\[\mathbf{S}_{24}^{2} = \mathbf{S}_{2}^{2}+\mathbf{S}_{4}^{2}+2\,\mathbf{S}_{2}\cdot\mathbf{S}_{4}, \tag{11}\]

and adding them to obtain, with the aid of Eq. (8),

\[K_{2}=\mathbf{S}_{13}^{2}+\mathbf{S}_{24}^{2}-K_{0}. \tag{12}\]

Taking this into Eq. (5) leads to

\[K_{1}=\frac{1}{2}[\mathbf{S}_{T}^{2}-(\mathbf{S}_{13}^{2}+\mathbf{S}_{24}^{2})]. \tag{13}\]

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \(s_{13}\) & 0 & 1 & 0 & 1 & 1 & 1 \\ \hline \(s_{24}\) & 0 & 0 & 1 & 1 & 1 & 1 \\ \hline \(S\) & 0 & 1 & 1 & 0 & 1 & 2 \\ \hline \(g_{S}\) & 1 & 3 & 3 & 1 & 3 & 5 \\ \hline \end{tabular}
\end{table} Table 1: Quantum numbers for the addition of four spins-1/2. The partial sums of two spins-1/2 yield the quantum numbers \(s_{13}=0,1\) and \(s_{24}=0,1\); a given pair of values of \(s_{13}\) and \(s_{24}\) yields the total spin quantum number \(S=|s_{13}-s_{24}|,|s_{13}-s_{24}|+1,\ldots,s_{13}+s_{24}\). Since the total angular momentum is conserved, the energy degeneracy of each total spin multiplet is \(g_{S}=2S+1\).

We note that \(\mathbf{S}_{T}^{2}\), \(\mathbf{S}_{i}^{2}\), \(i=1-4\), and \(\mathbf{S}_{ii+2}^{2}\), \(i=1,2\) are scalar operators; as such, they commute with each other as well as with \(S_{T}^{z}\). Therefore \(\mathscr{C}\equiv\{\mathbf{S}_{T}^{2},S_{T}^{z},\mathbf{S}_{13}^{2},\mathbf{S}_{24}^{2},\mathbf{S}_{1}^{2},\mathbf{S}_{2}^{2},\mathbf{S}_{3}^{2},\mathbf{S}_{4}^{2}\}\) forms a complete set of commuting observables (CSCO, see, e.g., Refs. [8; 9]).
The Hamiltonian can then be cast in the form

\[\mathcal{H}=\frac{J}{2}[\mathbf{S}_{T}^{2}-2\alpha K_{0}+(2\alpha-1)(\mathbf{S}_{13}^{2}+\mathbf{S}_{24}^{2})], \tag{14}\]

and since (14) is expressed solely in terms of operators in \(\mathscr{C}\), we may replace the operators by their respective eigenvalues when using this basis. The eigenvalues can be determined by systematically adding spins, starting with the partial sums, \(\mathbf{S}_{13}\) and \(\mathbf{S}_{24}\): the possible quantum numbers for the square of these partial sums are \(s_{ii+2}=0,1\), \(i=1,2\). These partial sums then add to make up the total spin, \(\mathbf{S}_{T}=\mathbf{S}_{13}+\mathbf{S}_{24}\), whose possible quantum numbers for its square are \(S_{T}=0,1,2\). We finally arrive at an expression for the eigenenergies in terms of the good quantum numbers:

\[\frac{E}{|J|\hbar^{2}} = \text{sign}(\alpha)\frac{1}{2}\left\{S_{T}(S_{T}+1)-8\alpha s(s+1)\right. \tag{15}\]
\[\left.+(2\alpha-1)[s_{13}(s_{13}+1)+s_{24}(s_{24}+1)]\right\},\]

where \(\text{sign}(\alpha)\equiv\alpha/|\alpha|=J/|J|\) is the sign function, which is \(>0\,(<0)\) if \(\alpha>0\,(\alpha<0)\). As expected, the energies do not depend on the orientation of the ring, since there is no preferred direction in space; the degeneracy in \(M=-S_{T},-S_{T}+1,\ldots S_{T}\) of each level is therefore \(g=2S_{T}+1\). Table 1 lists all possible combinations of quantum numbers entering in the evaluation of the energies, which are plotted as functions of \(\alpha\) in Figure 2. The eigenvectors may then be denoted by \(|S_{T}Ms_{13}s_{24}\rangle\), where, for the same reasons as before, we keep omitting \(s_{1},\ldots,s_{4}\) from the list.

Figure 2 suggests that one should discuss the cases \(\alpha<0\) and \(\alpha>0\) in turn. In addition, it will often prove illustrative to express the lowest energy eigenstates in terms of a 'split basis', formed by \(\{|s_{13}m_{13}\rangle\otimes|s_{24}m_{24}\rangle\}\); see Appendix A.

If \(-1/4<\alpha<0\) the ground state is the ferromagnetic quintuplet, \(|2M11\rangle\), \(M=-2,-1,0,1,2\); see Eqs. (A3) and (A7)-(A10). By contrast, if \(\alpha<-1/4\), the ground state is the singlet

\[|0000\rangle=|00\rangle\otimes|00\rangle. \tag{16}\]

Thus we may say that the nature of the ground state changes at \(\alpha_{c,1}=-1/4\), and this level crossing is suggestive of a quantum critical point emerging as the number of sites, \(N\), increases; more on this below.

If \(\alpha>0\), the second neighbor coupling frustrates the tendency of forming a Néel-like state, \(|\uparrow\downarrow\uparrow\downarrow\rangle\). Indeed, Figure 2 shows that if \(0<\alpha<1/2\), the ground state is a different singlet,

\[|0011\rangle=\frac{1}{\sqrt{3}}[|1-1\rangle\otimes|11\rangle-|10\rangle\otimes|10\rangle+|11\rangle\otimes|1-1\rangle] \tag{17}\]

[see Eq. (A19)], which is a superposition in which the pairs (1,3) and (2,4) form triplets that combine into a global singlet; note that each term in the superposition has \(M=m_{13}+m_{24}=0\). Beyond \(\alpha=\alpha^{*}=1/2\), the ground state is the same singlet as for \(\alpha<-1/4\); see Eq. (16). As we will see, the correlation functions highlight the difference between the two singlets appearing when \(\alpha>0\).
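Equation (15) is straightforward to verify numerically; the sketch below is our own MAGMA illustration (with \(J=\hbar=1\), \(\alpha=1\), and helper names op and dot that are not from the paper). It builds the \(16\times 16\) Hamiltonian of Eq. (1) from \(S^{\pm}\) and \(S^{z}\) and diagonalizes it exactly over the rationals; Eq. (15) predicts the spectrum \(\{-3,-1,0,2\}\) with degeneracies \(1\), \(7\), \(3\), and \(5\), respectively.

Q := Rationals();
I2 := IdentityMatrix(Q,2);
Sp := Matrix(Q,2,2,[0,1, 0,0]); // S^+
Sm := Matrix(Q,2,2,[0,0, 1,0]); // S^-
Sz := Matrix(Q,2,2,[1/2,0, 0,-1/2]); // S^z (hbar = 1)
// embed a single-site operator A at site i of the 4-site ring
op := function(A, i);
    M := (1 eq i) select A else I2;
    for k in [2..4] do
        M := TensorProduct(M, (k eq i) select A else I2);
    end for;
    return M;
end function;
// exchange coupling S_i . S_k = (S_i^+ S_k^- + S_i^- S_k^+)/2 + S_i^z S_k^z
dot := func< i, k | (op(Sp,i)*op(Sm,k) + op(Sm,i)*op(Sp,k))/2 + op(Sz,i)*op(Sz,k) >;
alpha := 1;
H := &+[dot(i, i mod 4 + 1) : i in [1..4]] + alpha*&+[dot(i, (i+1) mod 4 + 1) : i in [1..4]];
Eigenvalues(H); // expect { <-3,1>, <-1,7>, <0,3>, <2,5> }

Working over the rationals keeps the spectrum exact, so the degeneracies can be compared with Table 1 without any numerical tolerance.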
## III Spin-spin correlation functions

The spin-spin correlation function is defined as

\[\mathscr{S}(r)\equiv\langle\mathbf{S}_{i}\cdot\mathbf{S}_{i+r}\rangle, \tag{18}\]

thus measuring the influence a spin variable at site \(i\) exerts on the value of the spin variable at a site separated by a distance \(r\); at zero temperature, the averages \(\langle\cdots\rangle\) are understood as ground state expectation values. Translational invariance allows us to write

\[\mathscr{S}(r)=\frac{1}{4}\langle K_{r}\rangle,\quad r=0,1,2, \tag{19}\]

where the \(K_{r}\)'s are given by Eqs. (6)-(8), and we note that \(r=2\) (we consider unit lattice spacing) is the maximum distance in this case due to periodic boundary conditions.

In a macroscopic system, the behavior at large distances, \(r\gg 1\), probes the existence of some degree of ordering, as well as its nature. Since here we are dealing with only 4 sites, we cannot make any claims about the long distance behavior of \(\mathscr{S}(r)\). Nonetheless, even for 4 sites \(\mathscr{S}(r)\) provides useful insights into the nature of the different ground states, as we will see below.

Figure 2: Energy eigenvalues for the 4-site \(J\)-\(J_{2}\) Heisenberg model as a function of \(\alpha\equiv J_{2}/J\), with \(J_{2}>0\). The curves are labelled by a simplified notation, \(|S_{T}s_{13}s_{24}\rangle\), since for each \(S_{T}\) they are degenerate in \(M\); \(S_{T}\) is the total spin quantum number, and \(s_{13}\) and \(s_{24}\) are the quantum numbers specifying the partial sums. Still within this notation, we recall that the states \(|110\rangle\) and \(|101\rangle\) are degenerate. The energies of some states display a discontinuity at \(\alpha=0\), which has been purposely smoothed for easier identification.

Similarly to the eigenenergies, the \(\langle K_{r}\rangle\)'s may be expressed in terms of the eigenvalues of the operators in the set \(\mathscr{C}\), to yield

\[\mathscr{S}(0) =s(s+1)\hbar^{2}, \tag{20}\]
\[\mathscr{S}(1) =\frac{1}{8}[S_{T}(S_{T}+1)-s_{13}(s_{13}+1)-s_{24}(s_{24}+1)]\hbar^{2}, \tag{21}\]
\[\mathscr{S}(2) =\frac{1}{4}[s_{13}(s_{13}+1)+s_{24}(s_{24}+1)-4s(s+1)]\hbar^{2}. \tag{22}\]

The correlation functions for the different ground states (in different regimes of \(\alpha\)) are displayed in Fig. 3. Figure 3(a) shows that when the ground state is the singlet given by Eq. (16), spins on the same sublattice are maximally anticorrelated, since they are in two-spin singlet states; spins on different sublattices, on the other hand, are uncorrelated. This should be contrasted with Fig. 3(b), for which the overall singlet, Eq. (17), is made up from two-spin triplets adding to yield \(S_{T}=0\). Indeed, members of the two-spin triplets are positively correlated, but negatively correlated with a spin in the other sublattice; classically the triplets would be antiparallel to each other. Also noteworthy is the overall decay in the magnitude of \(\mathscr{S}(r)\).

The correlation function for the quintuplet does not depend on \(M\), as a result of this basis satisfying rotational invariance. Had we expressed the ground state in terms of \(\{|m_{1}m_{2}m_{3}m_{4}\rangle\}\), the equality \(\langle S^{z}_{i}S^{z}_{i+r}\rangle=(1/4)\langle S^{+}_{i}S^{-}_{i+r}+S^{-}_{i}S^{+}_{i+r}\rangle\) would only hold by averaging each side of this equation over \(M\).
Figure 3(c) shows the correlation function for any member of the quintuplet: all spins are positively correlated, in accordance with the classical picture of a ferromagnetic state; another feature distinguishable from the singlet cases is the tendency of \(\mathscr{S}(r)\) to remain constant beyond \(r=1\).

Figure 3: Correlation functions for a 4-site ring in the \(|S_{T}Ms_{13}s_{24}\rangle\) states as functions of the distance between spins: (a) \(|0000\rangle\), (b) \(|0011\rangle\), and (c) \(|2M11\rangle\); in the latter case, the quintuplet, they are degenerate in \(M=-2,-1,0,1,2\).

## IV Entanglement

The possible states for two spins-\(1/2\), \(|S_{T}M\rangle\), may be expressed as a singlet \(|00\rangle\) and a triplet \(|11\rangle\), \(|1-1\rangle\), and \(|10\rangle\); while the states with \(M\neq 0\) are _separable_, that is \(|11\rangle=|\uparrow\rangle\otimes|\uparrow\rangle\) and \(|1-1\rangle=|\downarrow\rangle\otimes|\downarrow\rangle\), the \(M=0\) states cannot be written as a direct product of single-particle states, since \(|10\rangle=(1/\sqrt{2})[|\uparrow\rangle\otimes|\downarrow\rangle+|\downarrow\rangle\otimes|\uparrow\rangle]\) and \(|00\rangle=(1/\sqrt{2})[|\uparrow\rangle\otimes|\downarrow\rangle-|\downarrow\rangle\otimes|\uparrow\rangle]\). In the latter cases we say the spins are _entangled_: in a bipartite entangled spin state, determination of the state of one part of the system necessarily implies the knowledge of the outcome of measuring the other part; see, e.g. Refs. [10; 11]. The above example with the \(M=0\) states suggests that the first spin is in a mixed state, since it has probability \(1/2\) of being either up or down.

A qualitative measure of the entanglement between spins 1 and 2 is given by the reduced density matrix for one of the spins, obtained by taking the partial trace of the density operator. For instance, considering \(\rho=|10\rangle\langle 10|\), we trace out the second spin,

\[\tilde{\rho}(1)\equiv\mathrm{Tr}_{2}\ \rho, \tag{23}\]

by calculating the matrix elements [9],

\[\langle m_{1}|\tilde{\rho}(1)|m_{1}^{\prime}\rangle=\sum_{m_{2}=\uparrow,\downarrow}\ \langle m_{1}m_{2}|10\rangle\langle 10|m_{1}^{\prime}m_{2}\rangle, \tag{24}\]

with the result \(\tilde{\rho}(1)=\nicefrac{{1}}{{2}}\cdot\mathbb{1}\), where \(\mathbb{1}\) is the (\(2\times 2\) in this case) identity matrix. Since \([\tilde{\rho}(1)]^{2}\neq\tilde{\rho}(1)\), spin \(1\) is not in a pure state: it is therefore entangled with spin \(2\). One may also use a quantitative measure of entanglement, namely the von Neumann entropy associated with the reduced density operator [11],

\[S(1)=-\sum_{i=1}^{2}\lambda_{i}\ln\lambda_{i}, \tag{25}\]

where \(\lambda_{1}\) and \(\lambda_{2}\) are the two eigenvalues of \(\tilde{\rho}(1)\); in the present case, \(\lambda_{1}=\lambda_{2}=1/2\), so that \(S(1)=\ln 2\), which is the maximum entanglement possible for two spins-\(1/2\) [11]. By contrast, the entropy is zero for separable states such as \(|\uparrow\uparrow\rangle\) and \(|\downarrow\downarrow\rangle\).

In extending these ideas to the present case of 4 spins, we note from the outset that there are three inequivalent ways of partitioning the system, namely (13)-(24), (12)-(34), and (1)-(234); note that the first partition embraces the good quantum numbers \(s_{13}\) and \(s_{24}\), so one expects features different from the (12)-(34) partition. We now consider each of these bipartite cases in turn.
### Subsystems (13) and (24)

Starting with \(|0000\rangle=|00\rangle_{13}\otimes|00\rangle_{24}\), we see that it is obviously separable between two pairs of singlets, (13) and (24). This is also manifest by taking the partial trace [9] over spins 2 and 4 to obtain the reduced density operator,

\[\tilde{\rho}(13) =\text{Tr}_{s_{24},m_{24}}\,|00\rangle_{13}\otimes|00\rangle_{24}\ {}_{13}\langle 00|\otimes{}_{24}\langle 00|\]
\[=|00\rangle_{13}\ {}_{13}\langle 00|\ \text{Tr}_{s_{24},m_{24}}\,|00\rangle_{24}\ {}_{24}\langle 00|\]
\[=|00\rangle_{13}\ {}_{13}\langle 00|, \tag{26}\]

since \(\text{Tr}_{s_{24},m_{24}}\,|00\rangle_{24}\ {}_{24}\langle 00|=1\). Thus,

\[[\tilde{\rho}(13)]^{2}=\tilde{\rho}(13), \tag{27}\]

so that the (13) subsystem is in a pure state, hence separable from (24). This is hardly surprising, given that the ground state is generated by adding spins 1 and 3 simultaneously with adding 2 and 4. Accordingly, the eigenvalues of \(\tilde{\rho}(13)\) are 1 and 0 (3-fold degenerate), which leads to a vanishing von Neumann entropy.

We now discuss the ground state for \(0<\alpha<1/2\), namely \(|0011\rangle\), as given by Eq. (17). Unlike \(|0000\rangle\) [Eq. (16)], one sees by inspection that spins 1 and 3 are entangled with spins 2 and 4. Further, following steps similar to those in Eq. (26) we obtain the reduced density operator,

\[\tilde{\rho}(13)=\frac{1}{3}\left[|1-1\rangle\langle 1-1|+|10\rangle\langle 10|+|11\rangle\langle 11|\right], \tag{28}\]

which, in the \(|s_{13}m_{13}\rangle\) basis, is represented by

\[\tilde{\rho}(13)=\frac{1}{3}\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&0\end{pmatrix}, \tag{29}\]

and is such that \([\tilde{\rho}(13)]^{2}\neq\tilde{\rho}(13)\), so that the partitions (13) and (24) are entangled. With the eigenvalues of \(\tilde{\rho}(13)\) being \(\lambda=0\) and \(\lambda=1/3\) (3-fold degenerate), the von Neumann entropy becomes

\[S(13)=-\sum_{i=1}^{4}\lambda_{i}\ln\lambda_{i}=\ln 3, \tag{30}\]

yet another signature of entanglement.

In the interval \(-1/4<\alpha<0\), the ground state is the quintuplet \(S_{T}=2\), with two separable states, \(|2\pm 211\rangle=|1\pm 1\rangle_{13}\otimes|1\pm 1\rangle_{24}\), and three entangled states, \(|2\pm 111\rangle\) and \(|2011\rangle\); see Eqs. (A7), (A9), and (A8). The reduced density matrices (now with a subscript indicating the value of \(M\)) are

\[\tilde{\rho}_{1}(13)=\frac{1}{2}\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix},\quad\tilde{\rho}_{-1}(13)=\frac{1}{2}\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}, \tag{31}\]

and

\[\tilde{\rho}_{0}(13)=\frac{1}{6}\begin{pmatrix}1&0&0&0\\ 0&4&0&0\\ 0&0&1&0\\ 0&0&0&0\end{pmatrix}. \tag{32}\]

The von Neumann entropies for the five states are

\[S_{\pm 2}(13) =0, \tag{33}\]
\[S_{\pm 1}(13) =\ln 2, \tag{34}\]
\[S_{0}(13) =\ln 3-\frac{1}{3}\ln 2, \tag{35}\]

so that amongst this quintuplet, the maximally entangled state is the one with \(M=0\), corresponding to the largest number of possibilities for the pair \((m_{13},m_{24})\) in the linear combination, Eq. (A8).

### Subsystems (12) and (34)

Let us then investigate what happens when we consider as subsystems the pairs (12) and (34). In this case, however, we must express the density operators in the basis of individual spins (see Appendix B) and trace out spins 3 and 4.
The matrix elements of the reduced density operator, \(\tilde{\rho}_{12}\), are

\[\langle m_{1}m_{2}|\tilde{\rho}_{12}|m_{1}^{\prime}m_{2}^{\prime}\rangle=\sum_{m_{3}m_{4}}\langle m_{1}m_{2}m_{3}m_{4}|\rho|m_{1}^{\prime}m_{2}^{\prime}m_{3}m_{4}\rangle. \tag{36}\]

For \(\rho=|0000\rangle\langle 0000|\), we take Eq. (B16) into (36) to obtain

\[\tilde{\rho}_{12}=\frac{1}{4}\mathbb{1}, \tag{37}\]

where \(\mathbb{1}\) is the identity matrix (\(4\times 4\) in this case). Since \(\tilde{\rho}_{12}^{2}\neq\tilde{\rho}_{12}\), the subsystem (12) is not pure, meaning that (12) and (34) are entangled. The eigenvalues of \(\tilde{\rho}_{12}\) are \(\lambda=1/4\) (4-fold degenerate), so that the von Neumann entropy is \(S(12)=2\ln 2\).

In the interval \(0<\alpha<1/2\) the density operator is \(\rho=|0011\rangle\langle 0011|\) [see Eq. (17)], and the trace over \(m_{3}\) and \(m_{4}\) yields

\[\tilde{\rho}(12)=\frac{1}{12}\begin{pmatrix}1&0&0&0\\ 0&5&-4&0\\ 0&-4&5&0\\ 0&0&0&1\end{pmatrix}, \tag{38}\]

whose eigenvalues are \(\lambda=1/12\) (3-fold degenerate) and \(3/4\), so that the von Neumann entropy is

\[S(12)=2\ln 2-\frac{1}{2}\ln 3. \tag{39}\]

And, finally, for \(-1/4<\alpha<0\) the same structure found for the (13)-(24) partition applies: the \(M=\pm 2\) states are separable, while the states with \(M=0,\pm 1\) are entangled. Again with the aid of the basis of individual spins, Eq. (36) yields the reduced density operators as

\[\tilde{\rho}_{1}(12)=\frac{1}{4}\begin{pmatrix}2&0&0&0\\ 0&1&1&0\\ 0&1&1&0\\ 0&0&0&0\end{pmatrix},\quad\tilde{\rho}_{-1}(12)=\frac{1}{4}\begin{pmatrix}0&0&0&0\\ 0&1&1&0\\ 0&1&1&0\\ 0&0&0&2\end{pmatrix}, \tag{40}\]

and

\[\tilde{\rho}_{0}(12)=\frac{1}{6}\begin{pmatrix}1&0&0&0\\ 0&2&2&0\\ 0&2&2&0\\ 0&0&0&1\end{pmatrix}. \tag{41}\]

The associated von Neumann entropies are then

\[S_{\pm 2}(12) =0, \tag{42}\]
\[S_{\pm 1}(12) =\ln 2, \tag{43}\]
\[S_{0}(12) =\ln 3-\frac{1}{3}\ln 2, \tag{44}\]

which are identical to the ones obtained for the partition (13)-(24). This indicates that the entanglement properties of the ground states in this quintuplet are not sensitive to the way the system is partitioned in half.

### Subsystems (1) and (234)

Consider now the partition into, say, spin 1 and spins 2, 3, and 4. The elements of the single-spin reduced matrix, \(\tilde{\rho}_{1}\), are then given by

\[\langle m_{1}|\tilde{\rho}_{1}|m_{1}^{\prime}\rangle=\sum_{m_{2}m_{3}m_{4}}\langle m_{1}m_{2}m_{3}m_{4}|\rho|m_{1}^{\prime}m_{2}m_{3}m_{4}\rangle. \tag{45}\]

For the state \(|0000\rangle\) we have

\[\tilde{\rho}(1)=\frac{1}{2}\mathbb{1}, \tag{46}\]

where now \(\mathbb{1}\) is the \(2\times 2\) identity matrix. Since \([\tilde{\rho}(1)]^{2}\neq\tilde{\rho}(1)\), spin 1 is entangled with the remaining spins, and the entropy is \(S(1)=\ln 2\). Similarly, for the state \(|0011\rangle\) Eq. (45) yields

\[\tilde{\rho}(1)=\frac{1}{2}\mathbb{1}, \tag{47}\]

which leads to the same entropy as for the state \(|0000\rangle\).

For the quintuplet of states, we start with \(|2\pm 211\rangle\):

\[\tilde{\rho}_{2}(1)=\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\text{ and }\tilde{\rho}_{-2}(1)=\begin{pmatrix}0&0\\ 0&1\end{pmatrix}, \tag{48}\]

both of which lead to \(S_{\pm 2}(1)=0\), as it should for a separable state. For the states \(|2\pm 111\rangle\), we take Eqs. (B2) and (B4) into Eq.
(45) to obtain, respectively,

\[\tilde{\rho}_{1}(1)=\frac{1}{4}\begin{pmatrix}3&0\\ 0&1\end{pmatrix}\text{ and }\tilde{\rho}_{-1}(1)=\frac{1}{4}\begin{pmatrix}1&0\\ 0&3\end{pmatrix}, \tag{49}\]

whose entropies are \(S_{\pm 1}(1)=2\ln 2-(3/4)\ln 3\). Finally, the reduced density matrix for \(|2011\rangle\) becomes \(\tilde{\rho}_{0}(1)=(1/2)\mathbb{1}\), and the associated entropy is \(S_{0}(1)=\ln 2\).

### Overall discussion of the entropies

Figure 4 shows the entropy as a function of \(\alpha\) for each partition. When comparing the entropies, one should keep in mind that in the FM region, \(-1/4<\alpha<0\), there are always three different values of the entropy, such that the entropy vanishes for \(|M|=2\) (separable), and increases as \(|M|\) decreases. When the system is partitioned according to the partial sums, \(\mathbf{S}_{13}\) and \(\mathbf{S}_{24}\), one may set up separable states for all \(\alpha\), except in the range \(0<\alpha<1/2\). By contrast, by choosing the (12)-(34) partition, the system is maximally entangled when the ground state is \(|0000\rangle\), as given by Eq. (16): this was to be expected, since spins 1 and 3, as well as 2 and 4, already form singlets themselves. And, finally, when the partition involves one spin and the remaining three, one may have uniform entanglement, \(S(1)=\ln 2\), for all values of \(\alpha\). These findings indicate that coupling q-bits in a controllable way may give rise to a wider range of entanglement outcomes to be explored in quantum computing.

## V Comments on larger systems

For a 6-site ring a convenient CSCO is \(\mathscr{C}_{6}\equiv\{\mathbf{S}_{T}^{2},S_{T}^{z},\mathbf{S}_{14}^{2},\mathbf{S}_{25}^{2},\mathbf{S}_{36}^{2},\mathbf{S}_{2536}^{2},\mathbf{S}_{1}^{2},\mathbf{S}_{2}^{2},\mathbf{S}_{3}^{2},\mathbf{S}_{4}^{2},\mathbf{S}_{5}^{2},\mathbf{S}_{6}^{2}\}\), where \(\mathbf{S}_{i,i+3}=\mathbf{S}_{i}+\mathbf{S}_{i+3}\), \(i=1,2,3\), \(\mathbf{S}_{2536}\equiv\mathbf{S}_{25}+\mathbf{S}_{36}\), and \(\mathbf{S}_{T}=\mathbf{S}_{1}+\mathbf{S}_{2}+\mathbf{S}_{3}+\mathbf{S}_{4}+\mathbf{S}_{5}+\mathbf{S}_{6}\). However, it turns out that this CSCO only leads to a direct evaluation of the spectrum for \(\alpha=1\), to wit:

\[E_{6}=\frac{1}{2}J\hbar^{2}\left[S_{T}(S_{T}+1)-s_{14}(s_{14}+1)-s_{25}(s_{25}+1)-s_{36}(s_{36}+1)\right], \tag{50}\]

where \(s_{ii+3}=0,1\), \(i=1,2,3\). Note that \(E_{6}\) does not depend on the partial quantum number \(s_{2536}\), which, after all, is an arbitrary choice, since one could have used \(s_{1425}\) or \(s_{1436}\). The energy is minimized if the state is a total singlet, \(S_{T}=0\), made up by triplet pairs, \(s_{14}=s_{25}=s_{36}=1\); we get \(E_{6}=-3J\hbar^{2}\). However, in order to specify the ground state we note that since there is an odd number of triplet pairs, an overall singlet demands that two of them combine into yet another triplet, say \(s_{2536}=1\), which, in turn, combines with the remaining triplet, \(s_{14}=1\), into an overall singlet. While for the 4-spin system the ground state for \(\alpha=1\) is made up of singlet pairs, here the ground state is made up of triplet pairs. Clearly one can still obtain simple solutions for general \(\alpha\) by combining states \(|m_{1}m_{2}\ldots m_{6}\rangle\) in such a way that they are also eigenstates of the translation operator; in this case we end up with a block-diagonal matrix representation of \(\mathcal{H}\) in which the largest block is \(4\times 4\).
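The value \(E_{6}=-3J\hbar^{2}\) can be confirmed by exact diagonalization as well; the following self-contained MAGMA sketch is again our own illustration (with \(J=\hbar=1\) and \(\alpha=1\)):

Q := Rationals(); N := 6;
I2 := IdentityMatrix(Q,2);
Sp := Matrix(Q,2,2,[0,1, 0,0]);
Sm := Matrix(Q,2,2,[0,0, 1,0]);
Sz := Matrix(Q,2,2,[1/2,0, 0,-1/2]);
// embed a single-site operator A at site i of the N-site ring
op := function(A, i);
    M := (1 eq i) select A else I2;
    for k in [2..N] do
        M := TensorProduct(M, (k eq i) select A else I2);
    end for;
    return M;
end function;
dot := func< i, k | (op(Sp,i)*op(Sm,k) + op(Sm,i)*op(Sp,k))/2 + op(Sz,i)*op(Sz,k) >;
H := &+[dot(i, i mod N + 1) : i in [1..N]] + &+[dot(i, (i+1) mod N + 1) : i in [1..N]];
Min([e[1] : e in Eigenvalues(H)]); // Eq. (50) predicts the ground energy -3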
For completeness, we note that numerical studies of this model have been carried out for much larger lattice sizes, \(L\), focusing on different quantities [12; 13; 14] and extrapolating the spectral data for \(L\to\infty\); for a theory of finite-size scaling involving more than one gap see Ref. [16]. The picture that emerges is that a second order phase transition occurs at \(\alpha_{c}\approx 0.2411\), separating a spin liquid state for \(0<\alpha<\alpha_{c}\) from a dimerized state for \(\alpha>\alpha_{c}\). Figure 5 compares the phase diagram for the 4-site ring with the one obtained through different numerical methods [12; 13; 14; 15]. Despite the small system size considered here, the two singlet phases we found for \(\alpha>0\) may be interpreted as the precursors of the spin liquid and of the dimerized state. The transitions involving these subtle phases can only be identified on a single system of size \(L\gtrsim\xi\), where \(\xi\) is the correlation length; however, by examining data for _different sizes_, one may extrapolate to the thermodynamic limit using finite-size scaling theories [3; 4]. It is also worth stressing that we found that the singlet-from-singlets state and the singlet-from-triplets state cross at \(\alpha_{\rm MG}=1/2\); this is the so-called Majumdar-Ghosh point [15], whose two-fold degeneracy and location do not depend on the system size. The saturated ferromagnetic phase we found here for \(-1/4<\alpha<0\) persists for much larger system sizes, and the first order critical point at \(\alpha_{F}=-1/4\) also does not depend on the system size. For \(\alpha<-1/4\), however, the \(|0000\rangle\) ground state we obtained here evolves in the thermodynamic limit to a more subtle incommensurate state, whose classical representation is a spiral spin state. Finally, it is worth mentioning that competing interactions along the lines considered here have been invoked to explain the magnetic behavior of materials such as CuGeO\({}_{3}\) [17; 18], SrCuO\({}_{2}\) [19], and, more recently, szenicsite [20].

Figure 4: Entanglement entropies as functions of \(\alpha\): (a) \(S(13)\), which measures entanglement between spins on different sublattices, (b) \(S(12)\), which measures entanglement between dimers, and (c) \(S(1)\), which measures entanglement between a single spin and the remaining ones. In all panels, the entropy depends on \(|M|\) in the region \(-1/4<\alpha<0\).

Figure 5: (a) The phase diagram for the 4-site ring, where the kets follow the notation of Fig. 2, and FM corresponds to the quintuplets \(|2M11\rangle\); see text. (b) The phase diagram obtained from different numerical approaches allowing for extrapolations to the thermodynamic limit [12; 13; 14; 15]. IC stands for incommensurate spiral phase, FM for the saturated ferromagnetic phase, SL for a spin liquid, and D for dimerized; see text.

## VI Conclusions

We have considered spins-1/2 placed on each of 4 sites of a ring, coupled through competing nearest-neighbor and next-nearest-neighbor exchange interactions, \(J\) and \(J_{2}=\alpha J\), with \(J_{2}>0\) to enforce competition, so that the sign of \(\alpha\) is the same as that of \(J\). This setup allows us to illustrate how addition of more than two angular momenta may simplify the determination of the spectrum of an interacting system, by sequentially combining pairs of spin angular momenta, forming a total spin quantum number \(S_{T}\).
We have found that for \(\alpha>0\) the ground state is always a singlet, \(S_{T}=0\), but in such a way that in the range \(0<\alpha<\alpha_{\rm MG}=1/2\) this total singlet arises by combining two triplet pairs, while for \(\alpha>\alpha_{\rm MG}=1/2\) the ground state is built from two singlet pairs. At \(\alpha=\alpha_{\rm MG}\) the corresponding levels cross each other, so that the ground state is doubly-degenerate. For \(-1/4<\alpha<0\), we found a five-fold degenerate quintuplet of states with \(S_{T}=2\), while for \(\alpha<-1/4\) the ground state is a total singlet. This procedure to determine the ground states also enormously simplifies the calculation of spin-spin correlation functions, which clearly distinguish the magnetic behaviors. This system is also amenable to illustrate several features of the entanglement (von Neumann) entropy to detect changes in the ground state. Again, the simplicity of the solution allows us to establish that the dependence of the entanglement entropy on \(\alpha\) is crucially influenced by the way the system is divided into two parts; this may have a bearing on the way q-bits are manipulated.

###### Acknowledgements.

The authors are grateful to the Brazilian Agencies, CAPES, CNPq, and FAPERJ for financial support.

## Appendix A The split basis

Here we discuss how to express the states \(|S_{T}Ms_{13}s_{24}\rangle\) as linear combinations of direct products \(|s_{13}m_{13}\rangle\otimes|s_{24}m_{24}\rangle\); we refer to this as _the split basis_. Since

\[\langle S_{T}Ms^{\prime}_{13}s^{\prime}_{24}|\left[|s_{13}m_{13}\rangle\otimes|s_{24}m_{24}\rangle\right]=0,\quad\mbox{if }s^{\prime}_{13}\neq s_{13}\mbox{ or }s^{\prime}_{24}\neq s_{24}, \tag{A1}\]

for a fixed pair \((s_{13},s_{24})\), we may write

\[|S_{T}Ms_{13}s_{24}\rangle=\sum_{m_{13}+m_{24}=M}a_{m_{13},m_{24}}|s_{13}m_{13}\rangle\otimes|s_{24}m_{24}\rangle, \tag{A2}\]

so that the Hilbert space is decomposed into subspaces \(\mathcal{E}_{s_{13},s_{24}}\) with dimensions \((2s_{13}+1)\cdot(2s_{24}+1)\); the \(a_{m_{13},m_{24}}\) are known as the Clebsch-Gordan coefficients [8], and here will be determined explicitly, for completeness.

We start with the state with the largest \(S_{T}\), namely \(S_{T}=2\), and \(M=S_{T}=2\); see the last column in Table 1. The only possible correspondence of this state is

\[|2211\rangle=|11\rangle\otimes|11\rangle, \tag{A3}\]

where from now on we adopt the convention that the first [second] ket on the RHS is relative to the pair (1,3) [(2,4)]. Recalling that each one of the ladder operators,

\[S_{T}^{\pm}\equiv S_{13}^{\pm}+S_{24}^{\pm}, \tag{A4}\]

changes the value of \(m\) by one unit, that is [8]

\[S_{T}^{\pm}|sm\rangle=\hbar\,\sqrt{s(s+1)-m(m\pm 1)}\,|s\,m\pm 1\rangle, \tag{A5}\]

we apply \(S_{T}^{-}\) to both sides of Eq. (A3) to obtain

\[S_{T}^{-}|2211\rangle=2\hbar|2111\rangle=[S_{13}^{-}+S_{24}^{-}]\,\left[|11\rangle\otimes|11\rangle\right]=\sqrt{2}\hbar[|10\rangle\otimes|11\rangle+|11\rangle\otimes|10\rangle], \tag{A6}\]

from which we extract

\[|2111\rangle=\frac{1}{\sqrt{2}}[|10\rangle\otimes|11\rangle+|11\rangle\otimes|10\rangle]. \tag{A7}\]

Now we further decrease \(M\) by one unit, still considering the \(S_{T}=2\) states. Following the same steps leading to Eq. (A7) we arrive at

\[|2011\rangle=\frac{1}{\sqrt{6}}[|1-1\rangle\otimes|11\rangle+2|10\rangle\otimes|10\rangle+|11\rangle\otimes|1-1\rangle]. \tag{A8}\]

The states \(|2-111\rangle\) and \(|2-211\rangle\) are obtained by flipping all spins in Eqs.
(A7) and (A3), respectively:

\[|2-111\rangle=\frac{1}{\sqrt{2}}[|10\rangle\otimes|1-1\rangle+|1-1\rangle\otimes|10\rangle] \tag{A9}\]

and

\[|2-211\rangle=|1-1\rangle\otimes|1-1\rangle. \tag{A10}\]

Our next task is to express the states with \(S_{T}=1\) in terms of the split basis. Similarly to Eq. (A7), we may write

\[|1111\rangle=a|10\rangle\otimes|11\rangle+b|11\rangle\otimes|10\rangle, \tag{A11}\]

where the coefficients \(a\) and \(b\) are chosen by both orthogonalising the state with respect to Eq. (A7),

\[\langle 2111|1111\rangle=\frac{1}{\sqrt{2}}(a+b)=0\quad\Rightarrow\quad b=-a, \tag{A12}\]

and normalisation of \(|1111\rangle\). We end up with

\[|1111\rangle=\frac{1}{\sqrt{2}}[|10\rangle\otimes|11\rangle-|11\rangle\otimes|10\rangle]. \tag{A13}\]

Starting from (A13), we repeat the procedure above to generate the states corresponding to \(M<1\); we obtain

\[|1011\rangle=\frac{1}{\sqrt{2}}[|1-1\rangle\otimes|11\rangle-|11\rangle\otimes|1-1\rangle], \tag{A14}\]
\[|1-111\rangle=\frac{1}{\sqrt{2}}[|10\rangle\otimes|1-1\rangle-|1-1\rangle\otimes|10\rangle]. \tag{A15}\]

We finally arrive at the last state arising from \(s_{13}=s_{24}=1\), namely \(|0011\rangle\). It may be obtained by writing

\[|0011\rangle=a|11\rangle\otimes|1-1\rangle+b|10\rangle\otimes|10\rangle+c|1-1\rangle\otimes|11\rangle, \tag{A16}\]

with the coefficients being determined by orthogonalisation,

\[\langle 2011|0011\rangle=\frac{1}{\sqrt{6}}[c+2b+a]=0, \tag{A17}\]
\[\langle 1011|0011\rangle=\frac{1}{\sqrt{2}}[c-a]=0, \tag{A18}\]

together with normalisation; thus,

\[|0011\rangle=\frac{1}{\sqrt{3}}[|1-1\rangle\otimes|11\rangle-|10\rangle\otimes|10\rangle+|11\rangle\otimes|1-1\rangle]. \tag{A19}\]

Now we consider the cases with \(s_{13}=1\) and \(s_{24}=0\), which lead solely to \(S_{T}=1\). We may thus identify

\[|1110\rangle=|11\rangle\otimes|00\rangle, \tag{A20}\]

to which we successively apply the lowering operator, Eq. (A4), to obtain

\[|1010\rangle=|10\rangle\otimes|00\rangle, \tag{A21}\]
\[|1-110\rangle=|1-1\rangle\otimes|00\rangle. \tag{A22}\]

The three states obtained by considering \(s_{13}=0\) and \(s_{24}=1\) are obtained by interchanging the kets on the RHS of Eqs. (A20)-(A22). Finally, the total singlet made up by the combination of two partial singlets is

\[|0000\rangle=|00\rangle\otimes|00\rangle, \tag{A23}\]

thus completing the 16 states in the split basis. It is worth noting that some of the states in the split basis are separable, or disentangled (i.e., written in terms of a single direct product), while others are not separable, or entangled.

## Appendix B The individual basis

We may also write the eigenstates in terms of individual spin states, \(\{|m_{1}m_{2}m_{3}m_{4}\rangle\}\). Indeed, since \(|11\rangle=|\uparrow\uparrow\rangle\), Eq. (A3) yields

\[|2211\rangle=|\uparrow\uparrow\uparrow\uparrow\rangle; \tag{B1}\]

similarly, with \(|10\rangle=(1/\sqrt{2})[|\uparrow\downarrow\rangle+|\downarrow\uparrow\rangle]\), Eq. (A7) yields

\[|2111\rangle=\frac{1}{2}[|\downarrow\uparrow\uparrow\uparrow\rangle+|\uparrow\downarrow\uparrow\uparrow\rangle+|\uparrow\uparrow\downarrow\uparrow\rangle+|\uparrow\uparrow\uparrow\downarrow\rangle], \tag{B2}\]

and

\[|2011\rangle=\frac{1}{\sqrt{6}}[|\uparrow\downarrow\uparrow\downarrow\rangle+|\downarrow\uparrow\downarrow\uparrow\rangle+|\uparrow\uparrow\downarrow\downarrow\rangle+|\uparrow\downarrow\downarrow\uparrow\rangle+|\downarrow\uparrow\uparrow\downarrow\rangle+|\downarrow\downarrow\uparrow\uparrow\rangle], \tag{B3}\]

with the remaining states with \(S_{T}=2\) being given by flipping all spins in \(|2111\rangle\) and \(|2211\rangle\), that is,

\[|2-111\rangle=\frac{1}{2}[|\uparrow\downarrow\downarrow\downarrow\rangle+|\downarrow\uparrow\downarrow\downarrow\rangle+|\downarrow\downarrow\uparrow\downarrow\rangle+|\downarrow\downarrow\downarrow\uparrow\rangle] \tag{B4}\]

and

\[|2-211\rangle=|\downarrow\downarrow\downarrow\downarrow\rangle. \tag{B5}\]

Still with \(s_{13}=s_{24}=1\), we have the eigenstates with \(S_{T}=1\), namely

\[|1111\rangle=\frac{1}{2}[|\uparrow\uparrow\downarrow\uparrow\rangle+|\downarrow\uparrow\uparrow\uparrow\rangle-|\uparrow\uparrow\uparrow\downarrow\rangle-|\uparrow\downarrow\uparrow\uparrow\rangle], \tag{B6}\]
\[|1011\rangle=\frac{1}{\sqrt{2}}[|\downarrow\uparrow\downarrow\uparrow\rangle-|\uparrow\downarrow\uparrow\downarrow\rangle], \tag{B7}\]
\[|1-111\rangle=\frac{1}{2}[|\uparrow\downarrow\downarrow\downarrow\rangle+|\downarrow\downarrow\uparrow\downarrow\rangle-|\downarrow\uparrow\downarrow\downarrow\rangle-|\downarrow\downarrow\downarrow\uparrow\rangle], \tag{B8}\]

as well as the one with \(S_{T}=0\),

\[|0011\rangle=\frac{1}{\sqrt{3}}\left[|\uparrow\downarrow\uparrow\downarrow\rangle+|\downarrow\uparrow\downarrow\uparrow\rangle-\frac{1}{2}(|\uparrow\uparrow\downarrow\downarrow\rangle+|\uparrow\downarrow\downarrow\uparrow\rangle+|\downarrow\uparrow\uparrow\downarrow\rangle+|\downarrow\downarrow\uparrow\uparrow\rangle)\right]. \tag{B9}\]

For \(s_{13}=1\) and \(s_{24}=0\), Eqs. (A20)-(A22) yield

\[|1110\rangle=\frac{1}{\sqrt{2}}[|\uparrow\uparrow\uparrow\downarrow\rangle-|\uparrow\downarrow\uparrow\uparrow\rangle], \tag{B10}\]
\[|1010\rangle=\frac{1}{2}[|\uparrow\uparrow\downarrow\downarrow\rangle-|\uparrow\downarrow\downarrow\uparrow\rangle+|\downarrow\uparrow\uparrow\downarrow\rangle-|\downarrow\downarrow\uparrow\uparrow\rangle], \tag{B11}\]
\[|1-110\rangle=\frac{1}{\sqrt{2}}[|\downarrow\uparrow\downarrow\downarrow\rangle-|\downarrow\downarrow\downarrow\uparrow\rangle], \tag{B12}\]

while for \(s_{13}=0\) and \(s_{24}=1\), we have

\[|1101\rangle=\frac{1}{\sqrt{2}}[|\uparrow\uparrow\downarrow\uparrow\rangle-|\downarrow\uparrow\uparrow\uparrow\rangle], \tag{B13}\]
\[|1001\rangle=\frac{1}{2}[|\uparrow\uparrow\downarrow\downarrow\rangle+|\uparrow\downarrow\downarrow\uparrow\rangle-|\downarrow\uparrow\uparrow\downarrow\rangle-|\downarrow\downarrow\uparrow\uparrow\rangle], \tag{B14}\]
\[|1-101\rangle=\frac{1}{\sqrt{2}}[|\uparrow\downarrow\downarrow\downarrow\rangle-|\downarrow\downarrow\uparrow\downarrow\rangle]. \tag{B15}\]

And, finally, Eq. (A23) leads to

\[|0000\rangle=\frac{1}{2}[|\uparrow\uparrow\downarrow\downarrow\rangle-|\uparrow\downarrow\downarrow\uparrow\rangle-|\downarrow\uparrow\uparrow\downarrow\rangle+|\downarrow\downarrow\uparrow\uparrow\rangle]. \tag{B16}\]
2310.03325
Learning Concept-Based Causal Transition and Symbolic Reasoning for Visual Planning
Visual planning simulates how humans make decisions to achieve desired goals in the form of searching for visual causal transitions between an initial visual state and a final visual goal state. It has become increasingly important in egocentric vision with its advantages in guiding agents to perform daily tasks in complex environments. In this paper, we propose an interpretable and generalizable visual planning framework consisting of i) a novel Substitution-based Concept Learner (SCL) that abstracts visual inputs into disentangled concept representations, ii) symbol abstraction and reasoning that performs task planning via the self-learned symbols, and iii) a Visual Causal Transition model (ViCT) that grounds visual causal transitions to semantically similar real-world actions. Given an initial state, we perform goal-conditioned visual planning with a symbolic reasoning method fueled by the learned representations and causal transitions to reach the goal state. To verify the effectiveness of the proposed model, we collect a large-scale visual planning dataset based on AI2-THOR, dubbed as CCTP. Extensive experiments on this challenging dataset demonstrate the superior performance of our method in visual task planning. Empirically, we show that our framework can generalize to unseen task trajectories, unseen object categories, and real-world data. Further details of this work are provided at https://fqyqc.github.io/ConTranPlan/.
Yilue Qian, Peiyu Yu, Ying Nian Wu, Yao Su, Wei Wang, Lifeng Fan
2023-10-05T05:41:21Z
http://arxiv.org/abs/2310.03325v2
# Learning Concept-Based Visual Causal Transition and Symbolic Reasoning for Visual Planning ###### Abstract Visual planning simulates how humans make decisions to achieve desired goals in the form of searching for visual causal transitions between an initial visual state and a final visual goal state. It has become increasingly important in egocentric vision with its advantages in guiding agents to perform daily tasks in complex environments. In this paper, we propose an interpretable and generalizable visual planning framework consisting of i) a novel Substitution-based Concept Learner (SCL) that abstracts visual inputs into disentangled concept representations, ii) symbol abstraction and reasoning that performs task planning via the self-learned symbols, and iii) a Visual Causal Transition model (ViCT) that grounds visual causal transitions to semantically similar real-world actions. Given an initial state, we perform goal-conditioned visual planning with a symbolic reasoning method fueled by the learned representations and causal transitions to reach the goal state. To verify the effectiveness of the proposed model, we collect a large-scale visual planning dataset based on AI2-THOR, dubbed as _CCTP_. Extensive experiments on this challenging dataset demonstrate the superior performance of our method in visual task planning. Empirically, we show that our framework can generalize to unseen task trajectories and unseen object categories. We will release our dataset and codes upon acceptance. ## 1 Introduction As one of the fundamental abilities of human intelligence, planning is the process of insightfully proposing a sequence of actions to achieve desired goals, which requires the capacity to think ahead, to employ knowledge of causality and the capacity of imagination (Walker & Gopnik, 2013), so as to reason and foresee the proper actions and their consequences on the states for all the intermediate transition steps before finally reaching the goal state. Visual planning simulates this thinking process of sequential causal imagination in the form of searching for visual transitions between an initial visual state and a final visual goal state. With its advantages in guiding agents to perform daily tasks in the first-person view, visual planning has become more and more important in egocentric vision (Gupta et al., 2017). In robotics, visual planning could also save large amounts of workforce from manually designing the required specific goal conditions, action preconditions and effects for robots. Previous works for visual planning can be roughly categorized into three tracks, _i.e_., neural-network-based models (Sun et al., 2022; Oh et al., 2015), reinforcement-learning-based models (Rybkin et al., 2021; Ebert et al., 2018) and classic search-based models (Paxton et al., 2019; Liu et al., 2020). Neural-network-based models can be trained in an end-to-end manner, easily adapting to different tasks and domains. This line of works, however, tends to fall short in terms of its interpretability (Gao & Guan, 2023). Reinforcement-learning-based models can perform goal-conditioned decisions, but could suffer from sparse reward, low data efficiency (Ladosz et al., 2022), and low environment and task generalization ability (Packer et al., 2018). Considering these limitations and inspired by human cognition, we conjecture that there exist three key components for visual planning, namely **representation learning, symbolic reasoning, and causal transition modeling**. 
Representation learning focuses on extracting objects' relevant, dynamic, and goal-oriented attributes. Symbolic reasoning performs action planning at the abstract higher level via self-learned symbols. Causal transition models the visual preconditions and action effects on attribute changes. At the **perception** level, we propose to learn concept-based disentangled representation, and believe such human-like perception ability to abstract visual concepts from observations is vital for visual causal transition modeling (Zhu et al., 2020). The reason is that such representation learning could encode images at a higher semantic level than pixels, distinguish different attribute concepts, extract the "essential" factors of variation, increase robustness and interpretability (Suter et al., 2018; Trauble et al., 2021; Adel et al., 2018), and promise compositional generalization to unseen scenarios with fewer examples in zero-shot inference (Atzmon et al., 2020; Trauble et al., 2021; Higgins et al., 2017; Locatello et al., 2020) as well as serve many real-world down-stream tasks such as causal learning (Trauble et al., 2021). At the **reasoning and planning** level, we argue that understanding the atomic causal mechanisms is crucial and inevitable for task planning. Human infants begin to make causal predictions and provide causal explanations for physical phenomena in the world by 2 years of age (Legare et al., 2010; Hickling & Wellman, 2001). Just as human causal cognition understands causality as events based on the forces of actions and their results (Gardenfors, 2020), the visual causal transition needs to capture the factors of variation in visual observation and anticipate the effects of actions applied to these factors. The understanding and reasoning of the abstract higher-level task planning composed of the lower-level atomic causal transition also have the potential to be more generalizable and interpretable (Edmonds, 2021; Scholkopf, 2022). Thus, we propose a visual causal transition model as well as its abstracted symbolic transition model. The abstracted symbolic transition corresponds to the discrete higher-level task planning, which is more interpretable, more data-efficient, more reliable and robust, easier to generalize, and better for avoiding the problem of "error accumulation" (Garcez et al., 2022). Guided by symbolic transition, the visual transition reconstructs intermediate and final goal images. Technically, there are **three critical modules** in our visual planning framework. First, a novel concept learner (Sec. 4.1) is learned by switching the latent concept representations of a pair of images with different attribute concepts. Second, a set of state symbols are abstracted from clustering low-level concept token representations (Sec. 4.2). The most efficient symbolic transition path can be found via a Markov Decision Process (MDP). Third, a visual transition model (Sec. 4.3) is proposed to learn the action-induced transition of the changeable attributes given the concept representations of the precondition image; it serves to generate the resulting effect image. To verify the effectiveness of the proposed framework, we collect a large-scale visual planning dataset, which contains a concept learning dataset and a causal planning dataset. Extensive comparison experiments and ablation studies on this dataset demonstrate that our model achieves superior performance in the visual planning task and various forms of generalization tests. 
To summarize our **main contributions**: (i) We propose a novel concept-based visual planning framework, which models both discrete symbolic transitions and continuous visual transitions for efficient path search and intermediate image generation. Comprehensive experiments show that our method achieves superior performance in visual task planning and in generalization tests. (ii) In addition to generalizability, our method offers better interpretability by generating a causal chain (the action sequences and the intermediate state images) that explicitly demonstrates the transition process to the goal. (iii) We collect a new large-scale visual planning dataset, which can foster research on concept learning and task planning in the community.

## 2 Related work

### Visual planning

Visual planning is feasible with learned representations and atomic causal effects. Lin et al. (2022) proposed a method for long-horizon deformable object manipulation tasks from sensory observations, which relies heavily on differentiable physics simulators. Paxton et al. (2019) performed a tree-search-based planning algorithm on the learned world representation after applying high-level actions for visual robot task planning, but they ignored learning disentangled representations. Sun et al. (2022) learned how to plan a goal-directed decision-making procedure from real-world videos, leveraging the structured and plannable latent state and action spaces learned from human instructional videos, but their transformer-based end-to-end model is hard to generalize to unseen planning tasks. Oh et al. (2015) proposed a model based on deep neural networks consisting of encoding, action-conditional transformation, and decoding for video prediction in Atari games, but they do not abstract symbols for efficient reasoning. The work of Silver et al. (2021) is the most similar to ours; they learned symbolic operators for task and motion planning, but their method cannot generate intermediate images.

### Concept disentanglement

Concept-based disentangled representation learning has emerged as a popular way of extracting human-interpretable representations (Kazhdan et al., 2021). Discrete and semantically-grounded representations are argued to be helpful for human understanding and abstract visual reasoning, enable few-shot or zero-shot learning, facilitate human-machine interaction, and lead to better downstream task performance (Van Steenkiste et al., 2019; Yu et al., 2022). Automatically learning visual concepts from raw images without strong supervision is challenging in AI research. Previous studies tried to learn disentangled concept representations either in a completely unsupervised manner (Chen et al., 2016; Zhu et al., 2020; Higgins et al., 2016; Yang et al., 2022; Yu et al., 2021), or via weak supervision and implicit prototype representations (Stammer et al., 2022), or by employing supervision from the linguistic space (Saini et al., 2022; Mao et al., 2019). There have been diverse learning techniques, such as Transformers (Yang et al., 2022), (sequential) variational autoencoders (Zhu et al., 2020; Higgins et al., 2016), and information maximizing generative adversarial nets (Chen et al., 2016), etc. Existing techniques have proved successful on objects mostly with limited variation, such as digits, simple geometric objects (Stammer et al., 2022), and faces (Chen et al., 2016). In this work, we propose a variant of (Yang et al., 2022) by imposing more reconstruction constraints, which works very well on more complex household objects (Sec.
3) and benefits the downstream planning task compared to prior works.

### Causal learning and reasoning

Visual reasoning for human task understanding is one of the essential capabilities of human intelligence, and a major challenge for AI, since it requires generating a detailed understanding of situated actions, their dependencies, and their causal effects on object states (Jia et al., 2022). Various evaluated state-of-the-art models thrive only on the perception-based descriptive task, but perform poorly on the causal tasks (_i.e._, explanatory, predictive, and counterfactual tasks), suggesting that a principled approach for causal reasoning should incorporate not only disentangled and semantically grounded visual perception, but also the underlying hierarchical causal relations and dynamics (Yi et al., 2019). Concept-based disentangled representation learning could benefit causal learning by finding a latent space where important factors can be separated from other confounding factors, thus facilitating the learning of causal effects (Atzmon et al., 2020). Fire & Zhu (2017) built a sequential Causal And-Or Graph (C-AOG) to represent actions and their effects on objects over time. Our work exploits the disentangled concept representation to ground actions to their causal effects on object attributes.

## 3 Environment and dataset

To facilitate the learning and evaluation of the concept-based visual planning task, we collect a large-scale RGB-D image sequence dataset named _CCTP_ (Concept-based Causal Transition Planning) based on the AI2-THOR simulator (Kolve et al., 2017). We exclude scene transitions in each task by design to focus more on concept and causal transition learning, _i.e._, each task is performed on a fixed workbench, although the workbenches and scenes vary from task to task. The frame resolution is \(384\times 256\), which is converted into \(256\times 256\) at the very beginning of our method. The whole dataset consists of a concept learning dataset and a visual causal planning dataset, which we describe in detail below.

### Concept learning dataset

We learn six different kinds of concepts: TYPE, POSITION_X, POSITION_Y, ROTATION, COLOR, and SIZE. TYPE refers to the object category. The dataset has eight different types of objects in total, including _Bread_, _Cup_, _Egg_, _Lettuce_, _Plate_, _Tomato_, _Pot_, and _Dyer_, all of which can be manipulated on the workbench. We manually add the COLOR concept to the target object by editing the color of the object in its HSV space. This leads to 6 different colors in all for each object, and \(20\) samples are provided for each color to avoid sample bias. For the SIZE concept, we rescale each target object to 4 different sizes as its concept set. As for the position, we use POSITION_X and POSITION_Y to refer to the coordinates along the horizontal X-axis and the vertical Y-axis w.r.t. the workbench surface. We discretize POSITION_X with 3 values and POSITION_Y with 5. Notably, changes in POSITION_X and POSITION_Y also cause varying perspectives of an object. For ROTATION, we set rotation angles of 0, 90, 180, and 270 degrees for all types of objects. We exhaustively generate all possible target objects with different value combinations of the six concepts, resulting in 234,400 images. Leveraging the masks provided by AI2-THOR, we isolate the foreground images, containing only the target object with a black background. We randomly choose \(40\%\) of the concept combinations for training.
For each image \(X_{0,f}\) in the training set and each concept index \(i\), we search for an image \(X_{1,f}\) within the training set such that \(X_{0,f}\) and \(X_{1,f}\) differ only in the \(i\)-th concept. We use such paired images and the corresponding label \(i\) for concept learning.

### Causal planning dataset

A causal planning task consists of several steps of state transitions, each caused by an atomic action. We define seven different atomic actions in our dataset, including move_front, move_back, move_left, move_right, rotate_left, rotate_right, and change_color. The magnitude of each action is fixed. The target object's state (_e.g._, its color) is randomly initialized in each task of our dataset. The task lengths (_i.e._, the number of steps for each task) are not fixed. We collect four subsets of tasks, each representing a difficulty level. In the first level, the workbench has no obstacles, and the ground-truth actions involve only movements. In the second level, several fixed obstacles appear on the workbench. In the third level, a dyer additionally appears on the workbench, and the target object must be moved adjacent to the dyer to change its color (if necessary) before being moved to the target position. In the fourth level, rotation actions are additionally involved. The action sequence in each task is paired with the corresponding visual observations. Each subset contains 10,000 tasks: 8,000 for training, 1,000 for validation, and 1,000 for testing. We construct additional generalization test benchmarks based on our collection. We provide 4 levels of **Unseen Object** generalization tests for object-level generalization. We generate 1000 tasks for each level in which the target object types are unseen in the training dataset, including the object types _Cellphone_, _Dish Sponge_, _Saltshaker_, and _Potato_. Additionally, we have testbeds for generalization tests on unseen tasks. The training and testing tasks in the **Unseen Task** dataset have different combinations of action types. For example, the training dataset may include tasks that consist of only move_left and move_front actions, as well as tasks that consist of only move_right and move_back actions, while the testing dataset contains tasks from the held-out data with different combinations. The **Unseen Task** dataset is limited to the first and second difficulty levels because limited combinations of actions are not sufficient to accomplish harder tasks.

## 4 Method

Given an initial RGB-D state image \(X_{0}\) and a final RGB-D state image \(X_{T}\), our task is to find a valid and efficient state transition path with an inferred sequence of actions \(\mathbf{\Gamma}=\{a_{t}\}_{t=1,\dots,T}\), as well as generating intermediate and final state images \(\mathbf{\tilde{X}}=\{\tilde{X}_{t}\}_{t=1,\dots,T}\). To fulfill this task, we use a concept learner to extract disentangled concept representations for state images, abstract concept symbols for reasoning, and train a visual causal transition model to generate intermediate state images.

Figure 1: **Architecture of SCL.** Foreground images \(X_{0,f}\) and \(X_{1,f}\) differ only in the COLOR concept. After extracting their concept tokens, the COLOR concept \(c_{0}^{5}\) of \(X_{0,f}\) is substituted by \(c_{1}^{5}\) from \(X_{1,f}\), which are then fed into the detokenizer and decoder to reconstruct images.

### Substitution-based concept learner

The architecture of our Substitution-based Concept Learner (SCL) is illustrated in Fig. 1.
Given a pair of foreground images \(X_{0,f}\) and \(X_{1,f}\) as the input, both contain objects that differ only in one concept, _e.g._, a yellow pot and a green pot. A shared encoder \(\phi_{E}\) is applied to the foreground images to obtain the latent embeddings \(Z_{i,f}=\phi_{E}(X_{i,f})\). The embedding \(Z_{i,f}\) is further fed into a concept tokenizer \(\psi_{T}\) to generate the concept tokens \(C_{i}=\{c_{i}^{k}\}_{k=1,\ldots,6}=\psi_{T}(Z_{i,f})\). Here \(k\) is the concept index, and we assume there exist six visual concepts, _i.e._, TYPE, COLOR, SIZE, POSITION_X, POSITION_Y, and ROTATION, representing the visual attributes of the target objects (refer to Sec. 3.1 for details). The concept token \(c_{0}^{i}\) is substituted with \(c_{1}^{i}\) to get a new concept token vector \(C_{0}^{\prime}\), where \(i\) indexes the concept that differs between the paired images \(X_{0,f}\) and \(X_{1,f}\). For example, the tokens \(c^{5}\) represent the color concept in Fig. 1; replacing \(c_{0}^{5}\) with \(c_{1}^{5}\) will change the original yellow pot to a green pot. The token vector \(C_{0}^{\prime}\) is fed into a concept detokenizer \(\psi_{D}\) to reconstruct the latent embedding \(Z_{1,f}^{\prime}=\psi_{D}(C_{0}^{\prime})\), which is further decoded into the image \(\tilde{X}_{1,f}=\phi_{D}(Z_{1,f}^{\prime})\). After the concept detokenizer and decoder, we obtain a combined reconstruction loss as follows: \[\mathcal{L}_{1}=\mathcal{L}_{MSE}(X_{0,f}^{\prime},X_{0,f})+\mathcal{L}_{MSE}(\tilde{X}_{1,f},X_{1,f}), \tag{1}\] where \(\mathcal{L}_{MSE}\) is the mean squared error. In addition, we add another branch directly connecting the encoder to the decoder. This branch aims to distinguish the role of the encoder from that of the concept tokenizer; it forces the encoder to learn hidden representations by reconstructing \(X_{0,f}\). The reconstructed image and reconstruction loss of this branch are \(\hat{X}_{0,f}\) and \(\mathcal{L}_{MSE}(\hat{X}_{0,f},X_{0,f})\), respectively. Similar to Yang et al. (2022), a Concept Disentangling Loss (CDL) is employed to reduce interference between the concept tokens. The CDL can be formulated as follows: \[\mathcal{L}_{CDL}=\mathcal{L}_{CE}(\|C_{0}-C_{1}\|_{2},i), \tag{2}\] where \(\mathcal{L}_{CE}\) is the cross-entropy loss, \(\|C_{0}-C_{1}\|_{2}\) calculates the \(l_{2}\) norm of the variation of each concept token, and \(i\) is the ground-truth token index, indicating that the \(i\)-th concept token is replaced. The total loss \(\mathcal{L}_{C}\) of the concept learner is as follows: \[\mathcal{L}_{C}=\mathcal{L}_{1}+\mathcal{L}_{MSE}(\hat{X}_{0,f},X_{0,f})+\mathcal{L}_{CDL}, \tag{3}\] where equal weights for the three loss terms work well in our experimental settings.
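To make the SCL objective concrete, here is a minimal PyTorch-style sketch of one training step for a pair differing in concept \(i\); the module and tensor names (`enc`, `tok`, `detok`, `dec`, and the \((B,6,d)\) token shape) are illustrative placeholders for \(\phi_{E}\), \(\psi_{T}\), \(\psi_{D}\), \(\phi_{D}\), not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def scl_loss(enc, tok, detok, dec, x0, x1, i):
    """One SCL training step for a pair (x0, x1) differing only in concept i.

    enc/dec stand for phi_E / phi_D, tok/detok for psi_T / psi_D.
    Concept tokens are assumed to have shape (B, 6, d).
    """
    z0 = enc(x0)
    c0, c1 = tok(z0), tok(enc(x1))

    # substitute the i-th concept token of x0 with the one from x1
    c0_sub = c0.clone()
    c0_sub[:, i] = c1[:, i]

    x1_pred = dec(detok(c0_sub))   # should now reconstruct x1
    x0_rec = dec(detok(c0))        # tokenize -> detokenize -> decode path
    x0_hat = dec(z0)               # direct encoder -> decoder branch

    l1 = F.mse_loss(x0_rec, x0) + F.mse_loss(x1_pred, x1)        # Eq. (1)
    # Eq. (2): per-token l2 shifts act as logits; the i-th should dominate
    token_shift = (c0 - c1).norm(dim=-1)                         # (B, 6)
    target = torch.full((x0.shape[0],), i, dtype=torch.long,
                        device=x0.device)
    l_cdl = F.cross_entropy(token_shift, target)
    return l1 + F.mse_loss(x0_hat, x0) + l_cdl                   # Eq. (3)
```

Note that Eq. (2) uses the per-token \(l_{2}\) shifts directly as class logits for the cross-entropy, which pushes all but the substituted token to stay unchanged.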
### Symbol abstraction and reasoning

Symbol abstraction aims to convert concept tokens into discrete symbols for later symbolic reasoning. Our empirical results in Fig. 6 show that the concept tokens learned in Sec. 4.1 are well-disentangled and can be easily clustered into several categories. Therefore, a clustering algorithm can be applied to the concept tokens to generate symbols. Specifically, we collect all the concept tokens extracted from the training data using the substitution-based concept learner and create the concept token spaces \(\mathbf{C}=\{c_{n}\}\). Then, we employ the K-means algorithm to cluster data points within the concept spaces, resulting in the concept centers \(\{\bar{c}\}\) and a symbol assignment \(\omega=\sigma(c,\{\bar{c}\})\) for each concept token \(c\). Here \(\sigma\) is the nearest-neighbor function, which assigns the symbol of the nearest concept center to \(c\). This process is applied to the six defined concepts separately, abstracting a set of concept symbols \(\Omega=\left\{\omega^{k}\right\}_{k=1,\ldots,6}\) for each image.

Figure 2: **Symbol abstraction and reasoning**. The symbolic reasoning module generates the most plausible action sequences given the initial and the goal concept symbols. These action sequences are then fed into ViCT to generate effect images.

The symbolic reasoning aims to find the most plausible transition path from the initial state to the goal state at the symbol level, which can be formulated as a Markov Decision Process (MDP). Given the initial concept symbols \(\Omega_{0}=\{\omega_{0}^{k}\}_{k=1,\ldots,6}\) and the action \(a_{0}\), the symbol reasoner computes the distribution of concept symbols at the next timestep, \(\Pr\left[\Omega_{1}^{\prime}\mid a_{0},\Omega_{0}\right]\). The concept symbol distribution at timestep \(t\) can be obtained as follows: \[\Pr\left[\Omega_{t}^{\prime}\mid a_{0:t-1},\Omega_{0}\right]=\sum_{o\in\mathbf{\Omega}}\Pr\left[\Omega_{t}^{\prime}\mid a_{t-1},\Omega_{t-1}^{\prime}=o\right]\cdot\Pr\left[\Omega_{t-1}^{\prime}=o\mid a_{0:t-2},\Omega_{0}\right], \tag{4}\] where \(\mathbf{\Omega}\) denotes the entire concept symbol space. Additionally, two legality checks are implemented during the reasoning process to ensure the validity of the action sequence, involving an action-legality and a state-legality check. The action legality is defined as \(\mathbf{1}_{\Pr[a\mid\Omega]>\text{thresh}}\). This check aims to prevent the use of noise-inducing transformations caused by the substitution-based concept learner, thereby modifying Eq. (4) to: \[\Pr\left[\Omega_{t}^{\prime}\mid a_{0:t-1},\Omega_{0}\right]=\sum_{o\in\mathbf{\Omega}}\mathbf{1}_{\Pr[a_{t-1}\mid o]>\text{thresh}}\Pr\left[\Omega_{t}^{\prime}\mid a_{t-1},\Omega_{t-1}^{\prime}=o\right]\Pr\left[\Omega_{t-1}^{\prime}=o\mid a_{0:t-2},\Omega_{0}\right]. \tag{5}\] The state legality check is designed to eliminate contributions to the distribution originating from invalid states (e.g., collisions with obstacles on the workbench). It can be written as follows: \[\Pr\left[\Omega_{t}^{\prime}=o_{0}\mid a_{0:t-1},\Omega_{0};\left\{\Omega_{\text{env}}\right\}\right]=\frac{\mathbf{1}_{o_{0}\in\mathbf{\Omega}_{\text{valid}}}\cdot\Pr\left[\Omega_{t}^{\prime}=o_{0}\mid a_{0:t-1},\Omega_{0}\right]}{\sum_{o\in\mathbf{\Omega}_{\text{valid}}}\Pr\left[\Omega_{t}^{\prime}=o\mid a_{0:t-1},\Omega_{0}\right]}, \tag{6}\] where \(\mathbf{\Omega}_{\text{valid}}\subseteq\mathbf{\Omega}\) represents the set of valid concept symbols given the concept symbols of other objects in the environment, and \(o_{0}\) is an arbitrary element within \(\mathbf{\Omega}\). To reduce computational complexity, the reasoning process is applied to each concept individually. This approach is effective due to the well-designed disentangled concepts, which ensure that the changes in each concept are independent given a particular action. The MDP aims to discover the most probable action sequence \(a_{0:T-1}\) for which the corresponding distribution of concept symbols \(\Omega_{T}^{\prime}\) closely approximates the goal concept symbols \(\Omega_{T}\). This action sequence is then passed into the Visual Causal Transition model (see Sec. 4.3) to generate predicted intermediate images (Fig. 2).
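As an illustration of Eqs. (4)-(6), the following minimal numpy sketch propagates the symbol distribution of a single concept one step forward under an action; the transition tensor and the legality masks are assumed to be estimated beforehand from training trajectories (all names here are illustrative, not from the paper's code).

```python
import numpy as np

def propagate(p_prev, a, P, action_legal, valid_mask):
    """One step of the per-concept symbolic forward recursion (Eqs. 4-6).

    p_prev:       (S,) distribution over the concept symbols at step t-1
    P:            (A, S, S) transition probabilities P[a, s, s'] estimated
                  from training trajectories
    action_legal: (A, S) boolean mask, action a allowed in symbol s (Eq. 5)
    valid_mask:   (S,) boolean mask of non-colliding states (Eq. 6)
    """
    p = (p_prev * action_legal[a]) @ P[a]   # masked Chapman-Kolmogorov step
    p = p * valid_mask                      # drop invalid successor states
    s = p.sum()
    return p / s if s > 0 else p            # renormalise as in Eq. (6)

# A candidate plan can then be scored by folding `propagate` over its
# action sequence and comparing the final distribution to the goal symbol.
```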
### Visual causal transition learning

The aim of the visual causal transition model (ViCT) is to generate visual effect images based on visual precondition images and human actions. For example, in Fig. 3, ViCT predicts the low-position image \(X_{1}\) by transforming the high-position image \(X_{0}\) with a put_down action. As seen in Fig. 3, the ViCT framework consists of three parts. First, the causal transition is the key part of ViCT. This process transforms object concept tokens from \(C_{0}\) to \(C_{1}^{\prime}\) with the help of an action embedding \(\mathcal{V}(a)\). To achieve this, the action \(a\) is encoded into a one-hot vector and further embedded via an embedding function \(\mathcal{V}\). The transition process is as follows: \[C_{1}^{\prime}=\mathcal{T}(C_{0},\mathcal{V}(a)), \tag{7}\] where \(C_{1}^{\prime}\) represents the resulting concept tokens and \(\mathcal{T}\) denotes the transition function involved in this causal transition process. In addition to the causal transition component, two other crucial parts of ViCT are dedicated to visual extraction and reconstruction. The second part contains a concept tokenizer that extracts the foreground object concept tokens \(C_{0}\) for later transitions. This concept tokenizer has been trained as described in Sec. 4.1 and is kept fixed here. This part also involves a background encoder \(\rho_{E}\), which processes the background image to produce latent vectors \(Z_{0,b}\). The vectors \(Z_{0,b}\) store background-related information and will be used to generate the resultant image \(\tilde{X}_{1}\), as illustrated in the rightmost part of Fig. 3.

Figure 3: **Architecture of ViCT**. The concept tokenizer extracts object concept tokens for causal transition. The causal transition model transforms concept tokens from \(C_{0}\) to \(C_{1}^{\prime}\) with the action embedding \(\mathcal{V}(a)\). The background encoder converts the background image into latent vectors, which are then combined with the predicted concept tokens \(C_{1}^{\prime}\) to generate the effect image \(\tilde{X}_{1}\).

The third part combines the foreground object concept tokens and the background latent vectors to predict the effect image \(\tilde{X}_{1}\) with the background decoder \(\rho_{D}\). Instead of directly using concept tokens, we convert them back to latent embeddings, _i.e._, from \(C_{1}^{\prime}\) to \(Z_{1,f}^{\prime}\), and then concatenate \(Z_{1,f}^{\prime}\) with the latent vectors \(Z_{0,b}\) as the input to the decoder. Similarly, we can also combine \(Z_{0,f}^{\prime}\) and \(Z_{0,b}\) to obtain a reconstructed image \(X_{0}^{\prime}\). Up to now, two losses can be computed during training: a reconstruction loss \(\mathcal{L}_{MSE}(X_{0}^{\prime},X_{0})\) and a prediction loss \(\mathcal{L}_{MSE}(\tilde{X}_{1},X_{1})\). In addition to measuring image-level prediction errors, we can also evaluate token-level prediction errors. Given a ground-truth effect image \(X_{1}\), we extract its concept tokens \(C_{1}\), and introduce a token prediction loss \(\mathcal{L}_{MSE}(C_{1}^{\prime},C_{1})\). The total loss of ViCT is summarized as follows: \[\mathcal{L}_{T}=\mathcal{L}_{MSE}(C_{1}^{\prime},C_{1})+\mathcal{L}_{MSE}(\tilde{X}_{1},X_{1})+\mathcal{L}_{MSE}(X_{0}^{\prime},X_{0}). \tag{8}\] The visual causal transition model is trained on our causal planning dataset (see Sec. 3.2).
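A minimal PyTorch-style sketch of one ViCT training step under Eqs. (7)-(8); the module names and the channel-wise concatenation are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def vict_loss(tok, detok, bg_enc, bg_dec, transition, act_emb,
              x0, x0_bg, x1, a):
    """One ViCT training step (Eqs. 7-8); all module names are placeholders:
    tok = frozen concept tokenizer (Sec. 4.1), transition = T, act_emb = V,
    bg_enc / bg_dec = background encoder rho_E / decoder rho_D."""
    with torch.no_grad():            # the tokenizer is pre-trained and fixed
        c0 = tok(x0)
        c1 = tok(x1)                 # ground-truth effect tokens
    c1_pred = transition(c0, act_emb(a))          # Eq. (7)
    z_bg = bg_enc(x0_bg)                          # background latents Z_{0,b}
    x1_pred = bg_dec(torch.cat([detok(c1_pred), z_bg], dim=1))
    x0_rec = bg_dec(torch.cat([detok(c0), z_bg], dim=1))
    return (F.mse_loss(c1_pred, c1)               # token prediction loss
            + F.mse_loss(x1_pred, x1)             # effect-image prediction
            + F.mse_loss(x0_rec, x0))             # reconstruction, Eq. (8)
```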
## 5 Experiments

In our experiments, we aim to answer the following questions: (1) Is our model design effective and applicable to visual planning tasks? (2) How do the proposed key components contribute to the model performance? (3) Are the learned concepts and causal transitions interpretable? (4) Does the proposed method exhibit generalization on novel tasks? To answer these questions, we perform extensive experiments on the dataset _CCTP_. As shown below, the proposed method is interpretable and generalizable, and produces significantly better results than baseline methods.

### Evaluating visual planning on dataset _CCTP_

To validate the effectiveness of our model design, we employ PlaTe (Sun et al., 2022), the state-of-the-art method for visually-grounded planning, as our baseline. To probe the contribution of our proposed components, we replace each component with alternative baselines to compare with. We replace the proposed concept learner with strong baselines such as \(\beta\)-VAE (Higgins et al., 2016) and the VCT model (Yang et al., 2022) to verify the effectiveness of our concept learning module. Additionally, we compare our model to a reinforcement-learning-based decision process, denoted "w/ RL". Furthermore, to verify the necessity of our symbolization process, we apply the reasoning process directly to the concept tokens, employing our causal transition model to search for states closest to the goal state within the concept token spaces. We also conduct experiments where we further remove the concept learning process; instead, we use an autoencoder to extract latent embeddings for causal transition. The corresponding results are denoted "w/o symbol" and "w/o concept", respectively. The "w/o concept" experiments are limited to the level-1 dataset because the method is unable to handle obstacles. Finally, we replace the explicit planning module with a transformer architecture. It takes the concept symbols of the initial state and the goal state, provided by our concept learner and symbolizer, as inputs to generate the action sequence. We refer to this variant as "w/o causal". We also substitute the planning module with random action predictions for each step as an additional baseline for reference. Detailed implementations of the baselines are given in Sec. A.3.

Figure 4: **Qualitative results of our visual planning model**. The top two samples are obtained from the level-3 dataset, and the bottom two are from the level-4 dataset. Our model demonstrates its capability to manage tasks of varying lengths, effectively planning action sequences, and generating intermediate and goal state images. Notably, the first sample from the level-4 dataset generates a different path compared to the ground truth but still achieves success and maintains high efficiency.

**Evaluation metrics.** To thoroughly inspect the performance of visual planning, we employ metrics including Action Sequence Prediction Accuracy (ASAcc), Action Sequence Efficiency (ASE), and Final State Distance (FSD). ASAcc evaluates the sequence prediction accuracy.
In level-1 and level-2 tasks, a successful prediction entails moving the target object accurately to the position of the goal state without encountering any collisions with obstacles (if present). In level-3 tasks, when the target object's color changes, success requires moving the object adjacent to the dyer, applying the change_color action, and then moving it to the goal position. In level-4 tasks, the target object must additionally be correctly rotated for success. ASAcc is measured as the success rate. During testing, the planning models make 5 attempts for each task. The top-1 accuracy is based on the first attempt, while the top-5 accuracy checks whether any of the 5 attempts is successful. ASE measures the efficiency of the planning by comparing the length of the ground-truth sequence to that of the predicted sequence; we only take the successfully predicted sequences into consideration. The ASE is defined as follows (see also the sketch below): \[ASE=\frac{\sum_{i=1}^{N}\mathbb{I}(\mathbf{\Gamma}_{i}^{pred})\ell(\mathbf{\Gamma}_{i}^{gt})/\ell(\mathbf{\Gamma}_{i}^{pred})}{\sum_{i=1}^{N}\mathbb{I}(\mathbf{\Gamma}_{i}^{pred})}, \tag{9}\] where \(\mathbb{I}\) is an indicator function for a successful prediction and \(\ell\) represents the length of an action sequence. Of note, the ground-truth action sequences in _CCTP_ are the most efficient ones, so the efficiency of a predicted sequence will be no more than 1. FSD calculates the distance between the positions of the foreground object in the final predicted state and in the goal state. The distance is defined based on the object's coordinates w.r.t. the workbench.
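A minimal Python sketch of Eq. (9), assuming the predicted and ground-truth action sequences and the per-task success flags are available as plain lists (names are illustrative):

```python
def ase(pred_seqs, gt_seqs, successes):
    """Action Sequence Efficiency, Eq. (9): mean of len(gt)/len(pred)
    over the successfully predicted sequences."""
    num = sum(len(g) / len(p)
              for p, g, ok in zip(pred_seqs, gt_seqs, successes) if ok)
    den = sum(successes)
    return num / den if den else 0.0
```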
**Results.** We can see from Tab. 1 that the proposed method achieves significantly higher performance than the baselines. Specifically, we compare our method with different ablative variants and a strong baseline, PlaTe (Sun et al., 2022). Our method outperforms the baselines in terms of sequence accuracy (ASAcc) by a large margin and achieves the smallest final state distance (FSD), which demonstrates that our method can obtain an accurate planning path to reach the goal state. Our method achieves a very competitive ASE, if not the best among all the models. Moreover, our model maintains strong performance on hard tasks, while the performance of competitive baselines significantly decreases as task difficulty increases. These results demonstrate the effectiveness of our model design. Our full model achieves the best overall performance in all four levels of tests, and each component of our model contributes remarkably to the performance improvements. Of note, our experiments demonstrate a large performance boost from adding the symbolic transition. The qualitative results are shown in Fig. 4.

Table 1: **Quantitative results for visual task planning.** Models corresponding to the model IDs are: 1. Chance, 2. PlaTe (Sun et al., 2022), 3. Ours w/ \(\beta\)-VAE (Higgins et al., 2016), 4. Ours w/ VCT (Yang et al., 2022), 5. Ours w/o symbol, 6. Ours w/o concept, 7. Ours w/o causal, 8. Ours w/ RL, 9. **Ours**. The best scores are marked in **bold**.

Figure 5: **Fine-grained attribute-level concept manipulation.** The concept learner generates new images by substituting each concept token \(c_{0}^{i}\) from \(X_{0,f}\) with \(c_{1}^{i}\) from \(X_{1,f}\).

### Interpretable concepts and causal transitions

We qualitatively show the interpretability of the concepts learned by our model. We randomly choose 2 images \(X_{0,f}\) and \(X_{1,f}\), substituting the concept token \(c_{0}^{i}\) with \(c_{1}^{i}\) for \(i=1,2,3,4,5,6\), which are then fed into the concept detokenizer and the decoder to generate new images. As Fig. 5 shows, with properly learned concept representations, we can perform fine-grained attribute-level concept manipulation. This indicates that our concept learner is capable of disentangling concept factors and demonstrates the interpretability of our method. We quantitatively demonstrate the interpretability of our learned causal transitions with statistics of the corresponding causal effects. To be specific, we aim to answer the question: do the learned causal transitions have semantic meanings consistent with the corresponding actions? Fig. 6 (a) shows the correlation between concepts and actions, measured with the \(l_{2}\) norm between the concept vectors before and after each action. A larger \(l_{2}\) norm means a higher correlation.
We can see that the learned rotation actions only affect the rotation status in the concept vector. Similarly, the horizontal and vertical movements only affect the x and y coordinates. Fig. 6 (b) shows the distribution of position changes induced by the 7 displacement actions. For example, the position changes of move_front distribute along the positive y-axis, while those of move_back distribute along the negative y-axis. This evidence indicates that 1) our learned concepts are successfully disentangled, which makes it possible for our model to learn causal transitions, and 2) the learned causal transitions are consistently grounded to real-world actions with similar semantics.

Figure 6: **Action effects on the learned disentangled concept representations.** (a) \(l_{2}\) norm between the concept vectors before and after each action. (b) Distributions of position change induced by each action.

### Generalization on novel objects and tasks

We design two experiments to test the generalizability of our model.

**Unseen objects.** Through this experiment, we aim to investigate whether our model can perform visual planning tasks on objects unseen during training. We test our model on the **Unseen Object** testing dataset (see Sec. 3.2 for details) and compare the results with several baselines to demonstrate the generalizability of our concept-based object representation module. We expect our concept learner to recognize the color, position, and size attributes of unseen object types during testing. If this is the case, the transition model can consequently apply transitions to these visual attributes for successful manipulation tasks. As shown in Tab. 2, our model is significantly more robust than PlaTe and the RL-based methods against novel objects.

Table 2: **Quantitative results for generalization tests.** Models corresponding to the model IDs are: 1. Chance, 2. PlaTe (Sun et al., 2022), 3. Ours w/o symbol, 4. Ours w/o concept, 5. Ours w/o causal, 6. Ours w/ RL, 7. **Ours**. The best scores are marked in **bold**.

**Unseen tasks.** Moreover, we aim to verify that our model is flexible in processing atomic actions. We train our model on tasks with only limited types of action combinations, _i.e._, the **Unseen Task** dataset. In this experiment, PlaTe performs only at the level of a random guess, while our model performs as well as when trained on the whole dataset, which demonstrates the generalizability of our method on unseen tasks.

## 6 Conclusion

In this paper, we propose a novel visual planning model based on concept-based disentangled representation learning, symbolic reasoning, and visual causal transition modeling. In the future, we plan to extend our model to real-world task planning, particularly to robotic manipulation.
2308.01778
Electromagnetic and vacuum tests of the PTAK-RFQ module 0
A new Radio-frequency quadrupole (RFQ), which operates at a high frequency of 800 MHz and will enable efficient acceleration of the proton beam, was designed at KAHVELab (Kandilli Detector, Accelerator and Instrumentation Laboratory) at Boğaziçi University in İstanbul, Turkey. The so-called PTAK-RFQ, which consists of two modules with a total length of less than one meter, will accelerate protons to 2 MeV at the Proton Testbeam at the Kandilli campus, known as the PTAK project. The prototype of the first module of the 800 MHz PTAK-RFQ (called the PTAK-RFQ module 0), which captures and bunches the proton beam injected from the ion source, was fabricated by a local manufacturer from ordinary copper material. The PTAK-RFQ module 0 was subjected to various tests to ensure that its mechanics, pressure, field distribution, and frequency meet operational requirements. The facilitating solutions emerging from the detailed testing of the PTAK-RFQ module 0 will ultimately guide all mechanical, vacuum, and rf testing, as well as the final design and manufacturing processes of the final PTAK-RFQ. The PTAK-RFQ module 0 was first subjected to vacuum tests and then to detailed vacuum leak tests. Subsequently, low-power rf measurements were performed for the tuning of field and frequency. The tuning algorithm developed by CERN was adapted to the PTAK-RFQ module 0, with 16 tuners and 6 field test points, to reach the desired field distribution. The tuning algorithm is based on a response matrix, whose inputs are created by bead-pull measurements of individual tuner movements. The tuning algorithm gives predictions for corrective tuner movements to achieve the desired field distribution. Within this rf tuning process, the field distribution was tuned through the tuning algorithm and then the frequency was tuned manually.
Atacan Kılıçgedik, Aytül Adıgüzel, Aslıhan Çağlar, Emre Çelebi, Şeyma Esen, Mithat Kaya, Ümit Kaya, Veysi Erkcan Özcan, Görkem Türemen, Nafiz Gökhan Ünel, Fatih Yaman
2023-08-03T14:11:03Z
http://arxiv.org/abs/2308.01778v1
# Electromagnetic and vacuum tests of the PTAK-RFQ module 0

###### Abstract

A new Radio-frequency quadrupole (RFQ), which operates at a high frequency of 800 MHz and will enable efficient acceleration of the proton beam, was designed at KAHVELab (Kandilli Detector, Accelerator and Instrumentation Laboratory) at Boğaziçi University in İstanbul, Turkey. The so-called PTAK-RFQ, which consists of two modules with a total length of less than one meter, will accelerate protons to 2 MeV at the Proton Testbeam at the Kandilli campus, known as the PTAK project. The prototype of the first module of the 800 MHz PTAK-RFQ (called the PTAK-RFQ module 0), which captures and bunches the proton beam injected from the ion source, was fabricated by a local manufacturer from ordinary copper material. The PTAK-RFQ module 0 was subjected to various tests to ensure that its mechanics, pressure, field distribution, and frequency meet operational requirements. The facilitating solutions emerging from the detailed testing of the PTAK-RFQ module 0 will ultimately guide all mechanical, vacuum, and rf testing, as well as the final design and manufacturing processes of the final PTAK-RFQ. The PTAK-RFQ module 0 was first subjected to vacuum tests and then to detailed vacuum leak tests. Subsequently, low-power rf measurements were performed for the tuning of field and frequency. The tuning algorithm developed by CERN was adapted to the PTAK-RFQ module 0, with 16 tuners and 6 field test points, to reach the desired field distribution. The tuning algorithm is based on a response matrix, whose inputs are created by bead-pull measurements of individual tuner movements. The tuning algorithm gives predictions for corrective tuner movements to achieve the desired field distribution. Within this rf tuning process, the field distribution was tuned through the tuning algorithm and then the frequency was tuned manually.

**INTRODUCTION**

The first phase of the PTAK project, including the low-energy transmission line, has recently been commissioned at KAHVELab in Turkey [1-4]. In line with the PTAK project's ultimate goals, the PTAK-RFQ was designed [5] to accelerate protons to 2 MeV for use in Proton Induced X-Ray Emission (PIXE) experiments, a non-destructive elemental analysis technique [6]. Layouts of the ion source, the low-energy beam transport line, and the 800 MHz four-vane RFQ at KAHVELab can be seen in Fig. 1 [2, 7]. The 800 MHz four-vane PTAK-RFQ is the key accelerator component; it consists of two modules with a total length of less than one meter that will accelerate protons to 2 MeV. The PTAK-RFQ is assembled using bolts and nuts without the need for brazing. In addition, the PTAK-RFQ's design includes 3-D O-rings to prevent vacuum leaks and finger-type RF shields to resist RF leaks [5]. The PTAK-RFQ and similar high-frequency (HF) RFQs, recently developed by CERN [8-16], are summarized in terms of their key parameters in Table 1. The PTAK-RFQ will be operated at 800 MHz.

Figure 1: Layout of the Proton Test Beam Line at KAHVELab.
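The response-matrix tuning procedure described above can be sketched as a linear least-squares update. The following minimal Python sketch is only an illustration of the idea, under the assumption that the response matrix has been measured by bead-pull scans of individual tuner movements; it is not the actual CERN algorithm, and all names are hypothetical.

```python
import numpy as np

def corrective_tuner_moves(R, e_measured, e_target):
    """Predict corrective tuner movements from a bead-pull response matrix.

    R:          (n_points, n_tuners) response matrix; column j holds the
                field change at the test points per unit move of tuner j
    e_measured: (n_points,) measured field amplitudes at the test points
    e_target:   (n_points,) desired field profile at the same points
    """
    # least-squares solve of R @ dt ~= e_target - e_measured
    dt, *_ = np.linalg.lstsq(R, e_target - e_measured, rcond=None)
    return dt   # suggested movement per tuner

# Illustrative shapes for module 0: 16 tuners and 6 field test points.
R = np.random.randn(6, 16) * 0.1
moves = corrective_tuner_moves(R, 0.9 * np.ones(6), np.ones(6))
```

With 16 tuners but only 6 field test points the system is under-determined, so the minimum-norm solution returned by `lstsq` is a natural choice for the corrective moves.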
2310.17378
Optimization dependent generalization bound for ReLU networks based on sensitivity in the tangent bundle
Recent advances in deep learning have given us some very promising results on the generalization ability of deep neural networks, however literature still lacks a comprehensive theory explaining why heavily over-parametrized models are able to generalize well while fitting the training data. In this paper we propose a PAC type bound on the generalization error of feedforward ReLU networks via estimating the Rademacher complexity of the set of networks available from an initial parameter vector via gradient descent. The key idea is to bound the sensitivity of the network's gradient to perturbation of the input data along the optimization trajectory. The obtained bound does not explicitly depend on the depth of the network. Our results are experimentally verified on the MNIST and CIFAR-10 datasets.
Dániel Rácz, Mihály Petreczky, András Csertán, Bálint Daróczy
2023-10-26T13:14:13Z
http://arxiv.org/abs/2310.17378v2
# Optimization dependent generalization bound for ReLU networks based on sensitivity in the tangent bundle

###### Abstract

Recent advances in deep learning have given us some very promising results on the generalization ability of deep neural networks, however literature still lacks a comprehensive theory explaining why heavily over-parametrized models are able to generalize well while fitting the training data. In this paper we propose a PAC type bound on the generalization error of feedforward ReLU networks via estimating the Rademacher complexity of the set of networks available from an initial parameter vector via gradient descent. The key idea is to bound the sensitivity of the network's gradient to perturbation of the input data along the optimization trajectory. The obtained bound does not explicitly depend on the depth of the network. Our results are experimentally verified on the MNIST and CIFAR-10 datasets. OPT2023: 15th Annual Workshop on Optimization for Machine Learning at the 37th NeurIPS 2023, New Orleans, LA, USA

## 1 Introduction and related work

Deep learning has started soaring in popularity during the last decade and by today it has reached unprecedented heights in terms of practical usage as well as theoretical research. As higher computational capacity is becoming easier to access, the complexity and size of the deep neural networks used by the community are also dramatically increasing. The number of parameters contained in such networks is usually much higher than the number of training data points or the dimension of the training data. As a result, these models tend to easily interpolate the training data and, according to the "classical" theory of machine learning (see e.g. [32]), this should lead to overfitting. However, it has been shown empirically [34] that this is not the case, i.e. highly over-parametrized models trained by gradient descent are capable of generalizing well while fitting the training data. This phenomenon drove the theoretical research of deep learning towards examining over-parametrized models. A big push has been given by the discovery of the Neural Tangent Kernel (NTK, [12]), leading to several convergence theorems in the infinitely wide regime [1, 8, 18] and shedding light on the connection between over-parametrized networks and kernel machines (see [3] for a comprehensive overview). While some interesting related results [4, 16, 25, 27, 31] exist, it is not entirely clear how the properties of the NTK might explain generalization. Deep networks are almost always trained using some form of the gradient descent algorithm, which seems to exhibit a so-called implicit bias in the case of over-parametrized neural networks, i.e. it finds well-generalizing optima despite interpolating the training data. The conditions and properties of implicit bias are still under heavy research [33]. In [10] the authors show that under special conditions on the data, gradient descent finds a solution which generalizes well, but the resulting model is sensitive to adversarial perturbations. Analyzing the trajectories of gradient descent has also been present in the literature [22, 35]. One of the standard methods to obtain generalization bounds for models in statistical learning is the Probably Approximately Correct (PAC) framework [21], which was applied to deep networks in [14] and later in [9, 23]. These papers establish PAC-Bayesian bounds on the generalization error based on the estimation of the KL divergence of the predictor w.r.t.
some prior distribution of the parameters. This was further developed in [24], resulting in a bound depending on the norm of the weights and (implicitly) on the depth of the network. PAC bounds on the generalization error are closely connected to bounding the Rademacher complexity. In [9] and [2] the authors also exploited some bounds on the Rademacher complexity of the underlying family of functions represented by ReLU networks. Several other bounds on the Rademacher complexity were derived in [11], depending yet again on the norms of the weight matrices and the width and depth of the network. In [30] a bound on the Rademacher complexity is achieved by defining the learning problem in a Reproducing Kernel Banach Space (RKBS) under some conditions on the learning algorithm. Generalization bounds for deep convolutional networks and Graph Neural Networks have been established in [17] and [20], respectively.

### Our contribution

Informally, our main result is an upper bound on the generalization error of feedforward ReLU networks trained with gradient descent under certain conditions, depending on the optimization path. Before starting our journey to precisely state our theorem, we give a brief overview of the most important ingredients of our concept in order to make the rest of the article easier to follow.

* We will make use of a classical PAC inequality for the generalization error which depends on the Rademacher complexity of the loss values on some (test) sample.
* The key step is to upper bound this Rademacher complexity of the family of functions represented by ReLU networks by examining the network's behavior along the optimization trajectory from the perspective of sensitivity to perturbation of the input data.
* The basic idea behind this sensitivity measure is the following. We look at the gradient of the network function w.r.t. the parameters as a feature map defined on the input data. What happens to this feature representation if the input data is perturbed by some Gaussian noise?
* We reinforce our theoretical bound by performing experiments on the MNIST [15] and CIFAR-10 [13] datasets.¹

Footnote 1: The implementation is available at [https://github.com/danielracz/tansens_public](https://github.com/danielracz/tansens_public)

The idea of measuring the change in the network's gradients caused by Gaussian perturbation of the input data has been introduced in [6], where it was empirically shown that, in the case of classification, the resulting measure correlates with the generalization gap of the network and can be used to estimate the test loss without making use of the labels of the test data. However, previously there was no theoretical connection known to us explaining this phenomenon. Exploiting the representation of the data via the gradient of the network as a feature map is one of the underlying ideas of the Neural Tangent Kernel and has been seen before (e.g. [5, 19]). In [26] such a representation is used to induce a similarity function on the data. While the gradient is usually constant over training [19] under heavy over-parametrization, we suspect it has a vital connection to the generalization ability of the network in both the finite and infinite case.

## 2 Problem setup

### Notations

Let us consider the framework of Empirical Risk Minimization over the task of binary classification, i.e.
we are given a finite set of training data \(D=\{(\mathbf{x}_{i},y_{i});i\in\{1,\dots,n\}\}\) drawn from a probability distribution \(\mathcal{X}\times\mathcal{Y}\) on \(\mathbb{R}^{n_{in}}\times\{-1,1\}\). Let \(\mathcal{L}^{D}_{emp}(f)=\frac{1}{n}\sum_{i=1}^{n}l(f(\mathbf{x}_{i}),y_{i})\) denote the empirical loss we want to minimize, defined over a class of functions \(\mathcal{F}\). We denote the true error by \(\mathcal{L}(f)=\mathbf{E}_{(\mathbf{x},y)\sim\mathcal{X}\times\mathcal{Y}}[l(f(\mathbf{x}),y)]\). The generalization error or gap of a model \(f\in\mathcal{F}\) is defined as \(|\mathcal{L}^{D}_{emp}(f)-\mathcal{L}(f)|\). In practice, we can approximate the generalization gap by the empirical generalization gap, i.e. the loss difference on the training data and some test data (see [7]).

Let the function class \(\mathcal{F}\) we would like to optimize over be a family of ReLU networks characterized by a parameter vector \(\theta\in\mathbb{R}^{P}\). We will treat such networks as a function of both the input data and the parameter vectors, denoted by \(f:\mathbb{R}^{P}\times\mathbb{R}^{n_{in}}\rightarrow\mathbb{R}^{n_{out}}\), where \(P\) is the number of parameters and \(n_{in}\) is the input dimension. As we are dealing with the binary classification task, we have \(n_{out}=1\). Such models are usually trained by using the gradient descent algorithm from the initial point \(\theta_{0}\), defined recursively at time \(T\) as \(\theta_{T}=\theta_{T-1}-\eta(T)\nabla_{\theta}\mathcal{L}^{D}_{emp}(f(\theta_{T-1},\cdot))\), where \(\eta(T)\in\mathbb{R}^{+}\) is the learning rate at time \(T\). We follow the convention of ReLU\({}^{\prime}(0)=0\). For a fixed choice of the learning rate function \(\eta:\mathbb{N}\rightarrow\mathbb{R}^{+}\) we say \(\theta=GD(\theta_{0},\eta,T)\) if \(\theta\) is the output of gradient descent after \(T\) steps initialized in \(\theta_{0}\) and run with the choice of \(\eta\) as the learning rate. Let \(Traj(\theta_{0})\) denote the set of parameter vectors \(\theta\) for which there exist \(\eta\) and \(T\) such that \(\theta=GD(\theta_{0},\eta,T)\).

### Tangent Sensitivity

The central definition of our paper is called Tangent Sensitivity. It was initially defined in [6], motivated by the following. Consider a small enough Gaussian perturbation around \(\mathbf{x}\sim\mathcal{X}\) with \(\phi(\mathbf{x})=\mathbf{x}+\delta(\mathbf{x})\), where \(\delta(\mathbf{x})\sim\mathcal{N}(0,\sigma\mathbf{I})\) is a random variable. For the expected change in the gradient mapping defined as \(\mathbf{x}\rightarrow\nabla_{\theta}f(\theta,\mathbf{x})\) on the input space we have

\[\mathbf{E}_{\delta(\mathbf{x})}[\|\nabla_{\theta}f(\theta,\mathbf{x})-\nabla_{\theta}f(\theta,\phi(\mathbf{x}))\|_{2}^{2}]\sim\mathbf{E}_{\delta(\mathbf{x})}\left[\left\|\frac{\partial\nabla_{\theta}f(\theta,\mathbf{x})}{\partial x}\delta(\mathbf{x})\right\|_{2}^{2}\right]\leq\sigma\left\|\frac{\partial\nabla_{\theta}f(\theta,\mathbf{x})}{\partial x}\right\|_{2}^{2}.\]

The first approximation is based on the Taylor expansion of the gradient mapping and scales with the variance \(\sigma\) of the Gaussian noise. Hence the next definition.

**Definition 1**: _The Tangent Sample Sensitivity of a feedforward network \(f\) with output in \(\mathbb{R}\) at input \(x\in\mathbb{R}^{n_{in}}\) is a \(P\times n_{in}\) dimensional matrix, \(S(\theta,\mathbf{x}):=\frac{\partial\nabla_{\theta}f}{\partial x}(\theta,\mathbf{x})=\frac{\partial^{2}f}{\partial x\partial\theta}(\theta,\mathbf{x})\).
The Tangent Sensitivity is the expectation of the tangent sample sensitivity, i.e. \(S(\theta):=\mathbf{E}_{\mathbf{x}\sim\mathcal{X}}[S(\theta,\mathbf{x})]\)._

Among other interesting properties, it is empirically shown in [6] that the Frobenius norm of the Tangent Sensitivity matrix has a close relationship to the generalization error of the network. A theoretical explanation of this phenomenon has been missing from the literature; to address this, we will establish a PAC bound on the generalization gap in which the norm of the Tangent Sensitivity appears. During the rest of the paper we will abuse the naming convention and shorten Tangent Sample Sensitivity to Tangent Sensitivity in some cases.
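Since \(S(\theta,\mathbf{x})\) is a mixed second derivative, it can be computed with standard double backpropagation. Below is a minimal PyTorch sketch (our illustration, not the released implementation; the toy network, the names, and the single flattened parameter tensor are simplifying assumptions):

```python
import torch

def tangent_sample_sensitivity(f, theta, x):
    """S(theta, x) = d(grad_theta f)/dx as a (P, n_in) matrix, computed by
    double backpropagation for a scalar-output network f(theta, x)."""
    x = x.detach().clone().requires_grad_(True)
    out = f(theta, x)
    # First pass: gradient w.r.t. the parameters, keeping the graph so the
    # result can be differentiated again w.r.t. the input.
    g = torch.autograd.grad(out, theta, create_graph=True)[0].reshape(-1)
    rows = []
    for i in range(g.numel()):
        # Row i of S: gradient of the i-th entry of grad_theta f w.r.t. x.
        (row,) = torch.autograd.grad(g[i], x, retain_graph=True)
        rows.append(row.reshape(-1))
    return torch.stack(rows)  # shape (P, n_in)

# Tiny two-layer ReLU net with all weights flattened into a single theta.
n_in, width = 4, 8
theta = torch.randn(width * n_in + width, requires_grad=True)

def net(theta, x):
    W1 = theta[: width * n_in].view(width, n_in)
    w2 = theta[width * n_in:]
    return torch.relu(W1 @ x) @ w2

S = tangent_sample_sensitivity(net, theta, torch.randn(n_in))
frobenius_norm = S.norm()  # the Frobenius norm used throughout the paper
```

The explicit row-by-row loop is for clarity only; in practice one typically only needs \(\|S(\theta,\mathbf{x})\|_{F}\), which can also be estimated stochastically.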
## 3 PAC bound

Our goal now is to state our main theorem. First we need to introduce a series of assumptions.

**Assumption 2**: _The loss function \(l\) has the form \(l(f(\mathbf{x}),y)=\ell(f(\mathbf{x})-y)\), where \(\ell\) is \(K_{\mathcal{L}}\)-Lipschitz._

This is a mild assumption, as most of the standard loss functions are Lipschitz on a bounded domain.

**Assumption 3**: _For a fixed \(\theta_{0}\in\mathbb{R}^{P}\) and \(\varepsilon>0\) let \(U_{\theta_{0},\varepsilon}=Traj(\theta_{0})\cap B_{\varepsilon}(\theta_{0})\), where \(B_{\varepsilon}(\theta_{0})\) is the \(\varepsilon\)-ball around \(\theta_{0}\). We assume that for any \(\theta\in U_{\theta_{0},\varepsilon}\) the Frobenius norm of the Tangent Sensitivity is bounded on the training set, i.e. \(\sup\limits_{\mathbf{x}\sim D}\|S(\theta,\mathbf{x})\|_{F}\leq C_{TS}\) for some \(C_{TS}>0\)._

Empirical evidence suggests that this is a reasonable assumption around an initialization point \(\theta_{0}\), whose neighborhood in practice usually contains an optimum. In light of Assumption 3 we define the family of models \(\mathcal{F}_{\theta_{0},C,\varepsilon}:=\bigg\{f(\theta,\cdot)\biggm|\theta\in U_{\theta_{0},\varepsilon},\ \sup\limits_{\mathbf{x}\sim D}\|S(\theta,\mathbf{x})\|_{F}\leq C\bigg\}\).

**Assumption 4**: _We assume the following upper bounds hold: \(\sup\limits_{x\sim\mathcal{X}}|f(\theta_{0},\cdot)|\leq K_{\theta_{0}}\), \(\sup\limits_{x\sim\mathcal{X}}\|\nabla_{\theta}f(\theta_{0},\mathbf{x})\|\leq K_{\nabla_{0}}\) and \(\sup\limits_{x\sim\mathcal{X}}\|\mathbf{x}\|\leq K_{x}\)._

Note that these assumptions are widely applied in the literature. The first two refer to the boundedness of the function and its gradient around the initialization, while the third one assumes the input is bounded.

**Theorem 5**: _Consider the problem setup from Section 2 and let Assumptions 2-4 hold for a fixed initialization \(\theta_{0}\) and \(\varepsilon>0\). Furthermore, let us suppose for all \(T\in\mathbb{N}\) that along all gradient descent trajectories of length \(T\) starting from \(\theta_{0}\) the quantity \(\sum\limits_{t=1}^{T-1}\eta(t)\frac{1}{n}\sum\limits_{i=1}^{n}\frac{\partial l}{\partial f}(\theta_{t-1},\mathbf{x}_{i})\) is upper bounded by a positive constant \(C_{GD}\). Then for any \(\delta\in]0,1[\), with probability at least \(1-\delta\) over the random sample \(S\) we have_

\[\forall f(\theta,\cdot)\in\mathcal{F}_{\theta_{0},C_{TS},\varepsilon}:\mathcal{L}(f)-\mathcal{L}_{emp}^{S}(f)\leq K_{\mathcal{L}}K_{\theta_{0}}+\frac{C_{1}}{\sqrt{N}}+K_{\mathcal{L}}H(\theta)+B\sqrt{\frac{2\log(\frac{4}{\delta})}{N}},\]

_where \(C_{1}=2K_{\mathcal{L}}K_{x}K_{\nabla_{0}}C_{TS}C_{GD}\), \(H(\theta)\) is an error term, and \(N\) denotes the size of \(S\). The constant \(B\) is an upper bound on the loss \(l(\cdot,\cdot)\). Additionally, with a properly scaled ReLU activation, if the width of the network tends to infinity, then \(H(\theta)\) tends to zero._

There are two critical terms in the bound of Theorem 5, namely \(C_{TS}C_{GD}\) and \(H(\theta)\). The former highlights the connection between the generalization ability of the network and the Tangent Sensitivity along the optimization trajectory, and it originates from the estimation of the Rademacher complexity (Definition 6 in Appendix A) of the network on the sample \(S\). While currently we do not have a satisfying theoretical guarantee on the value of \(C_{TS}\), we have strong empirical evidence that the norm of the Tangent Sensitivity is indeed correlated with the empirical generalization gap; see Fig. 1. For more details on our experiments see Appendix C. The second term comes from the well-known Taylor approximation of the network and is proportional to the norm of the Hessian of the network function.

The main intuition behind the proof lies in the following possible approximation of a ReLU network. If \(\theta=\theta_{T}\), then

\[f(\theta_{T},\mathbf{x})=f(\theta_{0},\mathbf{x})-\nabla_{\theta}f(\theta_{0},\mathbf{x})^{T}\left(\sum\limits_{t=1}^{T-1}\eta(t)\frac{1}{n}\sum\limits_{i=1}^{n}\frac{\partial l}{\partial f}S(\theta_{t-1},\mathbf{x}_{i})\mathbf{x}_{i}\right)+h(\theta_{T},\mathbf{x}),\]

where \(h\) is an error term (see Appendix B) which determines the term \(H(\theta)\) in Theorem 5. Let \(w_{\theta}=\sum\limits_{t=1}^{T-1}\eta(t)\frac{1}{n}\sum\limits_{i=1}^{n}\frac{\partial l}{\partial f}S(\theta_{t-1},\mathbf{x}_{i})\mathbf{x}_{i}\). Because \(w_{\theta}\) depends only on the training set and the optimization path, but not on the actual input \(\mathbf{x}\), we can approximate a ReLU network by the scalar product \(\langle w_{\theta},\nabla_{\theta}f(\theta_{0},\mathbf{x})\rangle\) and bound the norm of \(w_{\theta}\) thanks to the various assumptions. Finally, we can apply the standard techniques for bounding the Rademacher complexity of linear classifiers. For a complete proof of Theorem 5 see Appendix B.
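To make the role of \(w_{\theta}\) concrete, the following sketch (ours, not the authors' code; it assumes the squared loss \(\ell(u)=u^{2}/2\), so that \(\partial l/\partial f\) is the residual, and reuses `tangent_sample_sensitivity` from the earlier sketch) accumulates \(w_{\theta}\) while running full-batch gradient descent:

```python
def gd_with_w_theta(net, theta0, data, eta, T):
    """Run T steps of full-batch GD on the squared loss and accumulate
    w_theta = sum_{t=1}^{T-1} eta(t) * (1/n) * sum_i (dl/df) * S(theta_{t-1}, x_i) @ x_i."""
    theta = theta0.detach().clone().requires_grad_(True)
    w = torch.zeros_like(theta)
    n = len(data)  # data: list of (x, y) tensor pairs
    for t in range(1, T):
        grad_loss = torch.zeros_like(theta)
        for x, y in data:
            residual = net(theta, x) - y  # dl/df for the squared loss
            S = tangent_sample_sensitivity(net, theta, x)
            w = w + eta(t) / n * residual.detach() * (S @ x)
            grad_loss = grad_loss + torch.autograd.grad(0.5 * residual ** 2, theta)[0] / n
        with torch.no_grad():
            theta -= eta(t) * grad_loss
    return theta, w
```

Comparing \(f(\theta_{T},\mathbf{x})\) with \(f(\theta_{0},\mathbf{x})-\langle\nabla_{\theta}f(\theta_{0},\mathbf{x}),w_{\theta}\rangle\) then gives an empirical handle on the size of the error term \(h\).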
Figure 1: Pearson correlation between the empirical generalization gap and the average norm of the Tangent Sensitivity on the test dataset for a \(3\times 3000\)-wide fully connected ReLU net trained on CIFAR-10. The y-axes are based on the actual loss values. The norm values of tangent sensitivity were linearly scaled for presentational purposes. For more details and more experiments see Appendix C.

## 4 Discussion and future work

In this paper we have established a PAC bound on the generalization error of feedforward ReLU networks, crucially depending on the Tangent Sensitivity along the optimization trajectory. Empirical evidence has previously shown the correlation between the two quantities; we believe the obtained bound provides a strong theoretical justification. While the established bound is not tight, we believe that the sensitivity measure might have a connection to the smoothness of the function in some appropriate function space [3], which is a promising direction for the generalization theory of deep networks. The straightforward next step seems to be the convergence analysis of the Tangent Sensitivity matrix and its norm around the initialization and optima. An interesting idea would be to incorporate it into the loss function, as presented in [28]; however, calculating the Tangent Sensitivity norm is computationally expensive, so an efficient approximation would be required. In the infinite width limit, Tangent Sensitivity can be viewed as a partial derivative-like object of the NTK w.r.t. the input data. It would be interesting to examine the connection to the generalization ability of the NTK Kernel Machine.

## 5 Acknowledgement

This research was supported by the European Union project RRF-2.3.1-21-2022-00004 within the framework of the Artificial Intelligence National Laboratory and by the C.N.R.S. E.A.I. project "Stabilité des algorithmes d'apprentissage pour les réseaux de neurones profonds et récurrents en utilisant la géométrie et la théorie du contrôle via la compréhension du rôle de la surparamétrisation". B.D. was supported by MTA Premium Postdoctoral Grant 2018.
2305.15901
Consistent Optimal Transport with Empirical Conditional Measures
Given samples from two joint distributions, we consider the problem of Optimal Transportation (OT) between them when conditioned on a common variable. We focus on the general setting where the conditioned variable may be continuous, and the marginals of this variable in the two joint distributions may not be the same. In such settings, standard OT variants cannot be employed, and novel estimation techniques are necessary. Since the main challenge is that the conditional distributions are not explicitly available, the key idea in our OT formulation is to employ kernelized-least-squares terms computed over the joint samples, which implicitly match the transport plan's marginals with the empirical conditionals. Under mild conditions, we prove that our estimated transport plans, as a function of the conditioned variable, are asymptotically optimal. For finite samples, we show that the deviation in terms of our regularized objective is bounded by $O(1/m^{1/4})$, where $m$ is the number of samples. We also discuss how the conditional transport plan could be modelled using explicit probabilistic models as well as using implicit generative ones. We empirically verify the consistency of our estimator on synthetic datasets, where the optimal plan is analytically known. When employed in applications like prompt learning for few-shot classification and conditional-generation in the context of predicting cell responses to treatment, our methodology improves upon state-of-the-art methods.
Piyushi Manupriya, Rachit Keerti Das, Sayantan Biswas, Saketha Nath Jagarlapudi
2023-05-25T10:01:57Z
http://arxiv.org/abs/2305.15901v6
# Empirical Optimal Transport between Conditional Distributions

###### Abstract

Given samples from two joint distributions, we consider the problem of Optimal Transportation (OT) between the corresponding distributions conditioned on a common variable. The objective of this work is to estimate the associated transport cost (Wasserstein distance) as well as the transport plan between the conditionals as a function of the conditioned value. Since matching conditional distributions is at the core of supervised training of discriminative models and (implicit) conditional-generative models, OT between conditionals has the potential to be employed in diverse machine learning applications. However, since the conditionals involved in OT are implicitly specified via the joint samples, it is challenging to formulate this problem, especially when (i) the variable conditioned on is continuous and (ii) the marginal of this variable in the two distributions is different. We overcome these challenges by employing a specific kernel MMD (Maximum Mean Discrepancy) based regularizer that ensures the marginals of our conditional transport plan are close to the conditionals specified via the given joint samples. Under mild conditions, we prove that our estimator for this regularized transport cost is statistically consistent and derive finite-sample bounds on the estimation error. Application-specific details for parameterizing our conditional transport plan are also presented. Furthermore, we empirically evaluate our methodology on benchmark datasets in applications like classification, prompt learning for few-shot classification, and conditional-generation in the context of predicting cell responses to cancer treatment.

## 1 Introduction

Optimal Transport (OT) has emerged as a powerful tool for comparing distributions and has been successfully applied in diverse machine learning applications (Peyre and Cuturi, 2019; Liu et al., 2020; Fatras et al., 2021; Cao et al., 2022; Chen et al., 2023). However, one often needs to compare conditional distributions: e.g., in supervised learning of (probabilistic) discriminative models, one needs to compare the model's label posterior with that corresponding to the training data. Similar is the case with (implicit) conditional-generative models. In such applications, the observed input covariates are rarely discrete, and hence multiple samples for a given input cannot be assumed. Hence it is not clear how OT can be performed between the relevant conditionals, as they are implicitly defined via samples from the input-label joint. Moreover, in medical domain applications like treatment-effect prediction (Hahn et al., 2019), the distributions of input covariates for the treated and untreated patients are different. Hence, merely performing OT between the joint distributions of input and label (treatment outcome) is not the same as comparing the corresponding conditionals.

In this paper, we address this challenging problem of performing OT between two distributions when conditioned on a common variable, say \(s(y/x)\) and \(t(y/x)\), using samples from the joint distributions, \(s(x,y),t(x,y)\). As motivated earlier, we do not restrict the variable \(x\) to be discrete, and we do not make the assumption that the marginals of the common variable are the same, i.e., we do not assume \(s(x)=t(x)\). In this setting, we present novel estimators for the corresponding optimal transport cost and transport plan as a function of \(x\).
A few earlier works have attempted to solve this problem in some special cases. (Frogner et al., 2015) presents an estimator for the special case where \(s(x)=t(x)\) and \(y\) is discrete. Their estimator does not readily generalize to the case where \(y\) is continuous. Also, they do not model the transport map/plan as a function of \(x\), but rather solve individual OT problems at every \(x\). (Tabak et al., 2021) begin with the general problem, but since they employ a KL divergence-based regularizer, their formulation simplifies to performing OT between the joint distributions (refer to equation (4) in their paper). We instead use an MMD (Maximum Mean Discrepancy) based regularizer that maintains the critical distinction between matching conditionals vs. joints and also facilitates non-parametric matching with the empirical conditionals. Finally, (Bunne et al., 2022) consider special applications where multiple samples from \(s(y/x),t(y/x)\) are available and learn a transport map as a function of \(x\) by solving standard OT problems between \(s(y/x),t(y/x)\) individually for the given \(x\) samples. Also, their approach assumes the ground cost function to be the squared Euclidean. In contrast, we do not assume that multiple samples from \(s(y/x),t(y/x)\) are available and do not restrict the ground cost to be Euclidean. Further, we also estimate the transport plan rather than the transport map. For these reasons, our methodology is more widely applicable. In summary, existing works consider special cases of our problem and present estimators for the transport map. To the best of our knowledge, we are the first to address the general problem of OT between conditionals described above, leading to provably consistent estimators for the optimal transport cost as well as the transport plan as a function of \(x\).

\begin{table} \begin{tabular}{l c c c} \hline & [Tabak et al., 2021] & [Bunne et al., 2022] & COT \\ \hline OT between conditionals & ✗ & ✓ & ✓ \\ Flexibility with the ground cost & ✓ & ✗ & ✓ \\ Allows single sample per conditioned variable & ✓ & ✗ & ✓ \\ Models OT plan & ✗ & ✗ & ✓ \\ Flexibility of implicit modelling & N/A & N/A & ✓ \\ \hline \end{tabular} \end{table} Table 1: Summary of related works

The key idea in our formulation is to employ an MMD-based regularizer for enforcing the marginal constraints. Since these marginal constraints themselves involve the conditionals \(s(y/x)\) and \(t(y/x)\), while only samples from the joints, \(s(x,y),t(x,y)\), are available, we present a kernel regression type estimator for these MMD terms. We then present the overall sample-based formulation for performing OT between conditionals involving these MMD estimators. Under mild assumptions, we are able to show that our estimator for the optimal transport cost (Wasserstein distance) between \(s(y/x)\) and \(t(y/x)\) is statistically consistent. Using standard concentration inequalities, we also derive bounds on the estimation error with finite samples.

Another point of deviation from the earlier works (Bunne et al., 2022; Tabak et al., 2021) is that we model the transport plan, \(\pi(y,y^{\prime}/x)\), as a function of \(x\). Moreover, we model the plan via modelling its factors: \(\pi(y^{\prime}/y,x)\pi(y/x)\). This gives a two-fold advantage: (A) This factorization simplifies our formulation when dealing with discriminative/conditional-generative models by allowing us to directly choose \(\pi(y/x)\) to be the same as that defined by the discriminative/conditional-generative model to be learnt. (B) When modelled implicitly, the factor \(\pi(y^{\prime}/y,x)\) enables one-to-many inferences (e.g., see Figure 0(b) in (Korotin et al., 2023)) rather than the one-to-one inferences implied by a transport map. We validate the correctness of our estimator through synthetic experiments.
We present different variants of our formulations and show the utility of the proposed formulation in diverse applications.

#### Contributions

* To the best of our knowledge, we are the first to present a consistent estimator for the optimal transport cost in the context of the general conditional optimal transport problem.
* Unlike the popular KL-based regularization (Frogner et al., 2015; Tabak et al., 2021), we employ MMD-regularization that helps us match conditionals rather than joints, even when the variables involved are continuous.
* We present a theoretical analysis of our estimator and prove its statistical consistency.
* We model the transport plan rather than the transport map (Tabak et al., 2021; Bunne et al., 2022), which enables more general inferences.
* We empirically evaluate the proposed formulation on applications like classification, prompt learning for few-shot classification, and conditional-generation in the context of predicting cell responses to cancer treatment.

## 2 Background

Let \(\mathcal{X},\mathcal{Y}\) be two sets (domains) that form compact Hausdorff spaces. Let \(\mathcal{P}(\mathcal{X})\) be the set of all probability measures over \(\mathcal{X}\).

**Optimal Transport (OT)** Given a cost function, \(c:\mathcal{Y}\times\mathcal{Y}\mapsto\mathbb{R}\), OT compares two measures \(s,t\in\mathcal{P}(\mathcal{Y})\) by finding a plan to transport mass from one to the other that incurs the least expected cost. More formally, Kantorovich's OT formulation is given by:

\[W_{c}(s,t)\equiv\min_{\pi\in\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\int c\,\mathrm{d}\pi,\ \text{s.t.}\ \ \pi_{1}=s,\ \pi_{2}=t, \tag{1}\]

where \(\pi_{1},\pi_{2}\) are the marginals of \(\pi\). When the cost is a valid metric over \(\mathcal{Y}\), then \(W_{c}(s,t)\) is a valid metric between the measures \(s,t\in\mathcal{P}(\mathcal{Y})\), popularly known as the Wasserstein metric.

**Maximum Mean Discrepancy (MMD)** Given a characteristic kernel function (Sriperumbudur et al., 2011), \(k:\mathcal{Y}\times\mathcal{Y}\mapsto\mathbb{R}\), MMD defines a metric over probability measures given by: \(\text{MMD}^{2}(s,t)\equiv\mathbb{E}_{X\sim s,X^{\prime}\sim s}[k(X,X^{\prime})]+\mathbb{E}_{Y\sim t,Y^{\prime}\sim t}[k(Y,Y^{\prime})]-2\mathbb{E}_{X\sim s,Y\sim t}[k(X,Y)]\). With \(\mathcal{H}_{k}\) as the RKHS associated with the characteristic kernel \(k\), the dual norm definition of MMD is given by \(\text{MMD}(s,t)=\max_{f\in\mathcal{H}_{k}:\|f\|\leq 1}\mathbb{E}_{s}[f(X)]-\mathbb{E}_{t}[f(Y)]\).

**Total Variation (TV)** Total Variation is another popular metric over probability measures, defined by: \(\text{TV}(s,t)\equiv\int_{\mathcal{Y}}\text{d}|s-t|(y)\), where \(|s-t|(y)\equiv\left\{\begin{array}{ll}s(y)-t(y)&\text{if }s(y)\geq t(y)\\ t(y)-s(y)&\text{otherwise}\end{array}\right.\).
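All of the estimators developed below reduce to evaluating MMD terms on samples. As a concrete reference, here is a minimal PyTorch sketch (ours, not the paper's code) of the standard biased (V-statistic) estimate of \(\text{MMD}^{2}\) with an RBF kernel:

```python
import torch

def rbf_kernel(A, B, sigma2=1.0):
    # k(a, b) = exp(-||a - b||^2 / (2 * sigma2)), evaluated for all row pairs.
    return torch.exp(-torch.cdist(A, B) ** 2 / (2 * sigma2))

def mmd2(X, Y, sigma2=1.0):
    """Biased (V-statistic) estimate of MMD^2 between samples X ~ s and Y ~ t."""
    return (rbf_kernel(X, X, sigma2).mean()
            + rbf_kernel(Y, Y, sigma2).mean()
            - 2 * rbf_kernel(X, Y, sigma2).mean())
```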
## 3 Problem Formulation

This section formally defines the Conditional Optimal Transport (COT) problem and presents a consistent estimator for it in the general setting. We begin by recalling the definition of OT between two given measures \(s(y/x),t(y/x)\):

\[W_{c}(s(y/x),t(y/x))\equiv\min_{\pi\in\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\int_{\mathcal{Y}\times\mathcal{Y}}c\,\text{d}\pi,\text{ s.t. }\pi_{1}=s(y/x),\ \pi_{2}=t(y/x). \tag{2}\]

If the cost is a valid metric, then \(W_{c}(s(y/x),t(y/x))\) is nothing but the Wasserstein distance between \(s(y/x),t(y/x)\). In typical learning applications, one needs to compare the expected Wasserstein distance over a distribution of inputs, say \(a\in\mathcal{P}(\mathcal{X})\), rather than at a particular input. Accordingly, we consider \(\mathbb{E}_{X\sim a}\left[W_{c}\left(s(y/X),t(y/X)\right)\right]\):

\[\int_{\mathcal{X}}\min_{\begin{subarray}{c}\pi(x)\in\mathcal{P}(\mathcal{Y}\times\mathcal{Y})\\ \forall x\in\mathcal{X}\end{subarray}}\ \int_{\mathcal{Y}\times\mathcal{Y}}c\,\text{d}\pi(x)\text{d}a,\text{ s.t. }\pi_{1}(x)=s(y/x),\ \pi_{2}(x)=t(y/x)\ \forall x\in\mathcal{X},\]
\[\equiv\min_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\int_{\mathcal{X}}\int_{\mathcal{Y}\times\mathcal{Y}}c\,\text{d}\pi(x)\text{d}a,\text{ s.t. }\pi_{1}(x)=s(y/x),\ \pi_{2}(x)=t(y/x)\ \forall x\in\mathcal{X}. \tag{3}\]

In the special case where the auxiliary measure, \(a\), is degenerate, (3) gives back (2). Henceforth we consider (3), which we define as the COT problem, and analyze it. Now the key challenge is that the conditionals \(s(y/x),t(y/x)\) are not explicitly given, and only samples from the joints are available. Accordingly, we make an important decision by choosing an expected MMD-based regularizer for matching the conditionals:

\[\min_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\int_{\mathcal{X}}\int_{\mathcal{Y}\times\mathcal{Y}}c\,\text{d}\pi(x)\text{d}a,\text{ s.t. }\int_{\mathcal{X}}\text{MMD}^{2}\left(\pi_{1}(x),s(y/x)\right)\text{d}s(x)\leq\epsilon_{1},\]
\[\int_{\mathcal{X}}\text{MMD}^{2}\left(\pi_{2}(x),t(y/x)\right)\text{d}t(x)\leq\epsilon_{2}. \tag{4}\]

Note that (4) is the same as (3) as \(\epsilon_{1},\epsilon_{2}\to 0\) whenever \(s(x),t(x)\) are supported over the entire \(\mathcal{X}\). This is because MMD, being a metric, is always non-negative. Further, we employ a related kernel-regression based regularizer that finally enables us to overcome the challenge of conditionals being implicitly defined through the joints' samples:

\[\min_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\int_{\mathcal{X}}\int_{\mathcal{Y}\times\mathcal{Y}}c\,\text{d}\pi(x)\text{d}a,\text{ s.t. }\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}\left(\pi_{1}(x),\delta_{y}\right)\text{d}s(x,y)\leq\epsilon_{1}+\rho_{1},\]
\[\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}\left(\pi_{2}(x),\delta_{y}\right)\text{d}t(x,y)\leq\epsilon_{2}+\rho_{2}. \tag{5}\]

Here, \(\rho_{1}\equiv\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}\left(s(y/x),\delta_{y}\right)\text{d}s(x,y)\), \(\rho_{2}\equiv\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}\left(t(y/x),\delta_{y}\right)\text{d}t(x,y)\), and \(\delta_{y}\) is the Dirac measure centred at \(y\). From Theorem 3.1 in [11] we have that (5) is the same as (4) as \(\epsilon_{1},\epsilon_{2}\to 0\).
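Each constraint above compares a distribution with a Dirac measure, for which the MMD expands into just three kernel terms: \(\text{MMD}^{2}(\pi_{1}(x),\delta_{y})=\mathbb{E}_{Y,Y^{\prime}\sim\pi_{1}(x)}[k(Y,Y^{\prime})]-2\mathbb{E}_{Y\sim\pi_{1}(x)}[k(Y,y)]+k(y,y)\). When \(\pi_{1}(x)\) can be sampled, this is straightforward to estimate; a minimal sketch (ours), reusing `rbf_kernel` from the earlier sketch:

```python
def mmd2_to_dirac(Y_samples, y, sigma2=1.0):
    """Estimate MMD^2(pi, delta_y) from samples Y_samples ~ pi and a point y:
    E[k(Y, Y')] - 2 E[k(Y, y)] + k(y, y)."""
    y = y.unsqueeze(0)  # treat the single point y as a one-row batch
    return (rbf_kernel(Y_samples, Y_samples, sigma2).mean()
            - 2 * rbf_kernel(Y_samples, y, sigma2).mean()
            + rbf_kernel(y, y, sigma2).mean())
```

The sample-based regularizers introduced below are then simply averages of such terms over the observed pairs \((x_{i},y_{i})\).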
Now, for ease of optimization, we consider the Tikhonov regularized version:

\[\min_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\int_{\mathcal{X}}\int_{\mathcal{Y}\times\mathcal{Y}}c\,\text{d}\pi(x)\text{d}a+\lambda_{1}\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}\left(\pi_{1}(x),\delta_{y}\right)\text{d}s(x,y)+\lambda_{2}\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}\left(\pi_{2}(x),\delta_{y}\right)\text{d}t(x,y), \tag{6}\]

where \(\lambda_{1},\lambda_{2}>0\) are regularization hyperparameters. We again note that our formulation (6) is a (valid) regularized version of the original conditional optimal transport problem (3), and (6) is the same as (3) as \(\lambda_{1},\lambda_{2}\rightarrow\infty\).

In our set-up, in order to solve (6) and perform estimation, we are only provided with samples \(\mathcal{D}^{s}_{m}=\left\{(x_{1},y_{1}),\ldots,(x_{m},y_{m})\right\}\) and \(\mathcal{D}^{t}_{m}=\left\{(x^{\prime}_{1},y^{\prime}_{1}),\ldots,(x^{\prime}_{m},y^{\prime}_{m})\right\}\) from \(s(x,y),t(x,y)\) respectively. Hence we employ a sample-based estimator for the regularizer terms: \(\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}\left(\pi_{1}(x),\delta_{y}\right)\text{d}s(x,y)\approx\frac{1}{m}\sum_{i=1}^{m}\text{MMD}^{2}\left(\pi_{1}(x_{i}),\delta_{y_{i}}\right)\). The following lemma shows that this estimator is statistically consistent:

**Lemma 1**.: _Assuming \(k\) is a normalized characteristic kernel, with probability at least \(1-\delta\), we have:_

\[\left|\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}\left(\pi_{1}(x),\delta_{y}\right)\text{d}s(x,y)-\frac{1}{m}\sum_{i=1}^{m}\text{MMD}^{2}\left(\pi_{1}(x_{i}),\delta_{y_{i}}\right)\right|\leq 2\sqrt{\frac{2}{m}\log\left(\frac{2}{\delta}\right)}.\]

Using this result for the regularization terms, (6) can in turn be estimated as:

\[\min_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\int_{\mathcal{X}}\int_{\mathcal{Y}\times\mathcal{Y}}c\,\text{d}\pi(x)\text{d}a+\lambda_{1}\frac{1}{m}\sum_{i=1}^{m}\text{MMD}^{2}\left(\pi_{1}(x_{i}),\delta_{y_{i}}\right)+\lambda_{2}\frac{1}{m}\sum_{i=1}^{m}\text{MMD}^{2}\left(\pi_{2}(x^{\prime}_{i}),\delta_{y^{\prime}_{i}}\right). \tag{7}\]

Note that since \(a\) is a known distribution, we are not estimating it as an average over samples. In the following theorem, we prove the statistical consistency of (7).

**Theorem 1**.: _Let \(\hat{\mathcal{U}}_{m}[\pi],\mathcal{U}[\pi]\) denote the objectives in (7), (6) respectively. Let \(\hat{\pi}_{m},\pi^{*}\) denote their optimal solutions respectively. Then, the following statements are true:_

1. \(\left\{\mathcal{U}[\hat{\pi}_{m}]\right\}\xrightarrow[p]{m\rightarrow\infty}\mathcal{U}[\pi^{*}]\) _(converges in probability) whenever_ \(TV\left(\hat{s}_{m}(x,y),s(x,y)\right)\xrightarrow[p]{m\rightarrow\infty}0\) _and_ \(TV\left(\hat{t}_{m}(x,y),t(x,y)\right)\xrightarrow[p]{m\rightarrow\infty}0\)_. Here,_ \(\hat{s}_{m},\hat{t}_{m}\) _denote the empirical measures corresponding to_ \(\mathcal{D}^{s}_{m},\mathcal{D}^{t}_{m}\)_. Under the same conditions,_ \(\left\{\hat{\mathcal{U}}_{m}[\hat{\pi}_{m}]\right\}\xrightarrow[p]{m\rightarrow\infty}\mathcal{U}[\pi^{*}]\)_._
2. _When_ \(\pi\) _is restricted to some convenient class, say_ \(\Pi\)_, learning bounds can be obtained: with probability at least_ \(1-\delta\)_,_ \(\mathcal{U}[\hat{\pi}_{m}]-\mathcal{U}[\pi^{*}]\leq 2\left(\lambda_{1}+\lambda_{2}\right)\left(\mathcal{R}_{m}(\Pi)+3\sqrt{\frac{1}{2m}\log\left(\frac{3}{\delta}\right)}\right)\)_.
The Rademacher complexity term,_ \(\mathcal{R}_{m}\)_, is defined and analyzed in Appendix 7.2. Also, with probability at least_ \(1-\delta\)_,_ \(\left|\hat{\mathcal{U}}_{m}[\hat{\pi}_{m}]-\mathcal{U}[\pi^{*}]\right|\leq 2\left(\lambda_{1}+\lambda_{2}\right)\left(\mathcal{R}_{m}(\Pi)+2\sqrt{\frac{1}{2m}\log\left(\frac{1}{\delta}\right)}\right)\)_._

We now provide details of modelling the transport plan function, i.e., choices for \(\Pi\), from a pragmatic perspective. Firstly, we model the transport plan \(\pi(y,y^{\prime}/x)\) by modelling its factors: \(\pi(y^{\prime}/y,x)\) and \(\pi(y/x)\). Since the factors can be modelled using simpler models, this brings us computational benefits, among other advantages. Employing COT with such a factorization enables us to directly choose \(\pi(y/x)\) as the label posterior of the model to be learnt in discriminative modelling applications. Also, the other factor \(\pi(y^{\prime}/y,x)\) can be readily used for inference (see Sections 4.1.1, 4.3).

**Transport plan with explicit models.** Here, we assume \(\mathcal{Y}=\left\{l_{1},\ldots,l_{n}\right\}\) is a finite set. Accordingly, we model the factors \(\pi(y^{\prime}/y,x),\pi(y/x)\) as neural networks with the output layer as softmax over the \(|\mathcal{Y}|=n\) labels. The COT estimator (7) in this case simplifies as:

\[\min_{\psi,\theta}\int_{\mathcal{X}}\sum_{i=1,j=1}^{i=n,j=n}c(l_{i},l_{j})\pi_{\psi}(l_{i}/l_{j},x)\pi_{\theta}(l_{j}/x)\text{d}a(x)+\lambda_{1}\frac{1}{m}\sum_{i=1}^{m}\text{MMD}^{2}\left(\sum_{j=1}^{n}\pi_{\psi}(\cdot/l_{j},x_{i})\pi_{\theta}(l_{j}/x_{i}),\delta_{y_{i}}\right)+\lambda_{2}\frac{1}{m}\sum_{i=1}^{m}\text{MMD}^{2}\left(\pi_{\theta}(\cdot/x^{\prime}_{i}),\delta_{y^{\prime}_{i}}\right), \tag{8}\]

where \(\psi,\theta\) are the network parameters we wish to learn.

**Transport plan with implicit models.** Here we consider the case where \(\mathcal{Y}\) is uncountable and the variable \(y\) is continuous. Implicit models that can generate samples are better suited for this scenario, as it is then easy to estimate the expectations involved. Note that the MMD metric is meaningful even for distributions with potentially non-overlapping support. Thus the MMD-based regularization in COT naturally allows us to employ implicit models for the factors of the transport plan. The COT estimator, in this case, reads as:

\[\min_{\theta,\psi}\int_{\mathcal{X}}\frac{1}{m}\sum_{i=1}^{m}c\left(y_{i}\left(x;\theta\right),y_{i}\left(x;\theta,\psi\right)\right)\text{d}a(x)+\lambda_{1}\frac{1}{m}\sum_{i=1}^{m}\text{MMD}^{2}\left(\frac{1}{m}\sum_{j=1}^{m}\delta_{y_{j}\left(x_{i};\theta,\psi\right)},\delta_{y_{i}}\right)+\lambda_{2}\frac{1}{m}\sum_{i=1}^{m}\text{MMD}^{2}\left(\frac{1}{m}\sum_{j=1}^{m}\delta_{y_{j}\left(x_{i}^{\prime};\theta\right)},\delta_{y_{i}^{\prime}}\right), \tag{9}\]

where \(y_{i}(x;\theta)\;i=1,\ldots,m\) are samples from the network \(\pi_{\theta}(\cdot/x)\) and \(y_{i}\left(x;\theta,\psi\right)\;i=1,\ldots,m\) are samples from the network \(\pi_{\psi}(\cdot/y_{i}\left(x;\theta\right),x)\).
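As a concrete reading of (9), the sketch below (our illustration; the generator architecture, the noise dimension, and the names are assumptions, and the auxiliary measure \(a\) is taken as the empirical distribution of the source covariates) implements a single-batch version of the implicit-model objective, reusing `mmd2_to_dirac` from above:

```python
import torch
import torch.nn as nn

class ImplicitFactor(nn.Module):
    """Implicit model: maps a conditioning batch (plus fresh Gaussian noise)
    to one sample of y per row."""
    def __init__(self, d_cond, d_out, d_noise=8, width=64):
        super().__init__()
        self.d_noise = d_noise
        self.net = nn.Sequential(
            nn.Linear(d_cond + d_noise, width), nn.ReLU(), nn.Linear(width, d_out))

    def forward(self, cond):
        eps = torch.randn(cond.shape[0], self.d_noise)
        return self.net(torch.cat([cond, eps], dim=1))

def cot_implicit_loss(gen_theta, gen_psi, xs, ys, xps, yps, lam1, lam2, cost, n_draws=16):
    # Transport term: expected cost between the two factors' samples.
    y_th = gen_theta(xs)                             # y(x; theta)
    y_th_ps = gen_psi(torch.cat([y_th, xs], dim=1))  # y(x; theta, psi)
    transport = cost(y_th, y_th_ps).mean()
    # Marginal-matching terms of (9): one Dirac per observed joint sample.
    reg1, reg2 = 0.0, 0.0
    for i in range(xs.shape[0]):
        xi = xs[i:i + 1].repeat(n_draws, 1)
        draws = gen_psi(torch.cat([gen_theta(xi), xi], dim=1))
        reg1 = reg1 + mmd2_to_dirac(draws, ys[i]) / xs.shape[0]
    for i in range(xps.shape[0]):
        xpi = xps[i:i + 1].repeat(n_draws, 1)
        reg2 = reg2 + mmd2_to_dirac(gen_theta(xpi), yps[i]) / xps.shape[0]
    return transport + lam1 * reg1 + lam2 * reg2
```

Here `cost` could be, e.g., `lambda a, b: ((a - b) ** 2).sum(-1)` for the squared Euclidean ground cost.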
## 4 Experiments

In this section, we showcase the utility of the proposed estimator (6) in various applications. We use \(\lambda_{1}=\lambda_{2}=\lambda\) in all our experiments.

### 4.1 Verifying Correctness of Estimator

When \(a=\delta_{x_{0}}\) and \(\lambda_{1},\lambda_{2}\) are high enough, the transport cost term (first term) in (8)/(9) estimates \(W_{c}(s(y/x_{0}),t(y/x_{0}))\). In order to verify this, we consider a case where the analytical solution for the Wasserstein distance \(W_{c}\) is known and compare it with our estimate.

**Experiment setup.** We consider two distributions, \(y\sim\mathcal{N}(2(x-0.5),1)\) and \(y^{\prime}\sim\mathcal{N}(-4(x^{\prime}-0.5),1)\), where \(x\sim\beta(2,4)\) and \(x^{\prime}\sim\beta(4,2)\), and generate \(m\) samples from each of them. The true Wasserstein distance between them at \(x\) turns out to be \((6(x-0.5))^{2}\) (see eqn (2.39) in [4]). We use two 3-layer MLP networks to model the factors \(\pi_{\theta}(y/x)\) and \(\pi_{\psi}(y^{\prime}/y,x)\). We use an RBF kernel and \(l_{2}\) as our ground cost.

**Results.** As shown in Figure 1 and by the MSE values, the deviation between the estimated and the true Wasserstein distance decreases as the number of samples increases. Also, the variance of the estimated values decreases. More importantly, the shape of the (quadratic) function is more or less mimicked by our estimator.

Figure 1: \(m\), the number of training samples, varies over \(\{50,100,200,400\}\) from left to right. The true Wasserstein distance is plotted in red, and the distances estimated with our proposed estimator are marked in orange. The corresponding MSE values are \(\{98.238,18.476,20.950,6.995\}\).

#### 4.1.1 Barycenter Experiment

For further verification of our estimator, we show that the barycenter estimated using our transport plan and the true barycenter converge in Wasserstein distance.

**Experiment setup.** Two independent Gaussian distributions are taken, \(y\sim\mathcal{N}(2(x-0.5),1)\) and \(y^{\prime}\sim\mathcal{N}(-4(x^{\prime}-0.5),4)\), where \(x\sim\beta(2,4)\) and \(x^{\prime}\sim\beta(4,2)\). The analytical solution for the barycenter is \(y_{c}\sim\mathcal{N}(-x+0.5,2.5)\) (Peyre and Cuturi, 2020). Recall that the barycenter can also be computed from the optimal transport plan using the expression \(B_{x}=\lambda S_{x}+(1-\lambda)T_{x}\), where \(B_{x},S_{x}\) denote the random variables corresponding to the barycenter and the source measure conditioned at \(x\), and \(T_{x}\)'s distribution is \(\pi(y^{\prime}/x,S_{x})\). Accordingly, samples from the barycenter, \(B_{x_{i}}\), are obtained as \(\lambda y_{i}+(1-\lambda)y\), where \(y\) is sampled from \(\pi(\cdot/x_{i},y_{i})\); see the sketch after this subsection.

**Evaluation protocol.** For evaluation, we generate 500 samples from our transport-plan-based barycenter and from the true barycenter. We plot the Wasserstein distance between these two as a function of the training set size, \(m\).

**Results.** Figure 2 shows that the estimate of the barycenter using our transport plan becomes better with \(m\) and is close enough for \(m=600\).
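A sketch of the sampling rule above (ours), assuming the factor \(\pi_{\psi}(\cdot/x,y)\) is an implicit model conditioned on the concatenation of \(y\) and \(x\); the barycenter weight is written `w` to avoid clashing with the regularization \(\lambda\):

```python
def barycenter_samples(pi_psi, xs, ys, w=0.5):
    """Draw B_x = w * S_x + (1 - w) * T_x, with T_x ~ pi(. / x, S_x);
    (xs, ys) are samples of (x, S_x) from the source joint."""
    y_t = pi_psi(torch.cat([ys, xs], dim=1))  # one draw from pi(. / x_i, y_i)
    return w * ys + (1 - w) * y_t
```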
### 4.2 Classification

Since we can efficiently estimate the Wasserstein distance between conditionals solely from samples of the joints, (7) can be used as a loss function in the supervised learning of discriminative models. The key advantage of using Wasserstein as a loss function is that the Wasserstein metric lifts the geometry defined by the ground metric, \(c:\mathcal{Y}\times\mathcal{Y}\mapsto\mathbb{R}\), to that between measures in \(\mathcal{P}(\mathcal{Y})\).

**COT-based Classifier.** Let the discriminative model to be learnt be \(f_{\theta}\). The idea is to match this model's conditional to that in the training data using COT. We choose the transport plan factor \(\pi_{\theta}\equiv f_{\theta}\) and \(a\) as the marginal of the input covariates in the training data, simplifying our COT estimator, (8), as:

\[\min_{\psi,\theta}\frac{1}{m}\sum_{q=1}^{m}\sum_{i=1,j=1}^{i=n,j=n}c(l_{i},l_{j})\pi_{\psi}(l_{i}/l_{j},x_{q})f_{\theta}(l_{j}/x_{q})+\lambda_{1}\frac{1}{m}\sum_{i=1}^{m}\text{MMD}^{2}\left(\sum_{j=1}^{n}\pi_{\psi}(\cdot/l_{j},x_{i})f_{\theta}(l_{j}/x_{i}),\delta_{y_{i}}\right),\]

where \(\psi,\theta\) are the network parameters we wish to learn. Note that we do not have the second MMD regularizer term, as we directly choose \(\pi_{\theta}\equiv f_{\theta}\). From the technical discussion in Section 3, this formulation can be understood as an estimator for \(\min_{\theta}\mathbb{E}_{X\sim a}\left[W_{c}\left(f_{\theta}(y/X),s(y/X)\right)\right]\).
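On a finite label set, both terms of this loss have closed forms as matrix operations, since \(\text{MMD}^{2}(q,\delta_{l_{y}})=q^{\top}Kq-2(Kq)_{y}+K_{yy}\) for a label-kernel matrix \(K\). A minimal batched sketch (ours; shapes and names are assumptions, not the released implementation):

```python
def cot_classifier_loss(f_theta, pi_psi, X, y_idx, C, K, lam):
    """f_theta(X): (B, n) label posteriors; pi_psi(X): (B, n, n) with entries
    pi_psi(l_i / l_j, x); C: (n, n) ground cost between labels; K: (n, n)
    symmetric kernel matrix on labels; y_idx: (B,) integer true labels."""
    p = f_theta(X)                        # pi_theta(l_j / x)
    plan = pi_psi(X) * p.unsqueeze(1)     # plan[b, i, j] = pi(l_i, l_j / x_b)
    transport = (plan * C.unsqueeze(0)).sum(dim=(1, 2)).mean()
    q = plan.sum(dim=2)                   # first marginal pi_1(. / x_b)
    Kq = q @ K
    mmd = ((q * Kq).sum(1)                                # q^T K q
           - 2 * Kq.gather(1, y_idx[:, None]).squeeze(1)  # -2 (K q)_y
           + K[y_idx, y_idx])                             # + K_{yy}
    return transport + lam * mmd.mean()
```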
**Experimental setup.** We consider the task of multi-class classification and experiment on three benchmark datasets: MNIST (LeCun and Cortes, 2010), CIFAR-10 (Krizhevsky et al., 2009) and Animals with Attributes (AWA) (Lampert et al., 2009). We compare the performance of COT against the entropy-regularized Wasserstein (\(\epsilon\)-OT) based loss as in (Frogner et al., 2015) and the standard cross entropy (CE) loss for classification. Following the popular approaches of minibatch OT (Fatras et al., 2020; Fatras et al., 2021), we train \(f_{\theta}\) with the COT loss in a minibatch fashion over the covariates. We use the implementation of (Frogner et al., 2015) open-sourced by (Jawanpuria et al., 2021). We maintain the same experimental setup as used in (Jawanpuria et al., 2021). The classifier is a single-layer neural network with Softmax activation, trained for 200 epochs. We use the cost function, \(c\), between labels as the squared \(l_{2}\) distance between the fastText embeddings (Bojanowski et al., 2017) of the labels. The kernel function used in COT is \(k(x,y)=e^{-\frac{c(x,y)}{2\sigma^{2}}}\). We use SGD with a learning rate of 0.1 and weight decay of \(5\times 10^{-4}\) for optimization. For MNIST and CIFAR-10, we use the standard splits for training and testing and choose a random subset of size 10,000 from the training set for validation. For AWA, we use the train and test splits provided by (Jawanpuria et al., 2021) and randomly take 30% of the training data for validation.

**Evaluation protocol.** Following (Jawanpuria et al., 2021), we compare all methods using the Area Under Curve (AUC) score of the classifier on the test data after finding the best hyperparameters on the validation data. Based on the validation phase, the best Sinkhorn regularization hyperparameter in \(\epsilon\)-OT (Frogner et al., 2015) is 0.2. For COT, we choose the hyperparameters \(\lambda,\sigma\) based on the validation set: for MNIST and CIFAR-10, we find \(\lambda=0.1,\sigma^{2}=0.1\) and for AWA, we find \(\lambda=30,\sigma^{2}=1\) to be optimal.

**Results.** As shown in Table 2, the classifier trained with the COT formulation consistently outperforms the classifier trained with \(\epsilon\)-OT. The AUC scores with the proposed formulation also match the AUC scores with the popular cross entropy (CE) loss. We further compare the t-SNE of the representations learnt by the models trained with COT and CE. We plot the t-SNE of the model representations for the MNIST test data. The t-SNE plot shows better-separated clusters (regions with a specific colour) corresponding to each class in the case of COT compared to the one obtained with CE, where the number of clusters is more than the number of classes in MNIST. The improved t-SNE plot can be attributed to the fact that the COT-based loss respects the geometry defined by the ground metric over the labels.

### 4.3 Cell Population Dynamics

**Problem description.** The study of single-cell population dynamics is especially useful in studying the effect of drug dosages in medical applications. However, current techniques for observing the gene expressions of cells do so by destroying them. One often has unpaired distributions between control (unperturbed) cells and cells treated with a particular drug dosage. We apply our COT formulation to generate samples from perturbed distributions conditioned on the drug dosage given to an unperturbed cell.

**Dataset.** We consider the dataset used by (Bunne et al., 2022) and (Bunne et al., 2021a), corresponding to the anti-cancer drug Givinostat applied at different dosage levels, \(\{x_{1}=10nM,x_{2}=100nM,x_{3}=1000nM,x_{4}=10000nM\}\). At each dosage level, \(x_{i}\), samples of perturbed cells are given: \(y_{i1},\ldots,y_{im_{i}}\). There are 3,541 perturbed cells in total. Samples of unperturbed cells are also provided: \(y^{\prime}_{1},\ldots,y^{\prime}_{m}\), \(m=17{,}565\). Each of these cells is described by gene-expression levels of \(n=1000\) highly variable genes, i.e., \(y_{ij},y^{\prime}_{i}\in\mathbb{R}^{1000}\).

**COT-based Generative Modeling.** Our goal is to perform OT between the distribution of the unperturbed cells and the distribution of the perturbed cells conditioned on the drug dosage. Since \(\mathcal{Y}=\mathbb{R}^{1000}\), we employ the implicit-model-based COT estimator for this purpose. We choose the implicit transport factor \(\pi_{\theta}\) as the empirical distribution of unperturbed cells itself. With this notation and the auxiliary \(a\) as the empirical distribution of the dosage levels, our COT estimator, (9), reads as:

\[\min_{\psi}\frac{1}{4}\sum_{q=1}^{4}\frac{1}{m}\sum_{i=1}^{m}c\left(y^{\prime}_{i},y_{i}\left(x_{q};\psi\right)\right)+\lambda_{1}\frac{1}{4}\sum_{i=1}^{4}\text{MMD}^{2}\left(\frac{1}{m}\sum_{j=1}^{m}\delta_{y_{j}\left(x_{i};\psi\right)},\frac{1}{m_{i}}\sum_{j=1}^{m_{i}}\delta_{y_{ij}}\right),\]

where \(y_{i}\left(x;\psi\right)\ i=1,\ldots,m\) are samples from the network \(\pi_{\psi}(\cdot/y^{\prime}_{i},x)\).

**Experimental setup.** Similar to (Bunne et al., 2022), we take the cost function, \(c\), as the squared Euclidean. For the MMD regularization, we use the characteristic inverse multiquadric (IMQ) kernel, \(k(x,y)=(h^{2}+\|x-y\|^{2})^{-1/2}\), with the hyperparameter \(h^{2}=10\). For each dosage level, we split the data into 80% for training and 20% for testing and report results on the test set.

**Evaluation protocol.** Following (Bunne et al., 2022), we evaluate the performance of COT by comparing samples from the predicted and ground truth perturbed distributions. We report the \(l_{2}\) norm between the Perturbation Signatures (Stathias et al., 2018) for 50 marker genes for various dosage levels. The distances are reported in the in-sample setting. We compare our performance to the reported scores for various baselines: CPA (Lotfollahi et al., 2021), ICNN (Makkuva et al., 2020) and CondOT (Bunne et al., 2022).

**Results.** We summarize our obtained results in Table 3.
We observe that COT outperforms earlier baselines such as the CPA (Lotfollahi et al., 2021) and vanilla ICNN (Makkuva et al., 2020) approaches, followed by CellOT (Bunne et al., 2021). Our formulation closely matches the performance of (Bunne et al., 2022). However, (Bunne et al., 2022) are restricted to the squared Euclidean cost, while our formulation is applicable for any cost metric, thus making it more generalizable across applications.

Figure 4: Marginals for selected genes 'ENSG00000175175.5', 'ENSG00000173727.12', 'ENSG00000165092.12' where the dosage is 100nM.

\begin{table} \begin{tabular}{l l} \hline \hline Method & \(l_{2}\)(PS) \\ \hline CPA & 2.47\(\pm\)2.89 \\ ICNN & 2.37\(\pm\)2.15 \\ CellOT & 0.63\(\pm\)0.09 \\ CondOT & 0.60\(\pm\)0.11 \\ COT & **0.60\(\pm\)0.11** \\ \hline \hline \end{tabular} \end{table} Table 3: \(l_{2}\) (PS) distances between predicted and ground truth distributions

### 4.4 Prompt Learning

In order to show the versatility of our framework, we adapt our estimator for learning prompts for large-scale vision-language models and evaluate the performance in a limited-supervision setting to show its usefulness for downstream tasks. The success of vision-language models in open-world visual understanding has motivated efforts which aim to learn prompts (Zhou et al., 2022; Zhang et al., 2021; Zhou et al., 2022; Chen et al., 2023) to adapt the knowledge from pre-trained models like CLIP for downstream tasks, since it is infeasible to fine-tune such models due to their large number of parameters. Typically, these approaches rely on learning class-specific prompts for each category to better adapt the vision-language model for downstream tasks without the need for fine-tuning. A recent approach, PLOT (Chen et al., 2023), proposes to learn a set of prompts by minimizing an OT-based loss between distributions over the set of local visual features and the set of textual prompt features, to learn the downstream classifier. For each image, PLOT computes an OT-based loss between \(M\) (49) visual features of the image and \(N\) (2) textual prompt features per class. As prompts are learnt on a per-class basis, we propose solving the COT problem by incorporating class-level information.

**COT Formulation.** We learn an explicit model \(\pi_{\psi_{r}}(l_{ir}/l_{jqr},x_{qr})\) over the \(N\) textual prompt features \(l_{1r},\ldots,l_{Nr}\) for each class. Here, \(x_{qr}\) is the \(q^{th}\) image from class \(r\) and \(l_{jqr}\) is the \(j^{th}\) visual feature for image \(x_{qr}\). In the \(K\)-shot classification setup, we have \(K\) images per class. Our formulation for prompt learning is as follows.

\[\min_{\psi_{r}}\frac{1}{K}\sum_{q=1}^{K}\sum_{i=1,j=1}^{i=N,j=M}c(l_{ir},l_{jqr})\pi_{\psi_{r}}(l_{ir}/l_{jqr},x_{qr})\mathbf{v}_{j}+\lambda_{1}\text{MMD}^{2}\left(\sum_{q=1}^{K}\sum_{j=1}^{M}\pi_{\psi_{r}}(\cdot/l_{jqr},x_{qr})\mathbf{v}_{j},\mathbf{u}\right).\]

Following the PLOT setup, we take \(\mathbf{v},\mathbf{u}\) to be uniform distributions over the \(M\) visual features and the \(N\) prompt features, respectively. Our formulation learns a transport plan conditioned on the image but matches the marginals of the plan at the class level to incorporate the per-class distribution over prompts.

**Experimental setup.** We follow the same experimental setup used in CoOp and PLOT for learning prompts and evaluate the performance of our method on the \(K\)-shot classification task. We report the performance (i.e. accuracy) on the EuroSAT benchmark dataset [10] for the 2-, 4-, 8- and 16-shot settings. Following PLOT, we use the cost function \(c(x,y)=1-x^{\top}y\). For computing MMD, we use the characteristic Dirac kernel \(k(x,y)=\mathbb{I}[x=y]\).
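A remark on this kernel choice (our note): on a finite set, the Dirac kernel turns MMD into the Euclidean distance between probability vectors, since for distributions \(p,q\),

\[\text{MMD}^{2}(p,q)=\sum_{i,j}(p_{i}p_{j}+q_{i}q_{j}-2p_{i}q_{j})\,\mathbb{I}[i=j]=\sum_{i}(p_{i}-q_{i})^{2}=\|p-q\|_{2}^{2},\]

so the class-level marginal-matching term above is simply a squared \(\ell_{2}\) penalty between the aggregated plan marginal and \(\mathbf{u}\).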
Following PLOT, we use the cost function, \(c(x,y)=1-x^{\top}y\). For computing MMD, we use the characteristic Dirac kernel \(k(x,y)=\mathbb{I}[x=y]\). We follow the model architectures and common training/evaluation protocol used in CLIP, CoOP and PLOT and report average performance over 3 different seeds. **Results** Table 4 shows that COT achieves better average accuracy than PLOT. ## 5 Discussion and conclusion Often machine learning applications need to compare conditional distributions. Remarkably, our framework enables such a comparison solely using samples from (observational) joint distributions. In this setting, we present consistent estimators for the optimal transport cost and plan between the conditionals. We discuss how our framework can be employed in diverse applications and in OT-style problems beyond COT. ## 6 Broader Impact We believe our framework can have broader implications than described in this work. We speculate our methodology may be useful for conditional two-sample, conditional independence hypothesis testing, handling covariate shift, manifold-valued discriminative/conditional-generative models etc. When used as a loss function, our training is an alternative to the popular MLE estimation in discriminative models. The advantage being that the loss function is domain-geometric-aware, unlike the KL-divergence. ## 7 Supplementary Materials ### Proof of Lemma 1 **Lemma1.** Assuming \(k\) is a normalized characteristic kernel, with probability atleast \(1-\delta\), we have: \[\left|\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}\left(\pi_{1}(x),\delta _{y}\right)\text{d}s(x,y)-\frac{1}{m}\sum_{i=1}^{m}\text{MMD}^{2}\left(\pi_{1} (x_{i}),\delta_{y_{i}}\right)\right|\leq 2\sqrt{\frac{2}{m}\log\left(\frac{2}{ \delta}\right)}.\] Proof.: Recall that MMD is nothing but the RKHS norm-induced distance between the corresponding kernel embeddings i.e., \(\text{MMD}(s,t)=\left\|\mu_{k}\left(s\right)-\mu_{k}\left(t\right)\right\|\), where \(\mu_{k}\left(s\right)\equiv\int\phi_{k}(x)\text{d}s(x)\), is the kernel mean embedding of \(s\)[12], \(\phi_{k}\) is the canonical feature map associated with the characteristic kernel \(k\). Let \(\mathcal{H}_{k}\) denote the RKHS associated with the kernel \(k\). Since our kernel is normalized we have that \(\left\|\mu_{k}(b)\right\|\leq 1\ \forall\ b\in\mathcal{P}(\mathcal{Y})\). Hence, \(0\leq\text{MMD}^{2}\left(\pi_{1}(x),s(y/x)\right)\leq 4\). From Chernoff-Hoeffding bound, we have that: with probability atleast \(1-\delta\), \(\left|\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}\left(\pi_{1}(x),\delta _{y}\right)\text{d}s(x,y)-\frac{1}{m}\sum_{i=1}^{m}\text{MMD}^{2}\left(\pi_{1} (x_{i}),\delta_{y_{i}}\right)\right|\leq 2\sqrt{\frac{2}{m}\log\left(\frac{2}{ \delta}\right)}\). ### Proof of Theorem 1 **Theorem1.** Let \(\hat{\mathcal{U}}_{m}[\pi],\mathcal{U}[\pi]\) denote the objectives in (7), (6) respectively. Let \(\tilde{\pi}_{m},\pi^{*}\) denote their optimal solutions respectively. Then, the following statements are true: 1. \(\left\{\mathcal{U}[\hat{\pi}_{m}]\right\}\xrightarrow[p]{m\to\infty}\ \mathcal{U}[\pi^{*}]\) (converges in probability) whenever \(TV\left(\hat{s}_{m}(x,y),s(x,y)\right)\xrightarrow[p]{m\to\infty}0\) and \(TV\left(\hat{t}_{m}(x,y),t(x,y)\right)\xrightarrow[p]{m\to\infty}0\). Here, \(\hat{s}_{m},\hat{t}_{m}\) denote the empirical measures corresponding to \(\mathcal{D}_{m}^{*},\mathcal{D}_{m}^{t}\). 
Under the same conditions, \(\left\{\hat{\mathcal{U}}_{m}[\hat{\pi}_{m}]\right\}\xrightarrow[p]{m\to\infty}\mathcal{U}[\pi^{*}]\).

2. When \(\pi\) is restricted to some convenient class, say \(\Pi\), learning bounds can be obtained: with probability at least \(1-\delta\), \(\mathcal{U}[\hat{\pi}_{m}]-\mathcal{U}[\pi^{*}]\leq 2\left(\lambda_{1}+\lambda_{2}\right)\left(\mathcal{R}_{m}(\Pi)+3\sqrt{\frac{1}{2m}\log\left(\frac{3}{\delta}\right)}\right)\). The Rademacher complexity term, \(\mathcal{R}_{m}\), is defined and analyzed in Appendix 7.2. Also, with probability at least \(1-\delta\), \(\left|\hat{\mathcal{U}}_{m}[\hat{\pi}_{m}]-\mathcal{U}[\pi^{*}]\right|\leq 2\left(\lambda_{1}+\lambda_{2}\right)\left(\mathcal{R}_{m}(\Pi)+2\sqrt{\frac{1}{2m}\log\left(\frac{1}{\delta}\right)}\right)\).

Proof.: We begin by recalling that

\[\hat{\mathcal{U}}_{m}[\pi]\equiv\int_{\mathcal{X}}\int_{\mathcal{Y}\times\mathcal{Y}}c\,\text{d}\pi(x)\text{d}a+\lambda_{1}\frac{1}{m}\sum_{i=1}^{m}\text{MMD}^{2}\left(\pi_{1}(x_{i}),\delta_{y_{i}}\right)+\lambda_{2}\frac{1}{m}\sum_{i=1}^{m}\text{MMD}^{2}\left(\pi_{2}(x_{i}^{\prime}),\delta_{y_{i}^{\prime}}\right)\]

is the objective in (7) and \(\hat{\pi}_{m}\) is the corresponding optimal solution. Similarly,

\[\mathcal{U}[\pi]\equiv\int_{\mathcal{X}}\int_{\mathcal{Y}\times\mathcal{Y}}c\,\text{d}\pi(x)\text{d}a+\lambda_{1}\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}\left(\pi_{1}(x),\delta_{y}\right)\text{d}s(x,y)+\lambda_{2}\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}\left(\pi_{2}(x),\delta_{y}\right)\text{d}t(x,y)\]

is the objective in (6) and \(\pi^{*}\) is the corresponding optimal solution. It follows that \(0\leq\mathcal{U}[\hat{\pi}_{m}]-\mathcal{U}[\pi^{*}]\).

\[0\leq\mathcal{U}[\hat{\pi}_{m}]-\mathcal{U}[\pi^{*}] =\mathcal{U}[\hat{\pi}_{m}]-\hat{\mathcal{U}}_{m}[\hat{\pi}_{m}]+\hat{\mathcal{U}}_{m}[\hat{\pi}_{m}]-\hat{\mathcal{U}}_{m}[\pi^{*}]+\hat{\mathcal{U}}_{m}[\pi^{*}]-\mathcal{U}[\pi^{*}]\]
\[\leq\mathcal{U}[\hat{\pi}_{m}]-\hat{\mathcal{U}}_{m}[\hat{\pi}_{m}]+\hat{\mathcal{U}}_{m}[\pi^{*}]-\mathcal{U}[\pi^{*}]\ \left(\because\hat{\pi}_{m}\text{ is the solution of (7)}\right)\]
\[\leq\max_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\,\left(\mathcal{U}[\pi]-\hat{\mathcal{U}}_{m}[\pi]\right)+\hat{\mathcal{U}}_{m}[\pi^{*}]-\mathcal{U}[\pi^{*}] \tag{10}\]

We now separately upper bound the two terms in (10): \((\hat{\mathcal{U}}_{m}[\pi^{*}]-\mathcal{U}[\pi^{*}])\) and \(\max_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\,(\mathcal{U}[\pi]-\hat{\mathcal{U}}_{m}[\pi])\). From Lemma 1, with probability at least \(1-\delta\),

\[\hat{\mathcal{U}}_{m}[\pi^{*}]-\mathcal{U}[\pi^{*}]\leq 2(\lambda_{1}+\lambda_{2})\sqrt{\frac{2}{m}\log\frac{2}{\delta}} \tag{11}\]

Proof of Theorem 1.1: We upper-bound the second term as follows.
\[\max_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\,\left(\mathcal{U}[\pi]-\hat{\mathcal{U}}_{m}[\pi]\right)\leq \left|\lambda_{1}\Bigg(\max_{\pi_{1}:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y})}\Big(\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}(\pi_{1}(x),\delta_{y})\,\text{d}s(x,y)-\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}(\pi_{1}(x),\delta_{y})\,\text{d}\hat{s}_{m}(x,y)\Big)\Bigg)\right.\]
\[\left.+\lambda_{2}\Bigg(\max_{\pi_{2}:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y})}\Big(\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}(\pi_{2}(x),\delta_{y})\,\text{d}t(x,y)-\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}(\pi_{2}(x),\delta_{y})\,\text{d}\hat{t}_{m}(x,y)\Big)\Bigg)\right|\]
\[=\left|\lambda_{1}\max_{\pi_{1}:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y})}\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}(\pi_{1}(x),\delta_{y})\,\text{d}(s-\hat{s}_{m})(x,y)+\lambda_{2}\max_{\pi_{2}:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y})}\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}(\pi_{2}(x),\delta_{y})\,\text{d}(t-\hat{t}_{m})(x,y)\right|\]
\[\leq\lambda_{1}\max_{\pi_{1}:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y})}\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}(\pi_{1}(x),\delta_{y})\,\text{d}(|s-\hat{s}_{m}|)(x,y)+\lambda_{2}\max_{\pi_{2}:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y})}\int_{\mathcal{X}\times\mathcal{Y}}\text{MMD}^{2}(\pi_{2}(x),\delta_{y})\,\text{d}(|t-\hat{t}_{m}|)(x,y)\]
\[\quad\text{(Using the triangle inequality)}\]
\[\leq 4\lambda_{1}\text{TV}(s,\hat{s}_{m})+4\lambda_{2}\text{TV}(t,\hat{t}_{m})\]
\[\quad\text{(}\because\|\phi(.)\|=1,\|\mu_{k}(.)\|\leq 1\text{ with a normalized kernel, }k\text{)} \tag{12}\]

Using inequalities (11) and (12) in inequality (10), we have that with probability at least \(1-\delta\),

\[0\leq\mathcal{U}[\hat{\pi}_{m}]-\mathcal{U}[\pi^{*}]\leq 2(\lambda_{1}+\lambda_{2})\sqrt{\frac{2}{m}\log\frac{2}{\delta}}+4\left(\lambda_{1}\text{TV}(s,\hat{s}_{m})+\lambda_{2}\text{TV}(t,\hat{t}_{m})\right)\]

Hence, we proved that \(\{\mathcal{U}[\hat{\pi}_{m}]\}\xrightarrow[p]{m\to\infty}\mathcal{U}[\pi^{*}]\) (converges in probability) whenever \(\text{TV}\left(\hat{s}_{m},s\right)\xrightarrow[p]{m\to\infty}0\) and \(\text{TV}\left(\hat{t}_{m},t\right)\xrightarrow[p]{m\to\infty}0\).

Further, \(\hat{\mathcal{U}}_{m}[\hat{\pi}_{m}]-\mathcal{U}[\pi^{*}]=\hat{\mathcal{U}}_{m}[\hat{\pi}_{m}]-\mathcal{U}[\pi^{*}]-\hat{\mathcal{U}}_{m}[\pi^{*}]+\hat{\mathcal{U}}_{m}[\pi^{*}]\leq\max_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\hat{\mathcal{U}}_{m}[\pi]-\mathcal{U}[\pi]\). Also, \(\mathcal{U}[\pi^{*}]-\hat{\mathcal{U}}_{m}[\hat{\pi}_{m}]=\mathcal{U}[\pi^{*}]-\hat{\mathcal{U}}_{m}[\hat{\pi}_{m}]-\mathcal{U}[\hat{\pi}_{m}]+\mathcal{U}[\hat{\pi}_{m}]\leq\max_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}-\hat{\mathcal{U}}_{m}[\pi]+\mathcal{U}[\pi]\). It follows that \(\left\{\hat{\mathcal{U}}_{m}[\hat{\pi}_{m}]\right\}\xrightarrow[p]{m\to\infty}\mathcal{U}[\pi^{*}]\) (converges in probability) under the same conditions.

Proof of Theorem 1.2: We first show that \(\max_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\mathcal{U}[\pi]-\hat{\mathcal{U}}_{m}[\pi]\) satisfies the bounded difference property. Let \(Z_{i}\) denote the random variable \((X_{i},Y_{i})\). Let \(Z=\{Z_{1},\cdots,Z_{i},\cdots,Z_{m}\}\) be a set of independent random variables.
Consider another such set that differs only at the \(i^{th}\) position: \(Z^{\prime}=\{Z_{1},\cdots,Z_{i}^{\prime},\cdots,Z_{m}\}\). Let \(\hat{\mathcal{U}}_{m}[\pi]\) and \(\hat{\mathcal{U}}^{\prime}_{m}[\pi]\) be the corresponding objectives in 7. \[\left|\max_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\left(\mathcal{U}[\pi]-\hat{\mathcal{U}}_{m}[\pi]\right)-\max_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\left(\mathcal{U}[\pi]-\hat{\mathcal{U}}^{\prime}_{m}[\pi]\right)\right|\leq\left|\max_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}-\hat{\mathcal{U}}_{m}[\pi]+\hat{\mathcal{U}}^{\prime}_{m}[\pi]\right|\] \[\leq\frac{\lambda_{1}}{m}\left|\max_{\pi_{1}:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y})}\text{MMD}^{2}(\pi_{1}(x_{i}),\delta_{y_{i}})-\text{MMD}^{2}(\pi_{1}(x^{\prime}_{i}),\delta_{y^{\prime}_{i}})\right|+\frac{\lambda_{2}}{m}\left|\max_{\pi_{2}:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y})}\text{MMD}^{2}(\pi_{2}(x_{i}),\delta_{y_{i}})-\text{MMD}^{2}(\pi_{2}(x^{\prime}_{i}),\delta_{y^{\prime}_{i}})\right|\quad\text{(using the triangle inequality)}\] \[\leq\frac{8(\lambda_{1}+\lambda_{2})}{m}\text{ (with a normalized kernel based MMD)} \tag{13}\] Using the above in McDiarmid's inequality, \[\max_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\mathcal{U}[\pi]-\hat{\mathcal{U}}_{m}[\pi]\leq\mathbb{E}\left[\max_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\mathcal{U}[\pi]-\hat{\mathcal{U}}_{m}[\pi]\right]+4(\lambda_{1}+\lambda_{2})\sqrt{\frac{2}{m}\log\frac{1}{\delta}} \tag{14}\] We begin with \(\mathcal{D}_{m}=\{(x_{1},y_{1}),\cdots,(x_{m},y_{m})\}\) as an independent set of training data samples. We denote the random variable \((X_{i},Y_{i})\) by \(Z_{i}\) and \(Z=\{Z_{1},\cdots,Z_{m}\}\). Let \((\epsilon_{i})_{i\in\{1,\cdots,m\}}\) be IID Rademacher random variables. We now follow the standard symmetrization trick and introduce the Rademacher random variables to get the following. \[\mathbb{E}\left[\max_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\mathcal{U}[\pi]-\hat{\mathcal{U}}_{m}[\pi]\right]\leq 2(\lambda_{1}+\lambda_{2})\mathbb{E}_{Z,\epsilon}\left[\max_{\pi_{1}:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y})}\sum_{i=1}^{m}\frac{\epsilon_{i}}{m}\|\mu_{k}(\pi_{1}(X_{i}))-\phi(Y_{i})\|^{2}\right] \tag{15}\] Let \(\Pi\subseteq\{\pi_{1}:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y})\}\) be an appropriately restricted model for the conditional transport plan. Let \(\mathcal{R}_{m}(\Pi)\equiv\mathbb{E}\left[\max_{\pi_{1}\in\Pi}\sum_{i=1}^{m}\frac{\epsilon_{i}}{m}\|\mu_{k}(\pi_{1}(X_{i}))-\phi(Y_{i})\|^{2}\right]\). Hence, using 11, 14 and 15, we proved that with probability at least \(1-\delta\), \[\mathcal{U}[\hat{\pi}_{m}]-\mathcal{U}[\pi^{*}]\leq 2(\lambda_{1}+\lambda_{2})\left(\mathcal{R}_{m}(\Pi)+3\sqrt{\frac{2}{m}\log\frac{3}{\delta}}\right). \tag{16}\] Further, as shown in the previous part of the proof, we have that \(\hat{\mathcal{U}}_{m}[\hat{\pi}_{m}]-\mathcal{U}[\pi^{*}]\leq\max_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}\hat{\mathcal{U}}_{m}[\pi]-\mathcal{U}[\pi]\) and \(\mathcal{U}[\pi^{*}]-\hat{\mathcal{U}}_{m}[\hat{\pi}_{m}]\leq\max_{\pi:\mathcal{X}\mapsto\mathcal{P}(\mathcal{Y}\times\mathcal{Y})}-\hat{\mathcal{U}}_{m}[\pi]+\mathcal{U}[\pi]\).
Thus, with probability at least \(1-\delta\), \(|\hat{\mathcal{U}}_{m}[\hat{\pi}_{m}]-\mathcal{U}[\pi^{*}]|\leq 2(\lambda_{1}+\lambda_{2})\left(\mathcal{R}_{m}(\Pi)+2\sqrt{\frac{2}{m}\log\frac{2}{\delta}}\right)\). **Bounding Rademacher in a special case:** We now upper-bound \(\mathcal{R}_{m}(\Pi)\) for a special case where \(\pi_{1}(x),\pi_{2}(x)\) are implicitly defined using (conditional) generative models, i.e., \(g_{\mathbf{w}}(x,N)\sim\pi_{1}(x)\), where \(g_{\mathbf{w}}\) is a function parameterized by \(\mathbf{w}\) (perhaps a neural network function). Let \(\zeta_{i}(\pi_{1})\equiv\|\mu_{k}(\pi_{1}(x_{i}))-\phi(y_{i})\|^{2}\). Denoting by \(\mathcal{H}_{k}\) the RKHS associated with the characteristic kernel \(k\), by \(\mathcal{C}\) the set of continuous functions, and \(\mathcal{W}_{c}=\{f\in\mathcal{C}:\|f\|_{c}\leq 1\}\), we have that \[\zeta_{i}(\pi_{1})-\zeta_{i}(\pi_{1}^{\prime})\leq 4\|\mu_{k}(\pi_{1}(x_{i}))-\mu_{k}(\pi_{1}^{\prime}(x_{i}))\|\quad\text{(using the triangle inequality and boundedness of embeddings)}\] \[=4\max_{f\in\mathcal{H}_{k};\|f\|\leq 1}\mathbb{E}[f(g_{\mathbf{w}}(x_{i},N))]-\mathbb{E}[f(g_{\mathbf{w}^{\prime}}(x_{i},N^{\prime}))]\quad\text{(using the dual norm definition of MMD)}\] \[\stackrel{{(1)}}{{\leq}}4\max_{f\in\mathcal{C};\|f\|_{c}\leq 1}\mathbb{E}[f(g_{\mathbf{w}}(x_{i},N))]-\mathbb{E}[f(g_{\mathbf{w}^{\prime}}(x_{i},N^{\prime}))]\] \[=4\max_{f\in\mathcal{C};\|f\|_{c}\leq 1}\int(f(g_{\mathbf{w}}(x_{i},n))-f(g_{\mathbf{w}^{\prime}}(x_{i},n)))\;\text{d}n\] \[\leq 4\int c\left(g_{\mathbf{w}}(x_{i},n),g_{\mathbf{w}^{\prime}}(x_{i},n)\right)\;\text{d}n \tag{17}\] The last inequality follows from the max-integral interchange and \(\|f\|_{c}\leq 1\). Inequality (1) uses the following relation between MMD and the 1-Wasserstein distance when the cost function involved is the kernel distance (\(c(x,y)=\|\phi(x)-\phi(y)\|_{\mathcal{H}_{k}}\)): \[\text{MMD}(s,t)\stackrel{{\text{def.}}}{{=}}\max_{f\in\mathcal{H}_{k}:\|f\|\leq 1}|\mathbb{E}_{s}[f(X)]-\mathbb{E}_{t}[f(X)]|\] \[W_{1}(s,t)\stackrel{{\text{def.}}}{{=}}\max_{f\in\mathcal{C};\|f\|_{c}\leq 1}|\mathbb{E}_{s}[f(X)]-\mathbb{E}_{t}[f(X)]|\] Let \(f\in\mathcal{H}_{k},\|f\|_{\mathcal{H}_{k}}\leq 1\). Then \[|f(x)-f(y)|=|\left\langle f,\phi(x)-\phi(y)\right\rangle|\text{ (RKHS property)}\] \[\leq\|f\|_{\mathcal{H}_{k}}\|\phi(x)-\phi(y)\|_{\mathcal{H}_{k}^{\star}}\text{ (H\"older's inequality)}\] \[\leq\|\phi(x)-\phi(y)\|_{\mathcal{H}_{k}}\left(\because\|f\|_{\mathcal{H}_{k}}\leq 1\right)\] \[=c(x,y)\] \[\implies f\in\mathcal{W}_{c}\] \[\implies\text{MMD}(s,t)\leq W_{1}(s,t)\] Now, to simplify the bounds, we further assume the Lipschitz continuity condition: \(\|\phi(g_{\mathbf{w}}(x_{i},n))-\phi(g_{\mathbf{w}^{\prime}}(x_{i},n))\|_{\mathcal{H}_{k}}\leq L_{i}\|\mathbf{w}-\mathbf{w}^{\prime}\|\). We present a special case for this to hold, as follows. Consider \(g_{\mathbf{w}}(x_{i},n)=\mathbf{M}(x_{i},n)\mathbf{w}\) such that \(\|\mathbf{M}(x_{i},n)\mathbf{w}\|=1\). Now, with the sufficient conditions derived in Section A.3 of [10] (i.e. using an RBF kernel \(k(x,y)=\exp\left(-s\|x-y\|^{2}\right)\) with \(s\in(0,0.5]\)), we have that \(\|\phi(g_{\mathbf{w}}(x_{i},n))-\phi(g_{\mathbf{w}^{\prime}}(x_{i},n))\|_{\mathcal{H}_{k}}\leq\|g_{\mathbf{w}}(x_{i},n)-g_{\mathbf{w}^{\prime}}(x_{i},n)\|\leq\sigma_{\max}\left(\mathbf{M}(x_{i},n)\right)\|\mathbf{w}-\mathbf{w}^{\prime}\|\). With this, 17 implies that \(\zeta_{i}(.)\) is a \(4L_{i}\)-Lipschitz continuous function.
We also assume \(\|\mathbf{w}\|_{2}\leq 1\) (regularization of parameters). We next use Corollary (4) from [10] with \(r_{ij}\) as an independent doubly indexed Rademacher sequence. \[\mathcal{R}_{m}(\Pi)\leq\frac{4\sqrt{2}}{m}\mathbb{E}\left[\max_{\left\|\mathbf{w}\right\|\leq 1}\sum_{i=1}^{m}L_{i}\mathbf{r}_{i}^{\top}\mathbf{w}\right]\text{ (from Corollary (4) in [10])}\] \[=\frac{4\sqrt{2}}{m}\mathbb{E}\left[\max_{\left\|\mathbf{w}\right\|\leq 1}\mathbf{w}^{\top}(\sum_{i=1}^{m}\mathbf{r}_{i}L_{i})\right]\] \[\leq\frac{4\sqrt{2}}{m}\mathbb{E}\left[\|\sum_{i=1}^{m}\mathbf{r}_{i}L_{i}\|_{2}\right]\text{ (using Cauchy-Schwarz)}\] \[\leq\frac{4\sqrt{2}}{m}\sqrt{\mathbb{E}[\|\sum_{i=1}^{m}\mathbf{r}_{i}L_{i}\|_{2}^{2}]}\text{ (Jensen's inequality)}\] \[=\frac{4\sqrt{2}}{m}\sqrt{\sum_{i=1}^{m}L_{i}^{2}}\text{ (using a property of Rademacher variables)}\] \[\leq 4\sqrt{\frac{2}{m}}\text{ (if }L_{i}\leq 1\text{)}\] \[\to 0\text{ as }m\rightarrow\infty. \tag{18}\] If \(\lambda\) grows as \(O(m^{\frac{1}{4}})\), the estimation error then decays as \(O(m^{-\frac{1}{4}})\), making our estimator consistent. ### More details on experiments This section contains more experimental details along with some additional results. #### 7.3.1 Implicit modelling of the transport plan Continuing from Section 3 in the main paper, we give more details on the implicit modelling approach for the COT transport plan. Figure 5 shows the implicit model used in our work. For inference, we pass different noise samples and average the outputs of \(\pi_{\psi}^{\prime}\). Figure 5: Illustration of the Implicit Model used Visualizing the predictions. We take a synthetic regression dataset and show the predictions learnt by the implicit conditional generator trained with the COT loss 9. We fix \(\lambda\) to 500 and the noise dimension to 10. We use the Adam optimizer with a learning rate of \(5e-3\) and train for 1000 epochs. We use the squared Euclidean distance as the cost and an RBF kernel. Figure 6 shows that we obtain a good fit for \(\sigma^{2}=10,100\). For the same task, we also show the COT training objective over epochs in Figure 7. Figure 6: Predictions of the implicit conditional generator trained with the COT loss. The plots show the effect of different \(\sigma^{2}\) hyperparameters used in the RBF kernel: 1 (left), 10 (center), 100 (right). Figure 7: The COT objective over epochs while training the implicit conditional generator for the results shown in Figure 6. #### 7.3.2 Verifying Correctness of Estimator For the results shown in Figure 1 and Figure 2, we set \(\lambda_{1}=\lambda_{2}=600\), the noise dimension to 10, and the \(\sigma^{2}\) hyperparameter of the RBF kernel to 1. We use the Adam optimizer with a learning rate of \(3e-2\) and weight decay \(1e-10\), and train our implicit model for 1500 epochs. #### 7.3.3 Classification We generate a 2D blob dataset for 3 classes with 300, 400 and 300 samples, respectively. We divide it into train and test splits, maintaining the class ratios in the two splits. In Figure 8, we first show the decision boundary of the untrained classifier on the test data. We then visualize the classifier's decision boundaries for different values of \(\lambda\) in Figure 9 after training on the train split. Figure 8: The test data points, colored according to class labels, are displayed along with the decision boundaries of the untrained classifier. It can be seen that the decision boundary of the untrained classifier is not able to classify points and the resulting test accuracy is \(30\%\).
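To make the training objective in this discrete-label setting concrete, the following is a minimal sketch (our illustrative reconstruction, not the released implementation) of the per-example MMD fidelity term of the empirical COT objective, where the learnt conditional reduces to a predicted label distribution and, as noted in the next paragraph, the Gram matrix is the identity; the transport-cost term is elided here:

```python
import torch
import torch.nn.functional as F

def mmd_sq_to_dirac(p: torch.Tensor, y: int, K: torch.Tensor) -> torch.Tensor:
    # MMD^2(p, delta_y) over a discrete label set with Gram matrix K:
    #   p^T K p - 2 (K p)[y] + K[y, y].
    Kp = K @ p
    return p @ Kp - 2.0 * Kp[y] + K[y, y]

def cot_fidelity_loss(logits: torch.Tensor, targets: torch.Tensor,
                      lam: float) -> torch.Tensor:
    # Minibatch average of lam * MMD^2(pi_1(x_i), delta_{y_i}).
    # With K = I this reduces to the squared error ||p - e_y||^2.
    num_classes = logits.shape[-1]
    K = torch.eye(num_classes)
    probs = F.softmax(logits, dim=-1)
    losses = [mmd_sq_to_dirac(p, int(y), K) for p, y in zip(probs, targets)]
    return lam * torch.stack(losses).mean()
```

With the identity Gram matrix, this fidelity term is simply the squared Euclidean distance between the predicted distribution and the one-hot label, which makes the role of \(\lambda\) as a regularization strength transparent.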
For training, we use a batch size of 50 and the Adam optimizer with a learning rate of \(5e-3\). Because the labels are discrete, both the cost matrix and the Gram matrix are identity matrices for classification. Figure 9 shows that on increasing \(\lambda\), the classifier learns in fewer epochs, showing the effect of the MMD-based regularization used in the COT formulation. In Table 5, we also show the per-epoch computation time taken (on an RTX 2080 Ti GPU) by the COT loss as a function of the size of the minibatch, which demonstrates the computational efficiency of the COT loss. #### 7.3.4 Cell Population Dynamics **Dataset** We use the preprocessed dataset provided by (Bunne et al., 2021). The dataset is publicly available for download using the following link [https://polybox.ethz.ch/index.php/s/RAykIMfDl0qCJaM](https://polybox.ethz.ch/index.php/s/RAykIMfDl0qCJaM). Figure 9: The test data points, colored according to class labels, are displayed along with the decision boundary of the classifier trained for 10 epochs (left column) and 50 epochs (right column). \begin{table} \begin{tabular}{l l} \hline \(B\) & Time (ms) \\ \hline 16 & 9.4 \\ 64 & 9.6 \\ 256 & 9.0 \\ 1024 & 9.1 \\ \hline \end{tabular} \end{table} Table 5: Per-epoch computation time with minibatch size (\(B\)) From this dataset, we extracted unperturbed cells and cells treated with Givinostat. This led to a total of 17565 control cells, and a total of 3541 cells treated with Givinostat. We take the same data splits as in (Bunne et al., 2021a). **Marker Genes** Following (Bunne et al., 2022), we use scanpy's (Wolf et al., 2018) rank_genes_groups function for ranking and obtaining 50 marker genes for each drug dosage. The perturbed cells are grouped by drug dosage and the ranking is computed by keeping the unperturbed (i.e. control) cells as reference. Similar to (Bunne et al., 2022), COT also operates on the entire 1000 genes, and the computed 50 marker genes are only used for evaluation using the \(l_{2}\)(**PS**) metric. Following the in-sample experiment done in (Bunne et al., 2022), we tune our hyperparameters on the training data split. Based on the scale of terms in the COT objective, we chose \(\lambda_{1}\) from the set \(\{250,2500,5000,12500\}\) and found \(\lambda_{1}=5000\) to be the optimal choice. For the IMQ kernel, we chose the hyperparameter from the set \(\{1,10,100\}\) and found 10 to be the optimal choice. We fixed the noise dimension to 20. For all the experiments reported above, we used an MLP with 4 hidden layers, which was trained for 950 epochs. For the inference used in Table 3, we average the outputs generated by our implicit network corresponding to 10 noise samples. For the inference used in plotting the marginals shown in Figure 4, we average the outputs generated by our implicit network corresponding to 50 noise samples. Following (Bunne et al., 2022), we quantitatively evaluate our performance using the \(l_{2}\) distance between the perturbation signatures, i.e., the \(l_{2}\)(**PS**) metric. Let \(\mu\) be the observed unperturbed cell population, \(\nu\) the observed perturbed cell population (of size \(m_{1}\)), and \(\nu^{\prime}\) the predicted perturbed state of the population \(\mu\) (of size \(m_{2}\)). The perturbation signature \(\mathrm{PS}(\nu,\mu)\) is then defined as \(\frac{1}{m_{1}}\sum_{y_{i}\in\nu}y_{i}-\frac{1}{|\mu|}\sum_{x_{j}\in\mu}x_{j}\). The \(l_{2}\)(**PS**) metric is the \(l_{2}\) distance between \(\mathrm{PS}(\nu,\mu)\) and \(\mathrm{PS}(\nu^{\prime},\mu)\).
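Since the perturbation signature and the \(l_{2}\)(**PS**) metric are defined purely in terms of population means, a direct numpy transcription of the two definitions above may be written as follows (a sketch; the variable names are ours):

```python
import numpy as np

def perturbation_signature(perturbed: np.ndarray, control: np.ndarray) -> np.ndarray:
    # PS(nu, mu): mean expression of the perturbed population minus the mean
    # expression of the control population; inputs have shape (cells, genes).
    return perturbed.mean(axis=0) - control.mean(axis=0)

def l2_ps(observed: np.ndarray, predicted: np.ndarray, control: np.ndarray) -> float:
    # l2(PS): Euclidean distance between the signatures of the observed and the
    # predicted perturbed populations, both computed relative to the control.
    return float(np.linalg.norm(perturbation_signature(observed, control)
                                - perturbation_signature(predicted, control)))
```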
#### 7.3.5 Prompt Learning We follow the same setup as in (Chen et al., 2023) and replace the OT-based training loss with the COT loss described in Section 4.4. In Figure 10, we illustrate our approach to learning the conditional transport plans in the prompt learning setup. We train using SGD with a learning rate of \(1e-2\) for 50 epochs. Our optimal \(\lambda\) values for different shots (\(K\)) are: 10 for \(K=2\), 0.01 for \(K=4\), 0.01 for \(K=8\) and 0.1 for \(K=16\). Figure 10: Learning the conditional transport plans for Prompt learning ### Reproducibility We have followed standard protocols for ensuring reproducibility in our experiments. We will open-source our code upon acceptance of the paper. ### Future Work In this paper, we validate the correctness of our estimator for conditional OT and show its utility in downstream applications for a biology task and vision-related tasks. However, we believe that the proposed formulation can be applied to a range of tasks across machine learning domains, which we would like to explore in the near future. ### Negative Societal Impact We present a formulation for solving optimal transport between conditional distributions. This problem has many socially beneficial applications, like predicting cell responses to cancer treatment, as shown in our paper. However, if a malicious task is selected, the proposed COT formulation may have a negative societal impact, similar to most other methods in machine learning. ## 8 Funding Disclosure and Acknowledgements The first author is supported by the Google PhD Fellowship. We thank Fujitsu R&D, Japan International Cooperation Agency and IIT-Hyderabad for the provision of GPU servers used for this work. We thank Pratik Jawanpuria for the insightful discussions, which helped us in formulating our method and in shaping the experimental section. We also thank Aditya Saibewar and Amit Chandak for their help during the initial phase of this project.
2306.13820
Efficient equidistribution of periodic nilsequences and applications
This is a companion paper to arXiv:2312.10772. We deduce an equidistribution theorem for periodic nilsequences and use this theorem to give two applications in arithmetic combinatorics. The first application is quasi-polynomial bounds for a certain complexity one polynomial progression, improving the iterated logarithm bound previously obtained. The second application is a proof of the quasi-polynomial $U^4[N]$ inverse theorem. In work with Sah and Sawhney, we obtain improved bounds for sets lacking nontrivial $5$-term arithmetic progressions.
James Leng
2023-06-24T00:08:39Z
http://arxiv.org/abs/2306.13820v5
The partition rank vs. analytic rank problem for cyclic groups I. equidistribution for periodic nilsequences ###### Abstract. We give improved quantitative equidistribution estimates for nilsequences that are periodic modulo a large prime, obtaining bounds single exponential in dimension, improving a result of Green and Tao (who obtained similar results but with losses double exponential in dimension). To do so, we refine Green and Tao's argument and overcome the "induction on dimensions" obstruction present in several places in their argument. Our results are enough to imply quasi-polynomial type bounds for certain complexity one polynomial Szemeredi theorems that the author proved in [1], improving on the iterated logarithm bound the author obtained in [1]. In subsequent work [1], we extend these results to general and multiparameter nilsequences. The strength of our quantitative bounds is analogous in higher order Fourier analysis over \(\mathbb{F}_{p}^{n}\) to the partition rank being polynomial in the analytic rank. In principle, this also gives a new proof of quantitative equidistribution of nilsequences, although there are many similarities to Green and Tao's proof. ## 1. Introduction In 2001, Gowers [1] introduced _Gowers norms_ as a way to measure "pseudorandomness" in Szemeredi's theorem and proved an inverse theorem for the Gowers norm stating that obstructions to Gowers uniformity norms are roughly constant on short arithmetic progressions. In 2010, Green-Tao-Ziegler, motivated by parallel work in ergodic theory [11] and [2], in a series of works [1, 1, 1, 2, 3] identified nilsequences as the obstructions to Gowers uniformity, and used this to generalize Vinogradov's approach of using the circle method to count certain linear configurations in the primes. A key result needed in their analysis is a quantitative equidistribution theorem [1, Theorem 1.16], which we state as follows (relevant definitions such as \(\|\cdot\|_{C^{\infty}[N]}\) can be found in Section 2): **Theorem 1**.: _Let \(F(g(n)\Gamma)\) be a nilsequence on a nilmanifold \(G/\Gamma\) with a \(\delta^{-1}\)-rational Mal'cev basis such that \(F\) has Lipschitz parameter (which is the sum of the Lipschitz constant and the \(L^{\infty}\) norm) \(\leq 1\). If_ \[|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)-\int_{G/\Gamma}Fd\mu|\geq\delta\] _then either \(N\ll_{G/\Gamma,\delta}1\) or there exists a nonzero homomorphism \(\eta:G\to\mathbb{R}\) of modulus at most \(\delta^{-O_{G/\Gamma}(1)}\) which annihilates \(\Gamma\) such that \(\|\eta\circ g\|_{C^{\infty}[N]}\leq\delta^{-O_{G/\Gamma}(1)}\)._ We note that a special case of this theorem is the case when \(G\) is abelian and \(g(n)=g^{n}\) is an orbit, which is as follows: **Proposition 1.1**.: _Let \(\alpha\in\mathbb{R}^{d}/\mathbb{Z}^{d}\) and \(F:\mathbb{R}^{d}/\mathbb{Z}^{d}\to\mathbb{C}\) be a Lipschitz function with Lipschitz parameter \(\leq 1\)._
If_ \[|\mathbb{E}_{n\in[N]}F(\alpha n)-\int_{\mathbb{R}^{d}/\mathbb{Z}^{d}}Fd\mu|\geq\delta\] _then either \(N\ll_{d,\delta}1\) or there exists a vector \(\eta\in\mathbb{Z}^{d}\) of size at most \(\delta^{-O_{d}(1)}\) such that \(\|\eta\cdot\alpha\|_{\mathbb{R}/\mathbb{Z}}\leq\delta^{-O_{d}(1)}/N\)._ A corollary of Theorem 1 is the "Ratner-type factorization theorem" of Green and Tao [1, Corollary 1.20], which is often used in applications (see [1, Definition 1.2] for the definition of "\(\delta\)-equidistributed"): **Theorem 2**.: _Let \(G/\Gamma\) be a filtered nilmanifold with complexity \(M\geq 2\) and \(g(n)\) be a polynomial sequence on \(G\). For each \(A\geq 2\), there exists some \(\delta\) with \(M\leq\delta^{-1}\leq M^{O_{G/\Gamma,A}(1)}\) and a factorization \(g=\epsilon g_{1}\gamma\) where_ * \(\epsilon\) _is_ \((\delta^{-1},N)\)_-smooth, meaning that for all_ \(n\in[N]\)_,_ \(d(\epsilon(n-1),\epsilon(n))\leq\frac{\delta^{-1}}{N}\)_;_ * \(\gamma\) _is_ \(\delta^{-1}\)_-periodic; and_ * \(g_{1}\) _is_ \(\delta^{A}\)_-equidistributed inside a subnilmanifold_ \(\tilde{G}/\tilde{\Gamma}\) _with complexity at most_ \(\delta^{-1}\) _where_ \(\tilde{G}\) _is a subgroup of_ \(G\) _with rationality at most_ \(\delta^{-1}\)_._ The corresponding abelian degree one case is the following: **Proposition 1.2**.: _Let \(\alpha\in\mathbb{R}^{d}/\mathbb{Z}^{d}\). Then given \(M\geq 2\) and \(A\geq 2\), there exists some \(\delta\) with \(M\leq\delta^{-1}\leq M^{O_{d,A}(1)}\) and a factorization \(\alpha=\epsilon+\alpha^{\prime}+\gamma\) where_ * \(\|\epsilon\|_{\mathbb{R}/\mathbb{Z}}\leq\frac{\delta^{-1}}{N}\)_;_ * _there exists some integer_ \(k\leq\delta^{-1}\) _such that_ \(k\gamma\in\mathbb{Z}^{d}\)_; and_ * \(n\mapsto\alpha^{\prime}n\) _is_ \(\delta^{A}\)_-equidistributed inside a subgroup_ \(\tilde{G}\) _of_ \(\mathbb{R}^{d}/\mathbb{Z}^{d}\) _of rationality at most_ \(\delta^{-1}\)_._ In [1], the authors work out that the bounds "\(M^{O_{G/\Gamma,A}(1)}\)" and "\(\delta^{-O_{G/\Gamma}(1)}\)" correspond to quantities double exponential in the dimension. They raise the question of whether such theorems can be single exponential in the dimension. In the negative direction, a simple example below shows that even in the abelian degree one case, the Ratner-type factorization theorem cannot have bounds single exponential in dimension. **Example**.: _Consider the example \(g(n)=(\alpha n,\delta^{-1}\alpha n,\delta^{-2}\alpha n,\delta^{-4}\alpha n,\ldots,\delta^{-2^{d}}\alpha n)\) (a linear orbit on \(\mathbb{T}^{d+2}\)) with \(\alpha\) highly irrational. The factorization theorem with \(A=\log(1/\delta)\) and \(M=2\) yields that this \(\delta^{2^{d+1}}\)-equidistributes in the subtorus \((x,\delta^{-1}x,\ldots,\delta^{-2^{d}}x)\), which gives losses double exponential in dimension. This is because the threshold for equidistribution in \(g(n)\) at each step in the iteration barely keeps up with how fast the rationality of the torus increases, causing us to continue to need to pass to a subtorus. Note that if we worked with \(g_{1}(n)=(\alpha n,\delta^{-2}\alpha n,\delta^{-2^{2}}\alpha n,\ldots,\delta^{-2^{d}}\alpha n)\), then the factorization theorem would yield equidistribution in \((x,\delta^{-2}x,\ldots,\delta^{-2^{2^{O(1)}}}x,\ldots,x_{d+2})\), which gives polynomial losses.
In this case, the threshold for equidistribution is far less than how fast the rationality of the torus increases, causing us to not need to pass to a subtorus after \(O(1)\) many steps._ One possible reason the above example occurs is that wanting equidistribution for \(g(n)\Gamma\) is a statement about every single observable (where an "observable" refers to a Lipschitz function), which for a lot of applications is stronger than necessary. Each time we apply an equidistribution result to lower the dimension, we must deal with a finer scale of equidistribution, which causes the number of characters we must consider while applying an equidistribution result to increase. For equidistribution on a torus, this problem can be fixed by looking at a single observable, since each observable may be uniformly approximated by a finite set of Fourier characters. We can then just work with that finite set of characters each time we apply an equidistribution result to lower the dimension of the torus by one, to avoid the problem of the space of characters we must consider increasing too fast. The example above also foreshadows future obstructions we must overcome in our quest to improve quantitative bounds for the equidistribution of nilsequences. Many such arguments related to equidistribution on tori or nilmanifolds proceed via an induction on dimensions. Induction on dimensions is ultimately inevitable, but we must prevent ourselves from incurring losses double exponential in dimension. This means that if \(\delta\) is a parameter in the proof, we cannot allow a seemingly harmless iteration of \(\delta\mapsto\delta^{2}\), since after an induction on dimensions, this would lead to losses of \(\delta^{2^{d}}\). 1 Footnote 1: Another way and perhaps a more satisfying way to rescue the factorization theorem in the abelian case is to use dilated tori as in [10] to counteract the increase in Lipschitz constant when we pass to a rational subgroup. The point is that having a larger Lipschitz constant is more expensive than increasing the volume of the torus, as [10, Lemma 7.2] shows. Unfortunately, the author was unable to find such a generalization to arbitrary nilsequences. Despite this example, previous work of Gowers and Wolf [11], Green and Tao [10], and the author [12] shows that an equidistribution theory for degree two nilsequences with losses single exponential in dimension is both possible and useful. However, these previous works stop frustratingly short of proving a result of the flavor of Theorem 1 with good quantitative bounds, instead only proving that a non-equidistributed degree two nilsequence is (with losses single exponential in dimension) a degree one nilsequence in _some_ way. From the above theorem, one can deduce a quite explicit description of _how_ a non-equidistributed degree two nilsequence is a degree one nilsequence via the Ratner-type factorization theorem. This extra information is what's necessary for applications such as the complexity one polynomial Szemeredi theorem [12], which the previous equidistribution approaches of [10], [11], and [12] seem to be unable to replicate. To drive this point home, we note that while [11] obtains quantitatively stronger results, it isn't clear from their work how to generalize their main results to higher complexity systems, while Altman's [1, 2], Candela-Sisask's [13], Green-Tao's [10], and Kuca's [14, 15] powerful approaches seem far more flexible and comparatively "easy."
While inflexible, these previous approaches suggest that, at least for degree two nilsequences, it may be possible to deduce a more robust equidistribution theorem in the flavor of Theorem 1 with good bounds. The proof of our main theorem (in particular Theorem 9) makes this a reality, and it turns out that understanding how the previous approaches of [11], [10] and [12] relate to the equidistribution on nilmanifolds leads to the key of the proof, Lemma 3.1, which we term the "refined bracket polynomial lemma." Furthermore, the method used to prove the corresponding two-step statement (Theorem 7) generalizes to arbitrary nilsequences: **Theorem 3**.: _Let \(N\) be a prime number, \(0<\delta<\frac{1}{10}\), and \(M\geq 1\) real. Let \(F(g(n)\Gamma)\) be a periodic nilsequence modulo \(N\) (that is, \(g(n+N)\Gamma=g(n)\Gamma\) for all \(n\)) with dimension \(d\), complexity \(M\), Lipschitz parameter \(\leq 1\), step \(s\), and degree \(k\). Suppose \(F\) is a Lipschitz vertical character with nonzero frequency \(\xi\) with \(|\xi|\leq(\delta/M)^{-1}\). If_ \[|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)|\geq\delta\] _then either \(N\ll(\delta/M)^{-O_{s,k}(d)^{O_{s,k}(1)}}\) or else \(n\mapsto F(g(n)\Gamma)\) is a nilsequence on some nilmanifold \(\tilde{G}/\tilde{\Gamma}\) of degree \(\leq k\) and step \(\leq s-1\) with Lipschitz parameter and complexity at most \((\delta/M)^{-O_{s,k}(d)^{O_{s,k}(1)}}\) and dimension \(\leq d-1\)._ It turns out that we can take \(\tilde{G}/\tilde{\Gamma}\) to be a \((\delta/M)^{-O_{s,k}(d)^{O_{s,k}(1)}}\)-rational subnilmanifold of the projection of \(G/\Gamma\) to the image of \(\xi\). See Theorem 9. The corresponding one-step statement is the following: **Proposition 1.3**.: _Let \(N\) be prime and \(p(n)=\alpha_{0}+\alpha_{1}n+\cdots+\alpha_{k}n^{k}\) be a polynomial whose coefficients are rational with denominator dividing \(N\). If_ \[|\mathbb{E}_{n\in[N]}\exp(2\pi ip(n))|\geq\delta\] _then either \(N\ll\delta^{-O_{k}(1)}\) or else \(\|\alpha_{i}\|_{\mathbb{R}/\mathbb{Z}}=0\) for all \(i=1,\ldots,k\)._ We note that Theorem 3 sidesteps the above example, since the above example illustrates problems that may occur if one considers _all_ observables, and here, we only consider _one_ observable. This can be thought of as a possible cyclic group analogue of the "partition rank vs. analytic rank problem" that occurs in the study of structure vs. randomness for obstructions to Gowers uniformity in higher order Fourier analysis over \(\mathbb{F}_{p}^{n}\). As will be explained below, the quantitative bounds we obtain here are analogous to the partition rank being polynomial in the analytic rank. In fact, our deduction of Theorem 3 gives single exponential bounds for Theorem 1 in the case of periodic nilsequences, as indicated by Theorem 9. Furthermore, in Theorem 10 we iterate Theorem 3 to obtain an analogue of the Ratner-type factorization theorem for a single observable. In subsequent work [1], the author proves a similar result for general and multiparameter nilsequences. Since the proof for periodic nilsequences is cleaner and conceptually easier for the reader to follow, we have decided to separate the proof of the periodic nilsequences case here. Indeed, in the periodic nilsequences case, as observed in [1], [1, 2], and [1], the smooth term \(\epsilon\) in the Ratner-type factorization theorem disappears, or rather, becomes a constant term.
Furthermore, since our nilsequences are periodic modulo a prime, we may also eliminate the rational part \(\gamma\) via a change of variables without having to pass to subprogressions, using the fact that modular inverses for nonzero elements exist modulo a prime. As will be explained below, the periodic nilsequences case is already interesting and is enough for applications to arithmetic combinatorics. Another benefit of isolating the periodic nilsequences case is that all of the previous literature for periodic nilsequences is _ad hoc_ and uses various tricks that are derived from the general nilsequences case, while here, we provide a direct argument for the periodic nilsequences case. Additionally, by [10], it is the periodic nilsequences case that is the "partition rank vs. analytic rank problem" for cyclic groups. ### Relevance to quantitative higher order Fourier analysis In 2018, Manners [10] showed a quantitative Gowers inverse theorem, obtaining the following result: **Theorem 4**.: _Let \(f:\mathbb{Z}/N\mathbb{Z}\to\mathbb{C}\) be one-bounded such that \(\|f\|_{U^{s+1}(\mathbb{Z}/N\mathbb{Z})}\geq\delta\) with \(s\) fixed and \(N\) prime. Then there exists a nilsequence \(F(g(n)\Gamma)\) with dimension \(D(\delta)=\delta^{-O(1)}\), complexity \(M(\delta)=\exp(O(\exp(\delta^{-O(1)})))\), \(|F(g(n)\Gamma)|\leq 1\), and Lipschitz constant \(\leq 1\) such that_ \[|\langle f,F(g(n)\Gamma)\rangle|\geq c(\delta)\] _where \(c(\delta)=\exp(-O(\exp(\delta^{-O(1)})))\)._ In fact, a calculation indicates that Manners obtains that the complexity is double exponential in the dimension2, and \(c(\delta)=(\delta/M(\delta))^{O(D(\delta))^{O(1)}}\). The conjectured quasi-polynomial inverse theorem states the following: Footnote 2: The key obstruction to obtaining better complexity bounds also seems to be the difficulty in overcoming an induction on dimensions. The author is not aware, however, of any connection between the obstruction there and the induction on dimensions obstruction we run into. **Conjecture 1**.: _One can take \(D(\delta)=\log(1/\delta)^{O(1)}\), \(M(\delta)=\exp(O(D(\delta))^{O(1)})\), and \(c(\delta)=(\delta/M(\delta))^{O(D(\delta))^{O(1)}}\)._ In some sense, Manners's result is two (iterated) exponentials away from the conjectured quasi-polynomial inverse theorem. The only case where the quasi-polynomial inverse theorem is known is the \(U^{3}\) setting, obtained by Sanders [10]. There, a calculation shows that we can take the complexity of the two-step nilmanifold obtained to be \(O(1)\). Thus, a moral consequence of our work is that, assuming a quasi-polynomial inverse theorem, if one wanted to further apply an equidistribution theorem to the nilsequence obtained from the inverse theorem, one would only end up with quasi-polynomial losses. In other words, if one were content with quasi-polynomial losses and if one were to assume Conjecture 1, then applying an equidistribution result of nilsequences \(O(1)\) many times is inexpensive. This is illustrated in the two applications we give below as well as in [11]. The analogue of Theorem 3 in higher order Fourier analysis over \(\mathbb{F}_{p}^{n}\) is the result that the partition rank of a tensor is polynomial in the analytic rank of the tensor (see e.g., [12] for definitions).
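For the reader's convenience, we recall the standard definition of the Gowers uniformity norm appearing in Theorem 4 above and in Conjecture 2 below: for a finite abelian group \(G\) and \(f:G\to\mathbb{C}\), \[\|f\|_{U^{s+1}(G)}^{2^{s+1}}:=\mathbb{E}_{x,h_{1},\ldots,h_{s+1}\in G}\prod_{\omega\in\{0,1\}^{s+1}}\mathcal{C}^{|\omega|}f(x+\omega_{1}h_{1}+\cdots+\omega_{s+1}h_{s+1}),\] where \(\mathcal{C}\) denotes complex conjugation and \(|\omega|=\omega_{1}+\cdots+\omega_{s+1}\); the norm \(\|\cdot\|_{U^{s+1}[N]}\) is then defined by embedding \([N]\) into a cyclic group of suitably larger order and normalizing appropriately.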
The conjectured quasi-polynomial inverse theorem in this setting is as follows: **Conjecture 2**.: _Let \(f:\mathbb{F}_{p}^{n}\to\mathbb{C}\) be one-bounded such that \(\|f\|_{U^{s+1}(\mathbb{F}_{p}^{n})}\geq\delta\) with \(s\ll p\). Then there exists a polynomial \(P:\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}\) of degree at most \(s\) such that_ \[|\langle f,e(P(n))\rangle|\gg_{s,p}\exp(-\log(1/\delta)^{O(1)}).\] It was shown by Janzer [13] and Milicevic [12] that the partition rank is polynomial in the analytic rank. Like the cyclic group case, a moral consequence of the quasi-polynomial inverse theorem and the result of the partition rank being polynomial in the analytic rank is that applying such a partition rank vs. analytic rank result yields at most quasi-polynomial losses, since we would likely be able to work with polynomials with analytic rank at most \(\log(1/\delta)^{O(1)}\). An illustration of this is [13], where the authors are able to show strong orthogonality of the Mobius function with degree two polynomials3. For progress on Conjecture 2, see [1, 12, 13]. For recent work on the partition rank vs. analytic rank problem, see e.g., [1, 1, 14]. ### Applications In subsequent work, we shall deduce applications of Theorem 3 to relevant problems in arithmetic combinatorics. One such application is the following: **Theorem 5**.: _Let \(N\) be a large prime and \(A\subseteq\mathbb{Z}/N\mathbb{Z}\) be a subset lacking the configuration \((x,x+P(y),x+Q(y),x+P(y)+Q(y))\) where \(P,Q\in\mathbb{Z}[x]\) are linearly independent and have zero constant coefficient and \(y\) ranges over \(\mathbb{Z}/N\mathbb{Z}\). Then_ \[|A|\ll_{P,Q}\frac{N}{\exp(c_{P,Q}\log^{c_{P,Q}}(N))}\] _for some constant \(c_{P,Q}>0\)._ This theorem is an improvement of a previous result of the author [11], which obtained a similar qualitative result as above but with the quantitative bounds \[|A|\ll_{P,Q}\frac{N}{\log_{O_{P,Q}(1)}(N)}\] where \(\log_{O_{P,Q}(1)}(N)\) is an iterated logarithm. There, [11] remarked that improved bounds for [11, Lemma 6.1] (combined with Sanders' work [12]) would yield Theorem 5. We shall deduce an improvement to [11, Lemma 6.1] here in Appendix B, but postpone further details elsewhere. The result of [11] for general nilsequences also gives a simpler proof of the main result in [11], one which also generalizes if one assumes Conjecture 1: **Theorem 6**.: _Assume Conjecture 1. Then_ \[\|\mu\|_{U^{s+1}[N]}\ll_{A}^{ineff}\log^{-A}(N)\] \[\|\Lambda-\Lambda_{Q}\|_{U^{s+1}[N]}\ll_{A}^{ineff}\log^{-A}(N)\] _where \(\Lambda_{Q}\) is defined in [11]._ This will be deduced in [11] along with the equidistribution theorem for general nilsequences. Additionally, it seems plausible that Theorem 3 can find applications in regularity and counting lemmas. For interesting recent work in this direction, see [1, 1, 12]. ### Organization of the paper In Section 2, we define the notation used in the paper. In Section 3, we consider a warm-up problem for two-step nilsequences, and arrive at a crucial lemma, which is the refined bracket polynomial lemma. We shall deduce this lemma in Section 3 via an argument of T. Tao (private communication). This lemma will be used in subsequent sections. In Section 4, we consider a second warm-up problem for more general two-step polynomial sequences. In Section 5, we deduce the main theorem. In Section 6, we deduce a Ratner-type factorization theorem for a single observable.
In Appendix A, we give proofs of some auxiliary lemmas for the proof of the main theorem. In Appendix B, we deduce the improved bounds for [11, Lemma 6.1]. In Appendix C, we give two additional proofs of the refined bracket polynomial lemma, meant to be used in [11]. The first is a more straightforward but more technical proof of the refined bracket polynomial lemma. Readers unfamiliar with the geometry of numbers can refer to the proof there instead of the one in Section 3. The second is a generalization of the proof given in Section 3 adapted to work for arbitrary nilsequences. In Appendix D, we state relevant results in Diophantine approximation and the geometry of numbers. ### Acknowledgements We would like to thank Terry Tao for advisement and for coming up with a simpler proof of the periodic refined bracket polynomial lemma than the proof the author initially came up with. We would also like to thank Jaume de Dios, Ben Johnsrude, Borys Kuca, Freddie Manners, Rushil Raghavan, and David Soukup for helpful discussions and suggestions related to this problem. In particular, when the author was on the verge of giving up on this problem, preparing to explain this problem to Borys led the author to the key inspiration needed to solve it. We are grateful to Joni Teravainen as well for helpful comments and for raising this question (jointly with Terry Tao) as a remark in [TT], where the author first learned of this problem. The author is supported by an NSF Graduate Research Fellowship Grant No. DGE-2034835. ## 2. Notation and Conventions In this section, we will mostly go over notation regarding nilpotent Lie groups. Most of our notation follows [GT1]. First, we set some general notation. Given a function \(f:A\to\mathbb{C}\) defined on a finite set \(A\), we set \[\mathbb{E}_{a\in A}f(a):=\frac{1}{|A|}\sum_{a\in A}f(a).\] We also denote, for \(x\in\mathbb{C}\), \(e(x)=\exp(2\pi ix)\). Given a real number \(r\in\mathbb{R}\), we denote \(\{r\}\in(-1/2,1/2]\) as the difference between \(r\) and a nearest integer to \(r\). Given a general vector \(a=(a_{1},\dots,a_{d})\in\mathbb{R}^{d}\), we denote \(\{a\}=(\{a_{1}\},\dots,\{a_{d}\})\). Nilmanifolds arise naturally as a means to "clean up" bracket polynomials that show up in the inverse theory of Gowers norms. For further references on these objects and how they relate to higher order Fourier analysis, we refer the reader to [T2] or [HK2]. A nilpotent Lie group \(G\) is a group whose _lower central series_ \(G\supseteq G_{(2)}\supseteq G_{(3)}\supseteq\cdots\), where \(G_{(i)}:=[G,G_{(i-1)}]\), eventually reaches the identity element after finitely many steps. If \(s\) is the least integer such that \(G_{(s+1)}\) is the identity, then \(G\) is known as an \(s\)-step nilpotent Lie group. We shall examine \(G/\Gamma\) where \(\Gamma\) is a discrete cocompact subgroup. Objects \(X=G/\Gamma\) are referred to as _nilmanifolds_. \(G_{(s)}\) is known as the _vertical component_4 and \(G/G_{(2)}\) as the _horizontal component_. Following a reduction of Leibman [Le], we shall assume that \(G\) is connected and simply connected. Footnote 4: Note that we use a slightly different convention than [GT1], which works with a filtered nilpotent Lie group \((G_{i})\) of degree \(k\) where the vertical component is \(G_{k}\). A _filtration_ for a nilpotent Lie group \(G\) is a sequence of normal subgroups \((G_{i})_{i=0}^{\infty}\) such that \(G_{0}=G_{1}=G\), \([G_{i},G_{j}]\subseteq G_{i+j}\), and for which \(G_{i}\) is eventually trivial.
The largest \(k\) for which \(G_{k}\) is nontrivial is known as the _degree_ of the filtration. Note that each nilpotent Lie group has the _standard filtration_, which is defined in the above paragraph as \(G_{i}=[G,G_{i-1}]\), but not all filtrations have to be the standard filtration. It is true that the standard filtration is minimal in the sense that given a filtration \((G_{i})\), \(G_{i}\supseteq G_{(i)}\). See [T2, Exercise 1.6.2]. The point of specifying a filtration on a nilpotent Lie group is so one can define general _polynomial sequences_ on nilpotent Lie groups. Intuitively, if one thinks of \(G\) as a subgroup of the unipotent matrices, then polynomial sequences are sequences in \(G\) where all matrix coefficients are polynomials. More specifically, given a filtration \((G_{i})_{i=1}^{\infty}\), we define a _polynomial sequence_ on \(G\) as a sequence \[g(n)=g_{0}g_{1}^{n}g_{2}^{n\choose 2}g_{3}^{n\choose 3}\dots\] where \(g_{i}\in G_{i}\). We shall denote the set of all polynomial sequences as \(\operatorname{poly}(\mathbb{Z},G)\). It can be shown that \(\operatorname{poly}(\mathbb{Z},G)\) forms a group under pointwise multiplication. See [12, Section 1.6]. Given a positive integer \(D\), an element \(g\) of \(G\) is _rational with denominator \(D\)_, or _\(D\)-rational_, if \(g^{D}\in\Gamma\). If a metric on \(G\) is specified (as it will be shortly), a _nilsequence_ is a sequence of the form \(n\mapsto F(g(n)\Gamma)\) where \(F:G/\Gamma\to\mathbb{C}\) is Lipschitz. The _Lipschitz parameter_ of \(F(g(n)\Gamma)\), denoted \(\|F\|_{Lip(G)}\), or simply \(\|F\|_{Lip}\) if the space it is a Lipschitz function in is clear, is the sum of the Lipschitz constant and the \(L^{\infty}(G)\) norm of \(F\). The point of considering Lipschitz functions is that in the case that \(G\) is abelian, they can be quantitatively Fourier approximated. See Lemma 4.6 for generalizations of that fact. We say a nilsequence is _periodic modulo N_ if \(g(n+N)\Gamma=g(n)\Gamma\). Given a nilpotent Lie group \(G\) with a discrete cocompact subgroup \(\Gamma\), we denote \(\mathfrak{g}\) as its Lie algebra with the maps \(\exp:\mathfrak{g}\to G\) and \(\log:G\to\mathfrak{g}\) the exponential and logarithm maps. Given a filtration \((G_{i})\) of \(G\), a _Mal'cev basis_ of \(G/\Gamma\) is a basis \(\{X_{1},\dots,X_{d}\}\) of \(\mathfrak{g}\) satisfying the following: * For each \(j=0,\dots,d-1\), the subspace \(\mathfrak{h}_{j}=\operatorname{span}\{X_{j+1},\dots,X_{d}\}\) is an ideal of \(\mathfrak{g}\) and hence \(H_{j}:=\exp(\mathfrak{h}_{j})\) is a normal subgroup of \(G\). * If \(d_{i}=\dim(G_{i})\), we have \(H_{d-d_{i}}=G_{i}\). * Each \(g\in G\) may be written uniquely as \(\exp(t_{1}X_{1})\exp(t_{2}X_{2})\cdots\exp(t_{d}X_{d})\). * \(\Gamma\) consists of the elements in \(G\) for which \(t_{i}\in\mathbb{Z}\). It is a result of Mal'cev [13] that these coordinates exist on a nilmanifold corresponding to a connected and simply connected nilpotent Lie group. Given a Mal'cev basis, we let \(\psi:G\to\mathbb{R}^{d}\) be the coordinate map. A nilmanifold \(G/\Gamma\) together with a filtration \(G_{i}\) and a Mal'cev basis has complexity \(M\) if for all \(i,j\) \[[X_{i},X_{j}]=\sum_{k}c_{ijk}X_{k}\] and \(c_{ijk}\) is rational with numerator and denominator at most \(M\). Similarly, a subgroup \(G^{\prime}\) of \(G\) is \(Q\)-rational if its Lie algebra generators \(X_{i}^{\prime}\) can be written as rational combinations of the \(X_{i}\)'s with numerator and denominator at most \(Q\).
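As a concrete illustration of the above definitions, consider the classical example of the Heisenberg nilmanifold: \[G=\left\{\begin{pmatrix}1&x&z\\ 0&1&y\\ 0&0&1\end{pmatrix}:x,y,z\in\mathbb{R}\right\},\qquad\Gamma=\left\{\begin{pmatrix}1&a&c\\ 0&1&b\\ 0&0&1\end{pmatrix}:a,b,c\in\mathbb{Z}\right\}.\] Here \(G\) is two-step nilpotent with \(G_{(2)}=[G,G]\) the one-parameter subgroup \(\{x=y=0\}\), and the lower central series furnishes a degree two filtration \(G_{0}=G_{1}=G\), \(G_{2}=[G,G]\), \(G_{3}=\{\mathrm{id}\}\), so that \(d=3\) and \(d_{2}=1\). The strictly upper triangular elementary matrices \(E_{12},E_{23},E_{13}\) (in that order) form a Mal'cev basis of complexity \(O(1)\), and a polynomial sequence adapted to this filtration takes the form \(g(n)=g_{0}g_{1}^{n}g_{2}^{n\choose 2}\) with \(g_{1}\in G\) and \(g_{2}\in G_{2}\).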
For convenience, when we say that a nilmanifold \(G/\Gamma\) has _degree \(k\)_ and _complexity \(M\)_, we implicitly specify a filtration of degree \(k\) and a Mal'cev basis adapted to that filtration of complexity \(M\). Given a Mal'cev basis with a coordinate map \(\psi\), we may define a distance map \(d=d_{\psi}\) to be right invariant and satisfying \[d_{\psi}(x,y):=\inf\{\sum_{i=1}^{n}\min(|\psi(x_{i}^{-1}x_{i-1})|,|\psi(x_{i-1}^{-1}x_{i})|):x_{0}=x,x_{n}=y\}.\] This defines a metric on \(G/\Gamma\) via \(d(x\Gamma,y\Gamma):=\inf\{d(x^{\prime},y^{\prime}):x^{\prime}\Gamma=x\Gamma,y^{\prime}\Gamma=y\Gamma\}\). _Horizontal characters_ on \(G/\Gamma\) are homomorphisms \(\eta:G\to S^{1}\) which annihilate \(\Gamma\) and hence also \([G,G]\) since \(S^{1}\) is abelian. They are referred to as such since \(G/[G,G]\Gamma\) is known as the _horizontal torus_. Since we were given a Mal'cev basis, each horizontal character \(\eta\) corresponds to a vector \(k\in\mathbb{Z}^{d}\). Thus, a horizontal character lifts to a homomorphism \(\eta:G\to\mathbb{R}\) such that \(\eta(\Gamma)\subseteq\mathbb{Z}\). **Definition**.: Given a horizontal character \(\eta\) and an element \(w\in G/[G,G]\), we define an inner product \(\langle\eta,w\rangle:=\eta(w)\) and we say that \(w\) and \(\eta\) are _orthogonal_ (or correspondingly \(w\) is orthogonal to \(\eta\) or \(\eta\) is orthogonal to \(w\)) if \(\langle\eta,w\rangle=0\). Since \(G/[G,G]\) can be identified with its Lie algebra (and by the orthogonal subspace to \([\mathfrak{g},\mathfrak{g}]\) with respect to the chosen Mal'cev basis), it makes sense to say that a set of elements of \(G/[G,G]\) or a set of horizontal characters on \(G\) are _linearly independent_, and it makes sense to talk about the _size_ of an element \(w\in G/[G,G]\). Given a character \(\xi:G_{(s)}\to S^{1}\), a _vertical character_ or _nilcharacter_ of _frequency_ \(\xi\) is \(F:G/\Gamma\to\mathbb{C}\) such that for any \(g_{s}\in G_{(s)}\) (\(G\) is \(s\)-step nilpotent), we have \(F(g_{s}x)=e(\xi(g_{s}))F(x)\). The terminology "vertical character" is in reference to the _vertical torus_, \(G_{(s)}/(\Gamma\cap G_{(s)})\), which is isomorphic to \(\mathbb{T}^{d_{s}}\). Similarly, if \(H\) is a subgroup of the center of \(G\), we shall define \(H\)-characters with frequency \(\xi\) as functions \(F:G/\Gamma\to\mathbb{C}\) such that \(F(hx)=e(\xi(h))F(x)\) for any \(h\in H\). Note that given elements \(w_{1},\ldots,w_{s-1}\) in \(\Gamma/(\Gamma\cap[G,G])\), the map \(g\mapsto\xi([\ldots[[g,w_{1}],w_{2}],\ldots,w_{s-1}])\) defines a horizontal character. Since the notation \([\ldots[[g,w_{1}],w_{2}],\ldots,w_{s-1}]\) will appear very often, we introduce the following notation as short-hand: **Definition**.: Given \(g_{1},\ldots,g_{k}\) in a group \(G\), we define \[[g_{1},g_{2},\ldots,g_{k}]:=[\ldots[[g_{1},g_{2}],g_{3}],\ldots,g_{k}].\] The _size_ of a horizontal character \(\eta\), denoted \(|k|\) or \(\|k\|_{\infty}\), is the \(L^{\infty}\) norm of the vector \(k\) considered above5. Similarly, the size of a vertical character \(\xi\) is the \(L^{\infty}\) norm of \(\xi\) when considered as a character of \(\mathbb{T}^{d_{s}}\) under the isomorphism of \(G_{(s)}/(G_{(s)}\cap\Gamma)\) and \(\mathbb{T}^{d_{s}}\). Footnote 5: Our notion of "size" agrees with the notion of "modulus" given in [GT1] We will now define "smoothness norms." Firstly, recall that for \(x\in\mathbb{R}\), \(\|x\|_{\mathbb{R}/\mathbb{Z}}\) is the distance from \(x\) to the nearest integer.
For a polynomial \(p(n)=a_{0}+a_{1}n+a_{2}n^{2}+\cdots+a_{d}n^{d}\), we define \[\|p(n)\|_{C^{\infty}[N]}:=\max_{1\leq j\leq d}N^{j}\|a_{j}\|_{\mathbb{R}/\mathbb{Z}}.\] Notice the presence of the variable "\(n\)" in the left hand side of the above equation. When doing calculations with these smoothness norms involving polynomials in more than one variable, we shall perform the calculations of the \(C^{\infty}[N]\) norm as if the polynomial were a polynomial in \(n\). One trick used often in the proof is the elimination of a rational part. Specifically, suppose \(g(n)\) is a polynomial sequence of degree \(k<N\) and \(g(n)\Gamma\) is periodic modulo \(N\), and that we can factor \(g(n)\Gamma=g_{1}(n)\gamma(n)\Gamma\) for \(\gamma\) that is \(P\)-rational for \(P\) relatively prime to \(N\), \(\gamma(0)=1\), and \(g_{1}\) lying in some subgroup \(G_{1}\) of \(G\) with \(\Gamma_{1}=G_{1}\cap\Gamma\). Picking \(m\) such that \(n\equiv Pk!m\pmod{N}\), we obtain \(g(Pk!m)\Gamma=g_{1}(Pk!m)\Gamma\). Since \(g_{1}(Pk!m)\) lies inside \(G_{1}\), we may naturally identify \(g_{1}(Pk!m)\Gamma\) with \(g_{1}(Pk!m)\Gamma_{1}\). Thus, given such a factorization, we may restrict attention to the sequence \(m\mapsto g_{1}(Pk!m)\Gamma_{1}\), which is again periodic modulo \(N\). In the non-periodic case, one would just focus on subprogressions of common difference \(P\), but this is rather cumbersome and is one of the reasons why we have decided to isolate the proof of the periodic case as a separate part. Furthermore, any horizontal character (induced by \(G\)) that \(g_{1}(Pk!\cdot)\) satisfies, meaning that \(\|\eta\circ g_{1}(Pk!\cdot)\|_{C^{\infty}[N]}=0\), should also be satisfied by \(g(\cdot)\Gamma\) since \(P\) is relatively prime to \(N\). Of course, we used the following observation above: if \(q\) is an integer relatively prime to \(N\) and \(\alpha\) is a rational with denominator \(N\) (a prime), and if \[\|q\alpha\|_{\mathbb{R}/\mathbb{Z}}=0\] then \[\|\alpha\|_{\mathbb{R}/\mathbb{Z}}=0.\] (Indeed, writing \(\alpha=a/N\), the hypothesis gives \(N\mid qa\), and since \(\gcd(q,N)=1\), this forces \(N\mid a\), i.e., \(\alpha\in\mathbb{Z}\).) We highlight the use of this observation here since this observation will be used many more times throughout this document. We record this trick in the following lemma: **Lemma 2.1**.: _Let \(g(n)\) be a polynomial sequence on a nilpotent Lie group \(G\) with \(g(0)=1\) and let \(\Gamma\) be a discrete cocompact subgroup of \(G\) such that \(g(n)\Gamma\) is periodic modulo \(N\). Suppose \(g\) is degree \(k<N\) and that we may factorize \(g(n)=g_{1}(n)\gamma(n)\) with \(g_{1}(0)=1\) and \(\gamma(n)\) is \(P\)-rational and \(\gamma(n)\Gamma\) is periodic modulo \(P\) for some \(P\) relatively prime to \(N\). Then \(\|\eta\circ g\|_{C^{\infty}[N]}=0\) if \(\|\eta\circ g_{1}\|_{C^{\infty}[N]}=0\) for any horizontal character \(\eta\)._ Another trick we use reduces to the case that \(g(0)=1\) and \(|\psi(g(1))|\leq 1/2\).
We encapsulate this in the following lemma: **Lemma 2.2**.: _Given a nilsequence \(F(g(n)\Gamma)\), there exists a nilsequence \(\tilde{F}(\tilde{g}(n)\Gamma)\) such that_ * \(F(g(n)\Gamma)=\tilde{F}(\tilde{g}(n)\Gamma)\) _for all_ \(n\in\mathbb{Z}\)__ * \(\|\tilde{F}\|_{Lip(G)}\leq M^{O_{s,k}(1)}\|F\|_{Lip(G)}\)__ * \(\tilde{g}(0)=1\) _and_ \(|\psi(\tilde{g}(1))|\leq\frac{1}{2}\)__ * _For any horizontal character_ \(\eta\)_,_ \(\|\eta\circ g\|_{C^{\infty}[N]}=\|\eta\circ\tilde{g}\|_{C^{\infty}[N]}\)__ * _If_ \(F\) _is a vertical character of frequency_ \(\xi\)_, then_ \(\tilde{F}\) _is also a vertical character of frequency_ \(\xi\)_._ Proof.: To prove this, we factorize \(g(0)=\{g(0)\}[g(0)]\) where \(|\psi(\{g(0)\})|\leq 1/2\) and \([g(0)]\in\Gamma\) (which is not necessarily unique). Letting \(g_{1}(n)=\{g(0)\}^{-1}g(n)g(0)^{-1}\{g(0)\}\) and \(\tilde{F}(x)=F(\{g(0)\}x)\), it follows that \(\tilde{F}\) has Lipschitz constant \(M^{O_{s,k}(1)}\|F\|_{Lip}\) and \(\tilde{F}\) is a vertical character of frequency \(\xi\). This allows us to reduce to the case that \(g(0)=1\). To reduce to the case that \(|\psi(g(1))|\leq 1/2\), we once again factorize \(g_{1}(1)=\{g_{1}(1)\}[g_{1}(1)]\) with \([g_{1}(1)]\in\Gamma\). Letting \(\tilde{g}(n)=g_{1}(n)[g_{1}(1)]^{-n}\), we have that \(|\psi(\tilde{g}(1))|\leq 1/2\). Furthermore, we have \[\eta(\tilde{g}(n))\equiv\eta(g_{1}(n))+\eta([g_{1}(1)]^{-n})\equiv\eta(\{g(0)\}^{-1})+\eta(g(n))\pmod{1}.\] Since \(\eta(\{g(0)\}^{-1})\) and \(\eta(g(0)^{-1})\) contribute to the constant term of the polynomial, which does not affect the smoothness norm, it follows that \(\|\eta\circ g\|_{C^{\infty}[N]}=\|\eta\circ\tilde{g}\|_{C^{\infty}[N]}\), as we desired. One final trick we will mention here is a trick we can use to reduce to the one dimensional vertical torus case: **Lemma 2.3**.: _Suppose \(F\) is a vertical character with frequency \(\xi\) of size at most \(L\) and \(F(g(n)\Gamma)\) is a nilsequence on a nilmanifold \(G/\Gamma\). Then there exists a nilsequence \(\tilde{F}(\tilde{g}(n)\tilde{\Gamma})\) on a nilmanifold \(\tilde{G}/\tilde{\Gamma}\) such that_ * _The vertical torus of the nilmanifold is one-dimensional._ * \(\tilde{F}(\tilde{g}(n)\tilde{\Gamma})=F(g(n)\Gamma)\) * \(\tilde{G}=G/\text{ker}(\xi)\) _and_ \(\tilde{\Gamma}=\Gamma/(\Gamma\cap\text{ker}(\xi))\)_._ * _The horizontal direction of_ \(G\) _and the horizontal direction of_ \(\tilde{G}\) _are isomorphic._ * \(\|\tilde{F}\|_{\text{Lip}(\tilde{G})}\leq L^{O_{s,k}(d)^{O_{s,k}(1)}}\|F\|_{\text{Lip}(G)}\) _and_ \(\tilde{G}/\tilde{\Gamma}\) _has complexity at most_ \(ML^{O_{s,k}(d)^{O_{s,k}(1)}}\)_._ Proof.: Since \(F\) is invariant under the kernel of \(\xi\), \(F\) descends to a map \(\tilde{F}\) on \(\tilde{G}\) via a quotient map \(\pi:G\to\tilde{G}\). Thus, defining \(\tilde{g}=\pi(g)\), it follows that \(\tilde{F}(\tilde{g}(n)\tilde{\Gamma})=F(g(n)\Gamma)\). The complexity and Lipschitz parameter bounds follow from invoking Cramer's rule (i.e., Lemma A.7) to choose a Mal'cev basis orthogonal to \(\xi\). To show that the vertical direction is one-dimensional, note that if \(G\) is \(s\)-step, the map \((g_{1},g_{2},\ldots,g_{s})\mapsto[g_{1},g_{2},\ldots,g_{s}]\) descends to a map on \(\tilde{G}\) and is only nonzero if \(\xi([g_{1},g_{2},\ldots,g_{s}])\) is nonzero. Finally, to show that the horizontal directions are isomorphic, we note that (set theoretically) \([\tilde{G},\tilde{G}]=[G,G]/\text{ker}(\xi)\) by considering both sides as cosets of \(\text{ker}(\xi)\).
Then by the third isomorphism theorem, it follows that \(G/[G,G]\cong\tilde{G}/[\tilde{G},\tilde{G}]\). Furthermore, in the induced quotient map \(G/[G,G]\to(\tilde{G}/[\tilde{G},\tilde{G}])/(\tilde{\Gamma}/(\tilde{\Gamma}\cap[\tilde{G},\tilde{G}]))\), an element maps to the identity if and only if, under the original isomorphism, it maps to an element in \(\tilde{\Gamma}/(\tilde{\Gamma}\cap[\tilde{G},\tilde{G}])\), which happens if and only if the element lies in \(\Gamma/(\Gamma\cap[G,G])\). Thus, the horizontal tori are (naturally) isomorphic. _Remark_.: We note that it is not necessarily true that if \(H\) lies in the center of \(G\), the horizontal component of \(G/H\) agrees with the horizontal component of \(G\). This is because the set theoretic equivalence we used gives \([G/H,G/H]=[G,G]H/H\), so \((G/H)/([G/H,G/H])\cong G/H[G,G]\). ### Asymptotic notation We will specify asymptotic notation here. We say that \(f=O(g)\) if there exists some absolute constant \(C\) such that \(|f|\leq C|g|\). If \(h\) is a variable, we say that \(f=O_{h}(g)\) if there exists a constant \(C_{h}\) depending on \(h\) such that \(|f|\leq C_{h}|g|\). We shall also adopt Vinogradov's notation and refer to \(f=O(g)\) as \(f\ll g\) and \(f=O_{h}(g)\) as \(f\ll_{h}g\). In this paper, \(s\) will often denote the "step" of a nilmanifold, and \(k\) the "degree" of the nilmanifold. Since we are in the setting where they are bounded, \(O(g)\) will actually often be \(O_{s,k}(g)\). Since in applications to arithmetic combinatorics \(s\) and \(k\) are often constant, we make no effort to specify the explicit losses in terms of \(s\) and \(k\), though, since there is an iteration in \(s\) and \(k\), we anticipate that the losses are double exponential in those parameters. In an effort to shorthand a lot of the exponentials and quantities in [TT, Appendix A], the authors adopted the use of "\(\text{poly}_{m}(\delta)\)" (denoted in our notation as \(\text{poly}_{d}(\delta)\)) for any quantity lower bounded by \(\exp(-\exp(d^{O_{s,k}(1)}))\delta^{\exp(d^{O_{s,k}(1)})}\). Since many of our quantities are bounded above by a similarly cumbersome quantity \((\delta/M)^{-O_{s,k}(d)^{O_{s,k}(1)}}\), we shall adopt a similar practice and instead denote by \(c_{1}(\delta)\) any quantity lower bounded by \((\delta/M)^{O_{s,k}(d)^{O_{s,k}(1)}}\). ### Parameters used Given a filtered nilpotent Lie group \(G\) with a discrete cocompact subgroup \(\Gamma\), we will specify some accompanying parameters that will be often used. The dimension of \(G\) will be denoted \(d\), and the dimension of \(G_{i}\) will be denoted \(d_{i}\). We will also specify that \(d_{(j)}\) is the dimension of \(G_{(j)}\), where as we defined above \(G_{(j)}\) is the \(j\)th element of the standard, or lower central, filtration. We will also specify the dimension of the horizontal torus as \(d_{horiz}:=d-d_{(2)}\). As stated above, the step of the nilpotent Lie group will often be denoted \(s\) and the degree of the filtration will often be denoted \(k\). The complexity of the filtered nilpotent Lie group will be denoted \(M\). ## 3. The two-step case The proofs of our results will proceed similarly to the proofs in [11], which follow the strategy of "apply the van der Corput inequality and see what happens." We apply a van der Corput inequality and end up with a nilsequence on the joining \(G\times_{G_{2}}G\).
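For concreteness, the form of the van der Corput inequality we have in mind is the standard one: for a one-bounded function \(f\) supported on \([N]\) and any \(1\leq H\leq N\), \[\Big|\mathbb{E}_{n\in[N]}f(n)\Big|^{2}\ll\frac{1}{H}\sum_{|h|\leq H}\Big|\mathbb{E}_{n\in[N]}f(n+h)\overline{f(n)}\Big|+\frac{H}{N},\] so that if the left-hand side is at least \(\delta^{2}\), then after choosing \(H\) comparable to \(\delta^{2}N\), a proportion \(\gg\delta^{O(1)}\) of the shifts \(h\) satisfy \(|\mathbb{E}_{n\in[N]}f(n+h)\overline{f(n)}|\gg\delta^{O(1)}\) by pigeonhole.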
It will happen that our Lipschitz function will be invariant under the \((G\times_{G_{2}}G)_{k}\)-torus, so we may reduce the degree of our nilsequence and proceed by induction. We will defer proofs of these facts about \(G\times_{G_{2}}G\) to Appendix A. Morally speaking, these computations involving van der Corput and considering the joining are equivalent to what one would arrive at if one worked with a bracket polynomial, used the van der Corput inequality, and performed various reductions in the flavor of [11, Appendix E]. We in fact encourage the reader to work out the degree two bracket polynomial example, despite the inelegance of the computations involved, and compare with the proofs given here and in [11]. In this section, we shall prove the following theorem (see Section 2 for the definition of orthogonality):

**Theorem 7**.: _Let \(N\) be a prime, \(0<\delta<\frac{1}{10}\), and \(g(n)\Gamma\) be a periodic modulo \(N\) polynomial sequence on a two-step nilmanifold \(G/\Gamma\) with complexity \(M\) equipped with the standard filtration. Let \(F:G/\Gamma\to\mathbb{C}\) be a Lipschitz nilcharacter of nonzero frequency \(\xi\) and Lipschitz parameter \(\leq 1\). Suppose_

\[|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)|\geq\delta.\]

_Then either \(N\ll(\delta/M)^{-O_{s,k}(d)^{O_{s,k}(1)}}\) or there exists some integer \(d_{horiz}\geq r\geq 0\) and elements \(w_{1},\ldots,w_{r}\in\Gamma/(\Gamma\cap[G,G])\) and horizontal characters \(\eta_{1},\ldots,\eta_{d_{horiz}-r}\) all bounded by \((\delta/M)^{-O(d)^{O(1)}}\) such that_
* _the_ \(w_{i}\)_'s are linearly independent of each other, the_ \(\eta_{j}\)_'s are linearly independent of each other, and_ \(\langle\eta_{j},w_{i}\rangle=0\) _for all_ \(i\) _and_ \(j\)_;_
* _we have_ \[\|\xi([w_{i},g])\|_{C^{\infty}[N]}=0\] \[\|\eta_{j}\circ g\|_{C^{\infty}[N]}=0.\]

It's worth noting that the subgroup \(\tilde{G}=\{g\in G:\eta_{j}(g)=0,[w_{i},g]=0\ \forall i,j\}\) is an abelian subgroup of \(G\): given two elements \(g,h\in\tilde{G}\), since \(\eta_{j}(h)=0\) and the \(w_{i}\)'s and \(\eta_{j}\)'s are orthogonal, the horizontal component of \(h\) can be spanned by the \(w_{i}\)'s, so to verify that \([g,h]=0\) it suffices to verify that \([w_{i},g]=0\), which is true by definition. In fact, as we show in Lemma A.8, each abelian rational subgroup of \(G\) is a subgroup of some group of this form. Combining this lemma with Lemma A.1, we obtain

**Corollary 3.1**.: _Let \(N\) be a prime, \(0<\delta<\frac{1}{10}\), and \(G/\Gamma\) be a two-step nilpotent Lie group with the standard filtration. Let \(g(n)\Gamma\) be periodic modulo \(N\), with \(g(n)\) a polynomial sequence on \(G\). Let \(F:G/\Gamma\to\mathbb{C}\) be a Lipschitz nilcharacter of nonzero frequency \(\xi\) and Lipschitz parameter \(\leq 1\) with \(|\xi|\leq(\delta/M)^{-1}\). Suppose \(G/\Gamma\) has a one-dimensional vertical torus, and_

\[|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)|\geq\delta.\]

_Then either \(N\ll(\delta/M)^{-O_{s,k}(d)^{O_{s,k}(1)}}\) or we can write \(g(n)\Gamma=\epsilon(n)g_{1}(n)\gamma(n)\Gamma\) where \(\epsilon\) is constant, \(g_{1}(n)\) lies on an abelian subgroup of \(G\) with rationality \((\delta/M)^{-O(d)^{O(1)}}\) and \(\gamma\) is \((\delta/M)^{-O(d)^{O(1)}}\)-rational._

We now prove Theorem 7. 
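Before beginning, let us record the form of the van der Corput inequality that will be invoked repeatedly below; this is a standard formulation (stated, as an aside, for \(1\)-bounded sequences with the convention \(a_{n}=0\) for \(n\notin[N]\)):

\[\Big|\mathbb{E}_{n\in[N]}a_{n}\Big|^{2}\ll\frac{1}{H}\sum_{|h|\leq H}\Big|\mathbb{E}_{n\in[N]}a_{n+h}\overline{a_{n}}\Big|\qquad\text{for any }1\leq H\leq N.\]

In particular, taking \(a_{n}=F(g(n)\Gamma)\) and \(H=N\), if \(|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)|\geq\delta\), then by the pigeonhole principle \(|\mathbb{E}_{n\in[N]}F(g(n+h)\Gamma)\overline{F(g(n)\Gamma)}|\gg\delta^{2}\) for \(\gg\delta^{2}N\) many \(h\in[N]\), which is the form in which the inequality is used below.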
Suppose

\[|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)|\geq\delta.\]

Using the van der Corput inequality, we see that there are \(\delta^{O(1)}N\) many \(h\)'s such that for each such \(h\),

\[|\mathbb{E}_{n\in[N]}F(g(n+h)\Gamma)\overline{F(g(n)\Gamma)}|\geq\delta^{O(1)}.\]

By an argument in Section 2, we can reduce to the case that \(g(0)=1\) and \(|\psi(g(1))|\leq\frac{1}{2}\). By defining \(\tilde{F}_{h}(x,y)=F(\{g(1)^{h}\}x)\overline{F(y)}\), the nonlinear part \(g_{2}(n)=g(n)g(1)^{-n}\), and \(\tilde{g}_{h}(n)=(\{g(1)^{h}\}^{-1}g_{2}(n+h)g(1)^{n}\{g(1)^{h}\},g_{2}(n)g(1)^{n})\), we see that

\[|\mathbb{E}_{n\in[N]}\tilde{F}_{h}(\tilde{g}_{h}(n)\Gamma)|\geq\delta^{O(1)}.\]

Since \(\tilde{g}_{h}(n)\) is contained in the group \(G\times_{G_{2}}G\), and \(\tilde{F}_{h}\) is invariant under \(G_{2}^{\triangle}\), the diagonal subgroup of \(G_{2}^{2}\), and since \([G\times_{G_{2}}G,G\times_{G_{2}}G]=G_{2}^{\triangle}\), it follows that \(\tilde{F}_{h}\) descends to a function on \(G\times_{G_{2}}G/G_{2}^{\triangle}\), which is a one-step nilpotent group. Since \(\tilde{F}_{h}\) is a nontrivial character in the direction \((x,-x)\) in the \(G_{2}^{2}\) coordinate (since it is frequency \(\xi\)), it follows that the integral of \(\tilde{F}_{h}\) is zero on the quotient it descended to. By Lemma A.3, we may decompose \(\eta(g_{1}g_{2},g_{1})=\eta_{1}(g_{1})+\eta_{2}(g_{2})\) with \(\eta_{1}\) and \(\eta_{2}\) bounded by \(c_{1}(\delta)^{-1}\). In this case, we see that we can take \(\eta_{2}=\xi\), so

\[|\mathbb{E}_{n\in[N]}e(n\eta_{1}(g)+\xi(g_{2}(n+h)g_{2}(n)^{-1})+\xi([\{g(1)^{h}\},g(1)^{n}]))|\geq(\delta/M)^{O(d)^{O(1)}} \tag{1}\]

for \(\delta^{O(1)}N\) many elements \(h\in[N]\). The next lemma is a refinement of what Green and Tao refer to as the "bracket polynomial lemma" (see [1, Proposition 5.3]).

**Lemma 3.1** (refined bracket polynomial lemma).: _Let \(\frac{1}{10}>\delta>0\) and \(N,N^{\prime}\) be positive integers. Suppose_

\[|\mathbb{E}_{n\in[N]}e(n\beta+an\cdot\{\alpha h\})|\geq K^{-1}\]

_for \(\delta N^{\prime}\) many \(h\in[N^{\prime}]\) with \(|a|\leq M\), \(\alpha,a\in\mathbb{R}^{d}\). Then there exists a quantity \(c_{2}(\delta,K,M,d)=(\delta/KM)^{O(d)^{O(1)}}\) such that the following holds. Either one of \(N,N^{\prime}\ll c_{2}^{-O(1)}\), or else there exist linearly independent vectors \(w_{1},\ldots,w_{r}\) and \(\eta_{1},\ldots,\eta_{d-r}\) in \(\mathbb{Z}^{d}\), all having size less than \(c_{2}^{-1}\), such that \(\langle w_{i},\eta_{j}\rangle=0\) and_

\[|w_{i}\cdot a|\leq c_{2}^{-1}/N^{\prime},\ \ \|\eta_{j}\cdot\alpha\|_{\mathbb{R}/\mathbb{Z}}\leq c_{2}^{-1}/N^{\prime}.\]

The theorem will then follow from this lemma. We will supply at least two proofs of this fact, and we will give the cleaner proof of the refined bracket polynomial lemma here. Appendix C will contain two additional proofs, one of which is a generalization of the following proof (which only works for certain real vectors with denominator \(N\)) to arbitrary real vectors, and a different, more elementary but cumbersome proof.

### First proof of the refined bracket polynomial lemma

The first proof of the refined bracket polynomial lemma is due to T. Tao (private communication) and proceeds via Minkowski's second theorem. Readers unfamiliar with the geometry of numbers may want to either consult [4] or refer to an alternative proof of this fact given in Appendix C. While we assume here that the parameters \(a\) and \(\alpha\) are rational with denominator \(N\), this proof can be generalized to drop this requirement. 
See Section C.1. For our purposes, this case is already enough for the proof of our main theorem and for applications for Theorem 5. It turns out that, because \(\xi([g(1),\{g(1)^{h}\}])\) has denominator \(N\) (since \(\xi([g(1),\{g(1)^{h}\}])=\xi([g(1),[g(1)^{h}]])\) up to sign and \(\xi([w_{1},w_{2}])=0\) for any \(w_{1},w_{2}\) in \(\Gamma\); see the computation at the end of this section), the following version of the bracket polynomial lemma is sufficient for our purposes.

**Lemma 3.2** (periodic refined bracket polynomial lemma).: _Let \(\frac{1}{10}>\delta>0\) and \(N\) be a prime. Suppose \(\alpha,a\in\mathbb{R}^{d}\) are of denominator \(N\), \(|a|\leq M\),_

\[\|\beta+a\cdot\{\alpha h\}\|_{\mathbb{R}/\mathbb{Z}}=0\]

_for \(\delta N\) many \(h\in[N]\) with \(N\gg(\delta/M)^{-O(d)^{O(1)}}\). Then there exist linearly independent \(w_{1},\ldots,w_{r}\) and \(\eta_{1},\ldots,\eta_{d-r}\) in \(\mathbb{Z}^{d}\) such that \(\langle w_{i},\eta_{j}\rangle=0\) and_

\[\|\eta_{j}\cdot\alpha\|_{\mathbb{R}/\mathbb{Z}}=0,\ \ |w_{i}\cdot a|=0.\]

Proof.: We first reduce to the case when \(\beta=0\). Using the pigeonhole principle, there exist at least \(\delta N/M\) many \(h\)'s such that

\[\beta+a\cdot\{\alpha h\}=k\]

for some \(k\in\mathbb{Z}\). By using the pigeonhole principle again, there exists a sign pattern in \(\{-1,1\}^{d}\) such that for a proportion of \(\delta/2^{d}M\) of the \(h\)'s, all satisfying the sign pattern, we have

\[\beta+a\cdot\{\alpha h\}=k.\]

Taking the difference of any two such \(h\)'s, we see that

\[a\cdot\{\alpha(h-h^{\prime})\}=0.\]

Let \(B\) denote the convex body \(\{x:|a\cdot x|=0,\ |x_{i}|<\frac{1}{2}\}\) (strictly speaking, we should take \(|a\cdot x|<N^{-3}\) so that \(B\) has nonempty interior in order to apply Minkowski's second theorem, but this makes no difference in the proof since \(a\) has denominator a large prime) and let \(\Gamma\) denote the lattice \(\alpha\mathbb{Z}+\mathbb{Z}^{d}\). By the pigeonhole principle, at least \(\delta\frac{N}{2^{d}M}\) many elements of the lattice lie in \(B\). By Minkowski's second theorem (or rather Proposition D.1), there exist linearly independent vectors \(v_{1},\ldots,v_{d^{\prime}}\) of the lattice corresponding to successive minima \(\lambda_{1},\ldots,\lambda_{d^{\prime}}\leq 100^{-1}\) such that, denoting \(N_{i}=(2d\lambda_{i})^{-1}\), we have \(N_{1}\cdots N_{d^{\prime}}\geq\frac{\delta N}{2^{d}M}d^{-O(d)}\) and \(\{\ell_{1}v_{1}+\cdots+\ell_{d^{\prime}}v_{d^{\prime}}:\ell_{i}\in[N_{i}]\}\subseteq B\). Each \(v_{i}\) is of the form \(\{\alpha h_{i}\}\). Let \(\Gamma^{\prime}\) denote the sublattice of \(\Gamma\) generated by the vectors \(v_{1},\ldots,v_{d^{\prime}}\) and let \(V\) denote the vector space generated by \(v_{1},\ldots,v_{d^{\prime}}\). The ball of radius \(\frac{1}{2}\) around zero contains at most \(N\) points of \(\Gamma\) and therefore at most \(N\) points of \(\Gamma^{\prime}\). Thus, by placing a fundamental parallelepiped of \(\Gamma^{\prime}\) around each lattice point in the ball of radius \(1/2\) (similar in flavor to the argument of Gauss's circle problem), we see that the volume of the ball of radius \(1/2\) in \(V\) is at most \(N(\frac{\delta}{2^{d}M}d^{-O(d)})^{-O(1)}\) times the volume of the fundamental parallelepiped of \(\Gamma^{\prime}\) in \(V\). By Ruzsa's covering lemma (Lemma D.1), using the fact that the ball of radius \(1/2\) is connected, it follows that the dilation by \((d\delta/2^{d}M)^{O(d)^{2}}\) of the ball of radius \(\frac{1}{2}\) in \(V\) lies in \(V\cap B\). 
Let \(P_{K}\) denote the generalized arithmetic progression \(\{n_{1}v_{1}+\cdots+n_{d^{\prime}}v_{d^{\prime}}:|n_{i}|\leq KN_{i}\}\). This is contained in a ball of radius \(dK\). Letting \(v_{j}=\{h_{j}\alpha\}\), we see that if \(n_{1}h_{1}+n_{2}h_{2}+\cdots+n_{d^{\prime}}h_{d^{\prime}}\equiv 0\pmod{N}\), then \(n_{1}v_{1}+\cdots+n_{d^{\prime}}v_{d^{\prime}}\) is a point in \(\mathbb{Z}^{d}\). Thus, by the pigeonhole principle, \(P_{K}\) contains at least \(K^{d^{\prime}}\delta d^{-O(d)}\) many points in \(\mathbb{Z}^{d}\). Consequently, letting \(K\) go to infinity, \(\mathbb{Z}^{d}\cap\Gamma^{\prime}\) is a lattice in \(V\) with covolume at most \((d\delta/2^{d}M)^{-O(d)^{2}}\). This implies, via Minkowski's second theorem again (using the fact that the product of successive minima of the unit ball is bounded by \((\delta/2^{d}M)^{-O(d)^{2}}\) and the fact that all successive minima are at least \(1\)), that the lattice \(\mathbb{Z}^{d}\cap\Gamma^{\prime}\) is generated by \(d^{\prime}\) integer vectors \(w_{1},\ldots,w_{d^{\prime}}\) of size at most \((d\delta/2^{d}M)^{-O(d)^{2}}\). Since the ball of radius \((d\delta/2^{d}M)^{O(d)^{2}}\) is contained in \(B\), it follows that

\[|w_{i}\cdot a|=0.\]

By Lemma A.7 (basically Cramer's rule), we may pick \(\eta_{1},\ldots,\eta_{d-d^{\prime}}\) orthogonal, linearly independent integer vectors of size at most \((d\delta/M)^{-O(d)^{3}}\); it then follows that \(\eta_{j}\cdot\{\alpha h\}=0\) for a proportion \((\delta/2^{d}M)^{O(d)^{3}}\) of the \(h\in[N]\), and hence

\[\|\eta_{j}\cdot\alpha\|_{\mathbb{R}/\mathbb{Z}}=0.\]

A corollary of this is as follows:

**Corollary 3.2**.: _Let \(N\) be a prime and \(0<\delta<\frac{1}{10}\). Suppose \(\alpha,a\in\mathbb{R}^{d}\) are of denominator \(N\), \(|a|\leq M\), \(\beta,\gamma\in\mathbb{R}\) and \(\beta\) is of denominator \(N\), and_

\[\|\gamma+a\cdot\{\alpha h\}+\beta h\|_{\mathbb{R}/\mathbb{Z}}=0\]

_for \(\delta N\) many \(h\in[N]\). Then either \(N\ll(d\delta/M)^{-O(d)^{O(1)}}\) or there exist linearly independent \(w_{1},\ldots,w_{r}\) and \(\eta_{1},\ldots,\eta_{d-r}\) such that \(\langle w_{i},\eta_{j}\rangle=0\) and_

\[\|\eta_{j}\cdot\alpha\|_{\mathbb{R}/\mathbb{Z}}=0,\ \ \|w_{i}\cdot a\|_{\mathbb{R}/\mathbb{Z}}=0.\]

Proof.: We define \(\tilde{a}=(a,1)\) and \(\tilde{\alpha}=(\alpha,\beta)\). Invoking Lemma 3.2, there exist \(w_{1},\ldots,w_{r}\) and \(\eta_{1},\ldots,\eta_{d+1-r}\) such that \(|w_{i}(\tilde{a})|=0\), \(\|\eta_{j}(\tilde{\alpha})\|_{\mathbb{R}/\mathbb{Z}}=0\). We denote \(w_{i}=(u_{i},v_{i})\) and \(\eta_{j}=(\alpha_{j},\beta_{j})\), where the second component corresponds to the appended coordinate (the \(1\) in \(\tilde{a}\) and the \(\beta\) in \(\tilde{\alpha}\)). Suppose \(\beta_{1}\neq 0\). Let \(\tilde{\eta_{j}}=\beta_{j}\alpha_{1}-\alpha_{j}\beta_{1}\). We see that \(\|\tilde{\eta_{j}}(\alpha)\|_{\mathbb{R}/\mathbb{Z}}=0\). We claim that the \(\tilde{\eta_{j}}\)'s are linearly independent of each other. 
Suppose there exist some \(a_{i}\) such that

\[\sum_{i\neq 1}a_{i}(\beta_{i}\alpha_{1}-\alpha_{i}\beta_{1})=0.\]

We can rewrite this sum as

\[\alpha_{1}\left(\sum_{i\neq 1}a_{i}\beta_{i}\right)+\sum_{i\neq 1}(-a_{i}\beta_{1})\alpha_{i}=0.\]

Letting these coefficients of the \(\alpha_{i}\) be \(c_{i}\), we see that

\[\sum_{i}c_{i}\beta_{i}=\beta_{1}\left(\sum_{i\neq 1}a_{i}\beta_{i}\right)-\sum_{i\neq 1}a_{i}\beta_{1}\beta_{i}=0.\]

Thus \(\sum_{i}c_{i}\alpha_{i}=0\) and \(\sum_{i}c_{i}\beta_{i}=0\), so \(\sum_{i}c_{i}\eta_{i}=0\); since the \(\eta_{i}\)'s are linearly independent, each of these coefficients is zero, and since \(\beta_{1}\) is nonzero, \(a_{i}=0\). Thus, the \(\tilde{\eta_{j}}\)'s are linearly independent of each other. We next claim that the \(\tilde{\eta_{j}}\)'s are orthogonal to the \(u_{i}\)'s. This follows since

\[\tilde{\eta_{j}}\cdot u_{i} =\beta_{j}\alpha_{1}\cdot u_{i}-\beta_{1}\alpha_{j}\cdot u_{i}\]
\[\eta_{j}\cdot w_{i} =\alpha_{j}\cdot u_{i}+\beta_{j}\cdot v_{i}=0\]
\[\eta_{1}\cdot w_{i} =\alpha_{1}\cdot u_{i}+\beta_{1}\cdot v_{i}=0\]

so multiplying the third equation by \(\beta_{j}\) and the second by \(\beta_{1}\) and subtracting gives that the first expression is equal to zero. Finally, we claim that the \(u_{i}\)'s are linearly independent of each other. To see this, note that the \((u_{i},v_{i})\) are orthogonal to \((\tilde{\eta_{j}},0)\) and \((\alpha_{1},\beta_{1})\). Since \((0,1)\) is not orthogonal to \((\alpha_{1},\beta_{1})\), it follows that the \((u_{i},v_{i})\) cannot span \((0,1)\), so \((u_{i},v_{i}),(0,1)\) are linearly independent of each other, which implies that the \(u_{i}\)'s are linearly independent of each other. If \(\beta_{i}\neq 0\) for some \(i\), we let \(\beta_{i}\) play the role of \(\beta_{1}\) in the above argument. If \(\beta_{i}=0\) for all \(i\), we simply let \(\tilde{\eta_{j}}=\eta_{j}\) and proceed as follows: we see that \(\|\tilde{\eta_{j}}\cdot\alpha\|_{\mathbb{R}/\mathbb{Z}}=0\), and \(\|u_{i}\cdot a\|_{\mathbb{R}/\mathbb{Z}}=0\) with \(\tilde{\eta_{j}}\) orthogonal to \(u_{i}\). 

Let us pause now and describe what happens to the "induction on dimensions" obstruction here. In the proof above, the induction on dimensions got pushed to Minkowski's second theorem. The first proof in Appendix C has a more direct approach of handling the induction on dimensions. There, we split the variables into several different variables, all but one of which increase linearly under each iteration. The last variable (denoted \(K\)) does not iteratively increase. Instead, it increases based on which step of the iteration we are at. It turns out that the remaining induction on dimensions that [1] had to run into completely disappears under the use of the refined bracket polynomial lemma. Instead, the rest of the argument relies on an induction on step and degree. This will be clearer to the reader upon reading the proofs in the next two sections.

### Finishing the proof of Theorem 7

We claim that \(\xi([g(1),[g(1)^{h}]])\) is rational with denominator \(N\). To see this, note that \(\xi([g(1),\{g(1)^{h}\}])=\xi([g(1),[g(1)^{h}]])^{-1}\) (since \([g(1),g(1)^{h}]=1\)), and since for \(v,w\in\Gamma\), \(\xi([v,w])=0\), and since \(\xi([g(1),[g(1)^{h}]])^{N}=\xi([g(1)^{N},[g(1)^{h}]])=1\), it follows that \(\xi([g(1),[g(1)^{h}]])\) is rational with denominator \(N\). From (1) and Lemma A.10, we have for some \(\beta,\gamma\in\mathbb{R}/\mathbb{Z}\) with \(\beta\) having denominator \(O(N)\),

\[\|\beta h+\gamma+\xi([g(1),\{g(1)^{h}\}])\|_{\mathbb{R}/\mathbb{Z}}=0\]

for \(\delta^{O(1)}N\) many \(h\in[N]\). We can write

\[\xi([g(1),\{g(1)^{h}\}])=\langle(C-C^{t})g(1),\{g(1)^{h}\}\rangle=\langle a,\{\alpha h\}\rangle\]

where \(C-C^{t}\) is the antisymmetric matrix representing the commutator identity in the horizontal torus. 
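As a sanity check on this identity (included purely as an illustration), consider the Heisenberg group with Mal'cev coordinates \((x,y,z)\), where \((x,y,z)\) denotes the upper-triangular unipotent matrix with superdiagonal entries \(x,y\) and corner entry \(z\). A direct matrix computation gives

\[[(x_{1},y_{1},z_{1}),(x_{2},y_{2},z_{2})]=(0,0,x_{1}y_{2}-x_{2}y_{1}),\]

so if \(\xi\) acts on the vertical circle with frequency \(m\), then \(\xi([g,h])=\langle(C-C^{t})(x_{1},y_{1}),(x_{2},y_{2})\rangle\) with \(C-C^{t}=m\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\), which is indeed antisymmetric with entries of size comparable to the complexity.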
Note that \(C-C^{t}\) has numerator and denominator at most \(M\). By a change of variables \(n\mapsto M_{1}n\) in the original hypothesis (using the fact that \(M_{1}\) has a modular inverse mod \(N\) since \(N\) is prime) for some \(M_{1}\leq O(M)\) which cancels out the denominator of \(C-C^{t}\),

\[|\mathbb{E}_{n\in[N]}F(g(M_{1}n)\Gamma)|\geq\delta.\]

Thus, after applying the change of variables, we can assume that \(a\), \(\alpha\), and \(\beta\) have denominator \(N\). Applying Corollary 3.2 and noticing that \(C-C^{t}\) is antisymmetric, we obtain \(w_{i}\)'s and \(\eta_{j}\)'s which are linearly independent, \(\eta_{j}(w_{i})=0\), \(\|M_{1}\xi([w_{i},g])\|_{\mathbb{R}/\mathbb{Z}}=0\), and \(\|\eta_{j}(g)\|_{\mathbb{R}/\mathbb{Z}}=0\). Using the fact that \(N\) is prime, we obtain \(\|\xi([w_{i},g])\|_{\mathbb{R}/\mathbb{Z}}=0\). This completes the proof. 

## 4. The two-step polynomial sequence case

As a second warm-up problem, we shall prove the two-step polynomial sequences case:

**Theorem 8**.: _Let \(N\) be a prime, \(0<\delta<\frac{1}{10}\), and \(g(n)\) be a polynomial sequence on a two-step nilpotent Lie group that is periodic modulo \(N\). Let \(F:G/\Gamma\to\mathbb{C}\) be a vertical character with frequency \(\xi\) with \(|\xi|\leq(\delta/M)^{-1}\). Suppose_

\[|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)|\geq\delta.\]

_Then either \(N\ll(\delta/M)^{-O_{k}(d)^{O_{k}(1)}}\) or else there exists some integer \(d_{horiz}\geq r\geq 0\) and linearly independent elements \(w_{1},\ldots,w_{r}\in\Gamma/(\Gamma\cap[G,G])\), linearly independent horizontal characters \(\eta_{1},\ldots,\eta_{d_{horiz}-r}\) with \(|w_{i}|,|\eta_{j}|\leq(\delta/M)^{-O_{k}(d)^{O_{k}(1)}}\), \(\langle w_{i},\eta_{j}\rangle=0\), and_

\[\|\xi([w_{i},g])\|_{C^{\infty}[N]},\|\eta_{j}\circ g\|_{C^{\infty}[N]}=0.\]

It turns out that the two-step polynomial case breaks down into two cases: one where the filtration is redundant, i.e. when \(\xi([G,G_{2}])\neq 0\), and one where the filtration isn't redundant. We shall take care of the latter case first:

**Lemma 4.1**.: _Let \(N\) be a prime, \(0<\delta<\frac{1}{10}\), and \(g(n)\) be a polynomial sequence on a two-step nilpotent Lie group of degree \(k\) and complexity at most \(M\) with \(G_{2}\) lying in the center of \(G\). Let \(F:G/\Gamma\to\mathbb{C}\) be a nilcharacter with frequency \(\xi\) with \(|\xi|\leq(\delta/M)^{-1}\). Suppose_

\[|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)|\geq\delta.\]

_Then either \(N\ll(\delta/M)^{-O_{k}(d)^{O_{k}(1)}}\) or else there exists some integer \(d_{horiz}\geq r\geq 0\), linearly independent elements \(w_{1},\ldots,w_{r}\in\Gamma/(\Gamma\cap[G,G])\), and linearly independent horizontal characters \(\eta_{1},\ldots,\eta_{d_{horiz}-r}\) with \(|w_{i}|,|\eta_{j}|\leq(\delta/M)^{-O_{k}(d)^{O_{k}(1)}}\), \(\langle w_{i},\eta_{j}\rangle=0\), and_

\[\|\xi([w_{i},g])\|_{C^{\infty}[N]},\|\eta_{j}\circ g\|_{C^{\infty}[N]}=0.\]

Proof.: Instead of Fourier expanding along the vertical torus, we Fourier expand along the \(G_{2}\)-torus, that is, \(G_{2}/(G_{2}\cap\Gamma)\), so we may assume that \(F\) is a \(G_{2}\)-character of frequency \(\xi+\xi^{\prime}\) with \(\xi^{\prime}\) orthogonal to the vertical \([G,G]\) direction. By taking a quotient by the kernel of \(\xi+\xi^{\prime}\), we may assume that \(G\) has a one-dimensional nonlinear component and complexity at most \((\delta/dM)^{-O(d)}\). By Lemma 2.2, we may reduce to the case when \(g(0)=1\) and \(|\psi(g(1))|\leq 1/2\). 
Once again, we apply the van der Corput inequality:

\[|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)\overline{F(g(n+h)\Gamma)}|\geq\delta^{2}\]

for \(\delta^{2}N\) many \(h\in[N]\). Then letting \(g_{2}(n):=g(n)g(1)^{-n}\) be the nonlinear part of \(g\), we denote

\[g_{h}(n)=(\{g(1)^{h}\}^{-1}g_{2}(n+h)g(1)^{n}\{g(1)^{h}\},g(n))\]

and \(F_{h}(x,y)=\overline{F}(\{g(1)^{h}\}x)F(y)\). By Lemma A.3, we see that \(g_{h}\) lies inside \(G\times_{G_{2}}G\) and \(F_{h}(g_{h}(n))\) descends to \(\tilde{F}_{h}(\tilde{g}_{h}(n)\Gamma^{\square})\), a nilsequence on \(G^{\square}\). Since \(g(n)\Gamma\) is periodic modulo \(N\), it follows that \(g_{h}(n)\Gamma\times\Gamma\) is periodic modulo \(N\) and thus \(g_{h}(n)\Gamma\times_{G_{2}\cap\Gamma}\Gamma\) is also periodic modulo \(N\), and so \(\tilde{g}_{h}(n)\Gamma^{\square}\) is periodic modulo \(N\). The hypothesis then rearranges to

\[|\mathbb{E}_{n\in[N]}\tilde{F}_{h}(\tilde{g_{h}}(n)\Gamma^{\square})|\geq\delta^{2}\]

for \(\delta^{2}N\) many \(h\in[N]\). Making a change of variables for some integer \(1\leq M_{1}\leq(10^{k}k)!M\),

\[|\mathbb{E}_{n\in[N]}\tilde{F}_{h}(\tilde{g_{h}}(M_{1}n)\Gamma^{\square})|\geq\delta^{2}.\]

By Lemma A.3, \(F_{h}\) has Lipschitz parameter at most \(M^{O(1)}\) on \(G^{\square}\) and \(G^{\square}\) is abelian; Fourier expanding \(\tilde{F_{h}}\) into characters \(\eta\), we note (again by Lemma A.3) that \(\eta\) decomposes into \(\zeta+\zeta_{2}\) where \(\zeta\) is a horizontal character on \(G\) which annihilates \(G_{2}\) and \(\zeta_{2}\) is a character on \(G_{2}\). Note that we can take \(\zeta_{2}\) to be \(\xi+\xi^{\prime}\) because \(F\) is a \(G_{2}\)-character of frequency \(\xi+\xi^{\prime}\). Note also that \((\xi+\xi^{\prime})([g,h])=\xi([g,h])\) since \(\xi^{\prime}\) is orthogonal to the vertical direction. It thus follows from the one-step case that

\[\|\zeta(g(M_{1}n))+(\xi+\xi^{\prime})(g_{2}(M_{1}n+h))-(\xi+\xi^{\prime})(g_{2}(M_{1}n))+\xi([g(1)^{M_{1}n},\{g(1)^{h}\}])\|_{C^{\infty}[N]}=0.\]

The point of making the change of variables is so that, by Lemma A.10, the coefficients of \(g_{2}(M_{1}\cdot)\) have denominator \(N\), and \(\xi([g(1)^{M_{1}n},\{g(1)^{h}\}])\) consists of \(\langle an,\{\alpha h\}\rangle\) where \(a\) and \(\alpha\) have denominator \(N\). Applying Corollary 3.2 and using the fact that \(N\) is prime yields \(w_{i}\)'s and \(\eta_{j}\)'s which satisfy the conclusions of the lemma. 

Proof of Theorem 8.: We will once again quotient by the kernel of \(\xi\). If \(G_{2}\) lies in the center of \(G\), we may apply Lemma 4.1 to finish, so suppose that it does not. If \(F\) is not already a \(G_{k}\)-character, we may Fourier expand via Lemma A.6 and, pigeonholing in one character, replace \(F\) with a character on \(G_{(s)}G_{k}\) with frequency \(\xi+\xi^{\prime}\), with \(\xi^{\prime}\) orthogonal to the vertical direction in \(G\). By Lemma 2.2, we may reduce to the case when \(g(0)=1\) and \(|\psi(g(1))|\leq\frac{1}{2}\). Then by van der Corput, we have for \(\delta^{O(1)}N\) many \(h\)'s

\[|\mathbb{E}_{n\in[N]}F(g(n+h)\Gamma)\overline{F(g(n)\Gamma)}|\geq\delta^{O(1)}.\]

Defining

\[g_{h}(n)=(\{g(1)^{h}\}^{-1}g_{2}(n+h)g(1)^{n}\{g(1)^{h}\},g_{2}(n)g(1)^{n})\]

and \(\tilde{F}_{h}(x,y)=F(\{g(1)^{h}\}x)\overline{F(y)}\), it follows that

\[|\mathbb{E}_{n\in[N]}\tilde{F}_{h}(g_{h}(n))|\geq\delta^{O(1)}.\]

It follows that \(g_{h}(n)\) lies in \(G\times_{G_{2}}G=\{(g,g^{\prime}):g^{\prime}g^{-1}\in G_{2}\}\). 
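(This membership is a quick check which we spell out for convenience: since \([G,G]\subseteq G_{2}\), the quotient \(G/G_{2}\) is abelian, and since \(g_{2}\) takes values in \(G_{2}\), working modulo \(G_{2}\) we have

\[\{g(1)^{h}\}^{-1}g_{2}(n+h)g(1)^{n}\{g(1)^{h}\}\equiv g(1)^{n}\equiv g_{2}(n)g(1)^{n}\pmod{G_{2}},\]

so the two coordinates of \(g_{h}(n)\) agree modulo \(G_{2}\), as required.)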
This group has the filtration \(G_{i}\times_{G_{i+1}}G_{i}\) with the last group \(G_{k}^{\triangle}\). However, \(\tilde{F}_{h}\) is \(G_{k}^{\triangle}\)-invariant, so descends via a quotient by \(G_{k}^{\triangle}\) to a degree \(k-1\) nilsequence. In order to apply the induction hypothesis, we shall need to Fourier expand \(\tilde{F}_{h}\) into nilcharacters on \(G^{\square}\). Since \(\xi\) doesn't annihilate \([G,G_{2}]\), it follows that \(G^{\square}\) is two-step with vertical direction \(G_{(2)}\times_{[G,G_{2}]}G_{(2)}\). Thus, we can write each vertical character \(\chi\) there as

\[\chi(gh,g)=\chi^{1}(g)+\chi^{2}(h)\]

for all \(g\in G_{(2)}\) and \(h\in[G,G_{2}]\). To emphasize that \(\chi^{2}\) comes from the \(x_{1}-x_{2}\) direction, we shall write \(\chi^{2}\) as \(\chi^{2}\otimes\overline{\chi^{2}}\). Here, since \(F\) is a nilcharacter of frequency \(\xi\), it follows that \(\tilde{F}_{h}\) is already a nilcharacter of frequency \(\xi\otimes\overline{\xi}\). Pigeonholing in \(h\) and applying the induction hypothesis, we find \(w_{i}\)'s and \(\eta_{j}\)'s such that for \(c_{1}(\delta)N\) many \(h\in[N]\),

\[\|\xi\otimes\overline{\xi}([w_{i},\tilde{g_{h}}(n)])\|_{C^{\infty}[N]}=0\]
\[\|\eta_{j}\circ\tilde{g_{h}}(n)\|_{C^{\infty}[N]}=0\]

where \(\tilde{g_{h}}\) is the projection of \(g_{h}\) to \(G^{\square}\). Here, we remind the reader that \(c_{1}(\delta)\) as defined in Section 2.1 is any quantity lower bounded by \(\gg(\delta/M)^{O_{s,k}(d)^{O_{s,k}(1)}}\). Letting \(H=G\times_{G_{2}}G\), we see that the horizontal component of \(H\) is \(H/[H,H]\) while the horizontal component of \(G^{\square}=H/G_{k}^{\triangle}\) can be identified with \(H/[H,H]G_{k}^{\triangle}\) (since \([H/G_{k}^{\triangle},H/G_{k}^{\triangle}]=[H,H]G_{k}^{\triangle}/G_{k}^{\triangle}\) by viewing the coset equivalence). Thus, each \(\eta_{j}\) can be identified with a horizontal character on \(H\) with zero \(G_{k}^{\triangle}\) component (if \(G_{k}^{\triangle}\) does not lie inside \([H,H]\)). In addition, identifying the horizontal component of \(\tilde{g_{h}}\) with the projection of the horizontal component of \(g_{h}\) to \(H/[H,H]G_{k}^{\triangle}\), and since the horizontal component can be identified with its Lie algebra, it follows that \(H/[H,H]G_{k}^{\triangle}\) embeds in the horizontal component \(H/[H,H]\). Under this embedding, \(\xi\otimes\overline{\xi}([w_{i},\tilde{g_{h}}(n)])\) does not depend on the \(G_{k}^{\triangle}\) component of \(w_{i}\) or of \(\tilde{g_{h}}\), and it follows that we may extend the \(w_{i}\)'s to find \(\eta_{j}\) (which have zero \(G_{k}^{\triangle}\) component) and linearly independent \(w_{i}\) such that \(\eta_{j}(w_{i})=0\) and

\[\|\xi\otimes\overline{\xi}([w_{i},g_{h}(n)])\|_{C^{\infty}[N]}=0\]
\[\|\eta_{j}\circ g_{h}(n)\|_{C^{\infty}[N]}=0.\]

Note that \(g_{h}(n)=(g(n)g_{2}(n+h)g_{2}(n)^{-1}p(n,h),g(n))\) where \(p(n,h)\) is a (bracket) polynomial in \(n\) and \(h\) taking values in \([G,G]\). 
We write \(w_{i}=(u_{i}v_{i},u_{i})\) and \(\eta_{j}=(\alpha_{j},\beta_{j})\), so

\[\eta_{j}(g_{h}(n)) =\alpha_{j}(g(n))+\beta_{j}(g_{2}(n+h)g_{2}(n)^{-1})\]
\[\eta_{j}(w_{i}) =\alpha_{j}(u_{i})+\beta_{j}(v_{i})=0\]
\[\xi\otimes\overline{\xi}([w_{i},g_{h}(n)]) =\xi\otimes\overline{\xi}(([u_{i}v_{i},g(n)][u_{i}v_{i},g_{2}(n+h)g_{2}(n)^{-1}],[u_{i},g(n)]))\]
\[=\xi([v_{i},g(n)]+[u_{i}v_{i},g_{2}(n+h)g_{2}(n)^{-1}])\]

where \(\alpha_{j}\) is a horizontal character on \(G\), \(\beta_{j}\) is a character on \(G_{2}\), \(v_{i}\in G_{2}\), and \(u_{i}\in G\) (note that, after quotienting by the kernel of \(\xi\), since \(\beta_{j}\) annihilates \([G,G_{2}]=[G,G]\), it follows that \(\beta_{j}\) annihilates the bracket polynomial part \(p(n,h)\)). Using Vinogradov's lemma (Lemma D.2), it follows that

\[\alpha_{j}(g(n)),\xi([v_{i},g(n)]),\beta_{j}(g_{2}(n)),\xi([u_{i}v_{i},g_{2}(n)])\equiv 0\pmod{1}.\]

Note that here we must use the fact that \(N\) is prime and the fact that \(g_{2}\) has horizontal component with denominator \(N\) to eliminate the binomial coefficient that comes from expanding out \(g_{2}(n+h)g_{2}(n)^{-1}\). Let \(\tilde{G}=\{g\in G:\alpha_{j}\circ g=0,[v_{i},g]=0\}\) and \(\tilde{G}_{2}=\{g\in\tilde{G}:\beta_{j}(g)=0,[u_{i}v_{i},g]=0\}\). We claim that \([\tilde{G},\tilde{G}_{2}]=0\). To show this, we let \(\tilde{H}=\tilde{G}\times_{G_{2}}\tilde{G}/[G,G]^{\triangle}\) and we claim that \(\tilde{H}\) is abelian. To see this, note that each element of \(\tilde{H}\) of the form \((gg_{2},g)\) satisfies \(\alpha_{j}(g)+\beta_{j}(g_{2})=0\) and \([v_{i},g]+[u_{i}v_{i},g_{2}]=0\). Since \(\alpha_{j}(u_{i})+\beta_{j}(v_{i})=0\) for each \(i,j\), it follows that \((gg_{2},g)\) can be generated by the \((u_{i}v_{i},u_{i})\) modulo \([G,G]^{2}\). However, for any other \((hh_{2},h)\) in \(\tilde{H}\), we have \(\xi\otimes\overline{\xi}([(u_{i}v_{i},u_{i}),(hh_{2},h)])=0\). Hence \(\tilde{H}\) is abelian. Finally, we have \(\xi\otimes\overline{\xi}([(g,g),(hg_{2},h)])=\xi([g,g_{2}])=0\) whenever \(g\in\tilde{G}\) and \(g_{2}\in\tilde{G}_{2}\). This shows that \(\xi([\tilde{G},\tilde{G}_{2}])=0\), so we may make our filtration finer by restricting to \(\tilde{G}\) and \(\tilde{G}_{2}\). Since we've quotiented by the kernel of \(\xi\), we in fact have \([\tilde{G},\tilde{G}_{2}]=0\). Thus, by Lemma A.1 and Lemma A.2, we can write \(g(n)\Gamma=g_{1}(n)\gamma(n)\Gamma\) with \(g_{1}\) being a polynomial sequence with respect to the filtration \(\tilde{G}_{i}\) satisfying \(\tilde{G}_{i}=\tilde{G}_{2}\cap G_{i}\) for \(i\geq 2\). Finally, we apply Lemma A.1 and Lemma A.8 to \(g_{1}(Pk!n)\) (with \(P\) the period of \(\gamma\)) to find \(\alpha_{1},\dots,\alpha_{d^{\prime}}\) with the property that we can factorize \(g_{1}(Pk!n)=g_{1}^{\prime}(n)\gamma^{\prime}(n)\) such that \(g_{1}^{\prime}\) lies inside \(\bigcap_{i}\ker(\alpha_{i})\), which is \(1\)-step, and \(\gamma^{\prime}\) is \(P^{\prime}\)-rational. Thus, \(g(k!PP^{\prime}n)\Gamma\) lies inside \(\bigcap_{i}\ker(\alpha_{i})\), and so, invoking Lemma 2.1, the first conclusion is satisfied. To satisfy the second conclusion, we see that \(g(k!PP^{\prime}n)\) can be generated by vectors orthogonal to the \(\alpha_{i}\)'s and \(\Gamma\). Thus, for any \(w\in\Gamma/(\Gamma\cap[G,G])\) orthogonal to all of the \(\alpha_{i}\)'s, it follows from Lemma 2.1 that

\[\|\xi([w,g])\|_{C^{\infty}[N]}=0.\]

Using Lemma A.7 we may pick a basis for all such \(w\) with size at most \(c_{1}(\delta)^{-1}\).

## 5. Periodic Polynomial Nilsequences

In this section, we deduce Theorem 3, which will in turn be deduced from Theorem 9. 
An example in the beginning of Section 7 in [1] indicates that an obstruction that may occur is that our filtration may be redundant. To overcome the obstruction, [1] proceeds via an induction on dimensions, lowering the dimension of \(G_{2}\) one-by-one until no such obstruction exists. The downside to using induction on dimensions is that it has a habit of incurring losses double exponential in dimension, which is what happens if one proceeds along the lines of [1]. Our proof here completely sidesteps the induction on dimensions obstruction that [1] runs into, and instead shifts the induction on dimension to an induction on degree or step. We do, however, run into a similar phenomenon of the filtration being redundant, causing us to have to divide our proof into two cases. In the two-step polynomial sequence case, we also needed to consider two cases. The first case is where \(G_{2}\) lies in the center of \(G\), that is, when \([G_{2},G]=0\). The second case is when this does not occur. Roughly speaking, the second case corresponds to the obstruction that Green and Tao run into of the filtration being redundant. In the proof of the second case, we reduce to the first case. Similarly, in the general polynomial nilsequences case on an \(s\)-step nilmanifold, we divide our proof into two cases: one where \([G_{2},G,G,\dots,G]=0\) where \(s-1\) commutators are taken and one where this does not occur. Similar to the two-step nilmanifold case, we shall reduce the second case to the first case (see Section 2.1 for asymptotic notation such as \(c_{1}(\delta)\)).

**Theorem 9**.: _Let \(0<\delta<\frac{1}{10}\), \(N\) a prime, \(M\geq 1\), and \(F:G/\Gamma\to\mathbb{C}\) be a Lipschitz vertical character of nonzero frequency \(\xi\) on an \(s\)-step nilmanifold \(G/\Gamma\) and Lipschitz parameter \(\leq 1\) with \(|\xi|\leq(\delta/M)^{-1}\). Suppose \(g(n)\) is a polynomial sequence in \(G\), and that_

\[|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)|\geq\delta\]

_with \(g(n)\Gamma\) being \(N\)-periodic. Then either \(N\ll(\delta/M)^{-O_{k,s}(d)^{O_{k,s}(1)}}\) or else there exists some integer \(0\leq r\leq d_{\text{horiz}}\) and a set of linearly independent \(\eta_{1},\dots,\eta_{r}\) of size at most \((\delta/M)^{-O_{s,k}(d)^{O_{s,k}(1)}}\) such that_

\[\|\eta_{i}\circ g\|_{C^{\infty}[N]}=0\]

_and such that for any \(w_{1},\dots,w_{s-1}\in\Gamma/(\Gamma\cap[G,G])\) orthogonal to all of the \(\eta\)'s,_

\[\|\xi([g,w_{1},\dots,w_{s-1}])\|_{C^{\infty}[N]}=0.\]

_In other words, after taking a quotient by the kernel of \(\xi\), \(g\) is contained (up to some periodic part) in an \(s-1\)-step nilmanifold of some higher complexity._

_Remark_.: See Section 2 for the definition of "orthogonal." Note that \(\tilde{G}:=\{\eta_{i}(g)=0,[g,w_{1},\dots,w_{s-1}]=0\text{ for all such }w_{i}\text{'s}\}\) is \(s-1\)-step. This is because if \(g_{1},\dots,g_{s}\in\tilde{G}\), then each of \(g_{1},\dots,g_{s-1}\) is generated by \(w_{j}\)'s modulo \(G_{(2)}\) since they are orthogonal to the \(\eta_{i}\)'s. Thus, \([g_{1},[g_{2},\dots,[g_{s-1},g_{s}]]]\) can be written as a combination of \([w_{j_{1}},[w_{j_{2}},\dots,[w_{j_{s-1}},g_{s}]]]=0\) by definition. Thus, Theorem 3 follows from Theorem 9. 

To prove this, we will need the following preliminary lemma:

**Lemma 5.1**.: _Suppose \(\xi([G_{2},G,\cdots,G])=0\) (with the commutator being taken \(s-1\) times). 
Then the same is true if we swap any one of the \(G\)'s with the \(G_{2}\)._

Proof.: Our main tool for proving this is the Hall-Witt identity, which states that, with the convention \(x^{y}=x[x,y]\),

\[[[x,y],z^{x}][[z,x],y^{z}][[y,z],x^{y}]=1.\]

Since \([G_{2},G,\ldots,G]\) lies inside \(G_{(s)}\), the commutator term \([x,y]\) is a "lower order term" and so the Hall-Witt identity we morally work with is

\[[[x,y],z][[z,x],y][[y,z],x]\;\text{``}{=}\text{''}\;1,\]

or rather

\[[[x,y],z][[z,x],y][[y,z],x]\equiv 1\pmod{G_{(4)}}.\]

We claim by induction that

\[[g_{1},\ldots,g_{k}]\in[G_{2},G,\ldots,G]\pmod{G_{(k+1)}}\]

if any of the \(g_{i}\)'s lie inside \(G_{2}\), from which the lemma follows. The Hall-Witt identity then implies the case of \(k=3\). Now suppose this holds for \(k\). We must show that given \(g_{1},\ldots,g_{k+1}\), if any one of these elements lies in \(G_{2}\), then

\[[g_{1},\ldots,g_{k+1}]\in[G_{2},G,\ldots,G]\pmod{G_{(k+2)}}.\]

If that element is one of \(g_{1},\ldots,g_{k}\), then since \([g_{k+1},G_{(k+1)}]\subseteq G_{(k+2)}\), the claim follows from the induction hypothesis. Otherwise, by the Hall-Witt identity, it suffices to show that

\[[[g_{1},\ldots,g_{k-1},g_{k+1}],g_{k}]\in[G_{2},\ldots,G]\pmod{G_{(k+2)}}\]

and

\[[[g_{k},g_{k+1}],[g_{1},\ldots,g_{k-1}]]\in[G_{2},\ldots,G]\pmod{G_{(k+2)}}.\]

The first is true by hypothesis. To prove the second one, we apply the Hall-Witt identity again, showing that it suffices to show that

\[[[g_{1},\ldots,g_{k-2},g_{k},g_{k+1}],g_{k-1}]\in[G_{2},\ldots,G]\pmod{G_{(k+2)}}\]

and

\[[[g_{k+1},g_{k},g_{k-1}],[g_{1},\ldots,g_{k-2}]]\in[G_{2},\ldots,G]\pmod{G_{(k+2)}}.\]

Once again, the first identity is true by hypothesis. As for the second identity, we apply the Hall-Witt identity again, and continue this procedure until we arrive at the condition

\[[g_{k+1},g_{k},\ldots,g_{1}]\in[G_{2},\ldots,G]\pmod{G_{(k+2)}}\]

which is true by hypothesis. 

The start of the proof for either case is the same. We first reduce via Lemma 2.2 to the case when \(g(0)=1\) and \(|\psi(g(1))|\leq\frac{1}{2}\) and quotient out by the kernel of \(\xi\) to make a one-dimensional vertical torus. We then apply van der Corput's inequality to obtain

\[|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)\overline{F(g(n+h)\Gamma)}|\geq\delta^{2}\]

for \(\delta^{2}N\) many \(h\in[N]\). Then letting \(g_{2}(n):=g(n)g(1)^{-n}\) be the nonlinear part of \(g\), we denote

\[g_{h}(n)=(\{g(1)^{h}\}^{-1}g_{2}(n+h)g(1)^{n}\{g(1)^{h}\},g(n))\]

and \(F_{h}(x,y)=\overline{F}(\{g(1)^{h}\}x)F(y)\). The hypothesis then rearranges to

\[|\mathbb{E}_{n\in[N]}F_{h}(g_{h}(n))|\geq\delta^{2}\]

for \(\delta^{2}N\) many \(h\in[N]\). Next, we make a change of variables \(n\mapsto M_{1}n\) with some \(M_{1}\leq M^{O_{s,k}(1)}\) to eliminate all of the rationality coming from the Lie bracket and so that \(g_{2}(M_{1}n)\) has denominator \(N\). This can be done via Lemma A.10. Note that since \(g(n)\Gamma\) is periodic modulo \(N\), \(g_{h}(n)(\Gamma\times\Gamma)\) is also periodic modulo \(N\). Define \(G\times_{G_{2}}G\) as the group \(\{(g,g^{\prime}):g^{-1}g^{\prime}\in G_{2}\}\). This is an \(\leq s\)-step nilpotent Lie group and has a natural filtration of \(G_{i}\times_{G_{i+1}}G_{i}\). Since \([G,G]\subseteq G_{2}\), it follows that \(g_{h}(n)\in G\times_{G_{2}}G\). Since \(F\) is a nilcharacter on \(G/\Gamma\) of frequency \(\xi\), and since \(G_{k}\) lies in the center of \(G\) and \(F_{h}\) is invariant under \(G_{k}^{\triangle}\), it follows that \(F_{h}\) descends to a function on \(G\times_{G_{2}}G/G_{k}^{\triangle}=G^{\square}\). 
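(The invariance of \(F_{h}\) under \(G_{k}^{\triangle}\) is a one-line computation which we record for the reader's convenience: for \(u\in G_{k}\), which is central, the vertical character property of \(F\) gives

\[F_{h}((u,u)\cdot(x,y))=\overline{F}(\{g(1)^{h}\}ux)F(uy)=e(-\xi(u))\overline{F}(\{g(1)^{h}\}x)\,e(\xi(u))F(y)=F_{h}(x,y),\]

where we used that \(u\) commutes with \(\{g(1)^{h}\}\).)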
By Lemma A.3, it follows that \(G\times_{G_{2}}G\) has complexity at most \(M^{O_{s,k}(1)}\), and given a horizontal character \(\eta:G\times_{G_{2}}G\to S^{1}\), we may decompose it as \(\eta(gg_{2},g)=\eta_{1}(g)+\eta_{2}(g_{2})\) where \(g\in G\) and \(g_{2}\in G_{2}\), where \(\eta_{1}\) is a horizontal character on \(G\) and \(\eta_{2}\) is a horizontal character on \(G_{2}\). The same lemma tells us that if our horizontal character has size at most \(c_{1}(\delta)^{-1}\), then \(\eta_{1}\) and \(\eta_{2}\) have size at most \(c_{1}(\delta)^{-1}\).

### Intuition for the argument

Here, we provide a brief sketch of the argument. The argument proceeds via an induction on degree or step. If \([G_{2},G,\ldots,G]=0\), then \(G^{\square}\) is \(s-1\)-step, so we may use induction to show that \(g_{h}\) lies in an \(s-2\)-step subgroup. A candidate for such a subgroup is \(\tilde{G}^{\square}:=\tilde{G}\times_{\tilde{G}_{2}}\tilde{G}/\tilde{G}_{(s-1)}^{\triangle}\) where \(\tilde{G}\) is \(s-1\)-step and \(\tilde{G}_{2}=\tilde{G}\cap G_{2}\). It's possible that not all \(s-2\)-step subgroups of \(G^{\square}\) are of this form, but as above, we can show via Vinogradov's lemma and the refined bracket polynomial lemma that after pigeonholing in \(h\), \(g_{h}\) lies inside a subgroup of bounded rationality of that form. We can then use this to show that \(g\) lies inside \(\tilde{G}\), which is \(s-1\)-step. If \([G_{2},G,\ldots,G]\neq 0\) (where the commutator is taken \(s-1\) times), then \(G^{\square}\) is not \(s-1\)-step, but rather \(s\)-step, so an induction on degree tells us that \(g_{h}(n)\) lies in an \(s-1\)-step subgroup of \(G^{\square}\). A candidate subgroup is \(\tilde{G}^{\square}:=\tilde{G}\times_{\tilde{G}_{2}}\tilde{G}/\tilde{G}_{(s)}^{\triangle}\) where \([\tilde{G}_{2},\ldots,\tilde{G}]=0\). These are probably not the only \(s-1\)-step subgroups of \(G^{\square}\), but the structure of \(g_{h}(n)\) allows us to use Vinogradov's lemma to guarantee that after pigeonholing in \(h\), \(g_{h}(n)\) actually does lie inside some group of the form \(\tilde{G}^{\square}\) with bounded rationality. Using this, we can show that \(g\) lies inside the filtered nilpotent Lie group \(\tilde{G}\) where \(\tilde{G}_{0}=\tilde{G}_{1}=\tilde{G}\) and \(\tilde{G}_{i}=\tilde{G}_{2}\cap G_{i}\) for \(i\geq 2\).

### Case 1

Suppose first that

\[\xi([G_{2},G,G,\cdots,G])=0.\]

Since we took a quotient of the vertical component by the kernel of \(\xi\), by Lemma 2.3, we are in the case that \([G_{2},G,G,\cdots,G]=0\). We will be inducting on step rather than degree for our argument. Replacing \(G_{\ell}\) for all \(k\geq\ell\geq 2\) with \(G_{\ell}G_{(s)}\) (and noting that this still preserves normality and the filtration property since \(G_{(s)}\) lies in the center of \(G\)), and taking a Fourier expansion to \(G_{k}\)-characters as in Lemma A.6 and pigeonholing in one character, we may replace \(F\) with a character on \(G_{k}\) with frequency \(\xi+\xi^{\prime}\) with \(\xi^{\prime}\) orthogonal to the vertical direction in \(G\). In order to apply induction for our argument, we must analyze nilcharacters on \(G\times_{G_{2}}G\). First note that since \(\xi([G_{2},G,G,\cdots,G])=0\) and \(\xi^{\prime}\) is orthogonal to the vertical direction on \(G\), it follows that \(F_{h}\) descends to a \(\leq s-1\)-step nilsequence on \(G^{\square}\). In order to apply induction in this case, we must understand nilcharacters on \(G^{\square}\). 
The vertical torus is \(G_{(s-1)}\times_{K}G_{(s-1)}/G_{(s)}^{\triangle}\) where \(K=[G_{2},G,G,\cdots,G]\) with the commutator taken \(s-2\) times. Thus, if \(\chi\) is a vertical character on \(G_{(s-1)}\times_{K}G_{(s-1)}/G_{(s)}^{\triangle}\), it follows that

\[\chi(gh,g)=\chi^{1}(g)+\chi^{2}(h)\]

for all \(g\in G_{(s-1)}\) and \(h\in K\). To emphasize that \(\chi^{2}\) lies in the \(x_{1}-x_{2}\) direction, we write \(\chi^{2}\) as \(\chi^{2}\otimes\overline{\chi^{2}}\). We may further decompose \(\chi^{2}(h)\) as the sum of a vertical component \(\chi^{2}_{vert}\) and a complementary component \(\chi^{2}_{comp}\). Going back to the problem, since \(F\) is a vertical character with frequency \(\xi\), it follows that in the Fourier expansion of \(F_{h}\) in \(G^{\square}\), we can take \(\chi^{2}_{vert}\) to be \(\xi\). By induction (the base case being two-step polynomial sequences), Fourier expanding \(F_{h}\) via Lemma A.6 into \(\xi\otimes\overline{\xi}+\zeta\) (where \(\zeta\) is a horizontal character on \(G_{(s-1)}\) that annihilates \([G,G_{(s-1)}]\)), and pigeonholing in \(h\), there exist horizontal characters \(\eta_{1},\ldots,\eta_{r}\) of size at most \(c_{1}(\delta)^{-1}\) such that

\[\|\eta_{i}(\tilde{g_{h}}(M_{1}n))\|_{C^{\infty}[N]}=0\]

and such that for any \(w_{1},\ldots,w_{s-2}\in\Gamma^{\square}/(\Gamma^{\square}\cap[G^{\square},G^{\square}])\) which are orthogonal to the \(\eta_{i}\)'s, we have

\[\|(\zeta+\xi\otimes\bar{\xi})([\tilde{g_{h}}(M_{1}n),w_{1},w_{2},\ldots,w_{s-2}])\|_{C^{\infty}[N]}=0.\]

Letting \(H=G\times_{G_{2}}G\), we see that the horizontal component of \(H\) is \(H/[H,H]\) while the horizontal component of \(G^{\square}=H/G_{k}^{\triangle}\) can be identified with \(H/[H,H]G_{k}^{\triangle}\) (since \([H/G_{k}^{\triangle},H/G_{k}^{\triangle}]=[H,H]G_{k}^{\triangle}/G_{k}^{\triangle}\) by viewing the coset equivalence). Thus, each \(\eta_{j}\) can be identified with a horizontal character on \(H\) with zero \(G_{k}^{\triangle}\) component (if \(G_{k}^{\triangle}\) does not lie inside \([H,H]\)). In addition, identifying the horizontal component of \(\tilde{g_{h}}\) with the projection of the horizontal component of \(g_{h}\) to \(H/[H,H]G_{k}^{\triangle}\), and since the horizontal component can be identified with its Lie algebra, it follows that \(H/[H,H]G_{k}^{\triangle}\) embeds in the horizontal component \(H/[H,H]\). Under this embedding, \((\zeta+\xi\otimes\overline{\xi})([\tilde{g_{h}}(n),w_{1},\ldots,w_{s-2}])\) does not depend on the \(G_{k}^{\triangle}\) component of \(w_{i}\) or of \(\tilde{g_{h}}\), so we may find horizontal characters \(\eta_{1},\ldots,\eta_{r}\) on \(G\times_{G_{2}}G\) such that, for any \(w_{1},\ldots,w_{s-2}\) orthogonal to the \(\eta_{i}\)'s, we have for \(c_{1}(\delta)N\) many \(h\in[N]\) that

\[\|\eta_{i}(\tilde{g_{h}}(M_{1}n))\|_{C^{\infty}[N]}=0\]
\[\|(\zeta+\xi\otimes\bar{\xi})([\tilde{g_{h}}(M_{1}n),w_{1},w_{2},\ldots,w_{s-2}])\|_{C^{\infty}[N]}=0.\]

Writing \(\eta_{i}=\eta_{i}^{1}+\eta_{i}^{2}\) via Lemma A.3, and \(w_{i}=(u_{i}v_{i},u_{i})\), we have

\[\|\eta_{i}^{1}(g(M_{1}n))+\eta_{i}^{2}([g(1)^{M_{1}n},\{g(1)^{h}\}])+\eta_{i}^{2}(g_{2}(M_{1}n+h))-\eta_{i}^{2}(g_{2}(M_{1}n))\|_{C^{\infty}[N]}=0\]
\[\|[\text{deg}_{n}\neq 1\text{ terms}]+\alpha h+\xi([g(1)^{M_{1}n},\{g(1)^{h}\},u_{1},u_{2},\ldots,u_{s-2}])+M_{1}\beta n\|_{C^{\infty}[N]}=0\]

for some \(\alpha,\beta\in\widetilde{\mathbb{Z}/N\mathbb{Z}}\). 
Here, \([\text{deg}_{n}\neq 1\text{ terms}]\) denotes a polynomial in \(n\) and \(h\) with no terms of degree one in the \(n\) variable. By linear algebraic manipulations and Cramer's rule (in particular Lemma A.7), we may assume without loss of generality that the \(\eta_{i}\)'s consist of: vectors orthogonal to the \(u_{j}\)'s with zero \(\eta_{i}^{2}\) component; vectors orthogonal to the \(v_{j}\)'s with zero \(\eta_{i}^{1}\) component; and remaining vectors having both \(\eta_{i}^{1}\) and \(\eta_{i}^{2}\) components, with everything having size bounded by \(c_{1}(\delta)^{-1}\). This implies, via various applications of Vinogradov's lemma and Lemma A.10, that for all \(\eta_{i}^{1}\) orthogonal to all of the \(u_{j}\)'s (and with zero \(\eta_{i}^{2}\) component),

\[\|\eta_{i}^{1}\circ g\|_{C^{\infty}[N]}=0\]
\[\|\alpha h+\xi([g(1)^{M_{1}},\{g(1)^{h}\},u_{1},u_{2},\ldots,u_{s-2}])\|_{\mathbb{R}/\mathbb{Z}}=0\]

for \(c_{1}(\delta)N\) many \(h\). We now shift our focus to the linear dimensions \(G/G_{2}\). Let \(\widetilde{u_{i}}\) be the linear dimension part of \(u_{i}\) and let \(\tilde{\eta}_{i}^{1}\) be the linear part of \(\eta_{i}^{1}\). By Lemma 5.1,

\[\xi([g(1)^{M_{1}n},\{g(1)^{h}\},\widetilde{u_{1}},\widetilde{u_{2}},\ldots,\widetilde{u_{s-2}}])\]

only depends on the linear part of the \(u_{i}\)'s and the linear part of \(g(1)^{n}\) and \(\{g(1)^{h}\}\). Applying Corollary 3.2, there exist horizontal characters \(x_{1},\ldots,x_{r}\) that annihilate \(G_{2}\) and elements \(y_{1},\ldots,y_{d_{lin}-r}\) inside \(\Gamma/(\Gamma\cap G_{2})\) (with \(d_{lin}\) the linear dimension \(\dim(G/G_{2})\)) which are independent, with the \(x_{i}\)'s orthogonal to the \(y_{j}\)'s (i.e. \(x_{i}(y_{j})=0\) for all \(i\) and \(j\)), and such that

\[x_{i}\circ g\equiv 0\pmod{1},\ \ \xi([y_{i},g,\widetilde{u_{1}},\widetilde{u_{2}},\ldots,\widetilde{u_{s-2}}])\equiv 0\pmod{1}.\]

Let

\[\tilde{G}=\{g\in G:x_{\ell}(g)=0,\ [y_{k},g,\widetilde{u_{1}},\widetilde{u_{2}},\ldots,\widetilde{u_{s-2}}]=0\text{ for all such }\widetilde{u_{i}},\ \eta_{j}^{1}(g)=0\text{ for all }j,k,\ell\}.\]

We see that by Cramer's rule (i.e., Lemma A.7), we can pick linearly independent \((u_{i}v_{i},u_{i})\) which span the subspace orthogonal to the \(\eta_{j}\)'s with \(u_{i}\) and \(v_{i}\) having size at most \(c_{1}(\delta)^{-1}\), so \(\tilde{G}\) has complexity at most \(c_{1}(\delta)^{-1}\). We claim that \(\tilde{G}\) is \(s-1\)-step. To see this, letting \(g_{1},\ldots,g_{s}\) be elements in \(\tilde{G}\), we wish to show that

\[[g_{s},g_{s-1},\ldots,g_{1}]=0.\]

Note that this commutator only depends on the linear dimensions of the \(g_{i}\), which by definition and orthogonality can be generated by vectors in the orthogonal complement of the \(\tilde{\eta}_{j}^{1}\)'s, and for \(\widetilde{u_{i}}\) an annihilator of the \(\tilde{\eta}_{j}^{1}\)'s, \((u_{i},u_{i})\) lies inside the annihilator of \(\eta_{j}\). In addition, \(g_{s-1}\) is orthogonal to the \(x_{i}\)'s, so can be generated by the \(y_{i}\)'s. Thus, this amounts to checking that

\[[y_{i},g_{s},\widetilde{u_{1}},\widetilde{u_{2}},\ldots,\widetilde{u_{s-2}}]=0\]

which is true by definition. 
This completes the proof of Case 1: \(\|\eta_{j}\circ g\|_{C^{\infty}[N]}=0\) implies that \(\|\tilde{\eta}_{j}^{1}(g(1))\|_{\mathbb{R}/\mathbb{Z}}=0\), so by Lemma A.1, and since we quotiented out by the kernel of \(\xi\), \(g(n)\Gamma\) is contained, up to a periodic part, in an \(s-1\)-step nilmanifold, and we can write \(g(n)=g_{1}(n)\gamma(n)\) with \(\gamma(n)\Gamma\) being \(P\)-rational for some \(P\leq c_{1}(\delta)^{-1}\). We can then apply Lemma A.8, obtaining characters \(\alpha_{1},\ldots,\alpha_{r}\) such that \(\|\alpha_{i}\circ g(Pk!\cdot)\|_{C^{\infty}[N]}=0\), and since \(Pk!\ll N\) is relatively prime to \(N\), we have \(\|\alpha_{i}\circ g\|_{C^{\infty}[N]}=0\). Furthermore, it follows that if \(\beta_{1},\ldots,\beta_{s-1}\) are elements orthogonal to the \(\alpha_{i}\)'s, then

\[\|\xi([g(Pk!\cdot),\beta_{1},\ldots,\beta_{s-1}])\|_{C^{\infty}[N]}=\|\xi([g_{1}(Pk!\cdot),\beta_{1},\ldots,\beta_{s-1}])\|_{C^{\infty}[N]}=0\]

so

\[\|\xi([g,\beta_{1},\ldots,\beta_{s-1}])\|_{C^{\infty}[N]}=0.\]

### Case 2

Suppose now that

\[\xi([G_{2},G,G,\ldots,G])\neq 0.\]

Our goal for case two is to reduce to case one. In this case, we shall need to replace \(G_{\ell}\) with \(G_{\ell}G_{(s)}\) for \(k\geq\ell\geq s\) so that \(G_{k}\) contains \(G_{(s)}\). Note that this preserves normality and the filtration property since \(G_{(s)}\) lies in the center of \(G\). Fourier expanding \(F_{h}\) via Lemma A.6 and pigeonholing in one of the Fourier coefficients, we may assume (at the cost of replacing \(\delta\) with \(c_{1}(\delta)\) and \(M\) with \(M^{O_{s,k}(1)}\)) that \(F_{h}\) is a \(G_{k}\)-character with frequency \(\xi+\xi^{\prime}\). This time, while \(G\times_{G_{2}}G\) is no longer \(s-1\)-step, \(F_{h}\) now annihilates \((G\times_{G_{2}}G)_{k}=G_{k}^{\triangle}\), so \(F_{h}(g_{h}(n)\Gamma)\) descends to a degree \(k-1\) nilsequence on \(G^{\square}/\Gamma^{\square}\). The vertical torus of \(G^{\square}\) arises from \(G_{(s)}\times_{K}G_{(s)}\) where \(K=[G_{2},G,\ldots,G]\) with the commutator being taken \(s-1\) times, and \(F_{h}\) is a nilcharacter on \(G^{\square}\) of frequency \(\xi\otimes\overline{\xi}\). By our induction on degree and pigeonholing in \(h\), we find \(\eta_{1},\ldots,\eta_{r}\) such that, for any \(w_{1},\ldots,w_{s-1}\in\Gamma^{\square}/(\Gamma^{\square}\cap[G^{\square},G^{\square}])\) orthogonal to the \(\eta_{j}\)'s, we have

\[\|\eta_{i}(\tilde{g_{h}}(M_{1}n))\|_{C^{\infty}[N]}=0\]
\[\|\xi\otimes\bar{\xi}([\tilde{g_{h}}(M_{1}n),w_{1},\ldots,w_{s-1}])\|_{C^{\infty}[N]}=0\]

where \(\tilde{g_{h}}\) is the projection of \(g_{h}\) to \(G^{\square}\). Letting \(H=G\times_{G_{2}}G\), we see that the horizontal component of \(H\) is \(H/[H,H]\) while the horizontal component of \(G^{\square}=H/G_{k}^{\triangle}\) can be identified with \(H/[H,H]G_{k}^{\triangle}\) (since \([H/G_{k}^{\triangle},H/G_{k}^{\triangle}]=[H,H]G_{k}^{\triangle}/G_{k}^{\triangle}\) by viewing the coset equivalence). Thus, each \(\eta_{j}\) can be identified with a horizontal character on \(H\) with zero \(G_{k}^{\triangle}\) component (if \(G_{k}^{\triangle}\) does not lie inside \([H,H]\)). In addition, identifying the horizontal component of \(\tilde{g_{h}}\) with the projection of the horizontal component of \(g_{h}\) to \(H/[H,H]G_{k}^{\triangle}\), and since the horizontal component can be identified with its Lie algebra, it follows that \(H/[H,H]G_{k}^{\triangle}\) embeds in the horizontal component \(H/[H,H]\). 
Under this embedding, \(\xi\otimes\bar{\xi}([\tilde{g_{h}}(n),w_{1},\ldots,w_{s-1}])\) does not depend on the \(G_{k}^{\triangle}\) component of \(w_{i}\) or of \(\tilde{g_{h}}\), so we may find \(\eta_{1},\ldots,\eta_{r}\) which are horizontal characters on \(G\times_{G_{2}}G\) such that, for any \(w_{1},\ldots,w_{s-1}\) orthogonal to the \(\eta_{j}\)'s, we have for \(c_{1}(\delta)N\) many \(h\in[N]\) that

\[\|\eta_{i}(g_{h}(M_{1}n))\|_{C^{\infty}[N]}=0\]
\[\|\xi\otimes\bar{\xi}([g_{h}(M_{1}n),w_{1},\ldots,w_{s-1}])\|_{C^{\infty}[N]}=0.\]

Decomposing the \(\eta_{i}\)'s as in Lemma A.3, we obtain

\[\|\eta_{i}^{1}(g(M_{1}n))+\eta_{i}^{2}([g(1)^{M_{1}n},\{g(1)^{h}\}])+\eta_{i}^{2}(g_{2}(M_{1}n+h))-\eta_{i}^{2}(g_{2}(M_{1}n))\|_{C^{\infty}[N]}=0.\]

We choose (via Cramer's rule or Lemma A.7) linearly independent \((\tilde{u_{i}}\tilde{v_{i}},\tilde{u_{i}})\) of size at most \(c_{1}(\delta)^{-1}\) to be vectors in \(G^{\square}\) that are orthogonal to the \(\eta_{i}\)'s and span the subspace orthogonal to them. Separating \(\tilde{v}_{i}=\tilde{v}_{i}^{1}\tilde{v}_{i}^{2}\pmod{[G,G_{2}]}\) where \(\tilde{v}_{i}^{1}=\psi_{horiz}(v_{i})\), we see, using Lemma A.7, that we may replace the \(\eta_{i}\)'s with vectors orthogonal to \((\tilde{u_{i}}\tilde{v_{i}},\tilde{u_{i}})\) with the property that a subset of them contains \((\eta_{i}^{1},\eta_{i}^{2,1})\), that \((\eta_{i}^{1},\eta_{i}^{2,1})\) annihilates all of \((\tilde{u_{i}}\tilde{v_{i}}^{1},\tilde{u_{i}})\), that \(\eta_{i}^{2,1}\) is a horizontal character on \(G_{2}\) that annihilates \([G,G]\), that \((\tilde{u_{i}},\tilde{v}_{i}^{1})\) and \((\eta_{i}^{1},\eta_{i}^{2,1})\) span the horizontal components of \(G\times G_{2}\) inside \(G\times G\), and that each of the \(\eta_{i}\)'s has size at most \(c_{1}(\delta)^{-1}\). Vinogradov's lemma (applied to the constant term in the polynomial in \(h\)) and the fact that \(N\) is prime give us that, for \(\eta_{i}=(\eta_{i}^{1},\eta_{i}^{2,1})\),

\[\|\eta_{i}^{1}(g)\|_{C^{\infty}[N]}=0.\]

The second hypothesis rearranges to (observing that the \([g(1)^{n},\{g(1)^{h}\}]\) term disappears under \(s-1\) commutators), for \(c_{1}(\delta)N\) many \(h\in[N]\) and for some horizontal character \(\zeta\) (corresponding to the \(2^{s-1}-1\) leftover terms containing the \(v_{i}\)'s in the commutators),

\[\|\zeta(g(M_{1}n))+\xi([g_{2}(M_{1}n+h)g_{2}(M_{1}n)^{-1},\widetilde{u_{j_{1}}}\widetilde{v_{j_{1}}},\ldots,\widetilde{u_{j_{s-1}}}\widetilde{v_{j_{s-1}}}])\|_{C^{\infty}[N]}=0\]

for each \(j:\{1,\ldots,s-1\}\to\{1,\ldots,d^{\prime}-r\}\) where \(d^{\prime}\) is the dimension of the horizontal torus of \(G\times_{G_{2}}G\). Applying Vinogradov's lemma and using the fact that \(N\) is prime (so that we can eliminate the binomial coefficient and the power of \(M_{1}\) in front of each coefficient when expanding out \(g_{2}(M_{1}n+h)g_{2}(M_{1}n)^{-1}\) in coordinates) yields

\[\|\xi([g_{2},\widetilde{u_{j_{1}}}\widetilde{v_{j_{1}}},\ldots,\widetilde{u_{j_{s-1}}}\widetilde{v_{j_{s-1}}}])\|_{C^{\infty}[N]}=0.\]

We now define

\[\tilde{G}=\{g\in G:\eta_{i}^{1}(g)=0\ \forall i\text{ such that }\eta_{i}=(\eta_{i}^{1},\eta_{i}^{2,1})\}\]
\[\tilde{G}_{2}=\{g\in\tilde{G}:[g,\widetilde{u_{j_{1}}}\widetilde{v_{j_{1}}},\widetilde{u_{j_{2}}}\widetilde{v_{j_{2}}},\ldots,\widetilde{u_{j_{s-1}}}\widetilde{v_{j_{s-1}}}]=0\ \forall j\}.\]

We claim that \([\tilde{G}_{2},\tilde{G},\tilde{G},\ldots,\tilde{G}]=0\) where the commutator is taken \(s-1\) times. 
This amounts to showing that for any \(g_{1},\ldots,g_{s-1}\in\tilde{G}\) and any \(h\in\tilde{G}_{2}\),

\[[h,g_{1},\ldots,g_{s-1}]=0.\]

This follows from the fact that \((x,x)\) for \(x\in\tilde{G}\) is orthogonal to all of the \(\eta_{i}\) of the form \((\eta_{i}^{1},\eta_{i}^{2,1})\) and so can be generated by the \((\tilde{u}_{j}\tilde{v}_{j}^{1},\tilde{u}_{j})\)'s modulo \([G,G]^{2}\) (noting that \([G,G]^{2}\) contains \([G\times_{G_{2}}G,G\times_{G_{2}}G]\) and is a normal subgroup of \(G\times_{G_{2}}G\)). We then use the fact that the above expression only depends on the horizontal components of the \(\tilde{v}_{j}\)'s. Using Lemma A.1 and Lemma A.2, we write \(g(n)\Gamma=g_{1}(n)\gamma(n)\Gamma\) with \(g_{1}\in\tilde{G}\) with the filtration \(\tilde{G}_{i}=\tilde{G}_{2}\cap G_{i}\) for \(i\geq 2\). We can then apply Case 1 and Lemma A.8 to \(g_{1}(Pk!n)\) (with \(P\) the period of \(\gamma\)), obtaining characters \(\alpha_{1},\ldots,\alpha_{r}\) such that \(\|\alpha_{i}\circ g(Pk!\cdot)\|_{C^{\infty}[N]}=0\), and since \(Pk!\) is relatively prime to \(N\), we have \(\|\alpha_{i}\circ g\|_{C^{\infty}[N]}=0\). Furthermore, it follows that if \(\beta_{1},\ldots,\beta_{s-1}\) are elements orthogonal to the \(\alpha_{i}\)'s, then

\[\|\xi([g(Pk!\cdot),\beta_{1},\ldots,\beta_{s-1}])\|_{C^{\infty}[N]}=\|\xi([g_{1}(Pk!\cdot),\beta_{1},\ldots,\beta_{s-1}])\|_{C^{\infty}[N]}=0\]

so

\[\|\xi([g,\beta_{1},\ldots,\beta_{s-1}])\|_{C^{\infty}[N]}=0.\]

### Deducing the main theorem

Finally, to deduce Theorem 3 from Theorem 9, we quotient out by the kernel of \(\xi\) and apply Lemma 2.2 to assume that \(g(0)=1\). We then apply Lemma A.1 to the horizontal characters obtained from Theorem 9 to obtain \(g(n)=\epsilon g_{1}(n)\gamma(n)\) where \(\epsilon\) is constant, \(g_{1}\) lies on a \(\leq s-1\)-step nilmanifold \(G_{1}/\Gamma_{1}\) with rationality and complexity at most \((\delta/M)^{-O_{s,k}(d)^{O_{s,k}(1)}}\), and \(\gamma\) is \((\delta/M)^{-O_{s,k}(d)^{O_{s,k}(1)}}\)-rational. By making a change of variables \(n\equiv Pk!m\pmod{N}\), we obtain

\[F(g(n)\Gamma)=\tilde{F}(\epsilon g_{1}(Pk!m)\Gamma_{1})\]

where \(\tilde{F}\) is the restriction of \(F\) to \(\tilde{G}_{1}\). Making another change of variables \(Q\equiv(Pk!)^{-1}\pmod{N}\), we may write

\[F(g(n)\Gamma)=\tilde{F}(\epsilon g_{1}(Pk!Qn)\Gamma_{1}).\]

Thus, \(\tilde{F}(\epsilon g_{1}(Pk!Qn)\Gamma_{1})\) satisfies the required conclusions of Theorem 3 and we are done. 

_Remark_.: The reader may have picked up on the fact that our argument isn't sharp, in the sense that there is information gained during the proof that was not fully used. For instance, in the proof of Case 1, we only fully used the \(\eta_{i}\)'s which had zero \(\eta_{i}^{2}\) component. Thus, one can speculate whether one can induct on even more quantities than we did here to obtain a qualitatively stronger result with good quantitative bounds. We did not attempt to do so here, since these improvements aren't relevant to us in obtaining an equidistribution theory with losses single exponential in dimension, but it could be possible that understanding the equidistribution theory of nilsequences better could lead to a better understanding of other problems in higher order Fourier analysis such as the inverse theorem. 

## 6. A Ratner-type factorization theorem for a single observable

To illustrate the quantitative strength of our main theorem, we iterate it to deduce an analogous Ratner-type factorization theorem for a single observable. 
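Before the statement, it may help to keep in mind the abelian toy case (recorded only as a heuristic): for \(G/\Gamma=\mathbb{R}/\mathbb{Z}\), \(g(n)=\alpha n\), and \(F\) Lipschitz, truncating the Fourier series (e.g., via Fejér summation) gives

\[F(\alpha n)=\sum_{|j|\leq J}c_{j}e(j\alpha n)+O\left(\frac{\log J}{J}\|F\|_{Lip}\right),\qquad|c_{j}|\leq|\widehat{F}(j)|,\]

and each term \(c_{j}e(j\alpha n)\) is either highly equidistributed on scale \(N\) (when \(\|j\alpha\|_{\mathbb{R}/\mathbb{Z}}\) is large compared to \(1/N\)) or essentially periodic. The theorem below is a nilsequence analogue of this decomposition.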
We do want to emphasize, though, that the actual Ratner-type factorization theorem does at least appear significantly stronger and more flexible than the theorem we prove below, though it remains to be seen whether, for the sake of applications, the theorem below can be used as a substitute for the Ratner-type factorization theorem. We need the following definition before we state our result:

**Definition**.: We say that a nilsequence \(F(g(n)\Gamma)\) is \(\delta\)-equidistributed on scale \(N\) if \[\left|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)-\int_{G/\Gamma}Fd\mu\right|<\delta\|F\|_{Lip(G)}.\]

**Theorem 10**.: _For any periodic nilsequence \(F(g(n)\Gamma)\) of Lipschitz parameter \(\leq 1\), degree \(k\), complexity \(M_{0}\), dimension \(d\), and step \(s\), and \(\delta>0\), \(A\geq 100\), there exists some \(M\) with \(M_{0}\leq M\leq(M_{0}/\delta)^{O(Ad)^{O(1)}}\) such that_ \[F(g(n)\Gamma)=\sum_{i=1}^{L}F_{i}(g_{i}(n)\Gamma_{i})+h(n)\] _where \(L\leq M\), each \(g_{i}\) is a polynomial sequence in \(G_{i}\), which is the projection of some subgroup of \(G\) and has complexity \(\leq M\), each \(F_{i}(g_{i}(n)\Gamma_{i})\) is \(M^{-A}\)-equidistributed in \(G_{i}/\Gamma_{i}\), and \(\|h\|_{L^{\infty}[N]}\leq\delta\)._

This theorem can be thought of as a "Fourier approximation of a nilsequence."

Proof.: Let \(1/M_{0}^{A}=\delta_{1}>\delta_{2}>\cdots\) and \(M_{0}\leq M_{1}\leq M_{2}\leq\dots\) be sequences of parameters to be specified later on. Next, we invoke Lemma 2.2 to assume that \(g(0)=1\) and \(|\psi(g(1))|\leq\frac{1}{2}\). If \(F(g(n)\Gamma)\) is \(M_{0}^{-A}\)-equidistributed, then we are done. Otherwise, by Lemma A.6, we may Fourier approximate \[F(g(n)\Gamma)=\sum_{i=1}^{L_{1}}F_{i}(g(n)\Gamma)+O_{L^{\infty}}(\delta)\] with \(L_{1}\leq(\delta/M_{0})^{-O(Ad)}\). Applying our main theorem to each \(F_{i}(g(n)\Gamma)\), we see that they are all either \(\delta_{1}^{2}\)-equidistributed or there exists a factorization \(g(n)=g_{i}(n)\gamma_{i}(n)\). We obtain \[F(g(n)\Gamma)=\sum_{i=1}^{L_{1}}F_{i}(g_{i}(n)\gamma_{i}(n)\Gamma)+O_{L^{\infty}}(\delta).\] For each term \(F_{i}(g_{i}(n)\gamma_{i}(n)\Gamma)\), we make a change of variables \(F_{i}(g_{i}(Pn)\Gamma)=F_{i}(g_{i}(Pn)\Gamma_{i})\) where \(\Gamma_{i}\) is the lattice of the subgroup that \(g_{i}\) lies in and \(P\) is the period of \(\gamma_{i}\), everything having complexity and rationality bounded by \(M_{1}\leq(\delta_{1}/M_{0})^{-O(Ad)^{O(1)}}\). We then take another Fourier expansion of \(F_{i}\) in the vertical direction in \(G_{i}\) to obtain \[F_{i}(x\Gamma_{i})=\sum_{j=1}^{(\delta/M_{1})^{-O(d)}}F_{i,j}(x\Gamma_{i})+O_{L^{\infty}}((\delta/M_{1})^{2A}).\] We can then apply our main theorem to \(F_{i,j}(g_{i}(Pn)\Gamma_{i})\) with equidistribution parameter \(\delta_{2}:=(\delta_{1}/M_{1})^{2A}\). Iterating this procedure \(O_{s,k}(1)\) many times gives the desired result.

## Appendix A Auxiliary Lemmas

In this section, we shall state auxiliary lemmas we use in the proof of our main theorem. Most of these results come from [1]. The first two lemmas are [1, Proposition 9.2] and [1, Lemma 7.9], respectively.

**Lemma A.1** (Factorization lemma I).: _Let \(g(n)\Gamma\) be a periodic nilsequence of step \(s\), degree \(k\), dimension \(d\), and complexity \(M\) and suppose \(\eta_{1},\dots,\eta_{r}\) are a set of linearly independent nonzero horizontal characters of size at most \(L\). Suppose \(\|\eta_{i}\circ g\|_{C^{\infty}[N]}=0\) for each character \(i\)._
_Then we may write \(g(n)=\epsilon(n)g_{1}(n)\gamma(n)\) where \(\epsilon(n)\) is constant, \(g_{1}(n)\) is a periodic nilsequence in_ \[\tilde{G}=\bigcap_{i=1}^{r}\ker(\eta_{i})\] _which has complexity at most \(M(dL)^{O_{s,k}(r)}\), \(\gamma\) is \(O((dL)^{O(r)})\)-rational, and \(g_{1}(0)=\gamma(0)=1\)._

_Remark_.: The important point here is that losses are at most single exponential in dimension.

Proof.: By setting \(\epsilon(n)=g(0)\), we may work with the assumption that \(g(0)=1\). We may thus write in coordinates that \[\psi(g(n))=\sum_{i}\binom{n}{i}t_{i}\] where \(t_{i}\) are vectors representing the coordinates of the degree \(i\) component of \(g\) in Mal'cev coordinates. By Cramer's rule, we may pick rational vectors \(v_{i}\) with denominator at most \((dL)^{O(r)}\) such that \(\eta_{j}\cdot v_{i}=\eta_{j}\cdot t_{i}\) for all \(i,j\) and such that the nonlinear components of the \(v_{i}\) are zero. We define \(\gamma(n)\) to be the preimage under \(\psi\) of \(\sum_{i}\binom{n}{i}v_{i}\). Thus, the polynomial sequence \(g_{1}(n):=g(n)\gamma(n)^{-1}\) lies inside \(\tilde{G}\) and by construction \(\gamma(n)\Gamma\) is \((dL)^{O(r)}\)-periodic.

**Lemma A.2** (Factorization lemma II).: _Let \(g(n)\Gamma\) be a periodic nilsequence of step \(s\), degree \(k\), dimension \(d\), and complexity \(M\) and suppose \(\eta_{1},\dots,\eta_{r}\) are a set of linearly independent nonzero horizontal characters on \(G_{2}\) which annihilate \([G,G_{2}]\) of size at most \(L\). Suppose \(\|\eta_{i}\circ g_{2}\|_{C^{\infty}[N]}=0\) for each character \(i\). Then we may write \(g(n)=g_{1}(n)\gamma(n)\) where \(g_{1}(n)\) is a periodic nilsequence with nonlinear part in_ \[\tilde{G}_{2}=\bigcap_{i=1}^{r}\ker(\eta_{i})\] _which has complexity at most \(M(dL)^{O_{s,k}(r)}\), \(\gamma\) is \(O((dL)^{O(r)})\)-rational, and \(g_{1}(0)=\gamma(0)=1\)._

Proof.: We write in Mal'cev coordinates that \[\psi_{G_{2}}(g_{2}(n))=\sum_{i\geq 2}\binom{n}{i}t_{i}\] where \(t_{i}\) are vectors representing the coordinates of the degree \(i\) component of \(g\) in Mal'cev coordinates. By Cramer's rule, we may pick rational vectors \(v_{i}\) in \(G_{2}\) with denominator at most \((dL)^{O(r)}\) such that \(\eta_{j}\cdot v_{i}=\eta_{j}\cdot t_{i}\) for all \(i,j\) and such that the nonlinear components of the \(v_{i}\) are zero. We define \(\gamma(n)\) to be the preimage under \(\psi\) of \(\sum_{i}\binom{n}{i}v_{i}\). Thus, the polynomial sequence \(g_{2}^{\prime}(n):=g_{2}(n)\gamma(n)^{-1}\) lies inside \(\tilde{G}_{2}\) and by construction \(\gamma(n)\Gamma\) is \((dL)^{O(r)}\)-rational. Defining \(g_{1}(n)=g(1)^{n}g_{2}^{\prime}(n)\), we obtain the desired properties.

Let \(G\) be a nilpotent Lie group of step \(s\) with the finite degree \(k\) filtration \(G_{i}\) satisfying \([G_{i},G_{j}]\subseteq G_{i+j}\). Suppose \(G\) has complexity \(M\) and dimension \(d\). In many of the proofs above, we work with the group \(G\times_{G_{2}}G\). The next lemma is a lemma regarding properties of this group.

**Lemma A.3** (Properties of \(G^{\square}\)).: _The following properties hold for \(G\times_{G_{2}}G\):_

* \((G\times_{G_{2}}G)_{i}=G_{i}\times_{G_{i+1}}G_{i}\) _forms a filtration of_ \(G\times_{G_{2}}G\)_._
* _If_ \(G\) _has complexity_ \(M\)_, then_ \(G\times_{G_{2}}G\) _has complexity at most_ \((dM)^{O_{s}(1)}\)_._
* _Furthermore, if_ \(F\) _is a Lipschitz function on_ \(G\) _with norm_ \(L\) _and we define_ \(F^{\square}(x,y)=F(gx)\overline{F}(y)\) _where_ \(d_{G}(g,e)\leq 1\)_, then_ \(F^{\square}\) _has Lipschitz parameter at most_ \((dML)^{O_{s,k}(1)}\)_.
_Furthermore, if_ \(F\) _is a nilcharacter of frequency_ \(\xi\)_, then_ \(F^{\square}\) _annihilates_ \(G_{k}^{\triangle}\)_._

* _For each horizontal character_ \(\eta\) _on_ \(G\times_{G_{2}}G\)_, we may decompose_ \(\eta\) _uniquely as_ \[\eta(g^{\prime},g)=\eta_{1}(g)+\eta_{2}(g^{\prime}g^{-1})\] _where_ \(\eta_{1}\) _is a horizontal character on_ \(G\)_,_ \(\eta_{2}\) _is a horizontal character on_ \(G_{2}\) _which annihilates_ \([G,G_{2}]\)_. Furthermore, if_ \(|\eta|\) _is bounded by_ \(K\)_, then_ \(|\eta_{1}|,|\eta_{2}|\) _are bounded above by_ \(K(dM)^{O_{s,k}(1)}\)_._

Proof.: These properties will ultimately follow from [11, Proposition 7.2, Lemma 7.4, Lemma 7.5]. For convenience to the reader, we will sketch out an argument here. For the first point, note that if \((g_{i},g_{i+1}g_{i})\in(G\times_{G_{2}}G)_{i}\) and \((h_{j},h_{j+1}h_{j})\in(G\times_{G_{2}}G)_{j}\), then \[[(g_{i},g_{i+1}g_{i}),(h_{j},h_{j+1}h_{j})]=([g_{i},h_{j}],[g_{i+1}g_{i},h_{j+1}h_{j}])\] and various commutator identities show that \[[g_{i+1}g_{i},h_{j+1}h_{j}]=[g_{i},h_{j}][g_{i+1},h_{j}][g_{i},h_{j+1}][g_{i+1},h_{j+1}]=[g_{i},h_{j}]\pmod{G_{i+j+1}}.\] Hence, \((G\times_{G_{2}}G)_{i}\) forms a filtration. To show the second point, denoting \(\{X_{1},\ldots,X_{d}\}\) as the Mal'cev basis, consider \[\{(X_{1},0),(0,X_{1}),\ldots,(X_{d},0),(0,X_{d})\}.\] This is a Mal'cev basis for \(G/\Gamma\times G/\Gamma\), and by Cramer's rule, \(G\times_{G_{2}}G\) is \((dM)^{O(d)}\)-rational with respect to this basis. By [11, Proposition A.10], it follows that there exists a Mal'cev basis on \(G\times_{G_{2}}G\) which is a \((dM)^{O_{s,k}(d)}\)-rational combination of the \((X_{i},X_{j})\). For the third point, note that \(F^{\square}\) restricted to \(G/\Gamma\times G/\Gamma\) has Lipschitz constant \(L^{2}\). We see that \(G\times_{G_{2}}G\) has rationality \((dM)^{O_{s,k}(1)}\), so if \(x,y\in G\times_{G_{2}}G\), then \(d_{G/\Gamma\times G/\Gamma}(x,y)\leq(dM)^{O_{s,k}(1)}d_{G\times_{G_{2}}G}(x,y)\). The annihilation claim in the third point follows from \(F(gg_{s}x)\overline{F}(g_{s}y)=F(gx)\overline{F}(y)\) for \(g_{s}\in G_{(s)}\). Finally, for the fourth point, we define \(\eta_{1}(g)=\eta(g,g)\) and \(\eta_{2}(h)=\eta(h,1)\). Since \(\eta\) annihilates \([G\times_{G_{2}}G,G\times_{G_{2}}G]\), which contains \([G^{\triangle},G^{\triangle}]=[G,G]^{\triangle}\) and \([G_{2}\times 1,G^{\triangle}]=[G_{2},G]\times 1\), we see that \(\eta_{2}\) must annihilate \([G,G_{2}]\) and \(\eta_{1}\) must annihilate \([G,G]\). We also see that since \(\eta\) annihilates \(\Gamma\times_{G_{2}\cap\Gamma}\Gamma\), which contains both \(\Gamma^{\triangle}\) and \((\Gamma\cap G_{2})\times 1\), \(\eta_{1}\) annihilates \(\Gamma\) and \(\eta_{2}\) annihilates \(\Gamma\cap G_{2}\). To check the boundedness conditions, we see that the Mal'cev coordinates of \(G\times_{G_{2}}G\) are rational combinations of those of \(G\times G\) with coefficients that have denominator at most \((dM)^{O_{s,k}(1)}\). It follows that \(\eta_{1},\eta_{2}\) are bounded by \(K(dM)^{O_{s,k}(1)}\).

The next lemma is another auxiliary lemma used in the proof of the main theorem above. In that setting, we have that \(g(n)\) is a polynomial sequence in a filtered nilpotent Lie group. For \(h\in\mathbb{Z}\), we define \(g_{h}(n)=(\{g(1)^{h}\}^{-1}g_{2}(n+h)g(1)^{-n}\{g(1)^{h}\},g(n))\) and the filtered nilpotent Lie group \((G\times_{G_{2}}G)_{i}=G_{i}\times_{G_{i+1}}G_{i}\).
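Before stating the next lemma, it may help to keep a concrete instance of \(G\times_{G_{2}}G\) in mind. The following is a standard example (our own illustration, not used elsewhere in the argument). Take \(G\) to be the Heisenberg group with the standard filtration \(G=G_{1}\supseteq G_{2}=[G,G]\supseteq G_{3}=\{1\}\): \[G=\left\{\begin{pmatrix}1&x&z\\ 0&1&y\\ 0&0&1\end{pmatrix}:x,y,z\in\mathbb{R}\right\},\qquad G_{2}=\{x=y=0\}.\] Then \(G\times_{G_{2}}G=\{(g,g^{\prime})\in G\times G:g^{\prime}g^{-1}\in G_{2}\}\) consists of pairs agreeing in the coordinates \(x,y\) with independent central coordinates \(z,z^{\prime}\), so it is four-dimensional, and its filtration is \((G\times_{G_{2}}G)_{2}=G_{2}\times_{G_{3}}G_{2}=G_{2}^{\triangle}\). This is consistent with the filtration property: writing \(g^{\prime}=gz\) and \(h^{\prime}=hw\) with \(z,w\in G_{2}\) central, we have \([gz,hw]=[g,h]\), so \[[(g,g^{\prime}),(h,h^{\prime})]=([g,h],[g,h])\in G_{2}^{\triangle}.\]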
**Lemma A.4**.: _The sequence \(g_{h}(n)\) is a polynomial sequence in \(G_{i}\times_{G_{i+1}}G_{i}\)._

Proof.: This will once again follow from [11, Proposition 7.2], which uses the fact that \(\operatorname{poly}(\mathbb{Z},G\times_{G_{2}}G)\) is a group and is a normal subgroup of \(\operatorname{poly}(\mathbb{Z},G\times G)\). Since \((\{g(1)^{h}\},1)\) lies inside \(G\times G\), it follows that \(g_{h}(n)\) is a polynomial sequence if \((g_{2}(n+h)g(1)^{-n},g_{2}(n)g(1)^{-n})\) is a polynomial sequence. Since \((g(1)^{-n},g(1)^{-n})\) is a polynomial sequence in \(G\times_{G_{2}}G\), it suffices to show that \((g_{2}(n+h),g_{2}(n))\) is a polynomial sequence in \(G\times_{G_{2}}G\). This amounts to checking that if \(g_{i}\in G_{i}\), then \((g_{i}^{\binom{n+h}{i}},g_{i}^{\binom{n}{i}})\) is a polynomial sequence in \(G\times_{G_{2}}G\). Taking \(j\) derivatives, this amounts to checking that \((g_{i}^{\binom{n+h}{i-j}},g_{i}^{\binom{n}{i-j}})\) is a polynomial sequence in \(G_{j}\times_{G_{j+1}}G_{j}\). For \(j\geq i\), this becomes the statement that \((1,1)\) is a polynomial sequence, which is true. For \(j<i\), this follows from the fact that \(g_{i}\in G_{i}\).

The next lemma is a quantitative Fourier expansion lemma for abelian groups, which will be useful for the lemma that follows it:

**Lemma A.5** (Fourier/Fejer Expansion lemma).: _Let \(f\colon\mathbb{T}^{d}\to\mathbb{C}\) be a continuous function with Lipschitz parameter at most \(L\), meaning that \(\|f\|_{L^{\infty}(\mathbb{T}^{d})}+\|f\|_{Lip(\mathbb{T}^{d})}\leq L\). Then we may write_ \[f=\sum_{i=1}^{k}a_{i}e(n_{i}x)+g\] _where \(\sum_{i=1}^{k}|a_{i}|\leq C^{d^{2}}L\delta^{-2d^{2}-d}\) and \(\|g\|_{\infty}\leq 3\delta\)._

Proof.: Let \(\phi\colon\mathbb{R}\to\mathbb{R}_{\geq 0}\) be a smooth compactly supported function supported in \([-1,1]\) and with integral one. Let \(Q_{\delta}(x)=\prod_{i=1}^{d}\delta^{-1}\phi(x_{i}/\delta)\) and let \(K=Q_{\delta}*Q_{\delta}\) be a Fejer-type kernel. Since \(|\hat{\phi}(\xi)|\ll_{k}|\xi|^{-k}\), we have \[|\hat{K}(\xi)|\leq C^{d}\delta^{2d}|\xi|^{-2}\] for some constant \(C\), so the Fourier coefficients of \(K\) with frequency larger than \(M\) contribute at most \(C^{d}M^{-1}\delta^{2d}\). In addition, for \(M=C^{-d}\delta^{-2d-1}\|f\|_{Lip}^{-1}\), \[\|f-f*K\|_{\infty}\leq\int|f(x)-f(y)|K(x-y)dy\leq\int|z|K(z)dz\leq 2\delta\] since \(K\) has integral one and is supported on \(|x|\leq 2\delta\). Set \[h(x)=\sum_{k\in\mathbb{Z}^{d}:|k|\leq M}\hat{f}(k)\hat{K}(k)e(kx).\] Then by the Fourier inversion formula, it follows that \[\|h-f*K\|_{\infty}\leq\delta.\] Thus, \(\|h-f\|_{\infty}\leq 3\delta\). The sum of the absolute values of the Fourier coefficients of \(h\) is at most \(C^{d^{2}}L\delta^{-2d^{2}-d}\).

The next lemma shows that we may Fourier expand a Lipschitz function on \(G/\Gamma\) into nilcharacters:

**Lemma A.6** (Quantitative Fourier Expansion Lemma).: _Let \(G/\Gamma\) be a nilmanifold with dimension \(d\), complexity \(M\), degree \(k\), and step \(s\). Let \(F:G/\Gamma\to\mathbb{C}\) be a Lipschitz function with Lipschitz constant \(L\). Then we may Fourier expand_ \[F(x)=\sum_{|\xi|\leq(\delta/L)^{-O(d)}}F_{\xi}(x)+O(\delta)\] _where \(F_{\xi}\) is a nilcharacter of frequency \(\xi\). More generally, given a connected and simply connected subgroup \(H\) of the center of \(G\) with rationality bounded by \(M\), we can Fourier expand_ \[F(x)=\sum_{|\xi|\leq(\delta/L)^{-O(d)}}F_{\xi}(x)+O(\delta)\] _where \(F_{\xi}\) has vertical frequency \(\xi\) with respect to \(H\)._

Proof.: Let \(K\) denote the kernel constructed before, adapted to \(G_{(s)}\).
We define \[\tilde{F}(x)=\int_{G_{(s)}/(\Gamma\cap G_{(s)})}F(g_{s}x)K(g_{s})dg_{s}.\] Write \(G_{(s)}\cong\mathbb{R}^{d_{s}}/\mathbb{Z}^{d_{s}}\). For \(M=C^{-d}\delta^{-2d-1}\|F\|_{Lip}^{-1}\), we have \[\|F-\tilde{F}\|_{\infty}\leq\int_{\mathbb{T}^{d_{s}}}|F(gx)-F(hx)|K(g-h)dh\leq\int_{\mathbb{T}^{d_{s}}}|h|K(h)dh\leq 2\delta\] since \(K\) has integral \(1\) and is supported on \(|x|_{G_{(s)}}\leq 2\delta\). Let \[G(x)=\sum_{k\in\mathbb{Z}^{d_{s}},|k|\leq M}\hat{F}(k)\hat{K}(k)e(kx).\] Then by Fourier inversion, it follows that \[\|G-\tilde{F}\|_{\infty}\leq\delta.\] Thus, \(\|G-F\|_{\infty}\leq 3\delta\), and the first part of the lemma follows from this. For the second part of the lemma, we have by the rationality bounds that the Lipschitz constant of \(F\) is bounded by \(ML\). We can then follow the proof of the first part of the lemma.

The following linear algebraic lemma is used often in our paper:

**Lemma A.7** (Corollary of Cramer's rule).: _Let \(v_{1},\ldots,v_{r}\) be integral vectors in \(\mathbb{R}^{d}\) of size at most \(M\geq 2\). Then there exist integral \(\eta_{1},\ldots,\eta_{d-r}\) of size at most \((dM)^{O(d)}\) such that \(v_{1},\ldots,v_{r},\eta_{1},\ldots,\eta_{d-r}\) span \(\mathbb{R}^{d}\) and \(\langle v_{i},\eta_{j}\rangle=0\)._

Proof.: This is a simple application of Cramer's rule. Let \(e_{1},\ldots,e_{d}\) be the unit coordinate vectors in \(\mathbb{R}^{d}\). Then there exists a subset, say, \(E=\{e_{j_{1}},\ldots,e_{j_{d-r}}\}\) such that \(\operatorname{span}(E)\oplus\operatorname{span}(v_{1},\ldots,v_{r})=\mathbb{R}^{d}\). Let \(A\) be the matrix whose rows consist of the elements \(v_{1},\ldots,v_{r}\) and \(e_{j_{1}},\ldots,e_{j_{d-r}}\). The matrix \(A^{-1}\) has linearly independent columns, and letting \(\eta_{1},\ldots,\eta_{d-r}\) be the last \(d-r\) columns, we see that \(\langle v_{i},\eta_{j}\rangle=0\). Multiplying each \(\eta_{j}\) by some integer bounded by \((dM)^{O(d)}\) gives the result.

**Lemma A.8**.: _Suppose \(G^{\prime}\) is a subgroup of \(G\) (which is nilpotent of step \(s\)) with rationality \(Q\) such that \(G^{\prime}\) is nilpotent of step \(\leq s-1\). Then there exist horizontal characters \(\eta_{1},\ldots,\eta_{r}\) of \(G\) with size bounded by \(Q^{O(d)^{2}}\) such that \(\eta_{i}(G^{\prime})=0\), and if \(w_{1},\ldots,w_{s}\) are vectors on the horizontal component orthogonal to the \(\eta_{i}\), then \([w_{1},w_{2},\ldots,w_{s-1},w_{s}]=0\)._

Proof.: This is a consequence of the fact that the Lie algebra of \(G^{\prime}\) is a subalgebra of the Lie algebra of \(G\). Let \(\pi_{horiz}:G\to G/[G,G]\) be the projection from \(G\) to the horizontal component. Since \(G^{\prime}\) is \(Q\)-rational, it follows from Cramer's rule (or rather Lemma A.7) that there exist horizontal characters \(\eta_{1},\ldots,\eta_{r}\) of size at most \(Q^{O(d)^{2}}\) such that \[\pi_{\text{horiz}}\left(\bigcap_{i=1}^{r}\ker(\eta_{i})\right)=\pi_{\text{horiz}}(G^{\prime}).\] Furthermore, given vectors \(w_{1},\ldots,w_{s}\) inside \(\bigcap\ker(\eta_{i})\), we see that the expression \[[w_{1},w_{2},\ldots,w_{s-1},w_{s}]\] only depends on the horizontal components of the \(w_{i}\), so we may replace them with elements inside \(G^{\prime}\), in which case the expression above is zero.

We will also need the following two lemmas about periodic nilsequences:

**Lemma A.9**.: _Let \(g(n)\Gamma\) be periodic modulo \(N\) on an abelian nilmanifold with \(N\) larger than the degree of \(g\)._
_Then all of the coordinates of \(g\) are rational with denominator \(N\)._

Proof.: This follows from [T2, Lemma 1.4.1].

**Lemma A.10**.: _Let \(g(n)\Gamma\) be periodic modulo \(N\) with \(N\) larger than \((10^{k}k)!\) where \(k\) is the degree of \(g\). Then the horizontal Mal'cev coordinates of \(g(n)\Gamma\) have denominator \(N\) (i.e., for any horizontal character \(\eta\), \(\eta\circ g\) has denominator dividing \(N\)). In addition, if we define the nonlinear part \(g_{2}(n):=g(n)g(1)^{-n}\), then \(g_{2}((10^{k}k)!n)\) has horizontal (\(G_{2}/[G,G_{2}]\)) Mal'cev coordinates with denominator \(N\)._

_Remark_.: Note that it is not necessarily true that \(g((10^{k}k)!n)\) has all of its coordinates with denominator \(N\), as the following example shows: \[g(n)=\begin{pmatrix}1&a\alpha&a\beta&a\alpha\beta\\ 0&1&0&\beta\\ 0&0&1&\alpha\\ 0&0&0&1\end{pmatrix}^{n}\] with \(a\in\mathbb{Z}\), \(\alpha,\beta\in\widehat{\mathbb{Z}/N\mathbb{Z}}\). One can check that if \(G\) is the group where we replace \(a\alpha,a\beta,\alpha,\beta,a\alpha\beta\) with arbitrary real numbers, and \(\Gamma\) is the subgroup where all of the corresponding entries are integers, then \(g(n)\Gamma\) is \(N\)-periodic if \(\alpha\) and \(\beta\) have denominator \(N\). Note that this example corresponds to the \(N\)-periodic bracket polynomial \(n\mapsto e(a\{\alpha n\}\{\beta n\})\). The example also shows that if we were to add a quadratic part to the upper right corner of \(g\), then in order for the nilsequence to stay \(N\)-periodic, the coefficient of the quadratic part must have denominator \(N\). This is equivalent to saying that if we were to add higher order parts to \(g\), then the horizontal component of \(g_{2}\) must have denominator \(N\), as the lemma claims.

Proof.: The first part of the statement follows from expanding out horizontal Mal'cev coordinates (i.e., projecting to the orthogonal subspace to \([\mathfrak{g},\mathfrak{g}]\) in the Lie algebra with respect to the chosen Mal'cev coordinates) and applying Lemma A.9. For the second part, we work with the joining \(G\times_{G_{2}}G\). For \(h\in\mathbb{N}\), we define \(g_{h}(n)=(\{g(1)^{h}\}^{-1}g_{2}(n+h)g(1)^{-n}\{g(1)^{h}\},g(n))\). It follows that \(g_{h}(n)\tilde{\Gamma}\) is \(N\)-periodic on \(G\times_{G_{2}}G\), and so it is \(N\)-periodic on \(G\times_{G_{2}}G/G_{(s)}^{\triangle}\), and so the horizontal component of \([g(1)^{n},\{g(1)^{h}\}]g_{2}(n+h)g_{2}(n)^{-1}\) is \(N\)-periodic modulo \(1\). Letting the horizontal part of \(g_{2}\) be \[\sum_{i=2}^{k}\alpha_{i}n^{i}\] we see that the horizontal part of \(g_{2}(n+h)g_{2}(n)^{-1}\) is \[\sum_{i=2}^{k}\alpha_{i}\sum_{j=0}^{i-1}n^{j}h^{i-j}\binom{i}{j}=\sum_{i=2}^{k}h^{i}\alpha_{i}+\sum_{i=1}^{k-1}n^{i}\left(\sum_{j=i+1}^{k}\alpha_{j}h^{j-i}\binom{j}{i}\right).\] Furthermore, we claim that the horizontal component of \([g(1),\{g(1)^{h}\}]\) in \(G_{2}/[G,G_{2}]\) has denominator \(N\). To see this, note that it suffices to show that for any character \(\xi:G_{2}\to S^{1}\) which annihilates \([G,G_{2}]\) and \(\Gamma\cap G_{2}\), the value \(\xi([g(1),\{g(1)^{h}\}])\) has denominator \(N\), or equivalently that \(\xi([g(1)^{N},\{g(1)^{h}\}])\equiv 0\pmod{1}\). We may factor \(\{g(1)^{h}\}=g(1)^{h}[g(1)^{h}]^{-1}\) and, using the fact that powers of \(g(1)\) commute with each other, we see that \[\xi([g(1)^{N},\{g(1)^{h}\}])=\xi([g(1)^{N},[g(1)^{h}]^{-1}])=0\] since \([g(1)^{N},[g(1)^{h}]^{-1}]\in\Gamma\cap G_{2}\).
The horizontal component of \([g(1)^{n},\{g(1)^{h}\}]g_{2}(n+h)g_{2}(n)^{-1}\) is \[n\psi_{horiz}([g(1),\{g(1)^{h}\}])+\sum_{i=2}^{k}h^{i}\alpha_{i}+\sum_{i=1}^{k-1}n^{i}\left(\sum_{j=i+1}^{k}\alpha_{j}h^{j-i}\binom{j}{i}\right).\] By Lemma A.9 and the fact that \(\psi_{horiz}([g(1),\{g(1)^{h}\}])\) has denominator \(N\), we see that for each \(i\in[k-1]\), \[\sum_{j=i+1}^{k}\alpha_{j}h^{j-i}\binom{j}{i}\equiv 0\pmod{1/N}\] for all \(h\in\mathbb{N}\). Applying Vinogradov's lemma, it follows that \((10^{k}k)!\alpha_{j}\binom{j}{i}\equiv 0\pmod{1/N}\) for each \(j>i\), whence the lemma follows.

_Remark_.: In fact, if one uses Polya's classification of integer-valued polynomials [P], one can eliminate the factor of \(10^{k}\).

## Appendix B Deduction of the equidistribution lemma for Theorem 5

In this section, we deduce the following, which is a refinement of [1, Lemma 6.1]:

**Lemma B.1**.: _Let \(F(g(n)\Gamma)\) be a two-step periodic nilsequence modulo a prime \(N\) on a two-step nilpotent Lie group with the standard filtration having one dimensional vertical dimension with \(F\) being a nilcharacter with nonzero frequency \(\xi\). If_ \[|\mathbb{E}_{y\in[N]}F(g(P(y))\Gamma)F(g(Q(y))\Gamma)\overline{F(g(P(y)+Q(y))\Gamma)}|\geq\delta,\] _then either \(N\ll(\delta/M)^{-O_{P,Q}(d)^{O_{P,Q}(1)}}\) or there exist \(w_{1},\ldots,w_{r}\in\Gamma/(\Gamma\cap[G,G])\) and horizontal characters \(\eta_{1},\ldots,\eta_{d-1-r}\) such that \(|w_{i}|,|\eta_{j}|\leq(\delta/M)^{-O_{P,Q}(d)^{O_{P,Q}(1)}}\), \(\langle\eta_{j},w_{i}\rangle=0\) for all \(i,j\), and_ \[\|\xi([w_{i},g])\|_{C^{\infty}[N]}=\|\eta_{j}\circ g\|_{C^{\infty}[N]}=0.\]

Proof.: The first part of the argument is similar to the first part of the argument in [1, Lemma 6.1]. Let \(H\) denote the subgroup of \(G^{3}\) consisting of elements \(\{(g_{1},g_{2},g_{3}):g_{1}g_{2}g_{3}^{-1}\in[G,G]\}\). We claim that \([H,H]=[G,G]^{3}\). By definition, we see that for \(h\in[G,G]\), the elements \((1,h,h)\), \((h,1,h)\), and \((h,h,h^{4})\) lie inside \([H,H]\) (the last fact is true because \([(g_{1},g_{1},g_{1}^{2}),(h_{1},h_{1},h_{1}^{2})]=([g_{1},h_{1}],[g_{1},h_{1}],[g_{1},h_{1}]^{4})\)). This yields that \((1,1,h^{2})\) lies inside \([H,H]\), and because of connectedness and simple connectedness, it follows that \((1,1,h)\in[H,H]\). We can verify from there that \([H,H]=[G,G]^{3}\). We were given the polynomial sequence \[F(g(P(y))\Gamma)F(g(Q(y))\Gamma)\overline{F(g(P(y)+Q(y))\Gamma)}\] where \(F\) is a nilcharacter of nonzero frequency \(\xi\). This is a nilcharacter on \(H\) of frequency \((\xi,\xi,-\xi)\). Taking a quotient of \(H\) by the kernel of \((\xi,\xi,-\xi)\), which consists of the elements \((x,y,x+y)\), we obtain that the center is of the form \((x,x,-x)\), with \((x,y,z)\) being projected to \((x+y-z)(1,1,1)\). Let \(H_{1}\) denote the resulting subgroup with one dimensional vertical direction. Applying Theorem 8, we obtain \(w_{1},\ldots,w_{r}\) and \(\eta_{1},\ldots,\eta_{d-r}\) such that \(\langle w_{i},\eta_{j}\rangle=0\), \(\eta_{j}\circ(g(P(y)),g(Q(y)),g(P(y)+Q(y)))\equiv 0\pmod{1}\), and \(\xi([w_{i},(g(P(y)),g(Q(y)),g(P(y)+Q(y)))])\equiv 0\pmod{1}\).
Denoting \(\eta_{j}=(\alpha_{j},\beta_{j})\) and \(w_{i}=(u_{i},v_{i},u_{i}v_{i})\) and the action \(\eta_{j}(w_{i}):=\alpha_{j}(u_{i})+\beta_{j}(v_{i})\), we see that \[\|\xi([v_{i},g(P(y))])+\xi([u_{i},g(Q(y))])\|_{C^{\infty}[N]}\equiv 0\pmod{1}\] \[\|\alpha_{j}(g(P(y)))+\beta_{j}(g(Q(y)))\|_{C^{\infty}[N]}\equiv 0\pmod{1}.\] Since \(P\) and \(Q\) are linearly independent, it follows that there exist some coefficients \(c_{k}x^{k}\), \(c_{\ell}x^{\ell}\) of \(P\), and \(d_{k}x^{k}\) and \(d_{\ell}x^{\ell}\) of \(Q\) such that \(c_{k}d_{\ell}-d_{k}c_{\ell}\neq 0\). Thus, the conditions become \[c_{k}\xi([u_{i},g(1)])+d_{k}\xi([v_{i},g(1)])\equiv 0\pmod{1}\] \[c_{\ell}\xi([u_{i},g(1)])+d_{\ell}\xi([v_{i},g(1)])\equiv 0\pmod{1}\] which implies, since the relevant coefficients have denominator \(N\), which is prime, and \(c_{k}d_{\ell}-d_{k}c_{\ell}\neq 0\), that \(\xi([u_{i},g(1)])\equiv 0\pmod{1}\) and \(\xi([v_{i},g(1)])\equiv 0\pmod{1}\). Similarly, we have \(\alpha_{j}(g(1))\equiv 0\pmod{1}\) and \(\beta_{j}(g(1))\equiv 0\pmod{1}\). Let \(\tilde{G}:=\{g\in G:\xi([v_{i},g])=0,\xi([u_{i},g])=0,\alpha_{j}(g)=0,\beta_{j}(g)=0\ \forall i,j\}\). We claim that \(\tilde{G}\) is abelian, whence the lemma would follow from an application of Lemma A.8. This amounts to showing that for any \(g,h\in\tilde{G}\), we have \([g,h]=1\). For such \(g\), \((g,g)\) is annihilated by \((\alpha_{j},\beta_{j})\), and since \(\alpha_{j}(u_{i})+\beta_{j}(v_{i})=0\), it follows that \((g,g)\) can be written as a combination of the \((u_{i},v_{i})\) modulo \([G,G]^{2}\). It follows that \([(g,g),(h,h)]=1\), and thus \([g,h]=1\).

One can insert the proof of this lemma into the argument of [1], using the Sanders \(U^{3}\) inverse theorem [S] instead of the Green-Tao inverse theorem used there. This would yield quasi-polynomial bounds on the inverse theorem proven there, which would yield Theorem 5. Again, the formal deduction will appear in [1].

## Appendix C Proof of the general refined bracket polynomial lemma

In this section, we shall provide two additional proofs of the refined bracket polynomial lemma, this time adapted for the case where we do not assume that various real numbers have denominator \(N\). The first proof given here appears to be different from the proof given in Section 3, and the second proof given in this section is a generalization of the one given in Section 3. The second proof of the refined bracket polynomial lemma given here proceeds via a more straightforward induction on dimensions and Minkowski's first theorem (or the uncertainty principle; see Footnote 7) but is quite a bit more cumbersome. As such, we shall first describe the main idea of the proof before diving into the details. The reader may want to first assume that \(\alpha\) is rational with denominator \(N\), a large prime. In this setting, it is much clearer that the induction on dimensions closes with bounds single exponential in dimension.

Footnote 7: readers unfamiliar with Minkowski's first theorem can refer to [TV, Theorem 3.28] (or simply do an internet search) for a proof

The key idea which prevents the size of \(a\) from growing too much in the bracket polynomial lemma is that the characters \(\eta\) should, morally speaking, lie close to the direction of \(a\), so that when we project \(a\) onto the directions orthogonal to \(\eta\) to reduce the dimension, the resulting vector actually becomes smaller.
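To illustrate this key idea with a two-line computation (our own illustration, anticipating an estimate carried out in the iteration below): if \(\eta=ta+e\) with \(\|e\|_{\infty}\leq\varepsilon\), then for any index \(j\), \[\eta_{1}a_{j}-\eta_{j}a_{1}=(ta_{1}+e_{1})a_{j}-(ta_{j}+e_{j})a_{1}=e_{1}a_{j}-e_{j}a_{1},\] so \(|\eta_{1}a_{j}-\eta_{j}a_{1}|\leq 2\varepsilon\|a\|_{\infty}\). Thus the vector \(\tilde{a}\) with coordinates \(\tilde{a}_{j}=\eta_{1}a_{j}-\eta_{j}a_{1}\), which arises when the direction of \(\eta\) is projected out, stays under control, and this is what keeps the size parameter from blowing up in the induction on dimensions.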
We can actually prove that \(\eta\) satisfies such a property either by using Minkowski's first theorem (in a similar way as it is used in Polya's orchard problem) or by the uncertainty principle (which states that a function with essential support in a box has essential Fourier support in the dual box) in a Fourier argument with a flavor closer to the argument of Green and Tao. The rest of the proof involves carefully tracking how the constants are used so as to not run into bad behavior. This involves splitting the role of \(\delta\) as in [1, Proposition 5.3] into several different variables and carefully tracking how they iterate under an induction on dimensions. We need the following lemma (see [1, Appendix E] for similar manipulations). Before we state it, we need a definition: we define a notion of _Fourier complexity_ as in [1]. Specifically, we define the \(L^{p}[N]\) \(\delta\)-_Fourier complexity_ (likewise \(L^{p}([N]\times[H])\) \(\delta\)-Fourier complexity) of a function \(f:[N]\to\mathbb{C}\) to be the infimum of all \(L\) such that \[f(n)=\sum_{i}a_{i}e(\xi_{i}n)+g\] where \(\|g\|_{L^{p}[N]}\leq\delta\) and \(\sum_{i}|a_{i}|=L\).

**Lemma C.1** (Fourier complexity lemma).: _Either \(N,H\ll\delta^{-O(1)}\), or else \(e(k_{1}\{\alpha_{1}h\}\{\beta_{1}n\}+k_{2}\{\alpha_{2}h\}\{\beta_{2}n\}+\cdots+k_{d}\{\alpha_{d}h\}\{\beta_{d}n\})\) has \(L^{1}[N\times H]\)-\(\delta\)-Fourier complexity at most \((\delta/2^{d}k)^{-O(d^{2})}\) for \(|k_{i}|\leq k\) integers._

Proof.: Let \[\phi(n,h)=k_{1}\{\alpha_{1}h\}\{\beta_{1}n\}+k_{2}\{\alpha_{2}h\}\{\beta_{2}n\}+\cdots+k_{d}\{\alpha_{d}h\}\{\beta_{d}n\}.\] The function \(e(\phi(n,h))\) resembles a degree one nilsequence on a torus \(\mathbb{T}^{2d}\) except with possible discontinuities at the endpoints \(x_{i}=\frac{1}{2}\pmod{1}\). To remedy this possible issue, we must set ourselves in a position so that the endpoints contribute very little to the \(L^{1}[N\times H]\) norm. In the periodic nilsequences case, this will end up being true since \(N=H\) are prime and the \(\alpha_{i}\) and \(\beta_{i}\) will always have denominator \(N\). For readers only interested in the periodic case, one can multiply \(e(\phi(n,h))\) by a smooth cutoff supported on \([-1/2+(\delta/2^{d}k),1/2-(\delta/2^{d}k)]^{2d}\) of derivative at most \((\delta/2^{d}k)^{-1}\) and Fourier expand everything via Lemma A.5. The terms we ignored by taking the cutoff contribute an \(L^{1}[N\times H]\) norm of at most \(\delta/2^{d-1}\), since \(\{n\in[N]:\|\alpha_{i}n-\frac{1}{2}\|_{\mathbb{R}/\mathbb{Z}}<\delta\}\) has at most \(2\delta N\) many elements (and similarly for each \(\beta_{i}\)), since each \(\alpha_{i}\) has denominator \(N\). Thus, we have obtained the desired Fourier complexity bounds. The reader can skip the rest of the proof, which only applies in the nonperiodic case.

In the general case, we would need an induction on dimensions procedure, so our goal is to show that this does not lead to double exponential losses in parameters. This involves carefully quantifying the procedure in [1, Appendix E] and making sure that losses are only single exponential in dimension. Let \(1>\epsilon>0\) be a quantity to be determined later. If \(\{n\in[N]:\|\alpha_{1}n-\frac{1}{2}\|_{\mathbb{R}/\mathbb{Z}}<\epsilon\}\) has more than \(\sqrt{\epsilon}N\) many elements, then an application of Vinogradov's lemma shows that there exists some integer \(q_{1}\leq\sqrt{\epsilon}^{-1}\) such that \(\|2q_{1}\alpha_{1}\|_{\mathbb{R}/\mathbb{Z}}\ll\frac{\sqrt{\epsilon}}{N}\).
We now divide \([N]\) into progressions of common difference \(2q_{1}\) such that on each progression, \(\{\alpha_{1}n\}\) varies by at most \(\epsilon^{1/2-o(1)}\). This can be done as follows:

* First, divide \([N]\) into progressions of common difference \(2q_{1}\).
* On each progression, we have that \(\alpha_{1}n\) varies by at most \(\sqrt{\epsilon}\) modulo \(1\); however, it's possible that \(\{\alpha_{1}n\}\) "lapped over" \(1/2\) along the progression, so we must divide each progression into two subprogressions, one before \(\{\alpha_{1}n\}\) lapped over \(1/2\) and one after it lapped over \(1/2\).
* The number of progressions we obtain is at most \(4q_{1}\).

Thus, we may divide \([N]=\bigsqcup P_{i}\) into such progressions. We shall want to "prune out" the small progressions and write \([N]=\bigsqcup P_{i}\sqcup E\) where the \(P_{i}\) are sufficiently large and \(E\) is a small error. If \(|P_{i}|\leq\epsilon^{3/2}N\), then we include it in \(E\), and otherwise we do nothing. Thus, we can ensure that \(|P_{i}|\geq\epsilon^{3/2}N\) and \(|E|\leq 4q_{1}\epsilon^{3/2}N\leq 4\epsilon N\). We now iterate in the following manner: if \(\{n\in[N]:\|\alpha_{2}n-\frac{1}{2}\|_{\mathbb{R}/\mathbb{Z}}<\epsilon\}\) has more than \(\sqrt{\epsilon}N\) many elements, then (assuming that \(\epsilon\) is sufficiently small) there exists a progression \(P\) (as in the decomposition above) such that \(\{n\in P:\|\alpha_{2}n-\frac{1}{2}\|_{\mathbb{R}/\mathbb{Z}}<\epsilon\}\) has more than \(\frac{\sqrt{\epsilon}}{2}|P|\) many elements. Writing \(P=q_{1}\cdot[N_{1}]+r_{1}\), it follows that there are more than \(\frac{\sqrt{\epsilon}}{2}N_{1}\) many elements in \(\{n\in[N_{1}]:\|q_{1}\alpha_{2}n+\gamma\|_{\mathbb{R}/\mathbb{Z}}<\epsilon\}\) for some \(\gamma\). Letting \(n\) and \(m\) be two such elements, it follows that \(\|q_{1}\alpha_{2}(n-m)\|_{\mathbb{R}/\mathbb{Z}}\leq\|-\gamma-q_{1}\alpha_{2}m\|_{\mathbb{R}/\mathbb{Z}}+\|q_{1}\alpha_{2}n+\gamma\|_{\mathbb{R}/\mathbb{Z}}\leq 2\epsilon\), so there are more than \(\frac{\sqrt{\epsilon}}{2}N_{1}\) many elements \(n\in[N_{1}]\) such that \(\|q_{1}\alpha_{2}n\|_{\mathbb{R}/\mathbb{Z}}<2\epsilon\). Thus, there exists some \(q_{2}\leq 2\sqrt{\epsilon}^{-1}\) such that \(\|q_{1}q_{2}\alpha_{2}\|_{\mathbb{R}/\mathbb{Z}}\ll\frac{2\sqrt{\epsilon}}{|P|}\). We now continue down a similar path, dividing each \(P_{i}\) into subprogressions instead of dividing \([N]\) into subprogressions, this time obtaining an error set of size at most \(2\sqrt{\epsilon}|P_{i}|\). Under an iteration (making sure to choose \(\epsilon\) so that \(\frac{\sqrt{\epsilon}}{2^{2d}}\gg\epsilon\)), we would obtain at most \((\epsilon/2^{d})^{-O(d)}\) many progressions each of size at least \((\epsilon/2^{d})^{O(d)}N\) and common difference at most \((\epsilon/2^{d})^{-O(d)}\) such that

* either \(n\mapsto\{\alpha_{i}n\}\) or \(h\mapsto\{\beta_{i}h\}\) varies by at most \(\epsilon^{1/2-o(1)}\) on each progression;
* or the set \(\{n\in[N]:\|\alpha_{i}n-\frac{1}{2}\|_{\mathbb{R}/\mathbb{Z}}<\epsilon\}\) has at most \(\sqrt{\epsilon}N\) many elements;

together with an error set \(E\) of size at most \(2^{O(d)}\epsilon N\). For the \(\alpha_{i}\)'s or \(\beta_{i}\)'s that lie in the second case, we introduce a smooth cutoff \(\varphi_{j}\) of derivative at most \(\epsilon^{-1}\) in the dimensions that correspond to those \(\alpha_{i}\)'s or \(\beta_{i}\)'s.
Taking the product of \(e(\phi(n,h))\) and the smooth cutoff and summing over each arithmetic progression, we have obtained a sum of degree one nilsequences with parameter at most \(\epsilon^{-1}\) and dimension at most \(d\) along progressions. We may approximate the terms that vary very little on the subprogressions as a constant function plus an error of at most \(2kd\epsilon^{1/2}\). Fourier expanding the progression via the procedure in [1, Appendix E] and the Lipschitz function via Lemma A.5, we obtain at most \((\epsilon/2^{d})^{-O(d^{2})}\) many terms and an \(L^{1}[N\times H]\) error of at most \(2^{Cd}kd\sqrt{\epsilon}\) for some absolute constant \(C\). Choosing \(\epsilon\) so that \(\delta\geq 2^{Cd}kd\sqrt{\epsilon}\), we see that we have an \(L^{1}[N\times H]\) \(\delta\)-Fourier complexity of at most \((\delta/2^{d}k)^{-O(d^{2})}\).

The important point about this is that we experience at most single exponential losses in the number of bracket phases we need in an application of the above lemma. Our proof involves an iteration scheme via the following lemma:

**Lemma C.2**.: _Suppose there are at least \(\delta N_{1}\) many \(h\in[N_{1}]\) such that_ \[\|\beta+\gamma h+a\cdot\{\alpha h\}\|_{\mathbb{R}/\mathbb{Z}}\leq\frac{K}{N}\] _with \(|\gamma|\leq L/N\). Then either \(N\ll L^{O(1)}(K\delta/2^{d}M)^{-O(d)^{O(1)}}\) or \(N_{1}\ll L^{O(1)}(K\delta/2^{d}M)^{-O(d)^{O(1)}}\) or \((\delta/2^{d}M)^{4d}\|a\|_{\infty}\leq K/N\) or there exists an integer vector \(v\) of size at most \((\delta/2^{d}M)^{-O(d)}\) in a \((\delta/2^{d}M)\)-tube in the direction of \(a\) such that \(\|v\cdot\alpha\|_{\mathbb{R}/\mathbb{Z}}\leq L(\delta/2^{d}M)^{-O(d)}/N_{1}\)._

_Remark_.: The fact that \(v\) lies in a small tube in the direction of \(a\) is crucial for our iteration and is the chief difference between this lemma and the bracket polynomial lemma [1, Proposition 5.3] of Green and Tao.

Proof.: We may assume that \((\delta/2^{d}M)^{4d}\|a\|_{\infty}\geq K/N\), since otherwise the lemma is proven. The assumption then implies that \[\|\beta+\gamma h\|_{\mathbb{R}/\mathbb{Z}}\leq\frac{K}{N}+d\|a\|_{\infty}\leq(d+1)\|a\|_{\infty}.\] If \((d+1)\|a\|_{\infty}\gg\delta\), it follows that \(|\gamma|\ll(d+1)\delta^{-1}L\|a\|_{\infty}/N\). Otherwise, Vinogradov's Lemma implies that there exists some \(q\leq\delta^{-C}\) such that \[\|q\gamma\|_{\mathbb{R}/\mathbb{Z}}\leq\|a\|_{\infty}(d+1)\delta^{-C}/N_{1}.\] Since \(|\gamma|\) is small enough that \(|q\gamma|<1/2\), it follows that \(|\gamma|\leq(d+1)\delta^{-C}\|a\|_{\infty}/N_{1}\). Thus, if we pigeonhole and replace \(N_{1}\) with \(N_{2}=\frac{\delta^{C}(\delta/2^{d}M)^{O(d)}N_{1}}{(d+1)L}\), it follows that there exists some \(\theta\) such that for \(\delta|I|\) many \(h\in I\) with \(I\) an interval of size at least \(N_{2}/2\), we have \[\|\theta+a\cdot\{\alpha h\}\|_{\mathbb{R}/\mathbb{Z}}\leq(\delta/2^{d}M)^{3d}\|a\|_{\infty}.\] Following the first proof, we use the pigeonhole principle to isolate a single integer \(j\) such that \[\theta+a\cdot\{\alpha h\}=j+O((\delta/2^{d}M)^{3d}\|a\|_{\infty})\] and we pigeonhole in a sign pattern so that there exist \(\delta|I|/2^{d}M\) many integers \(h\in[|I|]\) such that \[|a\cdot\{\alpha h\}|=O((\delta/2^{d}M)^{3d}\|a\|_{\infty}). \tag{1}\] Let \(T\) be the tube of width \((\delta/2^{d}M)^{2}\) and length \(2^{d+1}(\delta/2^{d}M)^{-2d}\) in the direction of \(a\). By Minkowski's first theorem, this tube contains a nonzero lattice point, \(\eta\).
Multiplying (1) by \(t\) so that \(ta\) is \((\delta/2^{d}M)^{2}\)-close to \(\eta\), we see that \[|\eta\cdot\{\alpha h\}|=O(t(\delta/2^{d}M)^{3d}\|a\|_{\infty}+(\delta/2^{d}M)^{2}).\] We see that \(t\leq 2^{d+1}(\delta/2^{d}M)^{-2d}d/\|a\|_{\infty}\). Thus \[\|\eta\cdot\alpha h\|_{\mathbb{R}/\mathbb{Z}}\leq 2(\delta/2^{d}M)^{2}.\] Thus, by Vinogradov's Lemma, there exists some nonzero integer \(q\leq(\delta/2^{d}M)^{-1}\) such that \[\|q\eta\cdot\alpha\|_{\mathbb{R}/\mathbb{Z}}\leq 2(\delta/2^{d}M)/|I|\leq L(\delta/2^{d}M)^{-O(d)}/N_{1}.\]

We will now explain the iteration step as follows: at stage \(j\), we let \(K_{j}\), \(\delta_{j}\), \(M_{j}\), \(L_{j}\), \(N_{j}\) be the \(K\), \(\delta\), \(M\), \(L\), and \(N_{1}\) we apply Lemma C.2 with. On a first read-through, the reader can consider the case when \(a\) and \(\alpha\) have denominator \(N\) and \(\gamma\) in the above iteration lemma is zero. The reader can then take \(K_{j}=L_{j}=0\), \(N_{j}=N\). At step \(j\) of the iteration, we will also need to pigeonhole to a progression, whose common difference we will denote by \(q_{j}\). Again, on a first read-through, with the assumption that \(a\) and \(\alpha\) have denominator \(N\), we can simply take \(q_{j}=1\). The quantity \(q_{j}\) will not be relevant during the use of the lemma, but rather will be relevant for returning to the hypothesis of the bracket polynomial lemma, which is \[|\mathbb{E}_{n\in[N]}e(\beta n+an\cdot\{\alpha h\})|\geq K^{-1}. \tag{2}\] We iterate this lemma as follows: at the first step of the iteration, we apply the lemma with \(K_{1}=K\), \(\delta_{1}=\delta\), \(M_{1}=M\), \(L_{1}=1\), \(N_{1}=N\) to obtain some nonzero \(\eta\). Suppose without loss of generality that \(\eta_{1}\neq 0\), that is, the first component of \(\eta\) is nonzero. By pigeonholing in progressions of common difference \(\eta_{1}\), it follows that there exists a subprogression \(\eta_{1}\cdot[N_{2}]+r\) of common difference \(\eta_{1}\) with size at least \(\lfloor N/\eta_{1}\rfloor\) such that at least \(\delta N_{2}/2\) many \(h\)'s in the subprogression satisfy (2). Letting \(q_{2}:=\eta_{1}\), writing \(h=q_{2}k+r\), and noting that \[\begin{split} an\cdot\{\alpha h\}&=an\cdot\{\alpha q_{2}k+\alpha r\}\pmod{1}\\ &=an\cdot\{\alpha q_{2}k\}+an\cdot\{\alpha r\}+\{an\}\cdot(\{\alpha(q_{2}k+r)\}-\{\alpha q_{2}k\}-\{\alpha r\})\pmod{1}\end{split}\] we note that the last few terms, while they may look daunting, will be eliminated via the Fourier complexity lemma (Lemma C.1). Since the Fourier complexity lemma only guarantees an \(L^{1}\) Fourier complexity, in which case we only get a statement for _almost all_ \(k\), we will have to adjust \(\delta/2\) to \(\delta/2-\delta^{2}/4\), or for simplicity, \(\delta_{2}=\delta/4\). Writing \[\alpha_{1}q_{2}\equiv-\eta_{2}\alpha_{2}-\cdots-\eta_{d}\alpha_{d}+O\left(\frac{(\delta/2^{d}M)^{-O(d)}}{N}\right)\] and using a similar bracket polynomial expansion as above, we will obtain \[an\cdot\{\alpha q_{2}k\}=\tilde{a}n\cdot\{\alpha k\}+an\cdot O\left(\frac{(\delta/2^{d}M)^{-O(d)}}{N}\right)k+[\text{Lower order bracketed terms}]\pmod{1}\] where \(\tilde{a}_{j}=a_{j}\eta_{1}-\eta_{j}a_{1}\) with \(\tilde{a}_{1}=0\), and the lower order bracketed terms consist of at most \(2d\) many bracketed terms such as \(\{\ell\alpha k\}\{\beta n\}-\ell\{\alpha k\}\{\beta n\}\).
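Let us briefly justify the bracket identity used above in the expansion of \(an\cdot\{\alpha h\}\) (a routine check, spelled out here for the reader's convenience). Set \(m:=\{\alpha(q_{2}k+r)\}-\{\alpha q_{2}k\}-\{\alpha r\}\). Since \(\alpha q_{2}k+\alpha r=\alpha(q_{2}k+r)\) exactly and \(\{x\}=x-[x]\) componentwise, \[m=[\alpha q_{2}k]+[\alpha r]-[\alpha(q_{2}k+r)]\in\mathbb{Z}^{d},\] so \([an]\cdot m\in\mathbb{Z}\), and therefore \[an\cdot\{\alpha(q_{2}k+r)\}=an\cdot\{\alpha q_{2}k\}+an\cdot\{\alpha r\}+an\cdot m\equiv an\cdot\{\alpha q_{2}k\}+an\cdot\{\alpha r\}+\{an\}\cdot m\pmod{1},\] which is precisely the identity displayed above.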
Applying the Fourier complexity lemma to the threshold \(\min(\delta^{2}/4,K^{-1}/2)\) and pigeonholing in one of the Fourier coefficients, we obtain that for at least \(\delta N_{2}/4\) many \(k\in[N_{2}]\), \[|\mathbb{E}_{n\in[N]}e(\tilde{a}n\cdot\{\alpha k\}+an\cdot O\left(\frac{(\delta/2^{d}M)^{-O(d)}}{N}\right)k+\alpha^{\prime}k+\beta^{\prime}n)|\geq(4q_{2}K/2^{d})^{-Cd^{2}} \tag{3}\] for some absolute constant \(C\). Thus, we let \(K_{2}:=(4q_{2}K)^{4d^{2}}\) and \(L_{2}=(\delta/2^{d}M)^{-O(d)}\). The very important point of the Fourier complexity lemma is that it pushes all of the potential losses of \(\delta\) to \(K\). If we were to pigeonhole in \(k\) (i.e., using the fact that something like \(\{\ell\alpha k\}\{\beta n\}-\ell\{\alpha k\}\{\beta n\}\) takes at most \(O(k)\) many values and pigeonholing in one of those values), we would experience an iteration of \(\delta\mapsto\delta^{2}\), which is disastrous. While \(K\mapsto(4q_{2}K/2^{d})^{Cd^{2}}\) seems similarly disastrous, we will refer back to (2) at each step of the iteration rather than to (3), so the effect of the iteration is \(K_{j}\mapsto(4q_{j}K_{1}/2^{d})^{Cd^{2}}\) rather than \(K_{j}\mapsto(4q_{j}K_{j-1}/2^{d})^{Cd^{2}}\). Since \(\eta\) lies in a \((\delta/2^{d}M)\)-tube around \(a\), it follows that we may write \[\eta=ta+O(\delta/2^{d}M)\] and so \[\eta_{1}=ta_{1}+O(\delta/2^{d}M)\] \[\eta_{j}=ta_{j}+O(\delta/2^{d}M).\] Multiplying the first equation by \(a_{j}\) and the second equation by \(a_{1}\) and subtracting the two, we obtain \[|\eta_{1}a_{j}-\eta_{j}a_{1}|\leq 2|a|\delta/2^{d}M\] and so we may take \(M_{2}=M\). This is more or less how the iteration will proceed at step \(j\) as well. Again, as emphasized before, we should refer back to the original hypothesis (2) in the iteration rather than an intermediate hypothesis (or basically the step before) as in (3). This will prevent \(K_{j}\) from increasing too quickly. At stage \(j\), we will have \(j\) linearly independent vectors \(\eta^{1},\ldots,\eta^{j}\) of size at most \((\delta/2^{d}M)^{-O(d)}\). Applying the bracket polynomial lemma yields either that \(\tilde{a}_{j}\) is too small, in which case we are done, or that there exists some \(\eta^{j+1}\) of size at most \((\delta_{j}/2^{d}M)^{-O(d)}\) such that \[\|\eta^{j+1}\cdot\alpha\|_{\mathbb{R}/\mathbb{Z}}\leq L_{j}(\delta_{j}/2^{d}M)^{-O(d)}/N_{j}.\] Doing a similar computation as above, projecting onto the directions orthogonal to \(\eta^{1},\ldots,\eta^{j}\), we will once again obtain \(\tilde{a}_{j+1}\) bounded above by \(M\). We may take \(q_{j+1}=q_{j}\|\eta^{j+1}\|_{\infty}\). Applying the Fourier complexity lemma to the \(O(jd)\) many lower order bracketed terms yields that we can take \(K_{j+1}=(2q_{j+1}K/2^{d})^{O(jd^{2})}\), and finally, we can simply take \(L_{j+1}=jL_{j}(\delta_{j}/2^{d}M)^{-O(d)}\), \(N_{j+1}=N_{j}/q_{j}\), and \(\delta_{j+1}=\delta_{j}/4\). Thus, the iteration looks like (see Footnote 8):

Footnote 8: If one used the uncertainty principle to ensure that \(\eta\) lies in a small tube around \(a\) instead of Minkowski's first theorem, \(\delta\) would iterate to \(\delta_{j+1}=\delta_{j}\log^{2}(1/\delta_{j})\) and similarly \(M_{j+1}=M_{j}\log^{2}(M_{j})\), but the iteration in \(K\) and \(L\) would be slightly more efficient; it turns out that this would also work.
\[(\delta_{j},M_{j},K_{j},N_{j},L_{j},q_{j})\] \[=(\delta_{j-1}/4,M_{j-1},(2q_{j-1}K/2^{d})^{O(jd^{2})},N_{j-1}/q_{j-1},jL_{j-1}(\delta/2^{d}M)^{-O(d)},(\delta_{j-1}/2^{d}M)^{-O(d)}q_{j-1}).\] In the periodic nilsequences case, where we can assume that \(a\) and \(\alpha\) have denominator \(N\), the iteration becomes \[(\delta_{j},M_{j},K_{j},N_{j},L_{j},q_{j})=(\delta_{j-1}/4,M_{j-1},0,N,0,1).\] In either of these iterations, it is easy to see that all of the terms grow in a controlled manner except for \(K\), which does not increase iteratively, but rather increases based on which step of the iteration we are on. Hence, we have ensured that all of the terms are bounded in the appropriate way. Finally, we verify the orthogonality properties of the \(w_{i}\)'s and the \(\eta_{j}\)'s. Note that by Cramer's rule, we can ensure that the \(w_{i}\)'s are also bounded by \((\delta/2^{d}M)^{-O(d)^{O(1)}}\). More explicitly, we can express \(\tilde{a}_{j}\) as a projection of \(a\) onto the directions orthogonal to the \(\eta_{j}\)'s. The conclusion of Lemma C.2 stating that the components of \(\tilde{a}_{j}\) are small is then equivalent to saying that \(w_{i}\cdot a\) is small for the corresponding orthogonal vectors \(w_{i}\), and hence by construction \(w_{i}\) must be orthogonal to \(\eta_{j}\).

### Generalization of the first proof

Since the above proof is rather cumbersome, we also provide a generalization of the proof of the refined bracket polynomial lemma given in Section 3 to the setting where \(a\) and \(\alpha\) are real. We shall work with the following hypothesis:

**Lemma C.3**.: _Suppose_ \[\|\beta+a\cdot\{\alpha h\}\|_{\mathbb{R}/\mathbb{Z}}\leq\frac{K}{N}\] _for \(\delta N\) many \(h\in[N]\) with \(|a|\leq M\), \(\alpha,a\in\mathbb{R}^{d}\). Then either \(N\ll(\delta/KM)^{-O(d)^{O(1)}}\) or else there exist linearly independent vectors \(w_{1},\ldots,w_{r}\) and \(\eta_{1},\ldots,\eta_{d-r}\) in \(\mathbb{Z}^{d}\) all having norm less than \((\delta/KM)^{-O(d)^{O(1)}}\) such that \(\langle w_{i},\eta_{j}\rangle=0\) and_ \[|w_{i}\cdot a|\leq(\delta/KM)^{-O(d)^{O(1)}}/N,\ \ \|\eta_{i}\circ\alpha\|_{\mathbb{R}/\mathbb{Z}}\leq(\delta/KM)^{-O(d)^{O(1)}}/N.\]

Proof.: First, we modify \(\alpha\) by replacing it with a rational vector whose denominator is a prime of size between \(N^{2}\) and \(2N^{2}\). We proceed similarly to the first proof, reducing to the case when \(\beta=0\), at the cost of changing \(\delta\) to \(\frac{\delta}{2^{d}M}\) and changing \(K\) to \(2K\). This time, however, our lattice and convex set are slightly different, since we can no longer use that \(\alpha\) has denominator \(N\). Let \(\Gamma\) denote the lattice \((\alpha,1/N)\mathbb{Z}+\mathbb{Z}^{d+1}\) and \(B=\{|x_{i}|\leq\frac{1}{2},|a\cdot x|\leq\frac{K}{N},|x_{d+1}|\leq 1/2\}\). Since \(\alpha\) has prime denominator, it follows that \(|B\cap\Gamma|\geq\frac{\delta N}{2^{d}M}\). By Minkowski's second theorem (or rather Proposition D.1), there exist vectors \(v_{1},\ldots,v_{r}\) and corresponding \(N_{1},\ldots,N_{r}\) with \(N_{1}\cdots N_{r}\geq N\frac{\delta d^{-O(d)}}{2^{d}M}\) such that \(P:=\{\ell_{1}v_{1}+\cdots+\ell_{r}v_{r}:\ell_{i}\in[N_{i}]\}\) lies inside \(B\). Let \(V\) be the vector subspace generated by \(v_{1},\ldots,v_{r}\).
Since the set \(X:=\{|x_{i}|\leq 1/2\}\) has at most \(N\) points in \(\Gamma\), by putting a fundamental parallelepiped of \(\Gamma\) at each lattice point of \(\Gamma\) in \(X\), we see that \(X\) has volume at most \(N\) times \(\delta(\frac{\delta d^{-O(d)}}{2^{d}M})^{-O(1)}\), and so the wedge product \(|v_{1}\wedge\cdots\wedge v_{r}|\) has size \(\frac{(d\delta/2^{d}M)^{O(d)}}{N}\), so by Ruzsa's covering lemma (Lemma D.1) \(X\) can be covered by \((\frac{\delta d^{-O(d)}}{2^{d}M})^{-O(1)}\) many translates of \(P\), and so, using the fact that \(X\) is connected, it follows that a dilation of \((\frac{\delta d}{2^{d}M})^{O(d)^{O(1)}}\) times the unit ball lies inside \(B\). Let \(P_{L}=\{\ell_{1}v_{1}+\cdots+\ell_{r}v_{r}:\ell_{i}\in[LN_{i}]\}\). This lies in the intersection of a ball of radius \(dL\) and the set \(\{|x_{i}|\leq\frac{L}{2},|a\cdot x|\leq\frac{KL}{N},|x_{d+1}|\leq L/2\}\). By the pigeonhole principle, there are at least \(L^{r}\frac{\delta}{2^{d}M}d^{-O(d)}\) many elements in the ball that are within distance \(2/N\) of a lattice point in \(\mathbb{Z}^{d+1}\). Since the elements of \(P_{L}\) lie in a lattice themselves and the generators \(v_{1},\ldots,v_{r}\) have wedge product \(\frac{(d\delta/2^{d}M)^{O(d)}}{N}\), it follows that there are at least \(L^{r}(d\delta/2^{d}M)^{O(d)}\) many distinct points of \(\mathbb{Z}^{d+1}\) inside \(B_{2/N}(B_{dL}(0)\cap V)\), where \(B_{r}(p)\) is the open ball around \(p\) with radius \(r\). By Minkowski's second theorem again (or rather Proposition D.1) applied to \(\mathbb{Z}^{d+1}\) and \(B_{2/N}(B_{dL}(0)\cap V)\) (see Footnote 9), choosing \(L\) to be larger than \((\frac{\delta d^{-O(d)}}{2^{d}M})^{-O(1)}\), it follows that there exist \(r^{\prime}\) linearly independent integer vectors \(w_{1},\ldots,w_{r^{\prime}}\) that lie inside \(B_{K}(0)\cap B_{2/N}(V)\), where \(B_{2/N}(V)\) is the \(2/N\)-neighborhood of \(V\), and \(M_{1},\ldots,M_{r^{\prime}}\) such that \(Q:=\{m_{1}w_{1}+\cdots+m_{r^{\prime}}w_{r^{\prime}}:m_{i}\in[M_{i}]\}\) lies inside \(B_{K}(0)\cap B_{2/N}(V)\) with \(|M_{1}\cdots M_{r^{\prime}}|\sim L^{r}\frac{\delta}{2^{d}M}d^{-O(d)}\). A volume packing argument, together with \(N\) and \(L\) being sufficiently large, shows that, projecting \(Q\) to \(V\), we obtain an \(r\)-dimensional generalized arithmetic progression whose generators have wedge product at most \((d\delta/2^{d}M)^{-O(d)^{O(1)}}\). Pulling back, we may thus assume that \(r^{\prime}=r\) and that the generators have wedge product at most \((d\delta/2^{d}M)^{-O(d)^{O(1)}}\). Applying Minkowski's second theorem a third time, where the lattice is the lattice generated by \(w_{1},\ldots,w_{r}\), the ambient space is the vector space generated by \(w_{1},\ldots,w_{r}\), and the convex body is the ball of radius \(1\), we then obtain that there exist generators \(w_{1}^{\prime},\ldots,w_{r}^{\prime}\) of the lattice generated by \(Q\), all of which have size at most \((d\delta/2^{d}M)^{-O(d)^{O(1)}}\). We claim that, in some sense, many elements of \(P_{L}\) lie close to the real span of the vectors in \(Q\).

Footnote 9: This is primarily the difference between the periodic nilsequences case and the general case; in the periodic nilsequences case, we got a lattice for free while restricting to the integer vectors on the vector space we obtain. Here, the integer vectors don't have to lie on the vector subspace we obtain, so we must apply Minkowski's second theorem one extra time.
To prove this, let \(\tilde{Q}\) be \(B_{2/N}(\{m_{1}w_{1}+\cdots+m_{r}w_{r}:m_{i}\in\mathbb{R},|m_{i}|\leq M_{i}\})\). By Ruzsa's covering lemma, it follows that we may cover \(B_{2/N}(V\cap B_{dL}(0))\) (since that set contains at most \((2L)^{r}\) lattice points) with at most \((\frac{\delta}{2^{d}M}d^{-O(d)})^{-O(d)^{O(1)}}\) many translates of \(\tilde{Q}\), and since \(B_{2/N}(V\cap B_{dL}(0))\) is connected, symmetric, and convex, it follows that a \((d\delta/2^{d}M)^{O(d)^{O(1)}}\)-dilation of it lies inside \(\tilde{Q}\), and thus at least \(N(d\delta/2^{d}M)^{O(d)^{O(1)}}\) many elements of \(P\) are within distance \(\frac{2}{N}\) of an element of the real (actually, \((-1/2,1/2)\)-) span of \(w_{1},\ldots,w_{r}\). Let \(\eta_{1},\ldots,\eta_{d+1-r}\) be integral orthogonal vectors to \(w_{1}^{\prime},\ldots,w_{r}^{\prime}\) (and so they are orthogonal to \(w_{1},\ldots,w_{r}\)). We claim that for at least \(N(d\delta/2^{d}M)^{O(d)^{O(1)}}\) many elements \(v\) in \(P\), we have \(|\eta_{i}\cdot v|\leq\frac{(d\delta/2^{d}M)^{-O(d)^{O(1)}}}{N}\). This holds for elements \(v\) that are within \(\frac{2}{N}\) of the real span of \(w_{1},\ldots,w_{r}\). Unravelling everything and using Vinogradov's lemma, we obtain that \[\|\eta_{i}\cdot(\alpha,1/N)\|_{\mathbb{R}/\mathbb{Z}}\leq\frac{(d\delta/2^{d}M)^{-O(d)^{O(1)}}}{N}.\] Since the intersection of the ball of radius \((d\delta/2^{d}M)^{O(d)^{O(1)}}\) with \(V\) lies inside \(B\) (or rather, inside the interior of the generalized arithmetic progression \(P\)), the dilation of \(P\) by \(L(d\delta/2^{d}M)^{O(d)^{O(1)}}\) lies inside \(\tilde{Q}\) (and the components of \(Q\) generate the vector space that all of the \(w_{i}^{\prime}\)'s lie in), and the components of \(w_{i}^{\prime}\) have size at most \((d\delta/2^{d}M)^{-O(d)^{O(1)}}\), it follows that \(w_{i}^{\prime}\) is within distance \((d\delta/2^{d}M)^{-O(d)^{O(1)}}/N\) of a point in \(V\), and so consequently, \[|w_{i}^{\prime}\cdot a|\leq\frac{K(d\delta/2^{d}M)^{-O(d)^{O(1)}}}{N}.\] For the sake of cleanliness, we shall relabel \(w_{i}^{\prime}\) as \(w_{i}\). Finally, we eliminate the last component of the \(\eta_{i}\)'s; the remaining computations are quite similar to those in Corollary 3.2. To do so, suppose that the last components are not all zero. Then without loss of generality, we may suppose that \(\eta_{r}\) has nonzero last component \(\eta_{r}^{d+1}\). We then define, for \(i=1,\ldots,r-1\), \(\tilde{\eta}_{i}=\eta_{r}^{d+1}\eta_{i}-\eta_{i}^{d+1}\eta_{r}\), and let \(\tilde{w}_{i}\) be equal to \(w_{i}\) except at the last coordinate, where it is zero. We claim that

* \(\tilde{w}_{i}\) and \(\tilde{\eta}_{j}\) are orthogonal to each other
* \(\tilde{w}_{i}\) and \(\tilde{\eta}_{j}\) are linearly independent.

To show the first point, we see that \[\langle w_{i},\eta_{j}\rangle=\langle\tilde{w}_{i},\eta_{j}\rangle+w_{i}^{d+1}\eta_{j}^{d+1}=0\] \[\langle w_{i},\eta_{r}\rangle=\langle\tilde{w}_{i},\eta_{r}\rangle+w_{i}^{d+1}\eta_{r}^{d+1}=0.\] Multiplying the first equation by \(\eta_{r}^{d+1}\) and the second by \(\eta_{j}^{d+1}\) and subtracting the two yields \[\langle\tilde{w}_{i},\tilde{\eta}_{j}\rangle=0.\] To verify the second point, we first verify that the \(\tilde{\eta}_{j}\) are linearly independent.
Suppose there is some vector \(\vec{c}\) with \[\sum_{j}c_{j}\tilde{\eta}_{j}=\sum_{j}c_{j}(\eta_{r}^{d+1}\eta_{j}-\eta_{j}^{d+1}\eta_{r})=0.\] Then rearranging the coefficients, we see that \[\sum_{j<r}c_{j}\eta_{r}^{d+1}\eta_{j}-\left(\sum_{j<r}c_{j}\eta_{j}^{d+1}\right)\eta_{r}=0.\] Hence, by the linear independence of the \(\eta_{j}\)'s, \(c_{j}\eta_{r}^{d+1}=0\), and since \(\eta_{r}^{d+1}\neq 0\), it follows that \(c_{j}=0\) for all \(j\). Hence the \(\tilde{\eta}_{j}\)'s are linearly independent. To verify that the \(\tilde{w}_{i}\) are linearly independent, note that the \(w_{i}\) are orthogonal to \((\tilde{\eta}_{j},0)\) and \(\eta_{r}\). Since \((0,1)\) is not orthogonal to \(\eta_{r}\), it follows that the \(w_{i}\) cannot span \((0,1)\), so the \(w_{i}\) together with \((0,1)\) are linearly independent, which implies that the \(\tilde{w}_{i}\) are linearly independent. We can then verify that \[|\tilde{w}_{i}\cdot a|\leq\frac{K(d\delta/2^{d}M)^{-O(d)^{O(1)}}}{N}\] and \[\|\tilde{\eta}_{j}\cdot\alpha\|_{\mathbb{R}/\mathbb{Z}}\leq\frac{(d\delta/2^{d}M)^{-O(d)^{O(1)}}}{N}\] as desired.

## Appendix D Diophantine approximation and the geometry of numbers

In this section, we shall state relevant theorems from the geometry of numbers used mostly in the proofs of the refined bracket polynomial lemma. The theorems we list below can be found along with their proofs in [TV]. The first is Minkowski's first theorem, a statement and proof of which can be found in [TV, Theorem 3.28].

**Theorem 11** (Minkowski's first theorem).: _Let \(\Gamma\) be a lattice of \(\mathbb{R}^{d}\). Let \(X\) be a convex body symmetric about the origin such that \(\text{vol}(X)>2^{d}\text{vol}(\mathbb{R}^{d}/\Gamma)\). Then \(X\) contains a nonzero vector \(v\in\Gamma\)._

A generalization of this theorem (which is obtained from carefully iterating Minkowski's first theorem) is Minkowski's second theorem below. Before we state it, we shall need some terminology. Given a lattice \(\Gamma\) of \(\mathbb{R}^{d}\) and a convex body \(X\), the _successive minima_ of \(X\) with respect to \(\Gamma\), denoted \(\lambda_{i}\), are defined as \[\lambda_{k}:=\inf\{\lambda>0:\lambda\cdot X\text{ contains }k\text{ independent vectors of }\Gamma\}.\] We have \(\lambda_{1}\leq\lambda_{2}\leq\lambda_{3}\leq\lambda_{4}\leq\cdots\leq\lambda_{d}<\infty.\) Minkowski's second theorem ([TV, Theorem 3.30]) states that the successive minima satisfy the following property:

**Theorem 12** (Minkowski's second theorem).: _Let \(\Gamma\) be a lattice of full rank in \(\mathbb{R}^{d}\), \(B\) a symmetric convex body with successive minima \(\lambda_{1},\ldots,\lambda_{d}\). Then there exist vectors \(v_{1},\ldots,v_{d}\) such that_

* _For each_ \(1\leq j\leq d\)_,_ \(v_{j}\) _lies on the boundary of_ \(\lambda_{j}\cdot B\)_, but_ \(\lambda_{j}\cdot B\) _does not contain any vectors in_ \(\Gamma\) _outside the span of_ \(v_{1},\ldots,v_{j-1}\)_._
* _The high-dimensional polyhedron with vertices_ \(\pm v_{i}\) _contains no elements of_ \(\Gamma\) _other than_ \(0\)_._
* _We have_ \[\frac{2^{d}|\Gamma/(\text{Span}_{\mathbb{Z}}(v_{1},\ldots,v_{d}))|}{d!}\leq\frac{\lambda_{1}\cdots\lambda_{d}\text{vol}(B)}{\text{vol}(\mathbb{R}^{d}/\Gamma)}\leq 2^{d}.\]

While the statement of Minkowski's second theorem may appear mysterious, it gives information about the structure of the intersection of a convex body and a lattice.
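As a quick sanity check of Minkowski's second theorem (our own example, not used elsewhere), take \(d=2\), \(\Gamma=\mathbb{Z}^{2}\), and \(B=[-N,N]\times[-\frac{1}{2N},\frac{1}{2N}]\). The dilate \(\lambda\cdot B\) first captures the lattice vector \((1,0)\) when \(\lambda=1/N\), and a second independent vector \((0,1)\) only when \(\lambda=2N\), so \(\lambda_{1}=1/N\) and \(\lambda_{2}=2N\). Since \(\text{vol}(B)=2N\cdot\frac{1}{N}=2\) and \(\text{vol}(\mathbb{R}^{2}/\Gamma)=1\), we get \[\frac{\lambda_{1}\lambda_{2}\text{vol}(B)}{\text{vol}(\mathbb{R}^{2}/\Gamma)}=\frac{1}{N}\cdot 2N\cdot 2=4=2^{2},\] so the upper bound in the theorem is attained exactly by this thin box; here one may take \(v_{1}=(1,0)\) and \(v_{2}=(0,1)\), which generate \(\Gamma\).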
The next two statements show, as a corollary of Minkowski's second theorem, that such an intersection must contain a _generalized arithmetic progression_ (see below for the definition) whose size is roughly \(d^{-O(d)}\) times the size of the intersection of the convex body and the lattice. The next lemma we'll need, a statement and proof of which can be found in [TV, Lemma 3.14], is Ruzsa's covering lemma: **Lemma D.1** (Ruzsa's covering lemma).: _For any bounded subsets \(A\) and \(B\) of \(\mathbb{R}^{d}\) with positive measure, we may cover \(B\) by at most \(\text{min}\left(\frac{\text{vol}(A+B)}{\text{vol}(A)},\frac{\text{vol}(A-B)}{ \text{vol}(A)}\right)\) many translates of \(A-A\)._ In the various proofs of the refined bracket polynomial lemma we apply Ruzsa's covering lemma to convex subsets of \(\mathbb{R}^{d}\). In the case of convex and symmetric \(A\) and \(B\), Ruzsa's covering lemma is especially powerful, since \(A+A\) and \(A-A\) are both just dilations of \(A\). Given a group \(G\), a _generalized arithmetic progression_ is a subset of \(G\) of the form \(\{\ell_{1}v_{1}+\cdots+\ell_{d}v_{d}:\ell_{i}\in[N_{i}]\}\). The generalized arithmetic progression is _proper_ if the elements \(\ell_{1}v_{1}+\cdots+\ell_{d}v_{d}\) are pairwise distinct as the \(\ell_{i}\) range over \([N_{i}]\). The _rank_ of a proper generalized arithmetic progression is the quantity \(d\). As a consequence of Minkowski's second theorem and Ruzsa's covering lemma, we have the following, which is [TV, Lemma 3.33]: **Proposition D.1**.: _Let \(B\) be a symmetric convex body and let \(\Gamma\) be a lattice in \(\mathbb{R}^{d}\). Then there exists a proper generalized arithmetic progression \(P\) in \(B\cap\Gamma\) of rank at most \(d\) such that \(|P|\geq O(d)^{-7d/2}|B\cap\Gamma|\)._ We will also need Vinogradov's lemma, a statement and proof of which can be found in [1, Lemma 6]: **Lemma D.2** (Vinogradov's Lemma).: _Let \(I\subseteq[N]\) be an interval and \(P\colon\mathbb{Z}\to\mathbb{R}\) a polynomial of degree \(d\) of the form \(P(n)=\sum_{i=0}^{d}\alpha_{i}n^{i}\). Suppose that \(\|P(n)\|_{\mathbb{R}/\mathbb{Z}}\leq\epsilon\) for \(\delta N\) many values of \(n\in I\) with \(0<\delta,\epsilon<1\). Then either_ \[N\ll\delta^{-\exp(O(d)^{O(1)})}\] _or_ \[\epsilon\ll O(\delta)^{\exp(O(d)^{O(1)})}\] _or there exists some \(q\ll O(\delta)^{-\exp(O(d)^{O(1)})}\) such that_ \[\|q\alpha_{i}\|_{\mathbb{R}/\mathbb{Z}}\ll\frac{\delta^{-O(1)}\epsilon}{N^{i}}.\] A consequence of Vinogradov's lemma is a periodic version: **Corollary D.1**.: _Let \(N\) be a large prime and \(P\colon\mathbb{R}\to\mathbb{R}\) a polynomial of degree \(d\) of the form \(P(n)=\sum_{i=0}^{d}\alpha_{i}n^{i}\) where \(\alpha_{i}\) has denominator \(N\). Suppose \(N\gg\delta^{-O_{d}(1)}\) (though with the Weil bounds one can perhaps even take \(N\gg\delta^{-2}\)) and there are at least \(\delta N\) many elements \(n\in[N]\) such that_ \[\|P(n)\|_{\mathbb{R}/\mathbb{Z}}\leq\epsilon.\] _Then either \(\epsilon\gg\delta^{O_{d}(1)}\) or \(\|\alpha_{i}\|_{\mathbb{R}/\mathbb{Z}}=0\)._
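To make Minkowski's first theorem concrete, here is a small brute-force check (an illustrative sketch of ours, not part of the argument above; the function name and the sample lattice are invented for the example):

```python
import itertools

import numpy as np


def has_nonzero_lattice_point(basis, radii, search=5):
    # Search for a nonzero point of the lattice spanned by the rows of
    # `basis` inside the symmetric box prod_i [-radii[i], radii[i]].
    # Only integer coefficients in [-search, search] are tried, which
    # suffices for small examples like the one below.
    B = np.array(basis, dtype=float)
    for coeffs in itertools.product(range(-search, search + 1), repeat=B.shape[0]):
        if any(coeffs):
            v = np.array(coeffs) @ B
            if np.all(np.abs(v) <= np.array(radii)):
                return True
    return False


# Gamma = 2*Z^2 has covolume 4, and the box [-2.1, 2.1]^2 has volume
# 17.64 > 2^2 * 4, so Theorem 11 guarantees a nonzero lattice point
# in the box; here (2, 0) works.
print(has_nonzero_lattice_point([[2, 0], [0, 2]], radii=(2.1, 2.1)))  # True
```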
2302.10606
Giant Magneto-Optical Schäfer-Hubert Effect in Two-Dimensional van der Waals Antiferromagnets \textit{M}PS$_3$ (\textit{M}=Mn, Fe, Ni)
The recent discovery of long-range magnetic order in atomically thin films has triggered particular interest in two-dimensional (2D) van der Waals (vdW) magnetic materials. In this paper, we perform a systematic theoretical study of the magneto-optical Sch\"{a}fer-Hubert effect (MOSHE) in 2D vdW antiferromagnetic \textit{M}PS$_3$ (\textit{M} = Mn, Fe, Ni) with multifold intralayer and interlayer magnetic orders. The formula for evaluating the MOSHE in 2D magnets is derived by considering the influence of a non-magnetic substrate. The MOSHE of monolayer and bilayer \textit{M}PS$_3$ are considerably large ($>2^{\circ}$), originating from the strong anisotropy of in-plane optical conductivity. The Sch\"{a}fer-Hubert rotation angles are surprisingly insensitive to the orientations of the N\'{e}el vector, while the Sch\"{a}fer-Hubert ellipticities are identified to be a good criterion to distinguish different interlayer magnetic orders. Our work establishes a theoretical framework for exploring novel 2D vdW magnets and facilitates the promising applications of the 2D \textit{M}PS$_3$ family in antiferromagnetic nanophotonic devices.
Ping Yang, Wanxiang Feng, Gui-Bin Liu, Guang-Yu Guo, Yugui Yao
2023-02-21T11:28:41Z
http://arxiv.org/abs/2302.10606v1
Giant Magneto-Optical Schäfer-Hubert Effect in Two-Dimensional van der Waals Antiferromagnets _M_PS\({}_{3}\) (_M_=Mn, Fe, Ni) ###### Abstract **The recent discovery of long-range magnetic order in atomically thin films has triggered particular interest in two-dimensional (2D) van der Waals (vdW) magnetic materials. In this paper, we perform a systematic theoretical study of the magneto-optical Schäfer-Hubert effect (MOSHE) in 2D vdW antiferromagnetic _M_PS\({}_{3}\) (_M_ = Mn, Fe, Ni) with multifold intralayer and interlayer magnetic orders. The formula for evaluating the MOSHE in 2D magnets is derived by considering the influence of a non-magnetic substrate. The MOSHE of monolayer and bilayer _M_PS\({}_{3}\) are considerably large (\(>2^{\circ}\)), originating from the strong anisotropy of in-plane optical conductivity. The Schäfer-Hubert rotation angles are surprisingly insensitive to the orientations of the Néel vector, while the Schäfer-Hubert ellipticities are identified to be a good criterion to distinguish different interlayer magnetic orders. Our work establishes a theoretical framework for exploring novel 2D vdW magnets and facilitates the promising applications of the 2D _M_PS\({}_{3}\) family in antiferromagnetic nanophotonic devices.** ## Introduction Two-dimensional (2D) van der Waals (vdW) magnetic materials have attracted growing attention since the discovery of intrinsic long-range ferromagnetic (FM) order in Cr\({}_{2}\)Ge\({}_{2}\)Te\({}_{6}\) and CrI\({}_{3}\) atomic layers [1, 2]. The high tunability of their magnetism and other exciting physical properties by electric gating [3] and strain engineering [4, 5] offers them promising potential for applications in magnetic sensing, storage, and spintronics. Magneto-optical spectroscopy is a powerful non-contact technique for investigating 2D magnetic materials. For 2D ferromagnets, the magneto-optical Kerr effect (MOKE) provides solid evidence of long-range FM order even down to the monolayer limit [2]. Furthermore, first-principles calculations of MOKE in thin films [6, 7] provide a complementary avenue to characterize 2D FM materials [8, 9, 10, 11]. For 2D antiferromagnets, which have zero net magnetization, the MOKE as a first-order effect vanishes, and therefore the commonly used magneto-optical techniques are based on second-order effects [12, 13, 14]. One option is to probe the difference in absorption or reflectivity for linearly polarized light parallel and perpendicular to the Néel vector, which is known as magnetic linear dichroism (MLD). Another option is to probe the polarization rotation upon transmission and reflection, which are called the magneto-optical Voigt effect [15] and the magneto-optical Schäfer-Hubert effect (MOSHE) [16], respectively. Since the second-order magneto-optical effects in magnetic materials are usually very weak, the characterization of 2D AFM order has long been considered extremely challenging. Transition metal thiophosphates _M_PS\({}_{3}\) (_M_ = Mn, Fe, Ni) are a representative family of 2D vdW materials that host multifold intrinsic intralayer AFM orders [17, 18, 19]. In a recent experiment, Zhang et al. [20] observed large MLD in FePS\({}_{3}\) with the zigzag-AFM order. The large magneto-optical signals enable the detection of 2D AFM domain orientations [20, 21] and the study of ultrafast spin dynamics [22]. Subsequently, tuning of the MLD in FePS\({}_{3}\) was realized by coupling with an optical cavity [23], and the MLD at a specific wavelength can even be enhanced to a near-unity (100%) value. 
Such an optically anisotropic 2D magnetic material is desirable for achieving densely integrated polarization-selective devices. To date, most reports of large linear dichroism and its tuning in 2D materials have been limited to those with in-plane anisotropic crystal structures, such as black phosphorus [24, 25] and GeSe [26]. By contrast, anisotropic 2D magnetic materials are more promising for fast field-effect control, since magnetic orders are sensitive to external stimuli, e.g., magnetic [27] and strain [28] fields. These recent advances call for the exploration of further excellent 2D AFM magneto-optical materials; however, theoretical studies of second-order magneto-optical effects in thin films have so far been absent. In this work, we systematically investigate a representative second-order magneto-optical effect, the MOSHE, in 2D vdW AFM _M_PS\({}_{3}\) using first-principles calculations together with magnetic group analysis. A theoretical formula for evaluating the MOSHE in 2D magnetic materials placed on a non-magnetic substrate is derived for the first time. The MOSHE in FePS\({}_{3}\) and NiPS\({}_{3}\) with the zigzag-AFM order is close to or even exceeds the magnitude of first-order magneto-optical effects in conventional ferromagnets; in particular, the Schäfer-Hubert (SH) rotation angle in bilayer NiPS\({}_{3}\) reaches 2.4\({}^{\circ}\). We also find that the MOSHE is insensitive to the magnetization direction, and that the SH ellipticity can be used to identify interlayer magnetic structures. Our work deepens the understanding of the MOSHE in 2D antiferromagnets and facilitates further exploration of novel AFM magneto-optical devices. ## Results and Discussion When linearly polarized light is normally incident (e.g., along the \(z\)-axis) on a thin film with in-plane magnetic anisotropy, the light propagating in the magnetic thin film can be decomposed into two polarized components along the orthogonal anisotropic axes with different refractive indices (\(n_{x}\), \(n_{y}\)) and reflectivities (\(r_{x}\), \(r_{y}\)). The reflected light becomes elliptically polarized, accompanied by a rotation of the polarization plane with respect to the incident light, namely the MOSHE (Fig. 1a). If the electric field of the incident light (\(\mathbf{E_{l}}\)) is oriented at an angle of \(\alpha=45^{\circ}\) from the \(x\)-axis, the SH rotation angle (\(\theta_{\mathrm{SH}}\)) and ellipticity (\(\psi_{\mathrm{SH}}\)) reach their maxima, given by [29] \[\begin{split}\theta_{\mathrm{SH}}&=\frac{1}{2} \operatorname{atan}\left(\frac{2\operatorname{Re}\chi}{1-|\chi|^{2}}\right)- \frac{\pi}{4},\\ \psi_{\mathrm{SH}}&=\frac{1}{2}\operatorname{asin} \left(\frac{2\operatorname{Im}\chi}{1+|\chi|^{2}}\right),\end{split} \tag{1}\] where \(\chi=r_{y}/r_{x}\). The reflectivity of a magnetic thin film at the interface A (Fig. 1b) can be written as \[r_{x(y)}=\frac{n_{0}-\tilde{n}_{x(y)}}{n_{0}+\tilde{n}_{x(y)}}. \tag{2}\] Here, \(n_{0}=1\) is the refractive index of vacuum, and \(\tilde{n}_{x(y)}\) is the effective refractive index of the magnetic thin film, which accounts for the influence of its substrate, \[\tilde{n}_{x(y)}=\frac{1-r^{\prime}_{x(y)}\beta_{x(y)}}{1+r^{\prime}_{x(y)} \beta_{x(y)}}n_{x(y)}, \tag{3}\] in which \(\beta_{x(y)}=\exp(2i\omega dn_{x(y)}/c)\), with the light frequency \(\omega\), the light speed \(c\), and the film thickness \(d\). The reflectivity of the substrate at the interface B is \(r^{\prime}_{x(y)}=(n_{x(y)}-n_{s})/(n_{x(y)}+n_{s})\), where \(n_{s}\) is the refractive index of the substrate. 
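As a numerical companion to Eqs. (1)-(3), the following sketch (ours; the function name and the sample inputs are not from the paper, and we assume \(|\chi|\neq 1\) so that Eq. (1) is well defined) evaluates the reflectivities and the resulting SH angles for given refractive indices:

```python
import numpy as np


def sh_angles(n_x, n_y, n_s, d, wavelength, n0=1.0):
    # SH rotation angle and ellipticity (in degrees) from Eqs. (1)-(3).
    # n_x, n_y: complex refractive indices of the film along the two
    # anisotropic axes; n_s: substrate index; d and wavelength share units.
    w_over_c = 2 * np.pi / wavelength          # omega / c

    def n_eff(n):                              # Eq. (3)
        r_sub = (n - n_s) / (n + n_s)          # substrate reflectivity, interface B
        beta = np.exp(2j * w_over_c * d * n)
        return (1 - r_sub * beta) / (1 + r_sub * beta) * n

    def r(n):                                  # Eq. (2), with n0 = 1 for vacuum
        return (n0 - n_eff(n)) / (n0 + n_eff(n))

    chi = r(n_y) / r(n_x)
    theta = 0.5 * np.arctan(2 * chi.real / (1 - abs(chi) ** 2)) - np.pi / 4  # Eq. (1)
    psi = 0.5 * np.arcsin(2 * chi.imag / (1 + abs(chi) ** 2))
    return np.degrees(theta), np.degrees(psi)


# Invented anisotropic film, 0.7 nm thick, on SiO2 (n_s ~ 1.46) at 500 nm:
theta_sh, psi_sh = sh_angles(2.0 + 0.5j, 2.2 + 0.4j, 1.46, 0.7e-9, 500e-9)
```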
Plugging Eqs. (2) and (3) into Eq. (1), the complex SH angle can be recast as \[\theta_{\mathrm{SH}}+i\psi_{\mathrm{SH}}\approx\frac{\tilde{n}_{x}-\tilde{n}_ {y}}{1-\tilde{n}_{x}\tilde{n}_{y}}. \tag{4}\] For monolayers and few layers of 2D materials, whose thicknesses are far less than the wavelength of visible light (\(\lambda\)), the effective refractive index can be approximated as \(\tilde{n}_{x(y)}\approx n_{s}-i\cdot 2\pi d(n_{x(y)}^{2}-n_{s}^{2})/\lambda\). In the case of the conventional MOSHE induced by an in-plane magnetization (e.g., along the \(x\)-axis), the refractive indices obtained by solving the Fresnel equation are \(n_{x}=\sqrt{\epsilon_{xx}}\), \(n_{y}=\sqrt{\epsilon_{yy}+\epsilon_{yz}^{2}/\epsilon_{zz}}\), in which \(\epsilon_{\mu\nu}\) with \(\mu,\nu\in\{x,y,z\}\) is the permittivity tensor. Then, the complex SH angle can be simplified to \[\theta_{\mathrm{SH}}+i\psi_{\mathrm{SH}}\approx\frac{i\omega d}{c(n_{s}^{2}-1 )}(\epsilon_{xx}-\epsilon_{yy}-\frac{\epsilon_{yz}^{2}}{\epsilon_{zz}}). \tag{5}\] We find that the complex SH angle can be related to the complex Voigt angle [30] via \[\theta_{\mathrm{SH}}+i\psi_{\mathrm{SH}}=\frac{2(n_{x}+n_{y})}{1-n_{s}^{2}}( \theta_{\mathrm{V}}-i\psi_{\mathrm{V}}), \tag{6}\] where \(\theta_{\mathrm{V}}\) and \(\psi_{\mathrm{V}}\) are the Voigt rotation angle and ellipticity, respectively. If the substrate has a relatively small refractive index (\(n_{s}\to 1\)), the SH angle will be much larger than the Voigt angle, indicating that optical detection upon reflection is more suitable than detection upon transmission for studying the second-order magneto-optical effects of magnetic thin films. Figure 1: (a) Schematic illustration of the magneto-optical Schäfer-Hubert effect in 2D antiferromagnets \(M\)PS\({}_{3}\) (\(M\) = Mn, Fe, Ni) prepared on a SiO\({}_{2}\)/Si substrate. The incident light is linearly polarized with the electric field (\(\mathbf{E_{l}}\)) oriented at an angle \(\alpha\) from the optically anisotropic axis (here, the \(x\)-axis). The reflected light becomes elliptically polarized and the polarization plane (\(\mathbf{E_{R}}\)) is rotated by an angle \(\theta_{\mathrm{SH}}\) with respect to the incident light (\(\mathbf{E_{l}}\)). (b) Optical paths in a magnetic thin film placed on an optically isotropic non-magnetic substrate. Refractive indices (\(n_{0}\), \(n_{x}\), \(n_{y}\), \(n_{s}\)) in each region and the electric fields (\(\mathbf{E_{l}}\), \(\mathbf{E_{2}}\), \(\mathbf{E_{R}}\), \(\mathbf{E_{R2}}\)) at the interface A are labeled, and \(d\) denotes the thickness of the magnetic thin film. The complex SH angle (see Eq. (5)) can also be written in terms of the optical conductivity using the relationship between permittivity and optical conductivity, \(\epsilon_{\mu\nu}=\delta_{\mu\nu}+\frac{4\pi i}{\omega}\sigma_{\mu\nu}\). The off-diagonal elements of the optical conductivity containing the \(z\)-component (e.g., \(\sigma_{yz}\)) must vanish owing to the 2D nature of the systems considered here. This can be read off from Eq. (8), since the quenched electron velocity along the \(z\) direction (\(\hat{v}_{z}=0\)) leads to vanishing \(\sigma_{yz}\) and \(\sigma_{zx}\). Therefore, the complex SH angle is simply expressed as \[\theta_{\rm SH}+i\psi_{\rm SH}\approx\frac{4\pi d}{c(n_{s}^{2}-1)}(\sigma_{yy} -\sigma_{xx}), \tag{7}\] which is the formula implemented in our first-principles calculations. The 2D vdW magnetic materials are often grown on transparent substrates, such as SiO\({}_{2}\), whose refractive index \(n_{s}\) is a real number. 
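The thin-film formula, Eq. (7), is even simpler to evaluate; the minimal sketch below (ours; the sample conductivity values are invented for illustration) works in Gaussian units, where \(\sigma\) carries units of s\({}^{-1}\) as in Fig. 3 below and \(4\pi d\sigma/c\) is dimensionless:

```python
import numpy as np

C_CGS = 2.998e10  # speed of light in cm/s (Gaussian units)


def complex_sh_angle(sigma_xx, sigma_yy, d_cm, n_s):
    # theta_SH and psi_SH in degrees from Eq. (7); sigma in s^-1,
    # film thickness in cm, real substrate index n_s.
    phi = 4 * np.pi * d_cm / (C_CGS * (n_s ** 2 - 1)) * (sigma_yy - sigma_xx)
    return np.degrees(phi.real), np.degrees(phi.imag)


# A ~6.7 Angstrom monolayer on SiO2 (n_s ~ 1.46) with a conductivity
# anisotropy of ~0.3e15 s^-1 gives an SH rotation of order one degree,
# the magnitude reported for FePS3 and NiPS3 below.
theta, psi = complex_sh_angle(1.0e15 + 0.2e15j, 1.3e15 + 0.1e15j, 6.7e-8, 1.46)
```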
In this case, the SH rotation angle and ellipticity are determined by the real and imaginary parts of the conductivity anisotropy (i.e., \(\sigma_{yy}-\sigma_{xx}\)), respectively. On account of this relationship, the conductivity anisotropy can be accurately measured by MOSHE spectroscopy. For monolayer _M_PS\({}_{3}\), the transition metal atoms \(M\) form a flat honeycomb lattice and a P\({}_{2}\)S\({}_{6}\) bipyramid ligand sits at the center of each hexagon (Fig. 2a,b). In the absence of magnetic order, monolayer _M_PS\({}_{3}\) is in-plane isotropic owing to its crystallographic point group \(D_{3d}\). Nevertheless, the honeycomb lattice can host a variety of magnetic orders, including the FM state as well as the Néel-, zigzag-, and stripy-AFM states (Fig. 2c-e) [31], depending on the relative strengths of the intralayer first, second, and third nearest-neighbour exchange interactions. MnPS\({}_{3}\) displays the Néel-AFM order with an out-of-plane (\(z\)-axis) magnetic easy axis [17]. FePS\({}_{3}\) and NiPS\({}_{3}\) display the zigzag-AFM order with out-of-plane (\(z\)-axis) [18] and in-plane (\(x\)-axis) [32] magnetization, respectively. Exfoliated atomic layers retain long-range AFM order down to the bilayer or even monolayer limit, and their magnetic critical temperatures are nearly independent of thickness. Moreover, the magnetization of the Néel- and zigzag-AFM states can be tuned between the out-of-plane and in-plane directions via atomic substitution [33], and the FM state was predicted to become their ground state under sufficiently large carrier density [34]. Before calculating the MOSHE, we conduct a symmetry analysis to determine which magnetic orders break the in-plane optical isotropy of monolayer _M_PS\({}_{3}\). The magnetic space groups computed by the isotropy code [35] are listed in Table 1, in which the shapes of the optical conductivity tensors are identified by the symmetr code [36; 37]. As expected, all of the magnetic orders with the magnetization along the \(x\)-axis are in-plane anisotropic, which allows the MOSHE. For the FM and Néel-AFM orders with the spins along the \(z\)-axis, the in-plane isotropy is preserved by the three-fold rotational symmetry of the magnetic space groups \(P\overline{3}1m^{\prime}\) and \(P\overline{3}^{\prime}1m\), respectively. 
\begin{table} \begin{tabular}{c c c c} \hline \hline Magnetic orders & Magnetic space group & In-plane anisotropy (\(\sigma_{xx}\neq\sigma_{yy}\)) & Dipole selection rules (\(\mathbf{E}\perp z\)) \\ \hline FM (\(x\)) & \(C2^{\prime}/m^{\prime}\) & ✓ & \(\Gamma_{2}^{+}\leftrightarrow\Gamma_{2}^{-}\) \\ FM (\(z\)) & \(P\overline{3}1m^{\prime}\) & \(\times\) & \(\Gamma_{4}^{+}\leftrightarrow\Gamma_{5}^{-},\Gamma_{6}^{-}\); \(\Gamma_{5}^{+}\leftrightarrow\Gamma_{4}^{-},\Gamma_{6}^{-}\); \(\Gamma_{6}^{+}\leftrightarrow\Gamma_{4}^{-},\Gamma_{5}^{-}\); \(\mathbf{K}_{4}\leftrightarrow\mathbf{K}_{5}\); \(\mathbf{K}_{4}\leftrightarrow\mathbf{K}_{6}\); \(\mathbf{K}_{5}\leftrightarrow\mathbf{K}_{6}\) \\ Néel-AFM (\(x\)) & \(C2^{\prime}/m\) & ✓ & \(\Gamma_{3}\Gamma_{4}\leftrightarrow\Gamma_{3}\Gamma_{4}\) \\ Néel-AFM (\(z\)) & \(P\overline{3}^{\prime}1m\) & \(\times\) & \(\Gamma_{4}\leftrightarrow\Gamma_{4}\); \(\Gamma_{4}\leftrightarrow\Gamma_{5}\Gamma_{6}\); \(\mathbf{K}_{4}\leftrightarrow\mathbf{K}_{4}\); \(\mathbf{K}_{4}\leftrightarrow\mathbf{K}_{5}\mathbf{K}_{6}\) \\ zigzag-AFM (\(x\),\(z\)) & \(P_{c}2_{1}/m\) & ✓ & \(\Gamma_{3}^{+}\Gamma_{4}^{+}\leftrightarrow\Gamma_{3}^{-}\Gamma_{4}^{-}\) \\ stripy-AFM (\(x\),\(z\)) & \(P_{a}2_{1}/c\) & ✓ & \(\Gamma_{3}^{+}\Gamma_{4}^{+}\leftrightarrow\Gamma_{3}^{-}\Gamma_{4}^{-}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Magnetic space groups of monolayer _M_PS\({}_{3}\) with different magnetic orders. The magnetization directions are labeled in brackets. The symbol ✓ (\(\times\)) indicates in-plane optical anisotropy (isotropy). The dipole selection rules at the high-symmetry points \(\Gamma\) and K are listed. Figure 2: (a,b) Top and side views of monolayer _M_PS\({}_{3}\). Blue dashed lines draw out the primitive cell of the non-magnetic state. (c-e) The Néel-, zigzag-, and stripy-antiferromagnetic orders on a honeycomb lattice. Red and blue spheres represent the \(M\) atoms with opposite directions of spin magnetic moments. The magnetic space groups of the zigzag- and stripy-AFM orders with the magnetization along the \(z\)-axis are the same as those with the magnetization along the \(x\)-axis, such that the \(z\)-axis magnetization is also in-plane anisotropic and may also lead to the MOSHE. According to the mirror symmetry \(\mathcal{M}_{y}\) of the zigzag-AFM order, the orthogonal anisotropic axes are determined to be the \(x\)- and \(y\)-axes, as shown in Fig. 2. Figure 3 plots the calculated optical conductivities and MOSHE spectra of monolayer AFM _M_PS\({}_{3}\). We first discuss the results for each material in its magnetic ground state. For MnPS\({}_{3}\) with the \(z\)-axis Néel-AFM order (Fig. 3a), the spectrum of \(\sigma_{xx}\) is identical to that of \(\sigma_{yy}\), as dictated by the in-plane optical isotropy, and the resulting SH rotation angle (\(\theta_{\rm SH}\)) and SH ellipticity (\(\psi_{\rm SH}\)) are negligibly small. The absorptive parts of the optical conductivity tensor, Re\(\sigma_{xx}\) and Re\(\sigma_{yy}\), are determined by the symmetry-allowed dipole selection rules listed in Table 1, from which one can analyze the origin of the main peaks in the conductivity spectra. Figure 3: Optical conductivities and MOSHE spectra of monolayer (a) MnPS\({}_{3}\), (b) FePS\({}_{3}\), and (c) NiPS\({}_{3}\) on a SiO\({}_{2}\) substrate. The panels from top to bottom show the real part of the optical conductivity (Re\(\sigma\)), SH rotation angle (\(\theta_{\rm SH}\)), imaginary part of the optical conductivity (Im\(\sigma\)), and SH ellipticity (\(\psi_{\rm SH}\)), respectively. The magnetization direction of each magnetic order is indicated in brackets, and an asterisk labels the ground state. The Re\(\sigma_{yy}\) and Im\(\sigma_{yy}\) of MnPS\({}_{3}\) are shifted upward by 0.5\(\times 10^{15}\) s\({}^{-1}\) for clarity, and the \(\theta_{\rm SH}\) and \(\psi_{\rm SH}\) of MnPS\({}_{3}\) are multiplied by a factor of 20. A\({}_{1}\), A\({}_{2}\), and A\({}_{3}\) mark several absorption peaks of Re\(\sigma\) in the low-energy range. 
For example, the A\({}_{1}\) and A\({}_{2}\) peaks at the energies of 2.9 eV and 3.2 eV originate from the interband transitions \(\rm K_{5}K_{6}\to K_{4}\) and \(\rm K_{4}\to K_{4}\) at the K-point, respectively, and the A\({}_{3}\) peak at the energy of 3.7 eV originates from the interband transition \(\Gamma_{5}\Gamma_{6}\to\Gamma_{4}\) at the \(\Gamma\)-point, as depicted in Fig. 4a. For FePS\({}_{3}\) with the \(z\)-axis zigzag-AFM order (Fig. 3b), one can discern a clear anisotropy in the real and imaginary parts of the optical conductivity above the absorption edge (\(\sim\)1.8 eV). The spectra of Re\(\sigma_{xx}\) and Re\(\sigma_{yy}\) feature three peaks A\({}_{1}\), A\({}_{2}\), and A\({}_{3}\) at the energies of 2.3 eV, 2.8 eV, and 3.1 eV, respectively, which come from the interband transitions between the \(\Gamma_{3}^{+}\Gamma_{4}^{+}\) and \(\Gamma_{3}^{-}\Gamma_{4}^{-}\) states at the \(\Gamma\)-point (Fig. 4b). The pronounced difference between Re\(\sigma_{xx}\) and Re\(\sigma_{yy}\) around the A\({}_{1}\) and A\({}_{3}\) peaks generates maximal SH rotation angles of -0.7\({}^{\circ}\) at 2.4 eV and of 1.0\({}^{\circ}\) at 3.3 eV, respectively. The SH ellipticity is always negative and reaches -1.1\({}^{\circ}\) at 3.1 eV. For NiPS\({}_{3}\) with the \(x\)-axis zigzag-AFM order (Fig. 3c), the real part of the optical conductivity resembles that measured experimentally for its bulk crystal [38]. Both the Re\(\sigma_{xx}\) and Re\(\sigma_{yy}\) spectra show the A\({}_{2}\) peak at 2.3 eV due to the interband transition \(\Gamma_{3}^{+}\Gamma_{4}^{+}\to\Gamma_{3}^{-}\Gamma_{4}^{-}\), while an additional peak A\({}_{1}\) appears at 2.0 eV for Re\(\sigma_{xx}\), which is related to the transition from the \(\Gamma_{3}^{-}\Gamma_{4}^{-}\) state (highest valence band) to the \(\Gamma_{3}^{+}\Gamma_{4}^{+}\) state (lowest conduction band) at the \(\Gamma\)-point (Fig. 4c). In the energy range of 1.7 \(\sim\) 2.5 eV, the significant anisotropy of the optical conductivity results in large SH rotation angles, e.g., -0.9\({}^{\circ}\) at 1.9 eV and 0.8\({}^{\circ}\) at 2.2 eV. The corresponding SH ellipticity is also large, with a peak of -0.8\({}^{\circ}\) at 2.1 eV. Of particular interest here is that the optical conductivity spectra are almost unchanged when the magnetization direction changes from the \(z\)-axis to the \(x\)-axis or vice versa (Fig. 3). This is very similar to the cases of the three-dimensional noncollinear AFM Mn\({}_{3}\)\(X\) (\(X\) = Rh, Ir, Pt) [39] and the 2D vdW FM Fe\({}_{n}\)GeTe\({}_{2}\) (\(n\) = 3, 4, 5) [40]. This can be readily understood, as the longitudinal optical conductivities (\(\sigma_{xx}\) and \(\sigma_{yy}\)) are closely related to the joint density of states and the interband transition probability [41], which are essentially unaffected as long as the angle between adjacent spins remains fixed. 
It follows that the SH spectra of \(M\)PS\({}_{3}\) are insensitive to the magnetization direction, e.g., FePS\({}_{3}\) and NiPS\({}_{3}\) with the \(z\)- and \(x\)-axis zigzag-AFM orders (Fig. 3b,c). In the case of MnPS\({}_{3}\) with the \(x\)-axis Néel-AFM order, the optical conductivities are identical to those of the \(z\)-axis Néel-AFM order, such that the SH rotation angle and ellipticity are also negligibly small (Fig. 3a), even though the appearance of the MOSHE for the \(x\)-axis Néel-AFM order is allowed by symmetry (Table 1). Similarly, while the \(z\)-axis FM order exhibits in-plane isotropy, the MOSHE in all three materials with the \(x\)-axis FM order is also rather small (see Supplementary Fig. 1); e.g., the largest SH rotation angle (appearing in FePS\({}_{3}\)) is only 0.05\({}^{\circ}\). Therefore, we suggest that large second-order magneto-optical effects are more likely to be observed in AFM materials that remain in-plane anisotropic even when the spins are oriented out of plane, such as the \(M\)PS\({}_{3}\) family with the zigzag-AFM (Fig. 3b,c) and stripy-AFM orders (see Supplementary Fig. 2). Next, we move on to discuss the MOSHE in bilayer FePS\({}_{3}\) and NiPS\({}_{3}\) in their magnetic ground states. For FePS\({}_{3}\), two types of interlayer magnetic structures have long been reported. One is the zigzag-AFM chain along the \(x\)-axis (type-A) with AFM interlayer coupling (Fig. 5a) [42], while the other is the zigzag-AFM chain along the \(x^{\prime}\)-axis (type-B) with FM interlayer coupling (Fig. 5b) [43]. Figure 4: Relativistic band structures of monolayer (a) MnPS\({}_{3}\) with the \(z\)-axis Néel-AFM order, (b) FePS\({}_{3}\) with the \(z\)-axis zigzag-AFM order, and (c) NiPS\({}_{3}\) with the \(x\)-axis zigzag-AFM order. The irreducible representations of the relevant bands at the \(\Gamma\) and K points are labeled. The principal interband transitions A\({}_{1}\), A\({}_{2}\), and A\({}_{3}\) are indicated by arrows, corresponding to the peaks of Re\(\sigma_{xx}\) and Re\(\sigma_{yy}\) in Fig. 3. Recently, the coexistence of the two types of magnetic structures in multilayer FePS\({}_{3}\) has been confirmed by combining MLD and second-harmonic generation measurements [21]. For NiPS\({}_{3}\) powder and single crystals, as far as we know, only the type-A zigzag chain with FM interlayer coupling has been reported [44; 45]. We speculate that the type-B structure may also exist in bilayer and multilayer NiPS\({}_{3}\). Here we consider both FM and AFM interlayer coupling for the type-A and type-B zigzag chains in bilayer FePS\({}_{3}\) and NiPS\({}_{3}\). The optical conductivities are not shown because they retain the overall trends of the monolayers (Fig. 3b,c), with slight changes in magnitude due to the weak interlayer vdW interactions. The calculated SH spectra are plotted in Fig. 5c,d, in which the interlayer FM and AFM couplings are not labeled since their spectra are identical to each other. One can observe that, for both FePS\({}_{3}\) and NiPS\({}_{3}\), the profiles of the SH rotation angles for the two types of zigzag chains resemble each other (top panels of Fig. 5c,d). Moreover, the SH rotation angles of monolayers and bilayers are also similar in the sense that their peaks appear at almost the same photon energies. The calculated SH rotation angles of bilayer FePS\({}_{3}\) (NiPS\({}_{3}\)) are surprisingly large, reaching -1.2\({}^{\circ}\) at 2.4 eV and 1.2\({}^{\circ}\) at 3.2 eV (-2.4\({}^{\circ}\) at 2.0 eV and 1.0\({}^{\circ}\) at 2.2 eV). 
In contrast to the SH rotation angles, the SH ellipticities are highly correlated with the zigzag chain structures (bottom panels of Fig. 5c,d). The ellipticity spectra of bilayer FePS\({}_{3}\) with the type-A and type-B structures show a striking contrast over a wide range of photon energies, in particular at 2.4 eV and 3.4 eV, where \(\psi_{\rm SH}\) for the type-B structure is zero. Likewise, there is a significant difference between the type-A and type-B structures of bilayer NiPS\({}_{3}\) from 3.0 eV to 3.7 eV. We suggest that these distinctive features in the SH ellipticity can be used to distinguish the magnetic structures of bilayer _M_PS\({}_{3}\). In summary, our work establishes a simple theoretical framework for studying the magneto-optical Schäfer-Hubert effect in 2D magnetic materials using first-principles calculations, and also proposes second-order magneto-optical spectroscopy as a powerful technique for accurately detecting the in-plane anisotropy of various magnetic structures. The calculated results demonstrate that monolayer FePS\({}_{3}\) and NiPS\({}_{3}\) with the zigzag antiferromagnetic order exhibit large Schäfer-Hubert angles (up to 1\({}^{\circ}\)) in the visible and near-ultraviolet range. We further find that the Schäfer-Hubert effect is interestingly insensitive to the orientation of the Néel vector. Finally, the magneto-optical response of bilayer FePS\({}_{3}\) and NiPS\({}_{3}\) with different stackings of the zigzag antiferromagnetic chains is studied. Surprisingly, the Schäfer-Hubert angle of bilayer NiPS\({}_{3}\) reaches 2.4\({}^{\circ}\), and the obvious discrepancy in the ellipticity spectra enables a distinction between different interlayer magnetic structures. These excellent properties render the _M_PS\({}_{3}\) family a novel AFM materials platform for nanophotonic devices. More importantly, our theoretical framework allows for high-throughput studies of the Schäfer-Hubert effect across 2D AFM materials in order to find potentially interesting systems. ## Methods ### First-principles calculations The electronic structure calculations were performed using the projector augmented wave (PAW) method [46], implemented in the Vienna _ab initio_ Simulation Package (VASP) [47]. The exchange-correlation effects were treated using the generalized gradient approximation with the Perdew-Burke-Ernzerhof parameterization (GGA-PBE) [48]. The cutoff energy was set to 300 eV and the energy convergence criterion was set to 10\({}^{-6}\) eV. A \(k\)-mesh of \(12\times 12\times 1\) (\(12\times 6\times 1\)) was used for the \(1\times 1\) (\(1\times 2\)) unit cell. The spin-orbit coupling (SOC) effect was included in our calculations. The correlation effects of the \(d\)-orbitals of the Fe, Ni, and Mn atoms were treated by the GGA+U method [49], and the effective Hubbard parameters were set to 3.0 eV, 4.0 eV, and 5.0 eV, respectively. The experimental lattice constants are adopted for MnPS\({}_{3}\) (6.077 Å), FePS\({}_{3}\) (5.947 Å), and NiPS\({}_{3}\) (5.812 Å) [50]. The van der Waals interactions were considered using the DFT-D2 method [51]. A vacuum layer of 15 Å was used to eliminate the interactions between adjacent atomic layers. Figure 5: (a,b) Two types of magnetic structures for bilayer _M_PS\({}_{3}\) with the zigzag-AFM chains along the \(x\)- and \(x^{\prime}\)-axes. Bright (dark) red and blue spheres denote the \(M\) atoms on the bottom (top) layer with opposite spin magnetic moments, whereas P and S atoms are not shown. The solid black lines draw out the 2D primitive cell. 
(c,d) Magneto-optical Schäfer-Hubert spectra (\(\theta_{\rm SH}\) and \(\psi_{\rm SH}\)) of bilayer FePS\({}_{3}\) and NiPS\({}_{3}\) with the type-A and type-B magnetic structures. **Magneto-optical Schäfer-Hubert effect** The complex Schäfer-Hubert angle of the two-dimensional (2D) materials was computed according to Eq. (7). We constructed maximally-localized Wannier functions, including the \(d\)-orbitals of the Mn, Fe, and Ni atoms, the \(s\)- and \(p\)-orbitals of the P atoms, and the \(p\)-orbitals of the S atoms, using the wannier90 package [52]. The optical conductivity was then calculated using the Kubo-Greenwood formula [53], \[\sigma_{\mu\nu} = \frac{ie^{2}\hbar}{N_{k}V}\sum_{\mathbf{k}}\sum_{n,m}\frac{f_{m \mathbf{k}}-f_{n\mathbf{k}}}{E_{m\mathbf{k}}-E_{n\mathbf{k}}} \tag{8}\] \[\times\frac{\langle\psi_{n\mathbf{k}}|\hat{v}_{\mu}|\psi_{m \mathbf{k}}\rangle\langle\psi_{m\mathbf{k}}|\hat{v}_{\nu}|\psi_{n\mathbf{k}} \rangle}{E_{m\mathbf{k}}-E_{n\mathbf{k}}-(\hbar\omega+i\eta)},\] where \(f_{n\mathbf{k}}\), \(V\), \(N_{k}\), \(\omega\), and \(\eta\) are the Fermi-Dirac distribution function, the volume of the unit cell, the total number of \(k\)-points in the Brillouin zone, the photon frequency, and the energy smearing parameter, respectively. \(\hat{v}_{\mu(\nu)}\) is the velocity operator, with the subscripts \(\mu,\nu\in\{x,y,z\}\) denoting Cartesian components. \(\psi_{n\mathbf{k}}\) and \(E_{n\mathbf{k}}\) are the Wannier-interpolated wavefunction and band energy at band index \(n\) and momentum \(\mathbf{k}\), respectively. A \(k\)-mesh of \(400\times 400\times 1\) was used to converge the optical conductivity, and \(\eta\) was set to 0.1 eV. The effective thicknesses (\(d\)) of the MnPS\({}_{3}\), FePS\({}_{3}\), and NiPS\({}_{3}\) monolayers were taken from the interlayer distances of their bulk compounds, that is, 6.796 Å, 6.722 Å, and 6.632 Å, respectively [43]. The experimental refractive index of SiO\({}_{2}\) at different photon energies [54] was acquired from an online database ([https://refractiveindex.info](https://refractiveindex.info)). **Dipole selection rules** The characters of the energy bands at high-symmetry \(k\)-points were determined using the MagVasp2trace code [55, 56]. The corresponding irreducible corepresentations and dipole selection rules were identified with the MSGCorep package [57, 58]. Here, we take the magnetic space group \(P\overline{3}^{\prime}1m\) as an example to illustrate how to find the dipole selection rules. For in-plane polarized light (i.e., \(\mathbf{E}\perp z\)), the dipole operators are \(-e\hat{x}\) and \(-e\hat{y}\), which together transform as the bases of the irreducible corepresentation \(\Gamma_{3}\) of the group \(P\overline{3}^{\prime}1m\). Using the command "showMSGCorepDirectProduct" in the MSGCorep package, we can obtain the direct products and their decompositions between \(\Gamma_{3}\) and the other corepresentations (Supplementary Fig. 3). It is easy to find that the dipole selection rules are \(\Gamma_{4}\leftrightarrow\Gamma_{4}\) and \(\Gamma_{4}\leftrightarrow\Gamma_{5}\Gamma_{6}\). **DATA AVAILABILITY** The data that support the findings of this study are available from the corresponding author on reasonable request. **CODE AVAILABILITY** The codes that are necessary to reproduce the findings of this study are available from the corresponding author on reasonable request.
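As an illustration of the Kubo-Greenwood formula, Eq. (8), the following minimal sketch (ours; the array shapes and the absorbed prefactor are assumptions made for brevity) evaluates the interband sum on a given \(k\)-mesh:

```python
import numpy as np


def kubo_sigma(Ek, fk, vmu, vnu, omega, eta=0.1):
    # Interband sum of Eq. (8) in arbitrary units (the i*e^2*hbar/V
    # prefactor is absorbed into the overall scale).
    # Ek, fk: (nk, nb) band energies in eV and occupations;
    # vmu, vnu: (nk, nb, nb) velocity matrix elements <psi_n|v|psi_m>;
    # omega: photon energy hbar*omega in eV; eta: smearing in eV.
    sigma = 0.0j
    for k in range(Ek.shape[0]):
        dE = Ek[k][None, :] - Ek[k][:, None]   # E_m - E_n at index [n, m]
        df = fk[k][None, :] - fk[k][:, None]   # f_m - f_n
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where(np.abs(dE) > 1e-8, df / dE, 0.0)
        matel = vmu[k] * vnu[k].T              # <n|v_mu|m><m|v_nu|n>
        sigma += 1j * np.sum(ratio * matel / (dE - (omega + 1j * eta)))
    return sigma / Ek.shape[0]
```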
2301.11048
Decidability of well quasi-order and atomicity for equivalence relations under embedding orderings
We consider the posets of equivalence relations on finite sets under the standard embedding ordering and under the consecutive embedding ordering. In the latter case, the relations are also assumed to have an underlying linear order, which governs consecutive embeddings. For each poset we ask the well quasi-order and atomicity decidability questions: Given finitely many equivalence relations $\rho_1,\dots,\rho_k$, is the downward closed set Av$(\rho_1,\dots,\rho_k)$ consisting of all equivalence relations which do not contain any of $\rho_1,\dots,\rho_k$: (a) well-quasi-ordered, meaning that it contains no infinite antichains? and (b) atomic, meaning that it is not a union of two proper downward closed subsets, or, equivalently, that it satisfies the joint embedding property?
V. Ironmonger, N. Ruskuc
2023-01-26T11:44:03Z
http://arxiv.org/abs/2301.11048v4
# Decidability of well quasi-order and atomicity for equivalence relations under embedding orderings ###### Abstract We consider the posets of equivalence relations on finite sets under the standard embedding ordering and under the consecutive embedding ordering. In the latter case, the relations are also assumed to have an underlying linear order, which governs consecutive embeddings. For each poset we ask the well quasi-order and atomicity decidability questions: Given finitely many equivalence relations \(\rho_{1},\ldots,\rho_{k}\), is the downward closed set \(\operatorname{Av}(\rho_{1},\ldots,\rho_{k})\) consisting of all equivalence relations which do not contain any of \(\rho_{1},\ldots,\rho_{k}\): (a) well-quasi-ordered, meaning that it contains no infinite antichains? and (b) atomic, meaning that it is not a union of two proper downward closed subsets, or, equivalently, that it satisfies the joint embedding property? Key words and phrases: Equivalence relation, embedding, poset, well quasi order, antichain, atomic, joint embedding property, graph, path, subpath, decidability A typical decidability question in this area asks: given finitely many structures \(A_{1},\ldots,A_{k}\) from a collection \(\mathcal{C}\), does the avoidance set \(\operatorname{Av}(A_{1},\ldots,A_{k})\), which consists of all members of \(\mathcal{C}\) which do not contain any of the \(A_{i}\), satisfy property \(\mathcal{P}\)? Two properties that have been much investigated are well quasi-order and atomicity. The former means absence of infinite antichains (as well as infinite descending chains, but this is automatic for collections of finite structures); the latter is equivalent to the joint embedding property - for any two members in the poset there is a member that contains them both. There are many papers dealing with this subject matter, most notably on well quasi-order in graphs. For example, the wqo problem for the ordinary subgraph ordering is decidable as an immediate consequence of [5]. The Graph Minor Theorem [14] asserts that under the minor ordering (which is, strictly speaking, not an embedding ordering) the set of all graphs is wqo (and hence the wqo problem is trivially decidable). By way of contrast, the wqo problem is wide open for some other natural orderings, notably the induced subgraph ordering. The same is true for some special classes or variations of graphs, such as bipartite graphs, digraphs and tournaments, as well as for other combinatorial structures, notably permutations. One further exception is provided by the case of words over a finite alphabet under the (scattered) subword ordering, where the entire poset is wqo due to the so-called Higman's Lemma [9], which we will briefly review in Section 2, and hence the wqo problem for downward closed classes in this case is trivially decidable. We refer the reader to the first half of [4] for a motivational survey; [11] takes a more comparative-combinatorial viewpoint, and [12] is the most up-to-date survey, focussing on graphs. Turning to the atomicity problem, a good introduction to the concept and its structural significance is given in [15]. A recent major result shows that the property is undecidable for the induced subgraph ordering [2]. The problem is still open for permutations, but [3] shows that atomicity is undecidable for '3-dimensional' permutations, i.e. sets with three linear orders. An embedding ordering that has recently come to prominence is the so-called _consecutive_ ordering. For words, this would be the usual consecutive subword (sometimes called factor) ordering. In permutations, this ordering arises when the entries are required to embed consecutively in one (out of the two available) dimensions/linear orders. 
We refer the reader to [6] for a survey of this ordering for permutations, and [7] for some recent insights into the structure of the resulting poset. In general, to be able to define a consecutive ordering, one requires the presence of a linear order in the language for our combinatorial structures. In [1] it is proved that the wqo problem is decidable for the consecutive embedding ordering on words over a finite alphabet. This was taken further in [13], where it was proved that wqo is also decidable for the consecutive embedding ordering on permutations, and that atomicity is decidable for consecutive embedding orderings on both words and permutations. In all these cases the key idea is to re-interpret the problem in terms of the subpath ordering on the set of all paths in a finite digraph. Motivated by these similarities and the desire to gain more understanding of the general behaviour of consecutive orderings, in this paper we take another type of very elementary combinatorial structure, namely equivalence relations. Of course, they do not come naturally equipped with a linear order, so to be able to consider consecutive orderings we add one to the signature for our structures. In order to fill a somewhat surprising gap in the literature, we also consider the (non-consecutive) embedding ordering for equivalence relations. Thus we arrive at the topic and content of the present paper: we investigate the collection of all equivalence relations on finite sets under two orderings - the standard (or non-consecutive) embedding ordering and the consecutive embedding ordering. We prove that the collection is well quasi-ordered under the non-consecutive embedding ordering (Theorem 3.4). Furthermore, we prove that the following problems are decidable: * the well quasi-order problem for collections defined by finitely many forbidden equivalence relations under the consecutive embedding order (Theorem 9.4); * the atomicity problem for collections defined by finitely many forbidden equivalence relations under each of the non-consecutive (Theorem 4.5) and consecutive embedding orders (Theorem 10.8). The paper is organised as follows. We begin by giving some necessary preliminary results and notation in Section 2. Following this we look at wqo and atomicity for equivalence relations under the non-consecutive embedding order. In Section 3 we show that the poset of finite equivalence relations under the non-consecutive embedding order is wqo (Theorem 3.4). Then in Section 4 we answer the atomicity problem for the poset of equivalence relations under the non-consecutive embedding order (Theorem 4.5). Sections 5-10 tackle the well quasi-order and atomicity problems for the poset of finite equivalence relations under the consecutive embedding order. We will relate the poset of equivalence relations in an avoidance set to the poset of paths in certain finite digraphs. We rely on results from [13] which give criteria for these posets of paths to be well quasi-ordered or atomic; these are introduced in Section 5. Section 6 introduces the technical tools needed to apply these to equivalence relations, and we utilise these tools in Section 7 to give criteria for wqo in two particular cases. To tackle the remaining case, in Section 8 we introduce a new poset of coloured equivalence relations; combining these results enables us to answer the wqo problem in general in the affirmative (Theorem 9.4). 
Finally, in Section 10 we answer the atomicity problem for the poset of equivalence relations under the consecutive embedding ordering (Theorem 10.8). The paper concludes with some remarks and open problems in Section 11. ## 2. Preliminaries An _equivalence relation_, considered as a relational or combinatorial structure, is simply a pair \((X,\rho)\), where \(X\) is a set and \(\rho\subseteq X\times X\) is a binary relation which is reflexive, symmetric and transitive. Often we will denote an equivalence relation \((X,\rho)\) as a list of its equivalence classes, separated by vertical bars. For example, \(|\,1\,|\,2\,|\,...\,|\,n\,|\) is the equality relation on \(\{1,\ldots,n\}\), whereas \(|\,1\,2\ldots n\,|\) is the full relation. The equivalence class of an element \(x\in X\) is denoted \(\rho_{x}\). We will consider two posets of equivalence relations; the first of them will use the standard embedding ordering on relational structures: **Definition 2.1**.: The (_non-consecutive_) _embedding ordering_ on equivalence relations is given by \((X,\rho)\leq(Y,\sigma)\) if and only if there is an injective function \(f:X\to Y\) such that \((x,y)\in\rho\) if and only if \((f(x),f(y))\in\sigma\) for all \(x,y\in X\). We also say that \((X,\rho)\) is a _sub-equivalence relation_ of \((Y,\sigma)\), and that \(f\) is an _embedding_ of \((X,\rho)\) into \((Y,\sigma)\). Associated with this definition of embedding is the following definition of isomorphism. Two equivalence relations \((X,\rho)\) and \((Y,\sigma)\) are _isomorphic_ if there is a bijection \(f:X\to Y\) such that for all \(x,y\in X\) we have \((x,y)\in\rho\) if and only if \((f(x),f(y))\in\sigma\), and we will write \((X,\rho)\cong(Y,\sigma)\); this is equivalent to \(\rho\) and \(\sigma\) having the same number of equivalence classes of each size. Observe that if \(X,Y\) are finite, then \((X,\rho)\cong(Y,\sigma)\) if and only if \((X,\rho)\leq(Y,\sigma)\) and \((Y,\sigma)\leq(X,\rho)\). We will consider isomorphic equivalence relations to be equal, and gather the finite ones into a set \(\mathpzc{Eq}\). Every equivalence relation on a finite set is isomorphic to an equivalence relation on a subset of \(\mathbb{N}\), so, without loss of generality, from now on we limit our considerations to equivalence relations of this form. In fact, we will almost always work with equivalence relations on the set \([n]=[1,n]=\{1,\ldots,n\}\) for some \(n\in\mathbb{N}\). With these conventions, the set of equivalence relations is an infinite poset under the non-consecutive embedding order, denoted by \((\mathpzc{Eq},\leq)\). Our second poset will use a consecutive embedding ordering, for which we will need our underlying sets to be linearly ordered. **Definition 2.2**.: Let \(X=\{x_{1},\ldots,x_{n}\}\) and \(Y=\{y_{1},\ldots,y_{m}\}\), and let \(\leq_{X}\) and \(\leq_{Y}\) be linear orders on \(X\) and \(Y\) respectively, so that \(x_{1}\leq_{X}x_{2}\leq_{X}\cdots\leq_{X}x_{n}\) and \(y_{1}\leq_{Y}y_{2}\leq_{Y}\cdots\leq_{Y}y_{m}\). A mapping \(f:X\to Y\) is _contiguous_ (or _consecutive_) if there exists \(k\) such that \(f(x_{i})=y_{k+i-1}\) for all \(i\in[1,n]\). Note that contiguous maps are always injective. **Definition 2.3**.: Let \((X,\rho)\), \((Y,\sigma)\) be equivalence relations, and let \(\leq_{X}\) and \(\leq_{Y}\) be linear orders on \(X\) and \(Y\). We say that \((X,\rho)\) _embeds consecutively_ in \((Y,\sigma)\) if there is a contiguous embedding \(f:X\to Y\). 
This is written \((X,\rho)\leq_{\mathrm{cons}}(Y,\sigma)\), and we say that \((X,\rho)\) is a _consecutive sub-equivalence relation_ of \((Y,\sigma)\). As with the non-consecutive embedding ordering, we have a notion of isomorphism under the consecutive embedding ordering. Two equivalence relations \((X,\rho)\) and \((Y,\sigma)\) are _isomorphic_ if there is a contiguous bijection \(f:X\to Y\) such that for all \(x,y\in X\) we have \((x,y)\in\rho\) if and only if \((f(x),f(y))\in\sigma\); this is written \((X,\rho)\cong_{\mathrm{cons}}(Y,\sigma)\). If \(X,Y\) are finite, then \((X,\rho)\cong_{\mathrm{cons}}(Y,\sigma)\) if and only if \((X,\rho)\leq_{\mathrm{cons}}(Y,\sigma)\) and \((Y,\sigma)\leq_{\mathrm{cons}}(X,\rho)\). Again, we will consider isomorphic relations to be equal and gather the finite ones into a set \(\overline{\mathcal{Eq}}\). And again note that every equivalence relation on a finite set is isomorphic to an equivalence relation on a finite subset of \(\mathbb{N}\), where we take the linear order to be the natural order. Again, without loss of generality, we will restrict our considerations to equivalence relations of this type, and we will almost always work with equivalence relations with underlying set \([1,n]\) for some \(n\in\mathbb{N}\). With these conventions, the set of equivalence relations is an infinite poset under the consecutive embedding order, denoted by \((\overline{\mathcal{Eq}},\leq_{\mathrm{cons}})\). If \((X,\rho)\) is an equivalence relation on \(n\) points, we will define the _length_ of \(\rho\) to be \(|\rho|=n\). **Example 2.4**.: We have \(|\,1\,2\,|\,3\,|\leq_{\mathrm{cons}}|\,1\,|\,2\,3\,|\,4\,|\) via the contiguous embedding \(1\mapsto 2\), \(2\mapsto 3\), \(3\mapsto 4\); therefore \(|\,1\,2\,|\,3\,|\!\leq\!|\,1\,|\,2\,3\,|\,4\,|\) as well. Similarly, \(|\,1\,2\,|\,3\,|\!\leq\!|\,1\,|\,2\,4\,|\,3\,|\), via the embedding \(1\mapsto 2\), \(2\mapsto 4\), \(3\mapsto 3\). This embedding is clearly not contiguous, and it is easy to check that neither of the two possible contiguous mappings \([3]\to[4]\) are embeddings; therefore \(|\,1\,2\,|\,3\,|\!\not\leq_{\mathrm{cons}}|\,1\,|\,2\,4\,|\,3\,|\). Finally, \(|\,1\,2\,|\,3\,|\!\not\leq\!|\,1\,|\,2\,|\,3\,|\,4\,|\,5\,|\) since it is not possible to map the class of size two injectively to a class of size one. We will be interested not only in the two posets \((\mathcal{Eq},\leq)\) and \((\overline{\mathcal{Eq}},\leq_{\mathrm{cons}})\), but also their _downward closed subsets_, for which we establish the basic terminology now. **Definition 2.5**.: Let \((X,\leq)\) be a poset and \(Y\subseteq X\). We say that \(Y\) is _downward closed_ if whenever \(x\in Y\) and \(y\leq x\) we have that \(y\in Y\). **Definition 2.6**.: Let \((X,\leq)\) be a poset and \(B\subseteq X\). The _avoidance set of \(B\) under the order \(\leq\)_ is the downward closed set \[\mathrm{Av}(B)=\{x\in X:y\nleq x\ \ \forall y\in B\},\] the set of elements which _avoid_\(B\). If \(C\subseteq X\) is downward closed, then it can be expressed as an avoidance set \(C=\mathrm{Av}(B)\) for some set \(B\), e.g. \(B=X\backslash C\). Moreover, if \(X\) has no infinite descending chains, as is the case with \((\mathcal{Eq},\leq)\) and \((\overline{\mathcal{Eq}},\leq_{\mathrm{cons}})\), we can take \(B\) to be the set of minimal elements of \(X\backslash C\); this choice of \(B\) is the unique antichain such that \(C=\operatorname{Av}(B)\), and will be called the _basis_ of \(C\). 
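To make Definitions 2.1-2.3 and Example 2.4 concrete, here is a small illustrative sketch (ours, not from the paper). It encodes an equivalence relation on \([n]\) as a tuple of class labels; the non-consecutive test uses the class-size criterion established in Lemma 3.3 below, while the consecutive test tries every contiguous window:

```python
from collections import Counter


def blocks(labels):
    # Decreasing sequence of class sizes, i.e. pi(rho) of Section 3.
    return sorted(Counter(labels).values(), reverse=True)


def embeds(x, y):
    # Non-consecutive embedding via Lemma 3.3: prefix domination of
    # the decreasing class-size sequences.
    a, b = blocks(x), blocks(y)
    return len(a) <= len(b) and all(ai <= bi for ai, bi in zip(a, b))


def embeds_cons(x, y):
    # Consecutive embedding (Definition 2.3): some contiguous window of y
    # induces exactly the same partition pattern as x.
    n, m = len(x), len(y)
    return any(
        all((x[i] == x[j]) == (y[k + i] == y[k + j])
            for i in range(n) for j in range(i + 1, n))
        for k in range(m - n + 1)
    )


# Example 2.4: |12|3| <=cons |1|23|4| but |12|3| is not <=cons |1|24|3|,
# while |12|3| <= |1|24|3| under the non-consecutive order.
assert embeds_cons((0, 0, 1), (0, 1, 1, 2))
assert not embeds_cons((0, 0, 1), (0, 1, 2, 1))
assert embeds((0, 0, 1), (0, 1, 2, 1))
```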
In this case, if \(B\) is finite, we say that \(C\) is _finitely based_. **Example 2.7**.: In \((\mathpzc{Eq},\leq)\) the downward closed set consisting of all equality relations \(|\,1\,|\,2\,|\,...\,|\,n\,|\) is finitely based, with basis \(\{|\,1\,2\,|\}\). **Definition 2.8**.: A downward closed subset \(C\) of a poset is _atomic_ if it cannot be expressed as a union of two proper downward closed subsets. We say that \(C\) satisfies the _joint embedding property_ (JEP) if for any \(x,y\in C\) there exists \(z\in C\) such that \(x\leq z\) and \(y\leq z\). **Proposition 2.9**.: _A downward closed set is atomic if and only if it satisfies the JEP._ **Definition 2.10**.: The _atomicity problem_ for finitely based downward closed sets of a poset \((X,\leq)\) is the algorithmic decidability problem which takes as its input a finite set \(B\subseteq X\) and asks whether or not \(\operatorname{Av}(B)\) is atomic. **Example 2.11**.: The whole poset \((\mathpzc{Eq},\leq)\) is atomic: any two finite equivalence relations embed jointly into the equivalence relation formed by the disjoint union of their classes. **Definition 2.12**.: A poset is _well quasi-ordered_ (wqo) if it contains no infinite antichain and no infinite strictly descending chain. Note that because embedding orderings respect size, and we are dealing with finite structures, the non-existence of infinite descending chains is automatic, and so the wqo property is equivalent to the absence of infinite antichains. Also note that even though we are working with partially ordered sets, we will use the term _well quasi-ordered_, rather than _well partially ordered_ or _partially well ordered_, in order to keep with the prevailing usage in the literature. **Example 2.13**.: In Section 3 we prove that \((\mathpzc{Eq},\leq)\) is wqo. The poset of equivalence relations under the consecutive embedding order is not well quasi-ordered; an infinite antichain is given by the set \(\big\{|\,1\,n\,|\,2\,|\dots\,|\,n-1\,|:n=1,2,\dots\big\}\). **Definition 2.14**.: The _wqo problem_ for finitely based downward closed sets of a poset \((X,\leq)\) is the algorithmic decidability problem, which takes as its input a finite set \(B\subseteq X\) and asks whether or not \(\operatorname{Av}(B)\) is wqo. We make the following straightforward observations: **Lemma 2.15**.: 1. _Any subset of a wqo set is also wqo._ 2. _If_ \(X\) _is a poset and_ \(Y\) _is a finite subset of_ \(X\)_, then_ \(X\) _is wqo if and only if_ \(X\backslash Y\) _is wqo._ Since \((\mathpzc{Eq},\leq)\) is wqo, it follows that all its downward closed subsets are also wqo, and the wqo problem is trivial in this case. In the case of \((\overline{\mathpzc{Eq}},\leq_{\mathrm{cons}})\) the problem is non-trivial, and we will prove it is decidable in Section 9. A key tool in establishing wqo in different contexts is the so-called Higman's Lemma. The result was originally proved in a general context of universal algebra, but we only give a specialisation to free semigroups and an immediate corollary that will be of use to us. **Definition 2.16**.: Let \((X,\leq_{X})\) be a poset and \(X^{*}\) be the set of words over \(X\). 
We define the _domination order_ \(\leq_{X^{*}}\) on \(X^{*}\) by \[x_{1}x_{2}...x_{k}\leq_{X^{*}}y_{1}y_{2}...y_{l}\] if and only if there is a subsequence \(j_{1}<j_{2}<...<j_{k}\) of \([1,l]\) such that \(x_{t}\leq_{X}y_{j_{t}}\) for \(t=1,...,k\). **Lemma 2.17** (Higman's Lemma, [9]).: _Let \((X,\leq_{X})\) be a poset and \((X^{*},\leq_{X^{*}})\) be the poset of words over \(X\) with the domination order. If \(X\) is wqo under \(\leq_{X}\) then \(X^{*}\) is wqo under \(\leq_{X^{*}}\)._ **Corollary 2.18**.: _The set of finite sequences of natural numbers is wqo under the ordering given by_ \[(x_{1},...,x_{k})\leq(y_{1},...,y_{l})\quad\Leftrightarrow\quad x_{i}\leq y_{ j_{i}}\text{ for }i=1,\ldots,k,\text{ for some }1\leq j_{1}<\dots<j_{k}\leq l.\] ## 3. WQO under the non-consecutive embedding ordering The purpose of this section is to show that the poset of equivalence relations under the non-consecutive embedding order is wqo. To do this we need to show that this poset contains no infinite antichains. We begin by introducing another ordering on finite sequences of natural numbers, the prefix domination order. We then express the non-consecutive embedding order on equivalence relations in terms of both the prefix domination order and the usual domination order on the sizes of its classes. These results will allow us to tackle the wqo and atomicity problems for our poset by looking at the domination order and prefix domination order respectively. **Definition 3.1**.: Let \(\sigma=(\sigma_{1},\ldots,\sigma_{k})\), \(\tau=(\tau_{1},\ldots,\tau_{l})\) be finite sequences of natural numbers. We say that \(\sigma\) is related to \(\tau\) under the _prefix domination order_, written \(\sigma\leq_{p}\tau\), if and only if \(k\leq l\) and \(\sigma_{i}\leq\tau_{i}\) for \(i=1,\ldots,k\). In what follows we will use the term 'decreasing' for a sequence in which each entry is less than or equal to the entry preceding it. **Lemma 3.2**.: _If \(\sigma,\tau\) are finite decreasing sequences of natural numbers then \(\sigma\leq\tau\) under the domination ordering if and only if \(\sigma\leq_{p}\tau\)._ Proof.: (\(\Leftarrow\)) This direction follows immediately from the definitions. (\(\Rightarrow\)) Let \(\sigma\leq\tau\) under the domination order and suppose \(\sigma=(\sigma_{1},\ldots,\sigma_{k})\), \(\tau=(\tau_{1},\ldots,\tau_{l})\). Then there is a sequence \(j_{1}<\cdots<j_{k}\) from \([1,l]\) such that \(\sigma_{i}\leq\tau_{j_{i}}\) for each \(i\). Since \(j_{i}\geq i\), we have \(\tau_{j_{i}}\leq\tau_{i}\) for each \(i\). This means that \(\sigma_{i}\leq\tau_{j_{i}}\leq\tau_{i}\) for each \(i\), so \(\sigma\leq_{p}\tau\). Given an equivalence relation \((X,\rho)\) with equivalence classes \(C_{1},\ldots,C_{N}\) in decreasing size order, we will assign to \((X,\rho)\) the sequence of natural numbers \((|C_{1}|,|C_{2}|,\ldots,|C_{N}|)\) and denote this sequence \(\pi(X,\rho)\), or just \(\pi(\rho)\). **Lemma 3.3**.: _If \((X,\rho)\), \((Y,\sigma)\) are equivalence relations, the following are equivalent:_ 1. \((X,\rho)\leq(Y,\sigma)\) _under the non-consecutive embedding order;_ 2. \(\pi(X,\rho)\leq\pi(Y,\sigma)\) _under the domination order;_ 3. \(\pi(X,\rho)\leq_{p}\pi(Y,\sigma)\) _under the prefix domination order._ Proof.: Let \((X,\rho)\), \((Y,\sigma)\) have equivalence classes \(C_{1},\ldots,C_{n}\) and \(K_{1},\ldots,K_{m}\) respectively, listed in order of decreasing size. 
(iii) \(\Rightarrow\) (ii) If \(\pi(X,\rho)\leq_{p}\pi(Y,\sigma)\) then by Lemma 3.2 we have that \(\pi(X,\rho)\leq\pi(Y,\sigma)\) under the domination order. (ii) \(\Rightarrow\) (i) Suppose \(\pi(X,\rho)\leq\pi(Y,\sigma)\) under the domination order, so there is a sequence \(j_{1}<j_{2}<...<j_{n}\) of \([1,m]\) such that \(|C_{t}|\leq|K_{j_{t}}|\) for \(t\in[1,n]\). This means that we can define an injective mapping from the set of equivalence classes of \(\rho\) to those of \(\sigma\) by sending \(C_{t}\) to \(K_{j_{t}}\) for each \(t\). Since \(|C_{t}|\leq|K_{j_{t}}|\), we may choose an injection from each \(C_{t}\) into \(K_{j_{t}}\); together these yield an injective function \(f:X\to Y\) such that for all \(x,y\in X\) we have \[(x,y)\in\rho\Leftrightarrow(f(x),f(y))\in\sigma.\] Therefore, \((X,\rho)\leq(Y,\sigma)\) as required. (i) \(\Rightarrow\) (iii) Suppose \((X,\rho)\leq(Y,\sigma)\), so there is an injective function \(f:X\to Y\) that preserves equivalence classes. It is clear that \(n\leq m\) and \(f\) induces an injective mapping \(f^{\prime}:[1,n]\to[1,m]\) such that \(f\) maps \(C_{i}\) to \(K_{f^{\prime}(i)}\) for \(i=1,\ldots,n\). Note that \(|C_{i}|\leq|K_{f^{\prime}(i)}|\) for \(i=1,\ldots,n\). Consider an arbitrary \(t\in[1,n]\). If \(t\leq f^{\prime}(t)\) then \(|C_{t}|\leq|K_{f^{\prime}(t)}|\leq|K_{t}|\) as required. If \(t>f^{\prime}(t)\) then there exists \(j<t\) such that \(f^{\prime}(j)\geq t\). Then we have that \(|C_{t}|\leq|C_{j}|\leq|K_{f^{\prime}(j)}|\leq|K_{t}|\), so \(\pi(X,\rho)\leq_{p}\pi(Y,\sigma)\), completing the proof. **Theorem 3.4**.: _The poset of equivalence relations on finite sets under the non-consecutive embedding order is well quasi-ordered._ Proof.: Aiming for a contradiction, suppose that there is an infinite antichain of equivalence relations \((X_{1},\rho_{1}),(X_{2},\rho_{2}),\ldots\). Applying \(\pi\) to each element of this antichain gives the sequence \(\pi(X_{1},\rho_{1}),\pi(X_{2},\rho_{2}),\ldots\) of finite sequences of natural numbers. By Corollary 2.18, finite sequences of natural numbers are well quasi-ordered, so \(\pi(X_{i},\rho_{i})\leq\pi(X_{j},\rho_{j})\) for some \(i<j\). Then by Lemma 3.3, \((X_{i},\rho_{i})\leq(X_{j},\rho_{j})\), a contradiction. We conclude that the poset of equivalence relations under the non-consecutive embedding order is wqo. For completeness we record the following immediate result, obtained by combining Theorem 3.4 and Lemma 2.15(i): **Corollary 3.5**.: _All finitely based avoidance sets of equivalence relations under the non-consecutive embedding order are wqo._ ## 4. Atomicity under the non-consecutive embedding ordering Now we will consider atomicity for equivalence relations under the non-consecutive embedding ordering, and all avoidance sets in this section will be under this order. We begin with a couple of illustrative examples and then come to the main results of this section. **Example 4.1**.: Consider the avoidance set \(C=\operatorname{Av}(|\,1\,2\,3\,|\,4\,5\,6\,|)\); it consists of all equivalence relations with at most one class of size \(\geq 3\). Take two elements \(\sigma,\rho\in C\) with equivalence classes \(C_{1},\ldots,C_{m}\) and \(K_{1},\ldots,K_{n}\) respectively, listed in order of decreasing size. Let \(p=\max\{|C_{1}|,|K_{1}|\}\) and \(t=\max\{m,n\}\). Now let \(\theta\) be an equivalence relation with \(t-1\) classes of size \(2\) and one class of size \(p\), so \(\pi(\theta)=(p,2,2,\ldots,2)\). Clearly \(\theta\in C\), and by Lemma 3.3 we also have that \(\sigma,\rho\leq\theta\). This means that \(C\) satisfies the JEP and therefore is atomic by Proposition 2.9. 
**Example 4.2**.: Consider the avoidance set \(C=\operatorname{Av}(|\,1\,2\,|\,3\,|)\), which contains the equivalence relations \(\sigma=|\,1\,|\,2\,|\) and \(\rho=|\,1\,2\,|\). Any equivalence relation containing both \(\sigma\) and \(\rho\) must contain \(|\,1\,2\,|\,3\,|\) and so cannot be in \(C\). Hence \(C\) does not satisfy the JEP and therefore by Proposition 2.9, \(C\) is not atomic.

**Definition 4.3**.: An equivalence relation is _uniform_ if all its equivalence classes are the same size.

**Theorem 4.4**.: _If \(C=\operatorname{Av}(B)\) is a finitely based avoidance set with basis \(B\), then \(C\) is atomic under the non-consecutive embedding order if and only if all elements of \(B\) are uniform._

Proof.: (\(\Rightarrow\)) We prove the contrapositive, that if there is a non-uniform element \(\sigma\in B\) then \(C\) is not atomic. Suppose such a non-uniform element \(\sigma\) exists with equivalence classes \(C_{1},\ldots,C_{m}\), listed in order of decreasing size. Let \(|C_{1}|=k>1\) and suppose that \(\sigma\) has \(n\) classes of size \(k\), so \(|C_{1}|=\cdots=|C_{n}|=k\). Let \(\alpha\) be a uniform equivalence relation consisting of \(n\) classes \(D_{1},\ldots,D_{n}\) of size \(k\). Let \(\beta\) be an equivalence relation identical to \(\sigma\) but with all classes of size \(k\) replaced by classes of size \(k-1\), and let the equivalence classes of \(\beta\) be \(E_{1},\ldots,E_{m}\) in decreasing size order. Since \(\alpha,\beta\lneq\sigma\) and \(B\) is a basis, we have that \(\alpha,\beta\in C\). Suppose there is an equivalence relation \(\theta\in C\) such that \(\alpha,\beta\leq\theta\). Say the equivalence classes of \(\theta\) are \(F_{1},\ldots,F_{l}\) in decreasing size order. Since \(\alpha\leq\theta\), by Lemma 3.3, \(|F_{i}|\geq|D_{i}|=|C_{i}|\) for \(i=1,\ldots,n\). Similarly, since \(\beta\leq\theta\), Lemma 3.3 gives that \(|F_{i}|\geq|E_{i}|=|C_{i}|\) for \(i=n+1,\ldots,m\). Therefore \(|F_{i}|\geq|C_{i}|\) for all \(i\) and so \(\sigma\leq\theta\) by Lemma 3.3, which is a contradiction, so \(C\) does not satisfy the JEP and so is not atomic.

(\(\Leftarrow\)) Now suppose that all the elements of \(B=\{\rho_{1},...,\rho_{n}\}\) are uniform. Take two relations \(\alpha,\beta\in C\). Suppose \(\pi(\alpha)=(a_{1},\ldots,a_{k})\) and \(\pi(\beta)=(b_{1},\ldots,b_{l})\). Without loss of generality, assume \(k\geq l\). Let \(\gamma\) be an equivalence relation with \(\pi(\gamma)=(c_{1},\ldots,c_{k})\), where \[c_{i}=\begin{cases}\max\{a_{i},b_{i}\},&i\leq l\\ a_{i},&i>l\end{cases}\] Lemma 3.3 immediately gives that \(\alpha,\beta\leq\gamma\). To give atomicity, we will show that \(\gamma\in C\). Aiming for a contradiction, suppose that \(\rho_{j}\leq\gamma\) for some \(j\). Suppose \(\pi(\rho_{j})=(p,p,\ldots,p)\), with length \(q\); by Lemma 3.3, we have that \(p\leq c_{i}\) for \(i=1,\ldots,q\). Without loss of generality, suppose \(c_{q}=a_{q}\). Then, since \(\pi(\alpha)\) is decreasing, for every \(i=1,\ldots,q\) we have that \(p\leq c_{q}=a_{q}\leq a_{i}\). This implies \(\rho_{j}\leq\alpha\), a contradiction. Hence, \(\gamma\in C\) is an equivalence relation containing both \(\alpha\) and \(\beta\), so \(C\) satisfies the JEP and therefore is atomic by Proposition 2.9.
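The criterion of Theorem 4.4 is effective, as Theorem 4.5 below records. As a minimal sketch, again under the assumed class-size-list representation of the previous snippet, the whole decision procedure is:

```python
def embeds(s, t):
    """sigma <= tau under the non-consecutive order (Lemma 3.3),
    with relations given as decreasing lists of class sizes."""
    return len(s) <= len(t) and all(a <= b for a, b in zip(s, t))

def is_atomic_nonconsecutive(B):
    """Decide atomicity of Av(B) under the non-consecutive order
    (Theorems 4.4 and 4.5). B is a finite list of decreasing
    class-size sequences."""
    B = [list(rho) for rho in B]
    # Reduce B to a basis: keep only the minimal elements.
    basis = [rho for rho in B
             if not any(embeds(other, rho) and other != rho for other in B)]
    # Av(B) is atomic iff every basis element is uniform.
    return all(len(set(rho)) == 1 for rho in basis)

print(is_atomic_nonconsecutive([[3, 3]]))       # True:  basis | 1 2 3 | 4 5 6 |
print(is_atomic_nonconsecutive([[2, 1]]))       # False: basis | 1 2 | 3 |
print(is_atomic_nonconsecutive([[2, 1], [2]]))  # True:  basis reduces to | 1 2 |
```

The first two calls reproduce the verdicts of Examples 4.1 and 4.2 respectively.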
**Theorem 4.5**.: _It is decidable whether an avoidance set \(C=\operatorname{Av}(B)\) of equivalence relations under the non-consecutive embedding order is atomic._

Proof.: Firstly, if \(B\) is not a basis, we can reduce it to a basis by removing non-minimal elements. Then it is easy to check whether all the basis elements are uniform, meaning that the condition of Theorem 4.4 is decidable and hence atomicity is decidable for avoidance sets of equivalence relations under the non-consecutive embedding order.

## 5. Digraphs: definitions and some useful results

In this section we state some necessary definitions related to digraphs and give two results from [13] which will be used in later sections. In the terminology of this paper, a digraph is a structure with a single binary relation. However, to help the intuition and visualisation we will use terminology more familiar from graph theory, which we introduce now.

**Definition 5.1**.: A _digraph_\(G\) is a pair \((V,E)\), where \(V\) is a set of _vertices_ and \(E\) is a set of _edges_, which are ordered pairs of vertices. If \((u,v)\in E\), we say that \(u\) and \(v\) are _neighbours_ and that \(u\) and \(v\) are _incident_ to the edge \((u,v)\).

**Definition 5.2**.: A _path_ in a digraph \((V,E)\) is an ordered sequence of vertices, written \(v_{1}\to v_{2}\to\cdots\to v_{n}\), where \((v_{i},v_{i+1})\in E\) for \(i=1,\ldots,n-1\). The _length_ of such a path is \(n-1\). The _start vertex_ and _end vertex_ are \(v_{1}\) and \(v_{n}\) respectively. A _simple path_ is a path whose vertices are all distinct. A _cycle_ is a path of length at least one with \(v_{1}=v_{n}\). A _simple cycle_ is a cycle \(v_{1}\to v_{2}\to\cdots\to v_{n}\) in which \(v_{1},\ldots,v_{n-1}\) are distinct.

**Definition 5.3**.: Let \(\pi=v_{1}\to v_{2}\to\cdots\to v_{n}\) and \(\eta=u_{1}\to u_{2}\to\cdots\to u_{k}\) be paths in a digraph and suppose \(v_{n}=u_{1}\). Then \(\pi\) and \(\eta\) can be _concatenated_ to produce a new path \[\pi\eta=v_{1}\to v_{2}\to\cdots\to v_{n}=u_{1}\to u_{2}\to\cdots\to u_{k}.\] If \(\xi\) is a cycle, we will write the concatenation of \(\xi\) with itself \(m\) times as \(\xi^{m}\).

**Definition 5.4**.: The _in-degree_ of a vertex \(v\) in a digraph \(G\) is the number of vertices \(u\) such that \((u,v)\) is an edge in \(G\). Similarly, the _out-degree_ of \(v\) is the number of vertices \(u\) such that \((v,u)\) is an edge in \(G\).

**Definition 5.5**.: A cycle in a digraph is an _in-cycle_ if at least one vertex has in-degree two or more but all vertices have out-degree one. A cycle is an _out-cycle_ if all vertices have in-degree one but at least one vertex has out-degree two or more. A cycle is an _in-out cycle_ if it contains at least one vertex of in-degree two or more and at least one vertex of out-degree two or more.

**Definition 5.6**.: A digraph \(G\) is _strongly connected_ if there is a path between any pair of vertices in \(G\).

**Definition 5.7**.: A digraph is a _bicycle_ if it consists of two disjoint simple cycles connected by a simple path, where only the start and end vertices of the path are in either cycle. We refer to the first cycle as the _initial cycle_ and to the last cycle as the _terminal cycle_. Either of the cycles can be empty, and if one cycle is empty then the connecting path may be absent as well. However, if neither cycle is empty then the connecting path must have length at least one.
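Since a cycle in Definition 5.2 is an arbitrary closed walk, two vertices lie on a common cycle precisely when they lie in the same strongly connected component containing at least one edge. This makes the existence of an in-out cycle easy to test mechanically, a fact used by Proposition 5.10 below. The following is a sketch only; the adjacency-set representation and function names are assumptions of the sketch.

```python
def sccs(adj):
    """Strongly connected components of a digraph given as
    {vertex: set of out-neighbours} (Kosaraju-style two-pass DFS)."""
    order, seen = [], set()
    def dfs(u, graph, out):
        stack = [(u, iter(graph[u]))]
        seen.add(u)
        while stack:
            v, it = stack[-1]
            nxt = next((w for w in it if w not in seen), None)
            if nxt is None:
                stack.pop()
                out.append(v)       # record v once fully explored
            else:
                seen.add(nxt)
                stack.append((nxt, iter(graph[nxt])))
    for u in adj:
        if u not in seen:
            dfs(u, adj, order)
    radj = {u: set() for u in adj}  # reverse digraph
    for u, vs in adj.items():
        for v in vs:
            radj[v].add(u)
    comps, seen = [], set()
    for u in reversed(order):
        if u not in seen:
            comp = []
            dfs(u, radj, comp)
            comps.append(set(comp))
    return comps

def has_in_out_cycle(adj):
    """An in-out cycle (Definition 5.5) exists iff some strongly
    connected component with an edge contains a vertex of in-degree
    >= 2 and a vertex of out-degree >= 2."""
    indeg = {u: 0 for u in adj}
    for vs in adj.values():
        for v in vs:
            indeg[v] += 1
    for comp in sccs(adj):
        cyclic = len(comp) > 1 or any(u in adj[u] for u in comp)
        if cyclic and any(indeg[u] >= 2 for u in comp) \
                  and any(len(adj[u]) >= 2 for u in comp):
            return True
    return False

# A 2-cycle a <-> b with an extra edge into a and out of b:
print(has_in_out_cycle({'a': {'b'}, 'b': {'a', 'd'},
                        'c': {'a'}, 'd': set()}))  # True
```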
**Definition 5.8**.: A _subpath_ of a path \(v_{1}\to v_{2}\to\cdots\to v_{n}\) is any path \(v_{i}\to v_{i+1}\to\cdots\to v_{k}\) with \(1\leq i\leq k\leq n\).

**Definition 5.9**.: Let \(G\) be a digraph. We define the _subpath order_ on the set of paths in \(G\) as follows. If \(\pi,\eta\) are paths in \(G\) then \(\pi\leq\eta\) if and only if \(\pi\) is a subpath of \(\eta\).

The set of paths in a digraph forms a poset under the subpath order. The next two propositions from [13] give criteria for the poset of paths in a digraph to be well quasi-ordered and atomic.

**Proposition 5.10** ([13, Theorem 3.1]).: _The poset of paths in a finite digraph \(G\) under the subpath order is wqo if and only if \(G\) contains no in-out cycles._

**Proposition 5.11** ([13, Theorem 2.1]).: _The poset of paths in a digraph \(G\) under the subpath order is atomic if and only if \(G\) is strongly connected or a bicycle._

## 6. The Factor Graph of an Avoidance Set

We have already seen that the poset of equivalence relations under the non-consecutive embedding order is wqo. Now we look at the consecutive embedding order, and from now on all avoidance sets will be under this order so it will be written as \(\leq\), rather than \(\leq_{\mathrm{cons}}\), and isomorphisms will be written \(\cong\), rather than \(\cong_{\mathrm{cons}}\). In Example 2.13 we saw that the poset of equivalence relations under the consecutive embedding order is not wqo, so now we are working towards showing decidability of wqo for avoidance sets. This section introduces the equivalence relation factor graph of an avoidance set \(C\) and explores the relationship between the poset of paths in this graph and \(C\). The ideas from this section will then be applied in Section 7 towards showing decidability of wqo for avoidance sets in general, and in Section 10 to establish decidability of atomicity.

Since we will be working under the consecutive embedding order, recall that all equivalence relations are equipped with a linear order - the natural order on \(\mathbb{N}\). Given \((X,\rho)\), if \(S\subseteq X\), we will denote the restriction of \(\rho\) to points in \(S\) by \(\rho\!\upharpoonright_{S}\). It can be seen that the restriction of \((X,\rho)\) to \(S\) yields a consecutive sub-equivalence relation, and any consecutive sub-equivalence relation of \((X,\rho)\) can be expressed as a restriction of \(\rho\) to a subset of \(X\). We will write \(\rho\!\downarrow\) to denote the equivalence relation obtained from \(\rho\) by changing the smallest element into a \(1\), the second smallest into a \(2\), and so on. In other words, \(\rho\!\downarrow\) is the unique equivalence relation isomorphic to \(\rho\) whose underlying set is \([1,|\rho|]\). For example, if \(\rho=|\,0\,|\,2\,3\,6\,11\,|\,4\,5\,50\,|\) and \(S=\{3,4,5,6\}\) then \(\rho\!\upharpoonright_{S}=|\,3\,6\,|\,4\,5\,|\) and \(\rho\!\downarrow=|\,1\,|\,2\,3\,6\,7\,|\,4\,5\,8\,|\).

In what follows, \(B\) will be a finite set of equivalence relations, \(C=\operatorname{Av}(B)\), and \(b=\max\{|\rho|:\rho\in B\}\). Note that we are not assuming that \(B\) is necessarily the basis for \(C\) (i.e. that it is an antichain). However, if \(B\) is not a basis it can easily be reduced to one by removing the non-minimal elements. If \(S\subseteq\mathbb{N}\) then \(C_{S}\) will denote the set \(\{\sigma\in C:|\sigma|\in S\}\). We begin with the following easy observation, which relates wqo in an avoidance set \(C\) to wqo in the subset \(C_{[b,\infty)}\).
**Lemma 6.1**.: _A finitely based avoidance set \(C\) is wqo if and only if \(C_{[b,\infty)}\) is wqo._

Proof.: This follows immediately from Lemma 2.15 (ii), taking \(X=C\) and \(Y=C_{[1,b-1]}\).

In [13] de Bruijn graphs are used to show decidability of wqo and atomicity for avoidance sets of words under the contiguous subword ordering. Furthermore, certain modifications of de Bruijn graphs are used to show decidability of wqo and atomicity for avoidance sets of permutations under the contiguous subpermutation ordering. Similarly, now we will introduce the _equivalence relation factor graph_, another modification of de Bruijn graphs, which we use to tackle the wqo and atomicity problems for our poset of equivalence relations.

We define \(\overline{\mathcal{Eq}}_{k}\) to be the set of all equivalence relations on the set \([1,k]\).

**Definition 6.2**.: The _equivalence relation factor graph_ (or simply _factor graph_) \(\Gamma_{B}\) of \(C=\operatorname{Av}(B)\) is the digraph whose vertices are the elements of \(C_{b}\), with an edge \((\mu,\nu)\) if and only if \(\mu\!\upharpoonright_{[2,b]}\!\downarrow=\nu\!\upharpoonright_{[1,b-1]}\!\downarrow\).

Given \(\sigma\in C_{[b,\infty)}\), without loss of generality with underlying set \([1,n]\), every restriction of \(\sigma\) to \(b\) consecutive points is a vertex of \(\Gamma_{B}\), and we associate with \(\sigma\) the path
\[\Pi(\sigma)\;=\;\sigma\!\upharpoonright_{[1,b]}\!\downarrow\;\to\;\sigma\!\upharpoonright_{[2,b+1]}\!\downarrow\;\to\;\cdots\;\to\;\sigma\!\upharpoonright_{[n-b+1,n]}\!\downarrow \tag{1}\]
in \(\Gamma_{B}\). Conversely, for a path \(\pi\) in \(\Gamma_{B}\) we write \(\Sigma(\pi)=\{\sigma\in C_{[b,\infty)}:\Pi(\sigma)=\pi\}\) for the set of equivalence relations _associated_ with \(\pi\); note that \(\Sigma(\pi)\) may contain more than one equivalence relation. For instance, the factor graph of \(\operatorname{Av}(|\,1\,2\,|\,3\,|)\) is shown in Figure 1.

**Example 6.3**.: Let \(B=\overline{\mathcal{Eq}}_{4}\backslash X\), where \[X=\{|\,1\,2\,3\,4\,|,\ |\,1\,2\,3\,|\,4\,|,\ |\,1\,|\,2\,4\,|\,3\,|,\ |\,1\,3\,4\,|\,2\,|,\ |\,1\,|\,2\,3\,|\,4\,|,\ |\,1\,2\,4\,|\,3\,|\},\] so that \(C=\operatorname{Av}(B)\) consists of those equivalence relations all of whose restrictions to four consecutive points are isomorphic to an element of \(X\). The factor graph \(\Gamma_{B}\) of \(C\) is shown in Figure 2.

It will be possible to show a close relationship between the subpath order on \(\Gamma_{B}\) and the consecutive embedding order on \(C=\operatorname{Av}(B)\) which will be key in showing decidability of well quasi-order and atomicity for \(C\). Sometimes we will refer to wqo of the poset of paths in a graph \(G\) under the subpath order simply as wqo of \(G\).
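Definition 6.2 can be implemented directly by brute force: generate all equivalence relations on \([1,b]\), discard those containing a basis element as a consecutive sub-relation, and join two vertices when the last \(b-1\) points of one match the first \(b-1\) points of the other. The sketch below encodes a relation as the tuple of class labels of its points, labelled in order of first occurrence (a restricted-growth string); this encoding and the helper names are assumptions of the sketch.

```python
def canonical(labels):
    """Relabel a sequence of class labels by first occurrence, so that
    isomorphic consecutive restrictions compare equal."""
    seen = {}
    return tuple(seen.setdefault(x, len(seen)) for x in labels)

def all_relations(n):
    """All equivalence relations on [1, n] as canonical label tuples."""
    rels = [()]
    for _ in range(n):
        rels = [r + (c,) for r in rels for c in range(max(r, default=-1) + 2)]
    return rels

def avoids(rel, basis):
    """True iff no consecutive restriction of rel is isomorphic to an
    element of basis (basis elements given in canonical form)."""
    return not any(canonical(rel[i:i + len(beta)]) == beta
                   for beta in basis
                   for i in range(len(rel) - len(beta) + 1))

def factor_graph(basis):
    """Gamma_B as {vertex: set of out-neighbours} (Definition 6.2)."""
    b = max(len(beta) for beta in basis)
    verts = [r for r in all_relations(b) if avoids(r, basis)]
    return {u: {v for v in verts
                if canonical(u[1:]) == canonical(v[:-1])}
            for u in verts}

# The factor graph of Av(| 1 2 | 3 |), cf. Figure 1:
B = [canonical((0, 0, 1))]   # | 1 2 | 3 |
print(factor_graph(B))
```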
**Proposition 6.5**.: _If \(\sigma,\rho\in C_{[b,\infty)}\) and \(\sigma\leq\rho\), then \(\Pi(\sigma)\leq\Pi(\rho)\) under the subpath order in \(\Gamma_{B}\)._

Proof.: From the definition (1) of \(\Pi(\rho)\), we have that for a contiguous subset \(S\subseteq[1,|\rho|]\) the path \(\Pi(\rho\!\upharpoonright_{S})\) is a subpath of \(\Pi(\rho)\), and since \(\sigma\) is isomorphic to the restriction of \(\rho\) to a contiguous subset of \([1,|\rho|]\), the result follows.

We will identify when every path in \(\Gamma_{B}\) has a unique associated equivalence relation. It turns out that in this case the converse of Proposition 6.5 is also true, and the wqo problem for \(C\) is reduced to that of wqo for \(\Gamma_{B}\) under the subpath order, which we know is decidable by Proposition 5.10. We will show that this will be true if and only if \(\Gamma_{B}\) does not contain some particular vertices, called _ambiguous vertices_. Then it will remain to tackle the question of wqo separately for factor graphs containing ambiguous vertices. We will use similar methods to tackle the atomicity problem in Section 10.

Given a path \(\pi\) in the factor graph \(\Gamma_{B}\), we can think of constructing an associated equivalence relation \(\sigma\in\Sigma(\pi)\) by reading the vertices in order and adding to \(\sigma\) the entries \(1\) to \(n\) so that each vertex of \(\pi\) is a consecutive sub-equivalence relation of \(\sigma\). In this way, vertices can be thought of as giving instructions to place the next entry into a particular equivalence class.

If \(\pi\) is a path in a factor graph and \(|\Sigma(\pi)|>1\), there must be at least one vertex in \(\pi\) that gives more than one option for the position of the next entry of an equivalence relation in \(\Sigma(\pi)\). Any such vertex must have its largest entry \(b\) in a class of size one, otherwise the class of this entry would be uniquely determined by the classes of previous entries. This informs the next definition.

**Definition 6.6**.: A vertex in a factor graph is a _special vertex_ if the largest entry is in a class of size one.

**Example 6.7**.: We return to the factor graph of \(C=\operatorname{Av}(|\,1\,2\,|\,3\,|)\), shown in Figure 1; the vertex labeled \(|\,1\,|\,2\,|\,3\,|\) is special since \(3\) is in a class of size one, but the vertex \(|\,1\,|\,2\,3\,|\) is not special since \(3\) is in a class of size two.

We will now give an example showing that some, but not all, special vertices give rise to a choice for the next entry of some associated equivalence relations.

**Example 6.8**.: Consider again the avoidance set \(C=\operatorname{Av}(B)\) from Example 6.3, where \(B=\overline{\mathcal{Eq}}_{4}\backslash X\) and \[X=\{|\,1\,2\,3\,4\,|,\ |\,1\,2\,3\,|\,4\,|,\ |\,1\,|\,2\,4\,|\,3\,|,\ |\,1\,3\,4\,|\,2\,|,\ |\,1\,|\,2\,3\,|\,4\,|,\ |\,1\,2\,4\,|\,3\,|\}.\] The factor graph \(\Gamma_{B}\) of \(C\) is shown in Figure 2. It can be seen that \(|\,1\,2\,3\,|\,4\,|\) and \(|\,1\,|\,2\,3\,|\,4\,|\) are the only special vertices. The equivalence relations associated with paths ending at \(|\,1\,2\,3\,4\,|\), the only vertex with an edge into \(|\,1\,2\,3\,|\,4\,|\), have only one class, so it is not possible to add the new entry to an existing class at this vertex. This forces a new class to be added at \(|\,1\,2\,3\,|\,4\,|\), meaning that \(|\,1\,2\,3\,|\,4\,|\) is a special vertex which offers no choice in the position of the new entry.
On the other hand, the vertex \(|\,1\,|\,2\,3\,|\,4\,|\) can give a choice in the position of the next entry of an associated equivalence relation, for example the associated equivalence relations of the path \[|\,1\,|\,2\,4\,|\,3\,|\!\to|\,1\,3\,4\,|\,2\,|\!\to|\,1\,|\,2\,3\,|\,4\,|\] include both \(|\,1\,6\,|\,2\,4\,5\,|\,3\,|\) and \(|\,1\,|\,2\,4\,5\,|\,3\,|\,6\,|\). We will now address this distinction in special vertices, introducing ambiguous vertices as those which give a choice in the position of the next entry of at least one associated equivalence relation. **Definition 6.9**.: Suppose \(C=\operatorname{Av}(B)\) is an avoidance set, \(b\) is the maximum length of an element in \(B\) and \(\sigma\in C_{[b,\infty)}\). A class of \(\sigma\) which does not contain any of the largest \(b-1\) elements in \(\sigma\) will be referred to as an _inactive class_. **Example 6.10**.: Consider the equivalence relation \[\sigma=|\,1\,5\,|\,2\,3\,|\,4\,6\,7\,|\!\in\operatorname{Av}(|\,1\,|\,2\,|\,3 \,|\,4\,|).\] Here \(b=4\) and the largest three elements of \(\sigma\) are \(7,6\) and \(5\). The class \(\{2,3\}\) does not contain any of these elements so is an inactive class. The class \(\{4,6,7\}\) contains both \(6\) and \(7\), so this is not an inactive class. **Definition 6.11**.: A special vertex \(\nu\) is _ambiguous_ if there is a vertex \(\mu\) such that \((\mu,\nu)\) is an edge and there exists an equivalence relation \(\sigma\) such that \(\sigma\) has an inactive class and \(\Pi(\sigma)\) ends at \(\mu\). **Example 6.12**.: Following the discussion in Example 6.8, the vertex \(|\,1\,2\,3\,|\,4\,|\) in Figure 2 is not ambiguous since any relation associated with a path ending at \(|\,1\,2\,3\,4\,|\) can only have one class, so cannot have any inactive classes. On the other hand, the vertex \(|\,1\,|\,2\,3\,|\,4\,|\) is ambiguous since the relation \(|\,1\,|\,2\,4\,5\,|\,3\,|\) with associated path \(|\,1\,|\,2\,4\,|\,3\,|\!\to|\,1\,3\,4\,|\,2\,|\) has an inactive class. **Proposition 6.13**.: _Special vertices in cycles are always ambiguous._ Proof.: Let \(\nu\) be a special vertex in a cycle \(\pi\) and let \(\mu\) be the vertex preceding \(\nu\) in \(\pi\). Without loss of generality, assume that \(\pi\) starts and ends at \(\nu\). Suppose \(\nu\) has \(t\) classes. Let \(\eta\) be the concatenation of \(\pi^{t}\) and the subpath of \(\pi\) from \(\nu\) to \(\mu\). Consider any \(\rho\in\Sigma(\eta)\) for which each of the \(t+1\) visits to \(\nu\) is an instruction to add a new class. Then \(\rho\) has at least \(t+1\) classes. Since \(\nu\) has precisely \(t\) classes, one of which is the singleton \(\{b\}\), and since \((\mu,\nu)\) is an edge, \(\mu\) has at most \(t\) classes. Therefore \(\rho\) has more classes than \(\mu\), and hence at least one of them must be inactive, proving that \(\nu\) is ambiguous. Ambiguous vertices are the only vertices which do not necessarily uniquely determine the class of the next entry of an equivalence relation whose associated path contains that vertex. They allow the next entry to be added to an inactive class, if one exists, or to be the first element in a new class. This means that more than one equivalence relation may be associated with a path containing an ambiguous vertex. Therefore, if \(\pi\) is a path containing ambiguous vertices, we can describe an equivalence relation \(\sigma\in\Sigma(\pi)\) by specifying whether the next entry is added to a new class of \(\sigma\) or to an inactive class of \(\sigma\) at each ambiguous vertex. 
In this way, \(\sigma\) is fully specified by \(\pi\) and the location of the next entry of \(\sigma\) for each ambiguous vertex of \(\pi\).

**Example 6.14**.: Let \(B=\overline{\mathcal{Eq}}_{4}\backslash X\) be as in Example 6.8, and consider again the path \[\pi=|\,1\,|\,2\,4\,|\,3\,|\to|\,1\,3\,4\,|\,2\,|\to|\,1\,|\,2\,3\,|\,4\,|,\] whose final vertex is the ambiguous vertex \(|\,1\,|\,2\,3\,|\,4\,|\). The associated equivalence relation \(|\,1\,|\,2\,4\,5\,|\,3\,|\,6\,|\in\Sigma(\pi)\) is specified by the instruction to begin a new class at this ambiguous vertex, while \(|\,1\,6\,|\,2\,4\,5\,|\,3\,|\in\Sigma(\pi)\) is specified by the instruction to add the new entry to the inactive class \(\{1\}\).

**Lemma 6.15**.: _If \(\pi\) is a path in \(\Gamma_{B}\) containing no ambiguous vertices, then \(|\Sigma(\pi)|=1\)._

Proof.: At every vertex of \(\pi\) which is not ambiguous, the class of the next entry of an associated equivalence relation is uniquely determined, so an equivalence relation in \(\Sigma(\pi)\) is constructed without any choices, and \(\Sigma(\pi)\) consists of a single equivalence relation.

A consequence of Lemma 6.15 is that there is a one-to-one correspondence between paths with no ambiguous vertices in \(\Gamma_{B}\) and their associated equivalence relations in \(C_{[b,\infty)}\). The next lemma follows immediately from Lemma 6.15.

**Lemma 6.16**.: _If a factor graph \(\Gamma_{B}\) contains no ambiguous vertices and \(\rho,\sigma\in C_{[b,\infty)}\) then \(\sigma\leq\rho\) if and only if \(\Pi(\sigma)\leq\Pi(\rho)\)._

So far, we have enough information to give the following partial version of our intended result, which considers the special case in which the factor graph of \(C=\operatorname{Av}(B)\) contains no ambiguous vertices.

**Proposition 6.17**.: _If the factor graph \(\Gamma_{B}\) contains no ambiguous vertices then \(C=\operatorname{Av}(B)\) is wqo if and only if \(\Gamma_{B}\) is wqo._

Proof.: By Lemma 6.16, the poset of paths in \(\Gamma_{B}\) is isomorphic to the poset of equivalence relations in \(C_{[b,\infty)}\), and so \(C_{[b,\infty)}\) is wqo if and only if \(\Gamma_{B}\) is wqo. Then by Lemma 6.1, \(C\) is wqo if and only if \(\Gamma_{B}\) is wqo.

## 7. Two types of cycles which imply non-wqo

The purpose of this section is to show non-wqo for avoidance sets of equivalence relations whose factor graphs contain an in-out cycle or a special vertex in a cycle. We do this by utilising the relationship between the poset of paths in the factor graph and the poset of equivalence relations in an avoidance set explored in Section 6. The only ambiguous vertices we will need to consider are special vertices in cycles; we state the results in terms of special vertices, though it is their ambiguity (guaranteed by Proposition 6.13) which is key to the outcome. We will need to consider ambiguous vertices more generally in the following sections.

**Lemma 7.1**.: _If \(\Gamma_{B}\) contains an in-out cycle then \(C=\operatorname{Av}(B)\) is not wqo._

Proof.: Since \(\Gamma_{B}\) contains an in-out cycle, it is not wqo by Proposition 5.10 so there is an infinite antichain of paths \(\pi_{1},\pi_{2},\dots\) in \(\Gamma_{B}\). Aiming for a contradiction, suppose \(C\) is wqo. Take equivalence relations \(\sigma_{i}\in\Sigma(\pi_{i})\) for \(i=1,2,\dots\). Since \(C\) is wqo, \(\sigma_{j}\leq\sigma_{k}\) for some \(j\neq k\). Then by Proposition 6.5\(\pi_{j}\leq\pi_{k}\), a contradiction. We conclude that \(C\) is not wqo.
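The second source of non-wqo, treated next, is a special vertex lying in a cycle. Both ingredients are mechanical to check on \(\Gamma_{B}\): under the label-tuple encoding of the earlier factor-graph sketch, a vertex is special exactly when the label of its last point occurs only once, and, cycles being closed walks, a vertex lies in a cycle exactly when it can reach itself by a nonempty path. A sketch, again with assumed names and representation:

```python
from collections import deque

def is_special(vertex):
    """Definition 6.6: the largest point of the vertex (last position
    in its label tuple) lies in a class of size one."""
    return vertex.count(vertex[-1]) == 1

def on_cycle(adj, v):
    """True iff v lies on a cycle, i.e. v is reachable from one of its
    out-neighbours (cycles here are arbitrary closed walks)."""
    frontier, seen = deque(adj[v]), set(adj[v])
    while frontier:
        u = frontier.popleft()
        if u == v:
            return True
        for w in adj[u] - seen:
            seen.add(w)
            frontier.append(w)
    return False

def special_vertex_in_cycle(adj):
    """The hypothesis appearing in Lemmas 7.3 and 7.4 below."""
    return any(is_special(v) and on_cycle(adj, v) for v in adj)

# Artificial two-vertex digraph; (0,1,1,2) encodes | 1 | 2 3 | 4 |,
# which is special, and the two vertices form a cycle:
G = {(0, 1, 1, 2): {(0, 1, 2, 1)}, (0, 1, 2, 1): {(0, 1, 1, 2)}}
print(special_vertex_in_cycle(G))  # True
```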
Now we turn our attention to avoidance sets whose factor graphs contain special vertices in cycles.

**Definition 7.2**.: An avoidance set \(C=\operatorname{Av}(B)\) is _unbounded_ if there is no (finite) upper bound on the number of equivalence classes of its members; otherwise \(C\) is _bounded_.

**Lemma 7.3**.: _An avoidance set \(C=\operatorname{Av}(B)\) is unbounded if and only if \(\Gamma_{B}\) contains a cycle with a special vertex in it._

Proof.: (\(\Rightarrow\)) For any path in \(\Gamma_{B}\), the only vertices where a class might be added to an associated equivalence relation are special vertices. Since \(\Gamma_{B}\) is a finite digraph, the only way to allow an unbounded number of classes in equivalence relations is for some path to visit a special vertex twice, i.e. if there is a special vertex in a cycle.

(\(\Leftarrow\)) Suppose there is a special vertex \(\nu_{a}\) in a cycle \(\eta\) in \(\Gamma_{B}\), and without loss of generality assume \(\eta\) starts and ends at \(\nu_{a}\). Consider the equivalence relations \(\theta_{k}\in\Sigma(\eta^{k})\), where \(k\geq 1\), which add a new class each time an ambiguous vertex is entered (including \(\nu_{a}\)). Each equivalence relation \(\theta_{k}\) has at least \(k\) classes. Since this holds for any \(k\geq 1\), \(C\) is unbounded.

**Lemma 7.4**.: _If an avoidance set is unbounded then it is not wqo._

Proof.: Suppose \(C=\operatorname{Av}(B)\) is unbounded. By Lemma 7.3, \(\Gamma_{B}\) contains a cycle \(\xi\) with a special vertex \(\nu_{a}\) in it, and \(\nu_{a}\) is ambiguous by Proposition 6.13. Let \(\mu\) be the vertex preceding \(\nu_{a}\) in \(\xi\). For \(k\geq 3\), let \(\pi_{k}\) be the path that starts at \(\mu\), proceeds \(k\) times around \(\xi\) and ends at \(\nu_{a}\). We will look at the equivalence relations \(\sigma_{k}\) in each \(\Sigma(\pi_{k})\) such that for each \(k\):

* \(\sigma_{k}\) has underlying set \([1,n^{(k)}]\).
* the entry \(n^{(k)}_{1}\) added the second time \(\nu_{a}\) is entered is placed in an inactive class of \(\sigma_{k}\).
* the entry \(n^{(k)}_{2}\) added the last time \(\nu_{a}\) is entered is placed in an inactive class of \(\sigma_{k}\).
* at all other visits to special vertices, a new class is added to \(\sigma_{k}\).

We claim that the set \(\{\sigma_{k}:k\geq 3\}\) forms an infinite antichain. Aiming for a contradiction, suppose that \(\sigma_{i}\leq\sigma_{j}\) for some \(j>i\geq 3\). Suppose \(f:[1,n^{(i)}]\to[1,n^{(j)}]\) is the underlying embedding. It can be seen that \(n^{(i)}_{1},n^{(i)}_{2},n^{(j)}_{1},n^{(j)}_{2}\) are the only entries added on entering \(\nu_{a}\) which are not the smallest element of their classes. Therefore \(f\) must map \(n^{(i)}_{1}\) to \(n^{(j)}_{1}\), and since \(i<j\) this forces \(f\) to map \(n^{(i)}_{2}\) to an element of \([1,n^{(j)}]\) which is the smallest element in its class. This is a contradiction, since \(n^{(i)}_{2}\) is not the smallest element of its class, so this prevents \(f\) from preserving equivalence classes. Therefore \(\sigma_{i}\not\leq\sigma_{j}\), so \(\{\sigma_{k}:k\geq 3\}\) is an infinite antichain and \(C\) is not wqo.

## 8. Coloured Equivalence Relations

We have dealt with factor graphs where there is a cycle containing a special vertex or there is an in-out cycle in Section 7. Now we look at the remaining avoidance sets. Since the factor graph of any such avoidance set has no special vertices in cycles, there is a bound on the number of classes members of the avoidance set may have.
This motivates the concept of _coloured equivalence relations_ and the _coloured factor graphs_ associated with them; the idea being that we can 'encode' equivalence classes of members of a bounded avoidance set using only a finite amount of additional information. In this section we introduce these concepts and then explore the relationship between the coloured and uncoloured versions. Unlike the uncoloured case, there will be a one-to-one correspondence between coloured equivalence relations and paths in their coloured factor graphs, bypassing the multiple choices previously arising at ambiguous vertices. This will enable us to tackle the wqo question for the remaining avoidance sets in Section 9.

**Definition 8.1**.: For \(k\geq 1\), a _\(k\)-colouring_ of an equivalence relation \(\sigma\) is an injective mapping from the set of equivalence classes of \(\sigma\) to \([k]\). In this context we call the elements of \([k]\)_colours_. An equivalence relation \(\sigma\) together with a \(k\)-colouring is called a _\(k\)-coloured equivalence relation_. When the value of \(k\) is not important, we will speak of _colourings_ and _coloured_ equivalence relations. An equivalence relation without a colouring will be called an _uncoloured_ equivalence relation.

We will distinguish between coloured and uncoloured equivalence relations by writing coloured equivalence relations with their colourings as superscripts. For example, if \(\sigma\) is an uncoloured relation and \(c\) is a colouring of \(\sigma\), the coloured equivalence relation of \(\sigma\) with \(c\) will be written \(\sigma^{c}\). In concrete examples we will underline the equivalence classes and put their colours as subscripts, e.g. see Example 8.4.

**Definition 8.2**.: Let \(\sigma^{c_{1}}\), \(\rho^{c_{2}}\) be two \(k\)-coloured equivalence relations. We say that \(\sigma^{c_{1}}\leq_{\mathrm{col}}\rho^{c_{2}}\) if there exists a contiguous embedding \(f\) of \(\sigma\) into \(\rho\) which respects colourings; more specifically, for every equivalence class \(C\) of \(\sigma\), we require \(c_{1}(C)=c_{2}(D)\), where \(D\) is the unique equivalence class of \(\rho\) such that \(f(C)\subseteq D\). We also say that \(\sigma^{c_{1}}\) is a _coloured sub-equivalence relation_ of \(\rho^{c_{2}}\). We call \(\leq_{\mathrm{col}}\) the _coloured consecutive embedding order_.

From now on we will denote \(\leq_{\mathrm{col}}\) simply by \(\leq\), since it is always clear from the nature of the equivalence relations which order is meant. Also note that \(\sigma^{c_{1}}\leq\rho^{c_{2}}\) implies \(\sigma\leq\rho\). Given a coloured equivalence relation \(\sigma^{c}\) on a set \(X\) and any subset \(Y\) of \(X\), the restriction of \(\sigma^{c}\) to points in \(Y\) is denoted \(\sigma^{c}\!\upharpoonright_{Y}\). As with uncoloured equivalence relations, \(\sigma^{c}\!\upharpoonright_{Y}\) is a coloured sub-equivalence relation of \(\sigma^{c}\) and any coloured sub-equivalence relation of \(\sigma^{c}\) can be expressed as a restriction of \(\sigma^{c}\) to a subset of \(X\).

**Definition 8.3**.: Two coloured equivalence relations \(\sigma^{c_{1}},\rho^{c_{2}}\) are _isomorphic_ if there exists a contiguous bijection from \(\sigma^{c_{1}}\) to \(\rho^{c_{2}}\) that preserves equivalence classes and colourings.

**Example 8.4**.: We write \(|\,\underline{1\,5}_{3}\,|\,\underline{2\,3\,4}_{1}\,|\) for the \(3\)-coloured equivalence relation whose underlying relation is \(|\,1\,5\,|\,2\,3\,4\,|\), with the class \(\{1,5\}\) coloured \(3\) and the class \(\{2,3,4\}\) coloured \(1\).

For the remainder of this section we suppose that \(\Gamma_{B}\) contains no special vertices in cycles, so that by Lemma 7.3 there is a bound \(k\) on the number of classes of the members of \(C=\operatorname{Av}(B)\). We take \(C^{\mathrm{col}}\) to be the set of all \(k\)-coloured equivalence relations \(\sigma^{c}\) with \(\sigma\in C\). The _coloured factor graph_ \(\Gamma^{\mathrm{col}}_{B}\) is the digraph whose vertices are the coloured equivalence relations \(\sigma^{c}\in C^{\mathrm{col}}\) with \(|\sigma|=b\), and which has an edge \((\mu^{c_{1}},\nu^{c_{2}})\) if and only if \(\mu^{c_{1}}\!\upharpoonright_{[2,b]}\cong\nu^{c_{2}}\!\upharpoonright_{[1,b-1]}\).

**Example 8.5**.: Let \(B=\overline{\mathcal{Eq}}_{4}\backslash X\) where \[X=\{|\,1\,|\,2\,3\,4\,|,\ |\,1\,2\,3\,|\,4\,|\},\] and let \(C=\operatorname{Av}(B)\). The uncoloured and coloured factor graphs of \(C\) are shown in Figure 4.
As there are no cycles in \(\Gamma_{B}\), \(C\) is finite and so \(C^{\mathrm{col}}\) is also finite. The elements of \(C^{\mathrm{col}}\) are all \(3\)-colourings of equivalence relations in \(C\); in other words, all \(3\)-colourings of the equivalence relations on fewer than four points and of \(|\,1\,|\,2\,3\,4\,|\), \(|\,1\,2\,3\,|\,4\,|\), \(|\,1\,5\,|\,2\,3\,4\,|\) and \(|\,1\,|\,2\,3\,4\,|\,5\,|\).

Let \(\sigma^{c}\in C^{\mathrm{col}}_{[b,\infty)}\) be a coloured equivalence relation, and without loss of generality assume its underlying set is \([n]\) for some \(n\in\mathbb{N}\). We associate \(\sigma^{c}\) with the path \(\Pi^{\prime}(\sigma^{c})\) given by \[\sigma^{c}\!\upharpoonright_{[1,b]}\!\downarrow\to\sigma^{c}\!\upharpoonright_{[2,b+1]}\!\downarrow\to\cdots\to\sigma^{c}\!\upharpoonright_{[n-b+1,n]}\!\downarrow\] in \(\Gamma^{\mathrm{col}}_{B}\). The notation \(\rho^{c}\!\downarrow\) means the unique coloured equivalence relation on \([|\rho|]\) isomorphic to \(\rho^{c}\), in line with uncoloured relations, as introduced in Section 6. On the other hand, we associate a path \(\pi\) in \(\Gamma^{\mathrm{col}}_{B}\) with the coloured equivalence relation \(\Sigma^{\prime}(\pi)\in C^{\mathrm{col}}_{[b,\infty)}\) such that \(\Pi^{\prime}(\Sigma^{\prime}(\pi))=\pi\). Note that, unlike the analogue for uncoloured equivalence relations, \(\Sigma^{\prime}(\pi)\) will always be a single coloured equivalence relation.

Figure 4. The uncoloured and coloured factor graphs of \(\operatorname{Av}(B)\) from Example 8.5.

**Example 8.6**.: Consider again the avoidance set from Example 8.5, \(C=\operatorname{Av}(B)\) for \(B=\overline{\mathcal{Eq}}_{4}\backslash X\) and \[X=\{|\,1\,|\,2\,3\,4\,|,\ |\,1\,2\,3\,|\,4\,|\}.\] The coloured factor graph of \(C\) is shown in Figure 4. Here the path \[|\,\underline{1}_{3}\,|\,\underline{2\,3\,4}_{1}\,|\;\to\;|\,\underline{1\,2\,3}_{1}\,|\,\underline{4}_{3}\,|\] is associated with the coloured equivalence relation \(|\,\underline{1\,5}_{3}\,|\,\underline{2\,3\,4}_{1}\,|\). On the other hand, the coloured equivalence relation \(|\,\underline{1}_{1}\,|\,\underline{2\,3\,4}_{2}\,|\,\underline{5}_{3}\,|\) is associated with the path \[|\,\underline{1}_{1}\,|\,\underline{2\,3\,4}_{2}\,|\;\to\;|\,\underline{1\,2\,3}_{2}\,|\,\underline{4}_{3}\,|.\]

It can be seen that \(\Pi^{\prime}\) and \(\Sigma^{\prime}\) are mutual inverses, as stated in the following proposition.

**Proposition 8.7**.: 1.
_If_ \(\sigma^{c}\in C^{\operatorname{col}}_{[b,\infty)}\)_, then_ \(\sigma^{c}=\Sigma^{\prime}(\Pi^{\prime}(\sigma^{c}))\)_._ 2. _If_ \(\pi\) _is a path in_ \(\Gamma^{\operatorname{col}}_{B}\)_, then_ \(\pi=\Pi^{\prime}(\Sigma^{\prime}(\pi))\)_._

In this way, there is a bijective correspondence between coloured equivalence relations and their associated paths. Moreover, this correspondence respects the coloured consecutive ordering in the following sense.

**Proposition 8.8**.: _If \(\sigma^{c_{1}},\rho^{c_{2}}\in C^{\operatorname{col}}_{[b,\infty)}\) then \(\sigma^{c_{1}}\leq\rho^{c_{2}}\) if and only if \(\Pi^{\prime}(\sigma^{c_{1}})\leq\Pi^{\prime}(\rho^{c_{2}})\) in \(\Gamma^{\operatorname{col}}_{B}\)._

Proof.: \((\Rightarrow)\) This is analogous to the uncoloured version (Proposition 6.5).

\((\Leftarrow)\) Since \(\Pi^{\prime}(\sigma^{c_{1}})\leq\Pi^{\prime}(\rho^{c_{2}})\) and each path is associated with a single coloured equivalence relation, \(\sigma^{c_{1}}=\Sigma^{\prime}(\Pi^{\prime}(\sigma^{c_{1}}))\leq\Sigma^{\prime}(\Pi^{\prime}(\rho^{c_{2}}))=\rho^{c_{2}}\).

We note in passing that decidability of the wqo and atomicity problems for avoidance sets of the poset of \(k\)-coloured equivalence relations under the coloured consecutive embedding order is an immediate consequence of Proposition 8.8.

## 9. WQO under the Consecutive Embedding Order

In this section we establish decidability of wqo for avoidance sets of equivalence relations under the consecutive embedding order. We use the tools introduced in Section 8 to relate wqo in factor graphs to wqo in coloured factor graphs. This will allow us to show that in the remaining cases all avoidance sets are wqo. We finish the section by combining this with the results of Section 7 to show decidability of wqo for avoidance sets of equivalence relations under the consecutive embedding order in general.

**Lemma 9.1**.: _If \(\Gamma_{B}\) contains no special vertices in cycles and is wqo, then \(\Gamma_{B}^{\rm col}\) is also wqo._

Proof.: Since \(\Gamma_{B}\) is wqo, it has no in-out cycles by Proposition 5.10. Aiming for a contradiction, suppose that \(\Gamma_{B}^{\rm col}\) is not wqo, so has an in-out cycle \(\bar{\eta}\). Let \(\bar{\eta}\) have in-edge \((\mu^{c_{1}},\nu^{c_{2}})\) and out-edge \((\sigma^{c_{3}},\rho^{c_{4}})\). The cycle \(\bar{\eta}\) must correspond to a cycle \(\eta\) in \(\Gamma_{B}\), and by assumption \(\eta\) contains no special vertices, so neither does \(\bar{\eta}\). Since \(\Gamma_{B}\) is wqo, \(\eta\) is not an in-out cycle by Proposition 5.10. We split considerations into two cases.

_Case 1: \(\eta\) is not an out-cycle._ The assumption implies that \((\sigma,\rho)\) is the only edge starting at \(\sigma\) in \(\Gamma_{B}\). On the other hand, in \(\Gamma_{B}^{\rm col}\) there are at least two edges starting at \(\sigma^{c_{3}}\): an edge in \(\bar{\eta}\) and the out-edge \((\sigma^{c_{3}},\rho^{c_{4}})\). Both of these edges in \(\Gamma_{B}^{\rm col}\) correspond to the edge \((\sigma,\rho)\) in \(\Gamma_{B}\). This means that there is another colouring \(c_{5}\) of \(\rho\) such that \((\sigma^{c_{3}},\rho^{c_{5}})\) is an edge in \(\bar{\eta}\). Since \((\sigma^{c_{3}},\rho^{c_{4}})\), \((\sigma^{c_{3}},\rho^{c_{5}})\) are edges, we have that \(\rho^{c_{4}}\!\upharpoonright_{[1,b-1]}\cong\sigma^{c_{3}}\!\upharpoonright_{[2,b]}\cong\rho^{c_{5}}\!\upharpoonright_{[1,b-1]}\). Therefore, \(\rho^{c_{4}}\!\upharpoonright_{[1,b-1]}=\rho^{c_{5}}\!\upharpoonright_{[1,b-1]}\).
Since \(c_{4}\) and \(c_{5}\) are distinct colourings, \(b\) is coloured differently under each of these. If \(|\rho_{b}|\neq 1\) (writing \(\rho_{b}\) for the equivalence class of \(b\) in \(\rho\)), the colour of \(b\) would be uniquely determined by the colour of other elements in its class, so \(c_{4}\) and \(c_{5}\) would be identical. Therefore, it must be the case that \(|\rho_{b}|=1\), so \(\rho\) is a special vertex in the cycle \(\eta\), a contradiction.

_Case 2: \(\eta\) is not an in-cycle._ Now \((\mu,\nu)\) is the only edge ending at \(\nu\) in \(\Gamma_{B}\). However, there are at least two edges ending at \(\mu^{c_{1}}\) in \(\Gamma_{B}^{\rm col}\): an edge in \(\bar{\eta}\) and the in-edge \((\mu^{c_{1}},\nu^{c_{2}})\). Both of these must correspond to \((\mu,\nu)\) in \(\Gamma_{B}\), so there is another colouring \(c_{6}\) of \(\mu\) such that \((\mu^{c_{6}},\nu^{c_{2}})\) is the edge in \(\bar{\eta}\). Then since \((\mu^{c_{1}},\nu^{c_{2}})\), \((\mu^{c_{6}},\nu^{c_{2}})\) are edges, \(\mu^{c_{1}}\!\upharpoonright_{[2,b]}\cong\nu^{c_{2}}\!\upharpoonright_{[1,b-1]}\cong\mu^{c_{6}}\!\upharpoonright_{[2,b]}\), meaning that \(\mu^{c_{1}}\!\upharpoonright_{[2,b]}=\mu^{c_{6}}\!\upharpoonright_{[2,b]}\). If \(|\mu_{1}|\neq 1\), the colour of \(1\) is determined by the colour of other elements in its class, so \(c_{1}\) and \(c_{6}\) are not distinct. Therefore, \(|\mu_{1}|=1\), and from this we can see that \(\mu\) has one more class than \(\nu\). Now consider traversing the subpath of \(\eta\) from \(\nu\) to \(\mu\). Since \(\mu\) has one more class than \(\nu\), at some point in this path there is an edge \((\nu^{\prime},\mu^{\prime})\) such that \(\nu^{\prime}\) has fewer classes than \(\mu^{\prime}\); the only way for this to happen is for \(\mu^{\prime}\) to be special, a contradiction.

In each of the two cases we obtained a contradiction, and this completes the proof.

**Lemma 9.2**.: _If the factor graph \(\Gamma_{B}\) has no special vertices in cycles and is wqo, then the avoidance set \(C={\rm Av}(B)\) is also wqo._

Proof.: Aiming for a contradiction, suppose that \(\Gamma_{B}\) is wqo but \(C_{[b,\infty)}\) is not. Take an infinite antichain \(\sigma_{1},\sigma_{2},...\in C_{[b,\infty)}\). Since \(\Gamma_{B}\) is wqo, so is \(\Gamma_{B}^{\rm col}\) by Lemma 9.1, meaning that for some \(i\neq j\) and colourings \(c_{k},c_{l}\) of \(\sigma_{i},\sigma_{j}\) respectively, we have that \(\Pi^{\prime}(\sigma_{i}^{c_{k}})\leq\Pi^{\prime}(\sigma_{j}^{c_{l}})\). Then by Proposition 8.8, \(\sigma_{i}^{c_{k}}\leq\sigma_{j}^{c_{l}}\) and hence \(\sigma_{i}\leq\sigma_{j}\), a contradiction. Therefore if \(\Gamma_{B}\) is wqo, so is \(C_{[b,\infty)}\) and then by Lemma 6.1, \(C\) is wqo as required.

We can now prove our main results concerning the wqo question for the consecutive order:

**Theorem 9.3**.: _A finitely based avoidance set \(C=\operatorname{Av}(B)\) is wqo under the consecutive embedding ordering if and only if \(\Gamma_{B}\) has no in-out cycles and no special vertices in cycles._

Proof.: (\(\Rightarrow\)) Suppose \(C=\operatorname{Av}(B)\) is wqo. By Lemma 7.4 and Lemma 7.3, \(\Gamma_{B}\) cannot contain any special vertices in cycles. By Lemma 7.1, \(\Gamma_{B}\) cannot contain any in-out cycles.

(\(\Leftarrow\)) Suppose \(\Gamma_{B}\) contains no in-out cycles or special vertices in cycles. Since \(\Gamma_{B}\) contains no in-out cycles, it is wqo by Proposition 5.10 and therefore we can apply Lemma 9.2 to see that \(C\) is wqo, completing the proof.
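Theorem 9.3 reduces the wqo question to two finite checks on \(\Gamma_{B}\), and Theorem 9.4 below records the resulting decidability. Assembled as a self-contained Python sketch (the representation of \(\Gamma_{B}\) as an adjacency dictionary of label tuples, as in the earlier snippets, is an assumption):

```python
def decide_wqo(adj):
    """Theorem 9.3: Av(B) is wqo under the consecutive embedding
    order iff its factor graph (given as {vertex: out-neighbour set},
    vertices encoded as label tuples) has no in-out cycle and no
    special vertex in a cycle."""
    verts = list(adj)
    # Naive transitive closure: reach[u] = ends of nonempty paths from u.
    reach = {u: set(adj[u]) for u in verts}
    changed = True
    while changed:
        changed = False
        for u in verts:
            new = set().union(*[reach[v] for v in reach[u]]) - reach[u]
            if new:
                reach[u] |= new
                changed = True
    on_cycle = {u for u in verts if u in reach[u]}
    # No special vertex (Definition 6.6) may lie in a cycle ...
    if any(u.count(u[-1]) == 1 for u in on_cycle):
        return False
    # ... and no cycle may contain both a vertex of in-degree >= 2 and
    # a vertex of out-degree >= 2 (an in-out cycle, Definition 5.5).
    indeg = {u: 0 for u in verts}
    for vs in adj.values():
        for v in vs:
            indeg[v] += 1
    same_cycle = lambda u, v: u == v or (v in reach[u] and u in reach[v])
    return not any(indeg[u] >= 2 and len(adj[v]) >= 2 and same_cycle(u, v)
                   for u in on_cycle for v in on_cycle)
```

The quadratic closure is adequate here because \(\Gamma_{B}\) has at most \(|\overline{\mathcal{Eq}}_{b}|\) vertices; any standard strongly-connected-components routine could replace it.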
**Theorem 9.4**.: _It is decidable whether a finitely based avoidance set \(C=\operatorname{Av}(B)\) is wqo under the consecutive embedding ordering._

Proof.: It is decidable whether \(\Gamma_{B}\) is wqo by Proposition 5.10 since we can check for in-out cycles. It is also decidable whether \(\Gamma_{B}\) contains special vertices in cycles. Therefore the conditions of Theorem 9.3 are decidable and so the result follows.

**Example 9.5**.: Consider again Example 6.8, where \(B=\overline{\mathcal{Eq}}_{4}\backslash X\) and \[X=\{|\,1\,2\,3\,4\,|,\ |\,1\,2\,3\,|\,4\,|,\ |\,1\,|\,2\,4\,|\,3\,|,\ |\,1\,3\,4\,|\,2\,|,\ |\,1\,|\,2\,3\,|\,4\,|,\ |\,1\,2\,4\,|\,3\,|\}.\] In the factor graph \(\Gamma_{B}\), shown in Figure 2, the special vertex \(|\,1\,|\,2\,3\,|\,4\,|\) lies in a cycle, namely \[|\,1\,|\,2\,3\,|\,4\,|\to|\,1\,2\,4\,|\,3\,|\to|\,1\,3\,4\,|\,2\,|\to|\,1\,|\,2\,3\,|\,4\,|,\] so \(C=\operatorname{Av}(B)\) is not wqo by Theorem 9.3.

**Example 9.6**.: Let \(C=\operatorname{Av}(|\,1\,2\,3\,|,\ |\,1\,3\,|\,2\,|)\), whose factor graph is shown in Figure 5; here \(b=3\) and the vertices are \(|\,1\,|\,2\,|\,3\,|\), \(|\,1\,2\,|\,3\,|\) and \(|\,1\,|\,2\,3\,|\). The special vertex \(|\,1\,|\,2\,|\,3\,|\) carries a loop, so it lies in a cycle and \(C\) is not wqo by Theorem 9.3.

**Example 9.7**.: Let \(C=\operatorname{Av}(|\,1\,|\,2\,|\,3\,|,\ |\,1\,2\,|\,3\,|)\), whose factor graph is shown in Figure 6. This factor graph is a bicycle containing no special vertices, and a bicycle contains no in-out cycles, so \(C\) is wqo by Theorem 9.3.

## 10. Atomicity under the consecutive embedding ordering

In this section we show decidability of atomicity for avoidance sets of the poset of equivalence relations under the consecutive embedding order. To do this we use the relationship between the poset of paths in the factor graph and the poset \(C_{[b,\infty)}\) discussed in Section 6. Unless otherwise specified, \(B\) is understood to be an arbitrary finite set of relations, and \(C=\operatorname{Av}(B)\). In one direction the connection is straightforward:

**Lemma 10.1**.: _If \(C=\operatorname{Av}(B)\) is atomic, then \(\Gamma_{B}\) is atomic._

Proof.: Aiming for a contradiction, suppose that \(\Gamma_{B}\) is not atomic. Take two paths \(\pi,\eta\) in \(\Gamma_{B}\) such that there is no path containing both \(\pi\) and \(\eta\), and let \(\alpha\in\Sigma(\pi)\) and \(\beta\in\Sigma(\eta)\). Since \(C\) is atomic, there is an equivalence relation \(\theta\in C\) which contains both \(\alpha\) and \(\beta\). Since \(b\leq|\alpha|\leq|\theta|\), \(\Pi(\theta)\) is a path in \(\Gamma_{B}\). By Proposition 6.5, \(\Pi(\alpha),\Pi(\beta)\leq\Pi(\theta)\), in other words \(\pi,\eta\leq\Pi(\theta)\). This is a contradiction, and \(\Gamma_{B}\) is atomic.

In the reverse direction, the situation is complicated by the fact that \(\Gamma_{B}\) may be atomic for two different reasons - if it is strongly connected or a bicycle (see Proposition 5.11) - and also because in general atomicity of \(\Gamma_{B}\) is not sufficient for that of \(C\).

**Lemma 10.2**.: _If \(\Gamma_{B}\) is strongly connected then \(C_{[b,\infty)}\) is atomic._

Proof.: Take \(\alpha,\beta\in C_{[b,\infty)}\) and let \(\eta,\zeta\) be paths such that \(\Pi(\alpha)=\eta\) and \(\Pi(\beta)=\zeta\). Since \(\Gamma_{B}\) is strongly connected there is a path \(\xi\) from the end vertex of \(\eta\) to the start vertex of \(\zeta\). Let \(\pi=\eta\xi\zeta\). Consider the equivalence relation \(\theta\in\Sigma(\pi)\) which is formed by making the same choices at ambiguous vertices as \(\alpha\) when \(\eta\) is traversed and the same choices as \(\beta\) when \(\zeta\) is traversed.
This means that \(\alpha,\beta\leq\theta\), hence \(C_{[b,\infty)}\) satisfies the JEP and so is atomic by Proposition 2.9. **Lemma 10.3**.: _Suppose \(\Gamma_{B}\) contains no ambiguous vertices. Then \(C_{[b,\infty)}\) is atomic if and only if \(\Gamma_{B}\) is atomic._ Proof.: If \(\Gamma_{B}\) contains no ambiguous vertices, we saw in Lemma 6.16 that the poset of paths in \(\Gamma_{B}\) is isomorphic to the poset of equivalence relations in \(C_{[b,\infty)}\). It follows that \(\Gamma_{B}\) is atomic if and only if \(C_{[b,\infty)}\) is atomic. **Lemma 10.4**.: _If \(\Gamma_{B}\) contains an ambiguous vertex which is not in a cycle, then \(C=\operatorname{Av}(B)\) is not atomic._ Proof.: Let \(\nu\) be such a vertex. Then there is an edge \((\mu,\nu)\), and a path \(\pi\) ending at \(\mu\), such that there exists an equivalence relation \(\sigma\in\Sigma(\pi)\) with an inactive class. Let \(\xi\) be the concatenation of \(\pi\) and the edge \((\mu,\nu)\). Then we can take \(\sigma_{1}\in\Sigma(\xi)\) to add a new class to \(\sigma\) at \(\nu\), and \(\sigma_{2}\in\Sigma(\xi)\) to add the new entry to an inactive class at \(\nu\). Suppose that there is an equivalence relation \(\theta\in C\) containing both \(\sigma_{1}\) and \(\sigma_{2}\). Note that \(\theta\in C_{[b,\infty)}\) since \(|\sigma_{1}|,|\sigma_{2}|\geq b\). As \(\nu\) is not in a cycle, \(\Pi(\theta)\) can only enter it once. Then since both \(\sigma_{1}\) and \(\sigma_{2}\) are consecutive sub-equivalence relations of \(\theta\), in \(\theta\) we must both add a new class and add to an inactive class at \(\nu\), a contradiction. Therefore \(C\) does not satisfy the JEP, meaning that it is not atomic by Proposition 2.9. **Lemma 10.5**.: _If \(\Gamma_{B}\) is a bicycle, but not a cycle, with an ambiguous vertex then \(C=\operatorname{Av}(B)\) is not atomic._ Proof.: Let \(\nu\) be an ambiguous vertex. If \(\nu\) is in neither the initial nor terminal cycle, this is dealt with in Lemma 10.4. Below we consider the case where \(\nu\) is in the initial cycle of \(\Gamma_{B}\), and the case where it is in the terminal cycle is almost identical. Let \(\mu\) be the vertex in the initial cycle of \(\Gamma_{B}\) that joins the connecting path, let \(\gamma\) be the vertex neighbouring \(\mu\) on the connecting path. Let \(\pi\) be the path that starts at \(\mu\), traverses the initial cycle twice, and ends by traversing the edge from \(\mu\) to \(\gamma\). Suppose \(\sigma\) is the equivalence relation in \(\Sigma(\pi)\) that adds a new class each time an ambiguous vertex is entered except for the last time it enters \(\nu\), when it adds to an inactive class. Note that there will definitely be an inactive class the last time \(\nu\) is entered as at least one new class has been added since the path last visited this vertex. Let \(\rho\) be the equivalence relation in \(\Sigma(\pi)\) that adds a new class every time an ambiguous vertex is entered, including the last visit to \(\nu\). Aiming for a contradiction, suppose that there is an equivalence relation \(\theta\in C\) such that \(\sigma,\rho\leq\theta\). Consider the path \(\Pi(\theta)\), which exists since \(|\theta|\geq|\sigma|\geq b\). Since \(\Pi(\sigma)=\Pi(\rho)=\pi\), both \(\Pi(\sigma)\) and \(\Pi(\rho)\) end at \(\gamma\). As \(\Pi(\sigma),\Pi(\rho)\leq\Pi(\theta)\) and \(\gamma\) is not in the initial cycle, the end vertices of \(\Pi(\sigma)\) and \(\Pi(\rho)\) must coincide in \(\Pi(\theta)\). 
This means that when \(\Pi(\theta)\) enters \(\nu\) for the last time the new entry of \(\theta\) must be added to both an inactive class and to a new class, a contradiction. Therefore \(C\) does not satisfy the JEP and hence is not atomic by Proposition 2.9.

We can now prove the first main result of this section, which is a characterisation of atomicity of \(C=\operatorname{Av}(B)\) in terms of \(\Gamma_{B}\):

**Theorem 10.6**.: _A finitely based avoidance set \(C=\operatorname{Av}(B)\) of equivalence relations under the consecutive embedding ordering is atomic if and only if the following hold:_

1. _For each_ \(\sigma\in C_{[1,b-1]}\) _there is_ \(\rho\in C_{b}\) _such that_ \(\sigma\leq\rho\)_; and_
2. _The factor graph_ \(\Gamma_{B}\) _is strongly connected or is a bicycle with no ambiguous vertices._

Proof.: (\(\Rightarrow\)) Suppose \(C\) is atomic. To show that (i) must hold consider an arbitrary \(\sigma\in C_{[1,b-1]}\). Take any \(\theta\in C_{b}\). Since \(C\) is atomic, there must be an element \(\gamma\in C_{[b,\infty)}\) containing both \(\sigma\) and \(\theta\). Since \(|\gamma|\geq b\), we can take \(b\) consecutive points of \(\gamma\) that include the image of the embedding of \(\sigma\), to obtain a relation \(\rho\in C_{b}\) with \(\sigma\leq\rho\). Therefore (i) holds.

Now we show that (ii) holds. By Lemma 10.1, \(\Gamma_{B}\) is atomic so by Proposition 5.11, either \(\Gamma_{B}\) is strongly connected or \(\Gamma_{B}\) is a bicycle. Furthermore, in the latter case, either \(\Gamma_{B}\) is actually a cycle, in which case it is again strongly connected, or else it has no ambiguous vertices by Lemma 10.5. This completes the proof of the forward direction.

(\(\Leftarrow\)) Suppose (i) and (ii) hold. Lemmas 10.2 and 10.3 together imply that \(C_{[b,\infty)}\) is atomic. To extend this to \(C\), let \(\sigma,\rho\in C\) be arbitrary. Then there exist \(\sigma^{\prime},\rho^{\prime}\in C_{[b,\infty)}\) such that \(\sigma\leq\sigma^{\prime}\) and \(\rho\leq\rho^{\prime}\), where (i) is used if either \(\sigma\) or \(\rho\) has length \(<b\). By atomicity of \(C_{[b,\infty)}\), there is an equivalence relation \(\theta\in C_{[b,\infty)}\) such that \(\sigma^{\prime},\rho^{\prime}\leq\theta\). Then \(\sigma,\rho\leq\theta\) as well, meaning that \(C\) satisfies the JEP and so is atomic by Proposition 2.9.

In order to turn the above characterisation into a decidability result, we need the following:

**Proposition 10.7**.: _It is decidable whether a special vertex is ambiguous._

Proof.: Let \(\nu\) be a special vertex in a factor graph \(\Gamma_{B}\). Suppose \(\nu\) has \(t\) equivalence classes and \(\Gamma_{B}\) has \(n\) vertices. Suppose that \(\nu\) is ambiguous. Then there exists an edge \((\mu,\nu)\) and an equivalence relation \(\sigma\) with an inactive class such that \(\pi=\Pi(\sigma)\) ends at \(\mu\). Suppose that \(\pi\) starts at \(\rho\), an equivalence relation with \(l\) classes. Since \(\nu\) is ambiguous, \(\sigma\) must have at least \(t\) classes, as if we were to extend \(\Pi(\sigma)\) to \(\nu\), there would be an inactive class to which an entry of the associated equivalence relation could be added. This means that a new class is added to \(\sigma\) in at least \(t-l\) vertices of \(\pi\); note that all such vertices must be special. Let \(\tau_{1},\ldots,\tau_{t-l}\) be the first \(t-l\) of these vertices.
Now we will describe a path \(\pi^{\prime}\) of bounded length ending at \(\mu\) such that there is an equivalence relation in \(\Sigma(\pi^{\prime})\) with an inactive class. We let \(\pi^{\prime}\) be the path that starts at \(\rho\), visits \(\tau_{1},\ldots,\tau_{t-l}\) in order, and ends at \(\mu\), always taking the shortest route between these 'stations'. Each time we take the shortest route this path is of length \(\leq n-1\), so the length of \(\pi^{\prime}\) is \(\leq(t-l+1)(n-1)\leq t(n-1)\), which is a constant not dependent on \(\sigma\). Any equivalence relation \(\sigma^{\prime}\) constructed by traversing \(\pi^{\prime}\) and adding a new class at \(\tau_{1},\ldots,\tau_{t-l}\) has at least \(t\) classes so must have an inactive class. So \(\pi^{\prime}\) is a path of bounded length from which we can see that \(\nu\) is ambiguous.

We have shown that if \(\nu\) is ambiguous there is a path of length \(\leq t(n-1)\) ending at \(\mu\) that has an associated equivalence relation with an inactive class. Therefore, to determine whether \(\nu\) is ambiguous, we can examine all paths of length \(\leq t(n-1)\) ending at \(\mu\) and see if any have an associated equivalence relation with an inactive class. If so, \(\nu\) is ambiguous and, if not, \(\nu\) is not ambiguous. Since we have this bound on the length of paths to check, the decidability result follows.

**Theorem 10.8**.: _It is decidable whether a finitely based avoidance set \(\operatorname{Av}(B)\) is atomic under the consecutive embedding ordering._

Proof.: The condition (i) from Theorem 10.6 is decidable since there are finitely many elements in \(C_{[1,b-1]}\) and \(C_{b}\) and so we can check whether all elements of \(C_{[1,b-1]}\) are contained in elements of \(C_{b}\). It is also decidable whether \(\Gamma_{B}\) is strongly connected or a bicycle, and by Proposition 10.7 it is decidable whether \(\Gamma_{B}\) contains ambiguous vertices. Therefore the conditions of Theorem 10.6 are decidable, and the result follows.

**Example 10.9**.: Consider \(C=\operatorname{Av}(|\,1\,2\,3\,|,\ |\,1\,3\,|\,2\,|)\), as in Example 9.6, whose factor graph is shown in Figure 5. The equivalence relations in \(C_{[1,b-1]}=C_{[1,2]}\) are \(|\,1\,|\), \(|\,1\,2\,|\) and \(|\,1\,|\,2\,|\). It can be seen that \(|\,1\,|,\ |\,1\,2\,|,\ |\,1\,|\,2\,|\leq|\,1\,2\,|\,3\,|\in C_{3}\), so (i) holds from Theorem 10.6. Since the factor graph of \(C\) is strongly connected, Theorem 10.6 gives that \(C\) is atomic.

**Example 10.10**.: The avoidance set \(C=\operatorname{Av}(|\,1\,|\,2\,|\,3\,|,\ |\,1\,2\,|\,3\,|)\) from Example 9.7 has a factor graph which is a bicycle, shown in Figure 6. The equivalence relations in \(C_{[1,b-1]}=C_{[1,2]}\) are \(|\,1\,|\), \(|\,1\,2\,|\) and \(|\,1\,|\,2\,|\). Then condition (i) holds since \(|\,1\,|,\ |\,1\,2\,|,\ |\,1\,|\,2\,|\leq|\,1\,|\,2\,3\,|\in C_{3}\). Moreover, the factor graph contains no special vertices, and therefore no ambiguous vertices, so \(C\) is atomic by Theorem 10.6.

**Example 10.11**.: Let \(B=\overline{\mathcal{Eq}}_{4}\backslash X\) where \[X=\{|\,1\,2\,3\,4\,|,\ |\,1\,2\,3\,|\,4\,|,\ |\,1\,|\,2\,4\,|\,3\,|,\ |\,1\,3\,4\,|\,2\,|,\ |\,1\,|\,2\,3\,|\,4\,|,\ |\,1\,2\,4\,|\,3\,|\}\] and consider \(C=\operatorname{Av}(B)\).
The factor graph of \(C\) is shown in Figure 2; this graph is neither strongly connected nor a bicycle, so \(C\) is not atomic by Theorem 10.6.

## 11. Concluding remarks and open problems

A comparison between the main results of our paper and its predecessor [13] is perhaps somewhat intriguing, and points to possible further investigations. In each paper both the atomicity and wqo problems are shown to be decidable for consecutive embedding orderings by translating them into the appropriate factor graphs. The definition of these factor graphs can be viewed as completely analogous between the two papers. However, the criteria for atomicity ([13, Theorems 5.1, 6.7] and our Theorem 10.6) and wqo ([13, Theorems 5.2, 7.20] and our Theorem 9.3) are all saying slightly different things. Underlying these differences is perhaps an even more intriguing difference in the notion of ambiguity: while in [13] this refers to paths, for us it is a property of vertices. This, in the authors' opinion, justifies further investigation of these properties for consecutive embedding orderings:

**Question 11.1**.: _Are the atomicity and wqo problems decidable for consecutive embedding orderings of: (a) digraphs; (b) tournaments; (c) partial orders?_

**Question 11.2**.: _Does there exist a general framework encompassing the results of [13] and the present paper, as well as the structures listed in Question 11.1?_

**Question 11.3**.: _Does there exist a (preferably natural) collection of relational structures for which either the atomicity or wqo problems under the consecutive embeddings are not decidable? Can these structures be chosen to have a single relation in their signature? Or even a single binary relation?_
2306.15289
Numerical Investigation of Water Entry of Hydrophobic Spheres
We perform numerical simulations to study the dynamics of the entry of hydrophobic spheres in a pool of water using ANSYS. To track the air-water interface during the translation of the sphere in the pool of water, we use the volume of fluid (VOF) model. The continuum surface force (CSF) method computes the surface tension force. To simulate the hydrophobic surface properties, we also include wall adhesion. We perform simulations with different diameters and impact speeds of the sphere. Our simulations capture the formation of different types of air cavities, pinch-offs of these cavities, and other finer details similar to the experiments performed at the same parameters. Finally, we compare the coefficient of drag among the different hydrophobic cases. We further perform simulations of hydrophilic spheres impacting the pool of water and compare the drag coefficient with the analogous hydrophobic cases. We conclude that the spheres with hydrophobic surfaces have a lower drag coefficient than their hydrophilic counterparts. This lower drag of the hydrophobic spheres is attributed to the formation of the air cavity by the hydrophobic surfaces while translating through the pool of water, which reduces the area of the sphere in contact with water. In contrast, no such air cavity forms in the case of hydrophilic spheres.
Jaspreet Singh, Anikesh Pal
2023-06-27T08:23:12Z
http://arxiv.org/abs/2306.15289v1
# Numerical Investigation of Water Entry of Hydrophobic Spheres 

###### Abstract 

We perform numerical simulations to study the dynamics of the entry of hydrophobic spheres in a pool of water using ANSYS. To track the air-water interface during the translation of the sphere in the pool of water, we use the volume of fluid (VOF) model. The continuum surface force (CSF) method computes the surface tension force. To simulate the hydrophobic surface properties, we also include wall adhesion. We perform simulations with different diameters and impact speeds of the sphere. Our simulations capture the formation of different types of air cavities, pinch-offs of these cavities, and other finer details similar to the experiments performed at the same parameters. Finally, we compare the coefficient of drag among the different hydrophobic cases. We further perform simulations of hydrophilic spheres impacting the pool of water and compare the drag coefficient with the analogous hydrophobic cases. We conclude that the spheres with hydrophobic surfaces have a lower drag coefficient than their hydrophilic counterparts. This lower drag of the hydrophobic spheres is attributed to the formation of the air cavity by the hydrophobic surfaces while translating through the pool of water, which reduces the area of the sphere in contact with water. In contrast, no such air cavity forms in the case of hydrophilic spheres. 

+ Footnote †: journal: Elsevier 

## 1 Introduction 

The vertical impact of a solid object with water has been of interest to many researchers owing to the formation of an air cavity and the subsequent translation of the object in the water. This entire process plays a significant role in a variety of problems. In military applications [6; 7; 8; 23; 24; 25], understanding the water entry phenomenon can assist in designing air-to-sea launchable projectiles that can precisely neutralize underwater sea mines and torpedoes. Similarly, in marine applications [4; 11; 20], the intermittent entry of ships or ocean structures into water during harsh conditions results in impact loading that can cause damage to ship hulls and ocean structures. Knowledge of the water entry phenomenon can help in estimating the impact forces and can subsequently assist in predicting the failure of marine structures. Apart from these engineering applications, the water entry of solid objects finds applications in bio-locomotion [10] and astrophysical phenomena [26; 29]. The phenomenon of water entry of a sphere is characterized by the Weber number \(We=\rho W_{0}^{2}R_{0}/\sigma\), the Bond number \(Bo=\rho gR_{0}^{2}/\sigma\), and the Reynolds number \(Re=\rho W_{0}R_{0}/\mu\). The Weber number represents the relative magnitude of the inertial and surface tension forces, while the Bond number describes the relative magnitude of the gravitational forces to the capillary forces. Typically, the Froude number, \(Fr=W_{0}^{2}/(gR_{0})=We/Bo\), has been used as a parameter to define different regimes of the water entry phenomenon based on the type of cavity collapse. Early experiments reporting the qualitative and quantitative characteristics of the physical events associated with the various stages of the water entry of a sphere were performed by [30; 16]. Theoretical investigations were carried out by [21; 18; 27] to study the normal impact of a body with a liquid surface. [28] experimentally studied the initial stage associated with the impact of a solid sphere onto a liquid surface. 
They reported that at a high Reynolds number, a high-speed horizontal jet emerges immediately after the initial contact of the sphere with water. The initial stage of impact is followed by the formation, pinch-off, and growth of air cavities. [17] and [14] carried out experimental and numerical investigations of the water entry of circular disks at different \(Fr\) to study these follow-up phenomena. [15] studied the cavity formation by pulling a cylinder vertically through a water surface at a constant speed and demonstrated that the resulting air cavity collapses owing to the hydrostatic pressure, leading to a rapid and axisymmetric pinch-off at a single point. They also found two separate scaling regimes of pinch-off depths with varying \(Fr\) and correlated all these observations with the capillary wave effect. [5] experimentally studied the dynamics of a giant cylindrical air cavity formed owing to the translation of a disk through the water. They found that for finite values of \(Fr\), the collapse of the air cavity is not a strictly self-similar phenomenon. However, in the limit of infinite \(Fr\), the collapse of the air cavity is self-similar. The formation and collapse of an air cavity induced by the high-speed impact and penetration of a rigid projectile into water were analytically investigated by [22]. It was deduced that the time of pinch-off of the air cavity is independent of the diameter of the sphere at high impact speeds, whereas the location of the pinch-off is weakly dependent on the impact velocity. A theoretical approach was also adopted by [12] to study the complete process, from formation to collapse, of a transient cavity of air in water created by the impact of a cylinder or a sphere at high \(Re\) and high \(We\). Their analytical solution concludes that the cavity dynamics for a cylindrical and a spherical object are very different and depend on \(Fr\). The static water contact angle \(\theta\) of a sphere is an important parameter governing the dynamics of water entry. When \(\theta\geq 90^{\circ}\) the surface of the sphere is not wettable, and the sphere is hydrophobic, whereas if \(0^{\circ}\leq\theta<90^{\circ}\) the surface of the sphere is wettable and the sphere is called hydrophilic. [13] concluded that the degree of splash made by an object during water entry depends on the wettability of the object. [3] performed experiments and theoretical modeling of the water entry of small hydrophobic spheres of \(R_{0}\ll 2.7\,mm\) at low \(Bo\). They reported that the air cavity formed during the impact and the translation through water collapses owing to the combined effects of surface tension and gravity. Their parametric study revealed the formation of four distinct types of cavities, namely quasi-static, shallow seal, deep seal, and surface seal, with increasing \(We\). In a quasi-static cavity, the air entrainment is minimal, and the cavity takes the shape of a hydrostatic meniscus. A considerable quantity of air is entrained in both the shallow seal and deep seal cavities, resulting in the formation of a long slender cavity. Capillary waves are also generated during the formation of such cavities. These two types of cavities differ in terms of their pinch-off locations. The surface seal cavity is characterized by the closure of the splash curtain, owing to some combination of the curvature pressures and aerodynamic pressures, before its pinch-off at depth. 
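The cavity regimes described above are organized by the dimensionless groups defined in the introduction. As a quick numerical illustration (a minimal sketch of ours, not part of [3] or the present simulations; the property values follow the fluid properties listed later in table 2, and the example impact matches the \(We=72\) case of table 1):

```
# Minimal sketch: dimensionless groups governing the water entry of a sphere.
rho   = 998.2     # water density, kg/m^3 (table 2)
mu    = 0.001003  # water dynamic viscosity, Pa s (table 2)
sigma = 0.072     # air-water surface tension, N/m (table 2)
g     = 9.81      # gravitational acceleration, m/s^2

def dimensionless_groups(W0, R0):
    """Return (We, Bo, Re, Fr) for impact speed W0 [m/s] and sphere radius R0 [m]."""
    We = rho * W0**2 * R0 / sigma   # inertia vs. surface tension
    Bo = rho * g * R0**2 / sigma    # gravity vs. capillarity
    Re = rho * W0 * R0 / mu         # inertia vs. viscosity
    Fr = W0**2 / (g * R0)           # equals We / Bo
    return We, Bo, Re, Fr

We, Bo, Re, Fr = dimensionless_groups(W0=2.3, R0=1e-3)
print(We, Bo)   # approximately 72 and 0.14 (cf. table 1)
```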
[19] performed numerical simulations to study the transient behavior of the water-entry supercavitating flow around a cylindrical projectile at different impact velocities in the presence of turbulent drag-reducing additives. They compared the cavity lengths, velocity attenuations, penetration distances, and drag coefficients for water and for drag-reducing solutions at high and low impact velocities. They reported a significant enhancement in supercavitation and reduction in drag owing to the drag-reducing additives at low velocities compared to high velocities. Numerical simulations of spheres made of different materials entering water at an initial speed of 2.17 m/s were carried out by [1] using an Eulerian-Lagrangian method, and the results were verified against experiments. 

In the present study, we perform three-dimensional numerical simulations to explore the phenomenon of water entry and the subsequent translation of a hydrophobic sphere in water. We consider a hydrophobic solid sphere of radius \(R_{0}\) and density \(\rho_{s}\) moving in the downward direction with a velocity \(W_{0}\) and impacting a fluid of density \(\rho\), viscosity \(\mu\), and surface tension \(\sigma\). The numerical setup is shown in figure 1. We use a volume of fluid function to represent the air-water interface. Our focus is to capture the various types of air cavities and their evolution during the translation of the sphere through the water. Therefore, we present quantitative results for four different values of \(W_{0}\) corresponding to four different \(We\), similar to the experiments of [3]. We also simulate the analogous hydrophilic cases and present a comparison of the evolution of the drag force during water entry and translation for both the hydrophobic and hydrophilic cases. 

## 2 Numerical Method 

We use [2] for mesh generation and the subsequent numerical calculations. The material and surface properties of the solid sphere are kept constant in all the cases. The conservation of mass and momentum is represented by the following equations \[\nabla\cdot\mathbf{u}=0, \tag{1}\] \[\frac{\partial}{\partial t}(\rho\mathbf{u})+\nabla\cdot(\rho\mathbf{u}\mathbf{u})=-\nabla p+\nabla\cdot[\mu(\nabla\mathbf{u}+\nabla\mathbf{u}^{T})]+\rho\mathbf{g}+\mathbf{F}, \tag{2}\] where \(\mathbf{u}\) is the velocity vector, \(p\) is the pressure, \(\rho\) is the density, \(\mu\) is the dynamic viscosity, \(\mathbf{F}\) is the surface tension force, and \(\mathbf{g}\) is the acceleration due to gravity. The motion of the air-water interface is represented by \[\frac{\partial\alpha_{l}}{\partial t}+\nabla\cdot({\bf u}\alpha_{l})=0, \tag{3}\] where \(\alpha_{l}\) represents the liquid phase volume fraction [9]. The density and viscosity are calculated using: \[\rho=\rho_{l}\alpha_{l}+\rho_{g}(1-\alpha_{l}), \tag{4}\] \[\mu=\mu_{l}\alpha_{l}+\mu_{g}(1-\alpha_{l}). \tag{5}\] Here \(\rho_{g}\), \(\rho_{l}\) are the densities, and \(\mu_{g}\), \(\mu_{l}\) are the viscosities of the gas and liquid phases, respectively. If \(\alpha_{g}\) is the volume fraction of the \(g^{th}\) fluid in a cell, then \(\alpha_{g}=1\), \(\alpha_{g}=0\), and \(0<\alpha_{g}<1\) indicate a cell full of that fluid, an empty cell, and a cell containing the interface between that fluid and the others, respectively. In our cases, \(We\) and \(Bo\) range from 1.9 to 420 and 0.088 to 0.27, respectively. We include the gravitational forces along with the surface tension forces owing to their effect on the pinch-off depth of the cavity. 
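As a concrete illustration of the mixture rules in equations (4) and (5), the following minimal sketch (ours, for exposition only; the actual blending is performed internally by the VOF solver) computes the effective density and viscosity of a cell from the liquid volume fraction:

```
# Sketch of the VOF mixture rules, Eqs. (4)-(5): cell properties are
# volume-fraction-weighted averages of the liquid and gas properties.
rho_l, rho_g = 998.2, 1.225          # densities of water and air, kg/m^3
mu_l,  mu_g  = 0.001003, 1.7894e-5   # dynamic viscosities, Pa s

def mixture_properties(alpha_l):
    """alpha_l = 1: cell full of water; 0: full of air; in between: interface cell."""
    rho = rho_l * alpha_l + rho_g * (1.0 - alpha_l)   # Eq. (4)
    mu  = mu_l  * alpha_l + mu_g  * (1.0 - alpha_l)   # Eq. (5)
    return rho, mu

print(mixture_properties(0.5))   # properties of a cell cut by the interface
```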
We perform simulations for different sphere radii and impact velocities. The sphere density is \(7700\,kg/m^{3}\), as in the experiments of [3]. Since the radius and velocity of the sphere change from case to case, the values of \(We\) and \(Bo\) change accordingly. We list the different simulation parameters in table 1, and the properties of the fluids used in the present simulations in table 2. The contact angle (\(\theta_{c}\)) of the sphere wall with water is \(166^{\circ}\), making the surface of the sphere super-hydrophobic. The contact angle for the hydrophilic cases is \(14^{\circ}\). To reduce the computational cost, instead of simulating the entire dropping event of the sphere from its initial position with zero initial velocity, we position the sphere 2 mm above the air-water interface with the approximate velocity, normal to the water surface, that it would have attained at that height, based on the experiments of [3]. For comparing the computational results with the experimental findings, we calculate the reference time according to the initial position of the solid sphere in the experiments of [3]. [3] ignored turbulent fluctuations in their theoretical derivations of the pinch-off depth and time equations. Ignoring turbulent fluctuations is a reasonable assumption because the air cavity prevents the formation of a wake of water past the sphere, making the flow highly streamlined around the sphere. Therefore, we assume laminar flow for the hydrophobic sphere in our simulations. For a hydrophilic sphere, however, a wake forms behind the sphere in the absence of an air cavity. Therefore, we use the SST \(k\)-\(\omega\) turbulence model for simulating the hydrophilic cases. Since the problem involves moving boundaries, we use three-dimensional dynamic mesh modeling to translate the sphere. 

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Velocity (\(W_{0}\)) & Radius (\(R_{0}\)) & \(Re\) & \(We\) & \(Bo\) & Number of Mesh Elements \\ \hline \(0.3\,m/s\) & \(1.4\,mm\) & 83.83 & 1.9 & 0.27 & 1.25 Million \\ \hline \(2.3\,m/s\) & \(1\,mm\) & 4591.7 & 72 & 0.14 & 1.25 Million \\ \hline \(3.1\,m/s\) & \(0.79\,mm\) & 4840.7 & 109 & 0.088 & 1.1 Million \\ \hline \(5.4\,m/s\) & \(1\,mm\) & 10780.5 & 420 & 0.14 & 8 Million \\ \hline \end{tabular} \end{table} Table 1: Various parameters considered in the simulations 

\begin{table} \begin{tabular}{|c|c|c|} \hline Property & Air phase & Water phase \\ \hline Density (\(kg/m^{3}\)) & 1.225 & 998.2 \\ \hline Dynamic viscosity (\(Pa\ s\)) & \(1.7894e^{-5}\) & 0.001003 \\ \hline Surface tension (\(\sigma\)) (\(N/m\)) & - & 0.072 \\ \hline \end{tabular} \end{table} Table 2: The properties of the water and air phases used in the simulations 

## 3 Results and discussions 

When the hydrophobic sphere enters the pool of water, four different types of cavities form at low Bond numbers. The impact of the sphere on the air-water interface creates an axisymmetric air cavity that expands radially before closing under the combined influence of the hydrostatic pressure, the surface tension, and the aerodynamic pressure. For \(Bo\ll 1\) and \(1<We\ll\tilde{D}^{-1}\), where \(\tilde{D}=\rho_{a}/\rho\) is the air-liquid density ratio, the hydrostatic (\(\rho gz\)) and aerodynamic pressures are negligible compared to the curvature pressures, and a pinch-off near the surface is expected, as observed in the experiments of [3]. Figure 2 demonstrates the impact of a hydrophobic sphere at an impact velocity of \(0.3\,m/s\), \(We=1.9\), and \(Bo=0.27\). 
The sphere sinks and is fully immersed in water after \(t\sim 0.01\,s\). The cavity collapses when the contact line (indicated by the arrow in figure 2(h)) approaches the apex of the sphere at a depth of approximately the capillary length \(\sqrt{\sigma/(\rho g)}=2.71\,mm\). The vertical and radial extents of the cavity are 5.45 mm and 6.71 mm, respectively, and are also of the order of the capillary length. We observe minimal air entrainment, and a tiny air bubble remains attached to the sphere after the pinch-off (see figure 2(h)), similar to that observed in the experiments. This type of cavity is called a quasi-static cavity. Figure 3 demonstrates the translation of the sphere of radius 1 mm after hitting the water surface with a velocity of 2.3 m/s. A tongue-shaped cavity forms after the impact. As the sphere moves downwards, the contact line remains fixed near the equator. The interface stretches upwards, resulting in a highly sloped cavity wall, similar to the experiments. This cavity pinches off at a depth of 2.744 mm owing to the effect of the surface tension force. This pinch-off depth is of the order of the capillary length and is similar to the first pinch-off depth of 2.714 mm found by [3]. This type of cavity is called a shallow seal due to its near-surface collapse. In contrast to the previous case, the volume of air entrained here is much greater than the volume of the sphere. This entrained volume of air further pinches off at a depth of 12.233 mm. Our simulation also captures the formation of the jet inside the secondary air bubble separated from the big air cavity, and the tiny air bubble (indicated by the red arrows in figure 3(k)) between the air bubble separated from the big air cavity and the air cavity attached to the sphere, similar to that observed in the experiment. Increasing the impact velocity of the sphere to 3.1 m/s while reducing the radius to 0.79 mm results in the cavity shown in figure 4. A longer air cavity forms, and the pinch-off occurs at a greater depth than in the previous case. This type of cavity is called a deep seal cavity. The pinch-off location is approximately halfway between the sphere and the surface. The distance of the submerged sphere from the surface of the water when the pinch-off occurs is 31.568 mm, similar to the experiment (31.164 mm). Figure 5 shows the cavity dynamics for the sphere impacting water at 5.4 m/s. After the impact, a splash curtain forms that domes over to close the cavity from above. This closure is due to the combined effect of the aerodynamic and curvature pressures acting on the splash curtain. Such a cavity is known as a surface seal. After the surface seal, the cavity expands, and the pressure inside it decreases, resulting in the detachment of the cavity from the surface. With an increase in depth, a water jet penetrates the cavity from above owing to a hydrostatic pressure larger than the curvature pressure. Our simulation accurately captures this jet penetration phenomenon. The pinch-off depth for this case is 40.43 mm, which is close to the value of 38.11 mm observed in the experiments of [3]. We further assess the cavity dynamics from the vertical variation of the pressure and the vertical velocity along the \(x=0,y=0\) line at different time instants. For \(W_{0}=2.3\,m/s\), \(R_{0}=1\,mm\) (\(We=72,Bo=0.14\)), the vertical variation of the pressure and velocity along the \(x,y=0\) line is demonstrated in figures 6(a) and (b), respectively, at time instants before the first pinch-off, at the first pinch-off, and at the second pinch-off. 
The maximum pressure is at the bottom of the sphere due to fluid stagnation. A sharp increase in pressure at the pinch-off location also occurs due to the stagnation of the fluid, as indicated by the arrows in figure 6(a). The pinch-offs take place owing to the dominance of surface tension forces over viscous forces. The vertical velocity is zero at the pinch-off locations. Note that the pressure inside the cavity decreases linearly. This decrease in pressure occurs due to the expansion of the sealed cavity. During the first pinch-off, the velocity becomes negative, signifying air leaving the cavity upwards, as through a nozzle. We also observe a negative velocity at the location of the separated air bubble. This indicates the formation of an upward-moving jet within the air bubble, as also demonstrated in the supplementary Movie 1. Notice that the vertical velocity becomes approximately zero after the second pinch-off before becoming negative, due to the momentum balance between the downward-moving fluid and the upward-moving jet of air. For \(W_{0}=3.1\,m/s\), \(R_{0}=0.79\,mm\) (\(We=109,Bo=0.088\)), the pressure and velocity variations in the vertical direction along the \(x,y=0\) line at three time instants: before necking, at necking, and at pinch-off after necking, are shown in figures 7(a) and (b), respectively. In this case, a deep seal pinch-off occurs. The pressure distribution is similar to that observed during the second pinch-off in the previous case. We can see that the pressure inside the cavity decreases linearly due to the expansion of the cavity. During the necking phenomenon, a negative velocity indicates an upward flow of air. The supplementary Movie 2 demonstrates this process. At the necking region, the air cavity becomes narrow. The air inside the cavity is at a pressure higher than the surroundings and tries to escape from this narrow necking region. Since the air is escaping from a converging region, the air velocity in the upward direction increases significantly, as indicated by the negative velocity in figure 7(b). At pinch-off (indicated by the blue arrow in figure 7(a)), the fluid almost stagnates, increasing the pressure in that zone. The vertical velocity remains constant inside the air cavity. However, post pinch-off, the vertical velocity nearly becomes zero, similar to that observed in the previous case. The pressure and vertical velocity at \(We=420\), \(Bo=0.14\) (\(W_{0}=5.4\,m/s\), \(R_{0}=1\,mm\)) at three time instants: before pinch-off, at pinch-off, and after pinch-off, are shown in figures 8(a) and (b). We can observe that the velocity before pinch-off is strongly positive, indicating the rapid downward movement of air into the cavity formed by the fast-moving sphere in the pool of water. The rapid movement of the air results in a negative pressure at this location, and therefore the water rushes towards the axis of the cavity, eventually leading to the pinch-off. The pinch-off occurs near the surface, as manifested by the increase in the pressure (indicated by the red arrow in figure 8(a)). We can also observe that the vertical velocity inside the cavity fluctuates and becomes negative at the pinch-off location, signifying the upward movement of air. We also assess the drag force for the different cases by evaluating the drag coefficient. Additionally, we simulate the translation of a hydrophilic sphere at \((Bo,We)=(0.14,72)\) and \((0.088,109)\) and compare its drag coefficient with the analogous hydrophobic cases. 
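Before comparing the cases, we note the normalization behind \(C_{D}\). A minimal sketch of ours follows; the paper does not state its reference area explicitly, so the frontal area \(\pi R_{0}^{2}\) used below is an assumption:

```
import math

def drag_coefficient(F_D, W, R0, rho=998.2):
    """C_D = F_D / (0.5 * rho * W^2 * A).
    F_D: instantaneous drag force [N]; W: instantaneous sphere speed [m/s];
    A: reference area, assumed here to be the frontal area pi * R0^2."""
    A = math.pi * R0**2
    return F_D / (0.5 * rho * W**2 * A)

# Illustrative numbers only: a 1 mm sphere moving at 2.3 m/s under 5e-3 N of drag.
print(drag_coefficient(F_D=5e-3, W=2.3, R0=1e-3))
```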
When a hydrophobic sphere translates through water, the lower half remains in contact with water while the upper half remains inside the air cavity, resulting in a significant reduction in the drag coefficient. However, for the hydrophilic sphere, no such cavity forms during its translation through the pool of water. Figure 9(a) shows the comparison of the coefficient of drag (\(C_{D}\)) among different cases with the same \(Bo=0.14\) and different Weber numbers. We have also included a hydrophilic case at \(We=72\) to estimate the drag reduction of the hydrophobic case at the same \(We\). Similarly, figure 9(b) demonstrates the time evolution of \(C_{D}\) for the hydrophilic and hydrophobic spheres at \(We=109\) and \(Bo=0.088\). We begin measuring \(C_{D}\) from the instant the sphere touches the water surface. In all the cases, \(C_{D}\) briefly reaches its peak value when a large portion of the sphere is submerged in the water. The peak of \(C_{D}\) depends on the speed at which the sphere enters the water: for a higher impact speed, the initial peak of \(C_{D}\) is higher. When the sphere completely submerges into the water, \(C_{D}\) starts dropping. At the higher speed of 5.4 m/s (\(We=420\)), the decrease in the drag coefficient is significantly quicker than at 2.3 m/s (\(We=72\)). We can also observe that the \(C_{D}\) for a hydrophobic sphere is less than that for a hydrophilic sphere at \(We=72,109\) and \(Bo=0.14,0.088\). For the hydrophobic sphere, only the lower portion remains in contact with water, whereas the upper part remains inside the air cavity, resulting in lower drag. In contrast, the absence of an air cavity in the hydrophilic spheres results in relatively more drag. 

## 4 Conclusions 

We perform numerical simulations to study the air cavity that forms during the translation of a hydrophobic sphere from air into water at different speeds. When a sphere of radius 1 mm hits the water surface at a speed of 2.3 m/s, it produces a tongue-shaped air cavity that pinches off very near the surface. Such cavities are known as shallow seals. A sphere of radius 0.79 mm with an impact speed of 3.1 m/s results in a similar air cavity. However, this cavity pinches off at a relatively greater depth than for the sphere impacting at 2.3 m/s and is known as a deep seal cavity. When the sphere with a radius of 1 mm impacts the pool of water at a speed of 5.4 m/s, a splash curtain domes over to close the air cavity from above, forming a surface seal cavity. The qualitative features of the different types of cavities obtained from our simulations agree very well with the experiments of [3]. We further report the pressure and the vertical velocity distribution along a vertical line passing through the center of the sphere to assess the cavity dynamics quantitatively. The pressure increases at the locations of the pinch-offs for all the cases due to the stagnation of the fluid there. We also capture the fine details associated with the dynamics of the air cavities at different speeds. At 2.3 m/s, we see two pinch-offs, along with the existence of a tiny air bubble and the formation of a jet inside the secondary air bubble separated from the big air cavity. Similarly, for 5.4 m/s, we observe a splash curtain and a water jet penetrating the air cavity from above. All these details agree with the experiments of [3]. We also present a comparison of the drag coefficient among the different cases. 
When the sphere comes in contact with the water, we observe a peak in the value of \(C_{D}\) for all the cases. As the sphere submerges in the pool of water, an envelope of air surrounds it, resulting in a decrease in the drag coefficient. We find that with an increase in the impact speed from 2.3 m/s to 5.4 m/s, the rate of decrease of \(C_{D}\) increases. We also compare the drag coefficient between hydrophobic and hydrophilic spheres and find that the presence of the air cavity behind the hydrophobic sphere significantly reduces the drag.
2302.09357
Stochastic Online Instrumental Variable Regression: Regrets for Endogeneity and Bandit Feedback
Endogeneity, i.e. the dependence of noise and covariates, is a common phenomenon in real data due to omitted variables, strategic behaviours, measurement errors etc. In contrast, the existing analyses of stochastic online linear regression with unbounded noise and linear bandits depend heavily on exogeneity, i.e. the independence of noise and covariates. Motivated by this gap, we study the over- and just-identified Instrumental Variable (IV) regression, specifically Two-Stage Least Squares, for stochastic online learning, and propose to use an online variant of Two-Stage Least Squares, namely O2SLS. We show that O2SLS achieves $\mathcal O(d_{x}d_{z}\log^2 T)$ identification and $\widetilde{\mathcal O}(\gamma \sqrt{d_{z} T})$ oracle regret after $T$ interactions, where $d_{x}$ and $d_{z}$ are the dimensions of covariates and IVs, and $\gamma$ is the bias due to endogeneity. For $\gamma=0$, i.e. under exogeneity, O2SLS exhibits $\mathcal O(d_{x}^2 \log^2 T)$ oracle regret, which is of the same order as that of the stochastic online ridge. Then, we leverage O2SLS as an oracle to design OFUL-IV, a stochastic linear bandit algorithm to tackle endogeneity. OFUL-IV yields $\widetilde{\mathcal O}(\sqrt{d_{x}d_{z}T})$ regret that matches the regret lower bound under exogeneity. For different datasets with endogeneity, we experimentally show efficiencies of O2SLS and OFUL-IV.
Riccardo Della Vecchia, Debabrota Basu
2023-02-18T15:02:10Z
http://arxiv.org/abs/2302.09357v3
# Online Instrumental Variable Regression 

###### Abstract 

The independence of noise and covariates is a standard assumption in the literature on online linear regression with unbounded noise and on linear bandits. This assumption and the ensuing analysis are invalid in the case of endogeneity, i.e., when the noise and covariates are correlated. In this paper, we study the _online setting of Instrumental Variable (IV) regression_, which is widely used in economics to identify the underlying model from an endogenous dataset. Specifically, we upper bound the identification and oracle regrets of the popular Two-Stage Least Squares (2SLS) approach to IV regression, but in the online setting. Our analysis shows that Online 2SLS (\(\mathsf{O2SLS}\)) achieves \(\mathcal{O}(d^{2}\log^{2}T)\) identification and \(\mathcal{O}(\gamma\sqrt{dT\log T})\) oracle regret after \(T\) interactions, where \(d\) is the dimension of the covariates and \(\gamma\) is the bias due to endogeneity. Then, we leverage \(\mathsf{O2SLS}\) as an oracle to design OFUL-IV, a linear bandit algorithm. OFUL-IV can tackle endogeneity and achieves \(\mathcal{O}(d\sqrt{T}\log T)\) regret. For datasets with endogeneity, we experimentally show the efficiency of OFUL-IV in terms of estimation error and regret. 

###### Contents 

* 1 Introduction
* 2 Related Work
* 3 Preliminaries: Instrumental Variables & Offline Two-stage Least Squares (2SLS)
* 4 Online Two-Stage Least Squares Regression
* 4.1 Defining Regrets: Identification and Oracle
* 4.2 Theoretical Analysis
* 5 Linear Bandits with Endogeneity: OFUL-IV
* 6 Conclusions and Future Works
* A Useful Results
* A.1 Random Variables
* A.2 Norms of Vectors and Matrices
* A.3 Technical Lemmas
* B Concentration of The Minimum Eigenvalue of The Design Matrix
* C Technical Lemmas for the Endogenous Setting
* D Elliptical Lemma for the Endogenous Setting
* E A Detailed Discussion on Different Definitions of Regret
* F Lemmas on Correlation between First and Second Stages
* G Regret Analysis for IV Regression: O2SLS
* H Regret Analysis for IV Linear Bandits: OFUL-IV
* I Concentration of Scalar and Vector-valued Martingales
* J Parameter Estimation and Concentration in First Stage
* K Experiments

## 1 Introduction 

Online regression is a founding component of online learning (Kivinen et al., 2004), sequential testing (Kazerouni and Wein, 2021), contextual bandits (Foster and Rakhlin, 2020), and reinforcement learning (Ouhamma et al., 2022). In particular, online linear regression is widely used and analysed to design efficient algorithms with theoretical guarantees (Greene, 2003; Abbasi-Yadkori et al., 2011; Hazan and Koren, 2012). In linear regression, the _outcome_ (or output variable) \(Y\in\mathbb{R}\) and the _input features_ (or covariates, or treatments) \(\mathbf{X}\in\mathbb{R}^{d}\) are related by a structural equation: \[Y=\boldsymbol{\beta}^{\top}\mathbf{X}+\eta,\] where \(\boldsymbol{\beta}\) is the _true parameter_ and \(\eta\) is the observational noise with variance \(\sigma^{2}\). _The goal is to estimate \(\boldsymbol{\beta}\) from an observational dataset._ Two common assumptions in the analysis of linear regression are (i) bounded observations and covariates (Vovk, 1997; Bartlett et al., 2015; Gaillard et al., 2019), and (ii) _exogeneity_, i.e. the independence of the noise \(\eta\) and the input features \(X\) (\(\mathbb{E}[\eta|X]=0\)) (Abbasi-Yadkori et al., 2011; Ouhamma et al., 2021). 
Under exogeneity, researchers have studied scenarios where the observational noise is unbounded and has only bounded variance \(\sigma^{2}\). Ouhamma et al. (2021) show that the unbounded stochastic setting calls for a different technical analysis than the bounded adversarial setting popular in the online regression literature. Additionally, in real life, exogeneity is often violated, and we encounter _endogeneity_, i.e. dependence between the noise and the covariates (\(\mathbb{E}[\eta|X]\neq 0\)) (Greene, 2003; Angrist et al., 1996). Endogeneity arises due to omitted explanatory variables, measurement errors, dependence of the output and the covariates on unobserved confounding variables, etc. (Wald, 1940; Mogstad et al., 2021; Zhu et al., 2022). In this paper, _we analyse online linear regression that aims to estimate \(\boldsymbol{\beta}\) accurately from endogenous observational data, where the noise is stochastic and unbounded._ 

**Instrumental Variable Regression.** Historically, _Instrumental Variables_ (IVs) were introduced to identify and quantify the causal effects of endogenous covariates (Newey and Powell, 2003). IVs are widely used in economics (Wright, 1928; Mogstad et al., 2021), causal inference (Rubin, 1974; Hernan and Robins, 2020; Harris et al., 2022), and bio-statistics and epidemiology (Burgess et al., 2017). 

**Example 1.1**.: _Carneiro et al. (2011); Mogstad et al. (2021) aim to estimate students' returns to college using the National Longitudinal Survey of Youth data. The return depends on multiple covariates \(X\), such as whether the individual attended college, her AFQT scores, her family income, and her family conditions (mother's years of education, number of siblings, etc.). Often the family conditions have unobserved confounding effects on college attendance and scores. This endogenous nature of the data leads to bias in traditional linear regression estimates, such as Ordinary Least Squares (OLS). To mitigate this issue, Carneiro et al. (2011); Mogstad et al. (2021) leverage two IVs (\(Z\)): the average log income in the youth's county of residence at age 17, and the presence of a four-year college in the youth's county of residence at age 14\({}^{1}\). The logic is that a youth might find going to college more attractive when labour market opportunities are weaker and a college is nearby. Using these two IVs, the youth's attendance at college is estimated. Then, in the next stage, this estimate of college attendance is used together with the family conditions to predict the youth's return to college. This two-stage regression approach with IVs produces a more accurate estimate of youths' returns to college than OLS models that assume exogeneity._ 

Footnote 1: One can argue whether these are either sufficient or weak IVs. For simplicity, we assume sufficiency here, i.e. the IVs can decouple the unobserved confounding. 

This approach of conducting two stages of linear regression using instrumental variables is called _Two-Stage Least Squares Regression_ (\(2\mathsf{SLS}\)) (Angrist and Imbens, 1995; Angrist et al., 1996). \(2\mathsf{SLS}\) has become the standard tool in economics, the social sciences, and statistics to study the effect of treatments on outcomes involving endogeneity. 
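To make the two-stage recipe of Example 1.1 concrete, here is a minimal numpy sketch of ours of offline 2SLS on synthetic endogenous data (the data-generating values are illustrative assumptions, and the estimator anticipates the formula given formally in Section 3):

```
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 2
beta  = np.array([1.0, -2.0])               # true second-stage parameter
Theta = np.array([[1.0, 0.3], [0.2, 1.0]])  # true first-stage parameter

Z   = rng.normal(size=(n, d))                     # instruments: exogenous
eps = rng.normal(size=(n, d))                     # first-stage noise
eta = 0.8 * eps[:, 0] + 0.1 * rng.normal(size=n)  # second-stage noise correlated
                                                  # with eps => endogeneity
X = Z @ Theta + eps                               # first-stage model
y = X @ beta + eta                                # second-stage model

beta_ols  = np.linalg.solve(X.T @ X, X.T @ y)     # biased under endogeneity
beta_2sls = np.linalg.solve(Z.T @ X, Z.T @ y)     # (Z^T X)^{-1} Z^T y, consistent
print("OLS :", beta_ols)    # noticeably off beta
print("2SLS:", beta_2sls)   # close to [1, -2]
```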
Recently, in machine learning, researchers have extended traditional \(2\mathsf{SLS}\) techniques to nonlinear structures, non-compliant instruments, and corrupted observations using deep learning (Liu et al., 2020; Xu et al., 2020, 2021), graphical models (Stirn and Jebara, 2018), and kernel regression (Zhu et al., 2022), respectively. The existing analyses of \(2\mathsf{SLS}\) are asymptotic, i.e. they characterise what can be learned with access to an infinite number of samples in an offline setting (Singh et al., 2020; Liu et al., 2020; Narekishvili et al., 2022). In applications, such analysis is vacuous, as one has access to only finitely many samples. Additionally, in practice, it is natural to acquire the data sequentially as treatments are chosen on-the-go, and then to learn the structural equation from the sequential data (Venkatraman et al., 2016). This setting motivates us to analyse the online extension of \(2\mathsf{SLS}\), referred to as \(\mathsf{O2SLS}\). Additionally, in an interactive setting, if a policy maker aims to build more schools in some of the lower-income areas as a form of intervention, she observes only the changes corresponding to it. This is referred to as bandit feedback in the online learning literature and is studied under the linear bandit formulation (Abbasi-Yadkori et al., 2011a). This motivates us to further extend \(\mathsf{O2SLS}\) to _linear bandits, where bandit feedback and endogeneity occur simultaneously_. In this paper, we investigate these two questions: 

1. _What is the upper bound on the loss in performance for deploying parameters estimated by \(\mathsf{O2SLS}\) instead of the true parameters \(\boldsymbol{\beta}\)? How does estimating the true parameters \(\boldsymbol{\beta}\) influence different performance metrics under endogeneity?_ 
2. _Can we design efficient algorithms for linear bandits with endogeneity by using \(\mathsf{O2SLS}\)?_ 

**Our Contributions.** Our investigation has led to 

1. _A Non-asymptotic Analysis of \(\mathsf{O2SLS}\):_ First, we identify three notions of regret: _identification regret_, _oracle regret_, and _population regret_. Though all of them are of the same order under exogeneity, we show that the relations are more nuanced under endogeneity and unbounded noise. We focus specifically on the identification regret, i.e. the sum of differences between the estimated parameters \(\{\boldsymbol{\beta}_{t}\}_{t=1}^{T}\) and the true parameter \(\boldsymbol{\beta}\), and the _oracle regret_, i.e. the sum of differences between the losses incurred by the estimated parameters \(\{\boldsymbol{\beta}_{t}\}_{t=1}^{T}\) and the true parameter \(\boldsymbol{\beta}\). In Section 4, we theoretically show that \(\mathsf{O2SLS}\) achieves \(\mathcal{O}(d^{2}\log^{2}T)\) identification regret and \(\mathcal{O}(d^{2}\log^{2}T+\gamma\sqrt{dT\log T})\) oracle regret after receiving \(T\) samples from the observational data. The identification regret of \(\mathsf{O2SLS}\) is a multiplicative factor \(d\log T\) higher than the regret of online linear regression under exogeneity, and the oracle regret is an additive factor \(\mathcal{O}(\gamma\sqrt{dT\log T})\) higher. These are the costs that \(\mathsf{O2SLS}\) pays for tackling endogeneity in two stages. To our knowledge, _we are the first to propose a non-asymptotic regret analysis of \(\mathsf{O2SLS}\) with stochastic and unbounded noise_. 
2. _OFUL-IV for Linear Bandits with Endogeneity:_ In Section 5, we study the linear bandit problem with endogeneity. 
_We design an extension of the OFUL algorithm used for linear bandits with exogeneity, namely OFUL-IV, to tackle this problem. OFUL-IV uses \(\mathsf{O2SLS}\) to estimate the parameters, and the corresponding confidence bounds on \(\boldsymbol{\beta}\), to balance exploration and exploitation. We show that OFUL-IV achieves \(\mathcal{O}(d\sqrt{T}\log T)\) regret after \(T\) interactions. We experimentally show that OFUL-IV incurs lower regret than OFUL under endogeneity (end of Section 5)._ 

## 2 Related Work 

**Online Regression without Endogeneity.** Our analysis of \(\mathsf{O2SLS}\) extends the tools and techniques of online linear regression without endogeneity. The analysis of online linear regression began with (Foster, 1991; Littlestone et al., 1991). Vovk (1997, 2001) shows that forward and ridge regression achieve \(\mathcal{O}(dY_{\max}^{2}\log T)\) regret for outcomes bounded by \(Y_{\max}\). Bartlett et al. (2015) generalise the analysis further by considering the features known in hindsight. Gaillard et al. (2019) improve the analysis further to propose an optimal algorithm and a lower bound. _These works perform an adversarial analysis with bounded outcomes, covariates, and observational noise, while we focus on the stochastic setting._ Ouhamma et al. (2021) study the stochastic setting with bounded input features and unbounded noise. But they need to assume the independence of the noise and the input features. _In this paper, we analyse online 2SLS under endogeneity and unbounded (stochastic) noise._ We do not assume knowledge of a bound on the outcome and derive high-probability bounds for any bounded sequence of features. 

**Linear Bandits without Endogeneity.** Linear bandits generalise the setting of online linear regression under bandit feedback (Abbasi-Yadkori et al., 2011, 2012; Foster and Rakhlin, 2020). To be specific, under bandit feedback, the algorithm observes only the outcomes for the input features that it has chosen to draw during an interaction. Popular algorithm design techniques, such as optimism-in-the-face-of-uncertainty and Thompson sampling, have been extended to propose OFUL (Abbasi-Yadkori et al., 2012) and LinTS (Abeille and Lazaric, 2017), respectively. The OFUL and LinTS algorithms demonstrate \(\mathcal{O}(d\sqrt{T}\log T)\) and \(\mathcal{O}(d^{1.5}\sqrt{T}\log T)\) regret guarantees, respectively, under the exogeneity assumption. _Here, we use \(\mathsf{O2SLS}\) as a regression oracle to develop OFUL-IV for linear bandits with endogeneity. We prove that OFUL-IV achieves \(\mathcal{O}(d\sqrt{T}\log T)\) regret._ 

**Instrument-armed Bandits.** Kallus (2018) is the first to study endogeneity and instrumental variables in the stochastic bandit setting. Stirn and Jebara (2018) propose a Thompson sampling-type algorithm for stochastic bandits where endogeneity arises due to non-compliant actions. But both Kallus (2018) and Stirn and Jebara (2018) study only the finite-armed bandit setting, where the arms are independent of each other. In this paper, _we study the stochastic linear bandit setting with endogeneity, which requires different techniques for analysis and algorithm design._ 

## 3 Preliminaries: Instrumental Variables & Offline Two-stage Least Squares (2SLS) 

We are given an observational dataset \(\{\mathbf{x}_{i},y_{i}\}_{i=1}^{n}\) consisting of \(n\) pairs of input features and outcomes, such that \(y_{i}\in\mathbb{R}\) and \(\mathbf{x}_{i}\in\mathbb{R}^{d}\).2 These inputs and outcomes are stochastically generated using a linear model Footnote 2: Matrices and vectors are represented with bold capital and bold small letters, e.g. 
\(\mathbf{A}\) and \(\mathbf{a}\), respectively. \[y_{i}=\boldsymbol{\beta}^{\top}\mathbf{x}_{i}+\eta_{i},\] (Second stage) where \(\boldsymbol{\beta}\in\mathbb{R}^{d}\) is the _unknown true parameter vector_ of the linear model, and \(\eta_{i}\sim\mathcal{N}(0,\sigma_{\eta}^{2})\) is the unobserved error term representing all causes of \(y_{i}\) other than \(\mathbf{x}_{i}\). It is assumed that the error terms \(\eta_{i}\) are independently and identically distributed and have bounded variance \(\sigma_{\eta}^{2}\). The parameter vector \(\boldsymbol{\beta}\) quantifies the causal effect on \(y_{i}\) of a unit change in a component of \(\mathbf{x}_{i}\), while keeping the other causes of \(y_{i}\) constant. The goal of linear regression is to estimate \(\boldsymbol{\beta}\) by _minimising the square loss over the dataset_ (Brier, 1950), i.e. \(\hat{\boldsymbol{\beta}}\triangleq\operatorname*{argmin}_{\boldsymbol{\beta}^{\prime}}\sum_{i=1}^{n}(y_{i}-\boldsymbol{\beta}^{\prime\top}\mathbf{x}_{i})^{2}\). The obtained solution is called the Ordinary Least Squares (OLS) estimate of \(\boldsymbol{\beta}\) (Wasserman, 2004), and it is used as a cornerstone of online regression (Gaillard et al., 2019) and linear bandit algorithms (Foster and Rakhlin, 2020). Specifically, if the input feature matrix \(\mathbf{X}_{n}\in\mathbb{R}^{n\times d}\) is defined as \([\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n}]^{\top}\), the outcome vector is \(\mathbf{y}_{n}\triangleq[y_{1},\ldots,y_{n}]^{\top}\), and the noise vector is \(\boldsymbol{\eta}_{n}\triangleq[\eta_{1},\ldots,\eta_{n}]^{\top}\), the OLS estimator is expressed as \[\widehat{\boldsymbol{\beta}}_{\mathrm{OLS}}\triangleq(\mathbf{X}_{n}^{\top}\mathbf{X}_{n})^{-1}\mathbf{X}_{n}^{\top}\mathbf{y}_{n}=\boldsymbol{\beta}+(\mathbf{X}_{n}^{\top}\mathbf{X}_{n})^{-1}\mathbf{X}_{n}^{\top}\boldsymbol{\eta}_{n}.\] _If \(\mathbf{X}_{n}\) and \(\boldsymbol{\eta}_{n}\) are independent_, the second term has zero expected value conditioned on \(\mathbf{X}_{n}\). Hence, the OLS estimator is asymptotically unbiased, i.e. \(\widehat{\boldsymbol{\beta}}_{\mathrm{OLS}}\to\boldsymbol{\beta}\) as \(n\to\infty\). In practice, the input features \(\mathbf{x}\) and the noise \(\eta\) are often correlated (Greene, 2003, Chapter 8). As in Figure 1, this dependence, called endogeneity, is modelled with _a confounding unobserved random variable \(\boldsymbol{\epsilon}\)_. To compute an unbiased estimate of \(\boldsymbol{\beta}\) under endogeneity, a popular technique is to introduce Instrumental Variables (IVs) \(\mathbf{z}\) (Angrist et al., 1996; Newey and Powell, 2003). IVs are chosen such that they are highly correlated with the endogenous components of \(\mathbf{x}\) (relevance condition) but are independent of the noise \(\eta\) (exogeneity condition for \(\mathbf{z}\)). This leads to the Two-stage Least Squares (2SLS) approach to IV regression (Angrist and Imbens, 1995; Angrist et al., 1996). Here, we further assume that the IVs, i.e. \(\mathbf{Z}_{n}\triangleq[\mathbf{z}_{1},\ldots,\mathbf{z}_{n}]^{\top}\), cause linear effects on the endogenous covariates. Specifically, for the just-identified IVs, \[\mathbf{X}_{n}=\mathbf{Z}_{n}\boldsymbol{\Theta}+\mathbf{E}_{n},\] (First stage) where \(\boldsymbol{\Theta}\in\mathbb{R}^{d\times d}\) is an unknown first-stage parameter matrix and \(\mathbf{E}_{n}\triangleq[\boldsymbol{\epsilon}_{1},\ldots,\boldsymbol{\epsilon}_{n}]^{\top}\) is the unobserved noise matrix leading to confounding in the second stage. 
This is a "classic" _multiple regression_, where the covariates \(\mathbf{z}\) are independent of the noise terms \(\mathbf{\epsilon}\sim\mathcal{N}(0,\sigma_{\mathsf{z}}^{2}\mathbb{I}_{d})\)(Wasserman, 2004, Ch. 13). Thus, the first-stage is amenable to OLS regression. This formulation leads us to the 2SLS estimator: \[\widehat{\mathbf{\beta}}_{\mathrm{2SLS}}=\left(\mathbf{Z}_{n}^{\top}\mathbf{X}_{n} \right)^{-1}\mathbf{Z}_{n}^{\top}\mathbf{y}_{n}.\] (2SLS) As long as \(\mathbb{E}[\mathbf{z}_{i}\eta_{i}]=0\) in the true model, we observe that \[\widehat{\mathbf{\beta}}_{\mathrm{2SLS}}=\left(\mathbf{Z}_{n}^{\top}\mathbf{X}_{n} \right)^{-1}\mathbf{Z}_{n}^{\top}\mathbf{X}_{n}\mathbf{\beta}+\left(\mathbf{Z}_{n }^{\top}\mathbf{X}_{n}\right)^{-1}\mathbf{Z}_{n}^{\top}\mathbf{\eta}_{n}\overset{p }{\to}\mathbf{\beta},\] as \(n\to\infty\). This works because IV solves for the unique parameter that satisfies \(\frac{1}{n}\mathbf{Z}_{n}^{\top}\eta\overset{p}{\to}0\). Since \(\mathbf{x}\) and \(\mathbf{\eta}\) are correlated, 2SLS estimator is not unbiased in finite-time. Figure 1: The DAG for 2SLS. The unobserved noises are \(\mathbf{\epsilon}\) and \(\eta\) (in grey), while \(\mathbf{z},\mathbf{x},y\) are observed quantities. **Assumption 3.1**.: _The assumptions for conducting 2SLS with just-identified IVs are (Greene, 2003):_ 1. _Well behaved data._ _For every_ \(n\in\mathbb{N}\)_, the matrices_ \(\mathbf{Z}_{n}^{\top}\mathbf{Z}_{n}\) _and_ \(\mathbf{Z}_{n}^{\top}\mathbf{X}_{n}\) _are full rank, and thus invertible._ 2. _Endogeneity of_ \(x\)_._ _The second stage input features_ \(x\) _and noise_ \(\eta\) _are not independent:_ \(x\not\perp\eta\)_._ 3. _Exogeneity of_ \(z\)_._ _The IV random variables are independent of the noise in the second stage:_ \(z\perp\!\!\!\perp\eta\)_._ 4. _Relevance Condition__._ _The variables_ \(z\) _and_ \(x\) _are correlated:_ \(z\not\perp\!\!\!\perp x\)_._ _This implies that there exists_ \(\mathfrak{r}>0\)_:_ \[\left|\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! ``` 1:for\(t=1,2,\ldots,T\)do 2: Observe \(\mathbf{z}_{t}\), \(\mathbf{x}_{t}\) 3: Compute \(\mathbf{\beta}_{t-1}\) according to Equation (O2SLS) 4: Predict \(\widehat{y}_{t}=\mathbf{\beta}_{t-1}^{\top}\mathbf{x}_{t}\) 5: Observe \(y_{t}\) and compute loss \(\ell_{t}\left(\mathbf{\beta}_{t-1}\right)\) 6:endfor ``` **Algorithm 1**O2SLS _2SLS to incorporate this additional knowledge. We skip this modification and use \(\mathbf{\beta}_{t-1}\) to predict. Previously, Venkatraman et al. 
(2016) studied \(\mathsf{O2SLS}\) for system identification but provided only an asymptotic analysis._ 

### 4.1 Defining Regrets: Identification and Oracle 

To analyse online regression algorithms, it is essential to define proper performance metrics, specifically regrets. Typically, regret quantifies what an online (or sequential) algorithm cannot achieve because it does not have access to the whole dataset but rather observes it step by step. Here, we discuss and define the different regrets that we leverage in our analysis of O2SLS. In econometrics and bio-statistics, where 2SLS is popularly used, the focus is on accurate identification of the underlying structural model \(\boldsymbol{\beta}\). Identifying \(\boldsymbol{\beta}\) leads to an understanding of the underlying economic or biological causal relations and their dynamics. In ML, Venkatraman et al. (2016) applied O2SLS to online linear system identification. Thus, given a sequence of estimators \(\left\{\boldsymbol{\beta}_{t}\right\}_{t=1}^{T}\) and a sequence of covariates \(\left\{\boldsymbol{x}_{t}\right\}_{t=1}^{T}\), the cost of identifying the true parameter \(\boldsymbol{\beta}\) can be quantified by \[\widetilde{R}_{T}(\boldsymbol{\beta})\triangleq\sum_{t=1}^{T}(\boldsymbol{x}_{t}^{\top}\boldsymbol{\beta}_{t-1}-\boldsymbol{x}_{t}^{\top}\boldsymbol{\beta})^{2}. \tag{3}\] We refer to \(\widetilde{R}_{T}(\boldsymbol{\beta})\) as the _identification regret_ over horizon \(T\). In the just-identified setting that we are considering, the identification regret is equivalent to the regret of counterfactual prediction (Eqn. 5, Hartford et al. (2016)). Counterfactual predictions are important to study causal questions: what would have changed in the outcome if treatment \(a\) were used instead of treatment \(b\)? One of the modern applications of IVs is to facilitate such counterfactual predictions (Hartford et al., 2016; Bennett et al., 2019; Zhu et al., 2022). Alternatively, one might be interested in evaluating and improving the quality of prediction obtained using estimators \(\left\{\boldsymbol{\beta}_{t}\right\}_{t=1}^{T}\) with respect to an underlying oracle (or expert), which is typically the case in statistical learning theory and forecasting (Foster, 1991; Cesa-Bianchi and Lugosi, 2006). If the oracle has access to the true parameter \(\boldsymbol{\beta}\), the cost in terms of prediction that the estimators pay with respect to the oracle is \(\tilde{r}_{t}\triangleq\ell_{t}\left(\boldsymbol{\beta}_{t-1}\right)-\ell_{t}\left(\boldsymbol{\beta}\right)\). Thus, the regret in terms of the quality of prediction is defined as \[\overline{R}_{T}(\boldsymbol{\beta})\triangleq\sum_{t=1}^{T}(y_{t}-\boldsymbol{x}_{t}^{\top}\boldsymbol{\beta}_{t-1})^{2}-\sum_{t=1}^{T}(y_{t}-\boldsymbol{x}_{t}^{\top}\boldsymbol{\beta})^{2}. \tag{4}\] We refer to \(\overline{R}_{T}(\boldsymbol{\beta})\) as the _oracle regret_. This regret is studied in the stochastic analysis of online regression (Ouhamma et al., 2021) and is also useful for analysing bandit algorithms (Foster and Rakhlin, 2020). As O2SLS is interesting for learning causal structures, we focus on the identification regret. On the other hand, to compare with the existing results in online linear regression, we also analyse the oracle regret of O2SLS. Though we know that they are of similar order (w.r.t. \(T\)) in the exogenous setting, we show that they differ significantly for O2SLS under endogeneity. 
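As a sanity check on these definitions, the following minimal sketch (ours, with illustrative data-generating values) runs \(\mathsf{O2SLS}\) on a synthetic endogenous stream and accumulates the identification regret of Eq. (3):

```
import numpy as np

rng = np.random.default_rng(1)
T, d = 5000, 2
beta  = np.array([1.0, -2.0])   # true second-stage parameter
Theta = np.eye(d)               # true first-stage parameter

Szx, Szy = np.zeros((d, d)), np.zeros(d)   # running Z_t^T X_t and Z_t^T y_t
beta_t = np.zeros(d)                       # current estimate beta_{t-1}
ident_regret = 0.0

for t in range(1, T + 1):
    z   = rng.normal(size=d)
    eps = rng.normal(size=d)
    x   = Theta.T @ z + eps
    ident_regret += float((x @ beta_t - x @ beta) ** 2)   # Eq. (3)
    eta = 0.8 * eps[0] + 0.1 * rng.normal()               # endogenous noise
    y   = x @ beta + eta
    Szx += np.outer(z, x)
    Szy += z * y
    if t >= d:                                 # wait until Z^T X is invertible
        beta_t = np.linalg.solve(Szx, Szy)     # online 2SLS estimate

print(beta_t, ident_regret)   # estimate approaches beta; regret grows slowly
```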
**Remark 4.2**.: _In online learning theory focused on Empirical Risk Minimisation (ERM), another type of regret is considered, where the oracle has access to the best offline estimator \(\boldsymbol{\beta}_{T}\triangleq\operatorname*{argmin}_{\boldsymbol{\beta}}\sum_{t=1}^{T}(y_{t}-\boldsymbol{x}_{t}^{\top}\boldsymbol{\beta})^{2}\) given the observations over \(T\) steps (Cesa-Bianchi and Lugosi, 2006). Thus, the new formulation of regret becomes_ \[R_{T}=\sum_{t=1}^{T}(y_{t}-\boldsymbol{x}_{t}^{\top}\boldsymbol{\beta}_{t-1})^{2}-\min_{\boldsymbol{\beta}}\sum_{t=1}^{T}(y_{t}-\boldsymbol{x}_{t}^{\top}\boldsymbol{\beta})^{2}. \tag{5}\] _We refer to it as the population regret. Under exogeneity, Ouhamma et al. (2021) show that the oracle regret and the population regret differ by \(o(\log^{2}T)\). We show that under endogeneity their expected values differ by \(\Omega(T)\). Thus, we avoid studying this notion of regret in this paper. More details are in Appendix E._ 

### 4.2 Theoretical Analysis 

**Confidence Interval of \(\boldsymbol{\beta}_{t}\).** The central result in our analysis is the concentration of the O2SLS estimates \(\boldsymbol{\beta}_{t}\) around \(\boldsymbol{\beta}\). 

**Lemma 4.1** (Confidence Ellipsoid for the Second-stage Parameters).: _Let us define the design matrix \(\mathbf{G}_{\boldsymbol{z},t}=\mathbf{Z}_{t}^{\top}\mathbf{Z}_{t}+\lambda\mathbb{I}_{d}\) for some \(\lambda>0\). Then, for \(\sigma_{\eta}\)-sub-Gaussian second-stage noise \(\eta_{t}\), the true parameter \(\boldsymbol{\beta}\) belongs to the set_ \[\mathcal{E}_{t}=\left\{\boldsymbol{\beta}\in\mathbb{R}^{d}:\|\boldsymbol{\beta}_{t}-\boldsymbol{\beta}\|_{\widehat{\mathbf{H}}_{t}}\leq\sqrt{\mathfrak{b}_{t}(\delta)}\right\}, \tag{6}\] _with probability at least \(1-\delta\in(0,1)\), for all \(t\geq 0\). Here, \(\mathfrak{b}_{t}(\delta)\triangleq\frac{d\sigma_{\eta}^{2}}{4}\log\left(\frac{1+tL_{z}^{2}/\lambda}{\delta}\right)\), \(\widehat{\mathbf{H}}_{t}\triangleq\widehat{\mathbf{G}}_{t}^{\top}\mathbf{G}_{\boldsymbol{z},t}\widehat{\mathbf{G}}_{t}\), and \(\widehat{\mathbf{G}}_{t}\) is the estimate of the first-stage parameter at time \(t\) (Appendix J)._ 

Lemma 4.1 extends the well-known elliptical lemma for OLS and ridge estimators under exogeneity to the O2SLS estimator under endogeneity. It shows that the size of the confidence interval induced by the O2SLS estimate at time \(T\) is \(\mathcal{O}(\sqrt{d\log T})\), which is of the same order as that of the exogenous elliptical lemma (Abbasi-Yadkori et al., 2011a). 

**Identification Regret Bound.** Now, we state the identification regret upper bound of O2SLS with a brief proof sketch. 

**Theorem 4.1** (Identification Regret of O2SLS).: _If Assumption 3.1 holds true, then for bounded IVs \(\|\boldsymbol{z}\|^{2}\leq L_{z}^{2}\), if \(\eta_{t}\) is the \(\sigma_{\eta}\)-sub-Gaussian second-stage noise and \(\boldsymbol{\epsilon}_{t}\) is the component-wise \(\sigma_{\boldsymbol{\epsilon}}\)-sub-Gaussian first-stage noise, the regret of O2SLS at step \(T>1\) satisfies_ \[\widetilde{R}_{T}\leq\underbrace{\mathfrak{b}_{T-1}(\delta)}_{\text{Estimation: }\mathcal{O}(d\log T)}\ \underbrace{\left((C_{1}^{2}+dC_{2}^{2})f(T)+C_{4}\right)}_{\text{Feature norms: }\mathcal{O}(d\log T)}\] _with probability at least \(1-\delta\in(0,1)\). Here, \(\mathfrak{b}_{T-1}(\delta)\) is the confidence bound of the O2SLS estimate around \(\boldsymbol{\beta}\) (Lemma 4.1) and \(f(T)\triangleq\left(\frac{C_{\eta}^{2}}{\lambda}+\frac{\log(T)+1}{\lambda_{\min}(\boldsymbol{\Sigma})/2}\right)\). 
\(C_{1}\), \(C_{2}\), \(C_{3}^{\prime}\), \(C_{4}\) are \(d\)- and \(T\)-independent positive constants (Appendix G), and \(\lambda_{\min}(\boldsymbol{\Sigma})\) is the minimum eigenvalue of the true covariance matrix of the IVs, i.e. \(\boldsymbol{\Sigma}\triangleq\mathbb{E}[\boldsymbol{z}\boldsymbol{z}^{\top}]\)._ 

_Proof Sketch._ For brevity, we define \(\Delta\boldsymbol{\beta}_{t-1}\triangleq\boldsymbol{\beta}_{t-1}-\boldsymbol{\beta}\). By applying the Cauchy-Schwarz inequality in Eq. (3), we decouple the effects of the parameter estimates and the feature norms: \[\sum_{t=1}^{T}\left(\Delta\boldsymbol{\beta}_{t-1}^{\top}\boldsymbol{x}_{t}\right)^{2}\leq\sum_{t=1}^{T}\left\|\Delta\boldsymbol{\beta}_{t-1}\right\|_{\widehat{\mathbf{H}}_{t-1}}^{2}\|\boldsymbol{x}_{t}\|_{\widehat{\mathbf{H}}_{t-1}^{-1}}^{2}.\] Now, we bound this term by (a) using the confidence bound to control the concentration of \(\boldsymbol{\beta}_{t}\) around \(\boldsymbol{\beta}\), and (b) bounding the sum of feature norms. 

_Step a: Confidence intervals of \(\boldsymbol{\beta}_{t}\)._ We directly use Lemma 4.1 to bound \(\left\|\Delta\boldsymbol{\beta}_{t-1}\right\|_{\widehat{\mathbf{H}}_{t-1}}^{2}\) by \(\mathfrak{b}_{t-1}\). 

_Step b: Bounding the second-stage features._ Now, we need to bound the sum of the feature norms. We use Lemma C.3 to obtain \(\sum_{t=1}^{T}\left\|\boldsymbol{x}_{t}\right\|_{\widehat{\mathbf{H}}_{t-1}^{-1}}^{2}\leq\left(C_{1}^{2}+dC_{2}^{2}\right)f(T)+C_{4}\). The idea is to substitute \(\boldsymbol{x}_{t}\) using the (First stage) equation. This leads to two terms, \(\sum_{t=1}^{T}\left\|\boldsymbol{\Theta}^{\top}\boldsymbol{z}_{t}\right\|_{\widehat{\mathbf{H}}_{t-1}^{-1}}^{2}\) and \(\sum_{t=1}^{T}\left\|\boldsymbol{\epsilon}_{t}\right\|_{\widehat{\mathbf{H}}_{t-1}^{-1}}^{2}\). We bound the first term by \(C_{1}^{2}f(T)\) using the boundedness of the first-stage features and the concentration of the minimum eigenvalue of the first-stage design matrix, i.e. of \(\left\|\mathbf{G}_{\boldsymbol{z},t}^{-1}\right\|_{2}\triangleq\left\|\left(\lambda\mathbb{I}_{d}+\sum_{s=1}^{t-1}\boldsymbol{z}_{s}\boldsymbol{z}_{s}^{\top}\right)^{-1}\right\|_{2}\). The concentration of the minimum eigenvalue leads to the term \(f(T)\). Then, we bound \(\sum_{t=1}^{T}\left\|\boldsymbol{\epsilon}_{t}\right\|_{\widehat{\mathbf{H}}_{t-1}^{-1}}^{2}\) using the component-wise sub-Gaussianity of the first-stage noise. This leads to a bound \(dC_{2}^{2}f(T)+C_{4}\) with probability \(1-\delta\). 

_Final step._ Since \(\mathfrak{b}_{t-1}\) is non-decreasing in \(t\), we conclude that \(\sum_{t=1}^{T}\left(\Delta\boldsymbol{\beta}_{t-1}^{\top}\boldsymbol{x}_{t}\right)^{2}\) is upper bounded by \(\mathfrak{b}_{T-1}(\delta)((C_{1}^{2}+dC_{2}^{2})f(T)+C_{4})\). Thus, _we conclude that the identification regret of \(\mathsf{O2SLS}\) is \(\mathcal{O}(d^{2}\log^{2}T)\) for bounded IVs and unbounded noises._ 

**Remark 4.3**.: _Theorem 4.1 entails a regret \(\widetilde{R}_{T}=\mathcal{O}\left(d^{2}\log^{2}(T)\right)\), where \(d\) is the dimension of the IVs. This regret bound is a factor \(d\log T\) larger than the regret of online ridge regression, i.e. \(\mathcal{O}(d\log T)\) (Gaillard et al., 2019). This is due to the fact that we perform \(d\) linear regressions in the first stage and use the first-stage predictions for the second-stage regression. 
These two regression steps in cascade induce the proposed regret bound._ **Oracle Regret Bound.** Now, we provide a proof sketch of the oracle regret. Further details are in Appendix G. **Theorem 4.2** (Oracle Regret of O2SLS).: _Under the same hypothesis as Theorem 4.1, the oracle regret of O2SLS at step \(T>1\) satisfies_ \[\overline{R}_{T}\leq\underbrace{\widetilde{R}_{T}}_{\begin{subarray}{c}\text{Identif. Regret}\\ \mathcal{O}(d^{2}\log^{2}T)\end{subarray}}+\underbrace{\sqrt{\mathfrak{b}_{T-1}(\delta)}}_{\begin{subarray}{c}\text{Estimation}\\ \mathcal{O}(\sqrt{d\log T})\end{subarray}}\Bigg(\underbrace{\sigma_{\eta}C_{1}\sqrt{f(T)\log\left(\frac{\log T}{\delta}\right)}}_{\begin{subarray}{c}\text{First Stage Feature Norm}\\ \mathcal{O}(\sqrt{\log T})\end{subarray}}+\underbrace{C_{5}\sqrt{2d\,f(T)}+C_{6}}_{\begin{subarray}{c}\text{Concentration Term}\\ \mathcal{O}(\sqrt{d\log T})\end{subarray}}+\underbrace{\gamma L_{\widehat{\mathbf{G}}^{-1}}\left(\frac{C_{3}}{\sqrt{\lambda}}+\frac{2\sqrt{2T}}{\sqrt{\lambda_{\min}(\boldsymbol{\Sigma})}}\right)}_{\begin{subarray}{c}\text{Bias Term}\\ \mathcal{O}(\sqrt{T})\end{subarray}}\Bigg)\] _with probability \(1-\delta\in(0,1)\). Here, \(C_{5}\), \(C_{6}\) and \(L_{\widehat{\mathbf{G}}^{-1}}\) are positive constants (Appendix G), and \(\gamma\) quantifies the correlation between the two noises, as defined in the proof sketch below._ _Proof Sketch._ Expanding the squares in the definition of \(\overline{R}_{T}\) and substituting \(\boldsymbol{x}_{t}=\boldsymbol{\Theta}^{\top}\boldsymbol{z}_{t}+\boldsymbol{\epsilon}_{t}\) yields three terms: the squared estimation error \((\bullet 1\bullet)=\sum_{t=1}^{T}(\Delta\boldsymbol{\beta}_{t-1}^{\top}\boldsymbol{x}_{t})^{2}\), a cross term \((\bullet 2\bullet)\) coupling \(\eta_{t}\) with \(\Delta\boldsymbol{\beta}_{t-1}^{\top}\boldsymbol{\Theta}^{\top}\boldsymbol{z}_{t}\), and a cross term \((\bullet 3\bullet)\) coupling \(\eta_{t}\) with \(\Delta\boldsymbol{\beta}_{t-1}^{\top}\boldsymbol{\epsilon}_{t}\). The proof proceeds by bounding each of these three terms. **Term 1: Second Stage Regression Error.** We observe that Term \((\bullet 1\bullet)\) is the same as \(\widetilde{R}_{T}\). By Theorem 4.1, we know \(\widetilde{R}_{T}=\mathcal{O}(d^{2}\log^{2}T)\). **Term 2: Coupling of First-stage Data and Second-stage Parameter Estimation.** Now, we bound the second term using concentration inequalities for martingales. First, we observe that \(\eta_{t}w_{t}\), with \(w_{t}\triangleq\left(\boldsymbol{\beta}_{t-1}-\boldsymbol{\beta}\right)^{\top}\boldsymbol{\Theta}^{\top}\boldsymbol{z}_{t}\), is a martingale difference sequence with respect to the filtration \[\mathcal{F}_{t-1}=\sigma\left(\boldsymbol{z}_{1},\boldsymbol{\epsilon}_{1},\eta_{1},\ldots,\boldsymbol{z}_{t-1},\boldsymbol{\epsilon}_{t-1},\eta_{t-1},\boldsymbol{z}_{t}\right).\] We also note that \(w_{t}\) is \(\mathcal{F}_{t-1}\)-measurable, since \(\boldsymbol{\beta}_{t-1}\) and \(\boldsymbol{z}_{t}\) are. By the concentration property of scalar-valued martingales (Theorem I.2), we get that with probability \(1-\delta\) \[(\bullet 2\bullet)\leq\left|\sum_{t=1}^{T}\eta_{t}w_{t}\right|\leq\sqrt{2\left(1+\sigma_{\eta}^{2}\sum_{t=1}^{T}w_{t}^{2}\right)\log\left(\frac{\sqrt{1+\sigma_{\eta}^{2}\sum_{t=1}^{T}w_{t}^{2}}}{\delta}\right)}.\] Now, we focus on bounding the quantity appearing under the square root. By applying the Cauchy-Schwarz inequality and a reasoning similar to that used for Term \((\bullet 1\bullet)\), we get \(\sum_{t=1}^{T}w_{t}^{2}\leq\mathfrak{b}_{T-1}(\delta)C_{1}^{2}f(T)\). Hence, we conclude that Term \((\bullet 2\bullet)\) is \(\mathcal{O}(\sqrt{d}\log T)\), ignoring the \(\log\log\) terms. **Term 3: Coupling of First- and Second-stage Noises.** Finally, we bound Term \((\bullet 3\bullet)\), containing the correlation between the first- and second-stage noises. This term is referred to as the self-fulfilling bias (Li et al., 2021). We bound this term by splitting it into two: \[\sum_{t=1}^{T}\eta_{t}\Delta\boldsymbol{\beta}_{t-1}^{\top}\boldsymbol{\epsilon}_{t}=\underbrace{\sum_{t=1}^{T}\Delta\boldsymbol{\beta}_{t-1}^{\top}\left(\boldsymbol{\epsilon}_{t}\eta_{t}-\boldsymbol{\gamma}\right)}_{\text{Martingale Concentration Term}}+\underbrace{\sum_{t=1}^{T}\Delta\boldsymbol{\beta}_{t-1}^{\top}\boldsymbol{\gamma}}_{\text{Bias Term}}\] Here, \(\boldsymbol{\gamma}\triangleq\mathbb{E}[\eta_{t}\boldsymbol{\epsilon}_{t}]\). The first term is a summation of a martingale difference sequence and can be bounded using the concentration inequalities in Lemma F.2.
The technical challenge is to derive the sub-exponential parameters induced by \(\boldsymbol{\epsilon}_{t}\eta_{t}\) in the martingale difference, since the individual terms are products of two dependent random variables \(\boldsymbol{\epsilon}_{t}\) and \(\eta_{t}\). By applying Bernstein's inequality to the martingale difference and using \(\left\|\Delta\boldsymbol{\beta}_{t-1}\right\|_{\widehat{\mathbf{H}}_{t-1}}\leq\sqrt{\mathfrak{b}_{T-1}(\delta)}\) with probability \(1-\delta\), we obtain \[\underbrace{\sqrt{\mathfrak{b}_{T-1}(\delta)}}_{\begin{subarray}{c}\text{Estimation}\\ \mathcal{O}(\sqrt{d\log T})\end{subarray}}\underbrace{\left(C_{5}\sqrt{2d\,f(T)}+C_{6}\right)}_{\begin{subarray}{c}\text{Concentration Term}\\ \mathcal{O}(\sqrt{d\log T})\end{subarray}}\] _The Bias Term_ is the one where the correlation \(\boldsymbol{\gamma}\) appears explicitly. We bound this term (Lemma F.3) by bounding \(\sum_{t=1}^{T}\sqrt{\left\|\mathbf{G}_{\boldsymbol{z},t-1}^{-1}\right\|_{2}}\), i.e. the sum of the square roots of the reciprocals of the smallest eigenvalues of the first-stage design matrices. We reuse the upper bound on the individual terms (Lemma B.2), where we show that the minimum eigenvalue of the first-stage design matrix grows as \(\Omega(t)\). Thus, we get that \(\sqrt{\lambda_{\max}\big{(}\mathbf{G}_{\boldsymbol{z},t}^{-1}\big{)}}\) is \(\mathcal{O}(\frac{1}{\sqrt{t}})\). This leads to the following bound on the Bias Term: \[\underbrace{\sqrt{\mathfrak{b}_{T-1}(\delta)}}_{\text{Estimation}}\underbrace{\gamma L_{\widehat{\mathbf{G}}^{-1}}\left(\frac{C_{3}}{\sqrt{\lambda}}+\frac{2\sqrt{2T}}{\sqrt{\lambda_{\min}(\boldsymbol{\Sigma})}}\right)}_{\text{Correlated noise - Bias Term}}\] Thus, we conclude the proof and get that the oracle regret of O2SLS is \(\mathcal{O}(\gamma\sqrt{dT\log T}+d^{2}\log^{2}T)\). **Remark 4.4**.: _Under exogeneity and unbounded stochastic noise, the oracle regret of online linear regression is \(\mathcal{O}(d^{2}\log^{2}T)\) (Ouhamma et al., 2021). Under endogeneity and unbounded stochastic noise, O2SLS incurs an extra \(\mathcal{O}(\gamma\sqrt{dT\log T})\) term in the oracle regret. This term appears due to the correlation between the second- and the first-stage noises, and it is proportional to the degree of correlation between the noises in these two stages. Thus, the bias introduced by the correlation of noises acts as the dominant term. In the 2SLS literature, this is referred to as the self-fulfilling bias (Li et al., 2021). When the noises are independent, i.e. \(\gamma=0\), we retrieve an oracle regret of the same order as that of the exogenous case._ ## 5 Linear Bandits with Endogeneity: OFUL-IV We formulate stochastic Linear Bandits with Endogeneity (LBE) with a two-stage linear model of data generation (Eqn. (2)). Then, we propose an index-based optimistic algorithm, OFUL-IV. Our analysis shows that OFUL-IV achieves \(\mathcal{O}(d\sqrt{T}\log T)\) regret. Our experimental results show that OFUL-IV achieves lower regret and more accurate estimates than OFUL (Abbasi-Yadkori et al., 2011a). **In the bandit setting**, we observe \(\mathbf{x}_{t}\) and \(y_{t}\) depending on the arm (or intervention) \(A_{t}\in\mathcal{A}_{t}\) drawn at time \(t\in\{1,\ldots,T\}\).
\[\mathbf{x}_{t} =\mathbf{\Theta}^{\top}\mathbf{z}_{t,A_{t}}+\mathbf{\epsilon}_{t}\] (LBE-first) \[y_{t} =\mathbf{\beta}^{\top}\mathbf{x}_{t}+\eta_{t}\] (LBE-second) Here, \(y_{t}\) is the reward at round \(t\). Each arm \(a\) corresponds to _a vector of IVs_ \(\mathbf{z}_{t,a}\in\mathcal{Z}_{t}\subset\mathbb{R}^{d}\), and a vector of endogenous variables \(\mathbf{x}_{t,a}\in\mathcal{X}_{t}\subset\mathbb{R}^{d}\) (generated as per (LBE-first)). Here, \(\mathcal{X}_{t}\) and \(\mathcal{Z}_{t}\) are the sets of endogenous variables and IVs corresponding to \(\mathcal{A}_{t}\). Similar to the regression setting, we have two sources of unobserved noise: \(\mathbf{\epsilon}_{t}\) (\(\sigma_{\mathbf{\epsilon}}^{2}\mathbb{I}\)-sub-Gaussian) is an i.i.d. vector error term at round \(t\), independent of \(\mathbf{z}\), and \(\eta_{t}\) (\(\sigma_{\eta}^{2}\)-sub-Gaussian) represents all causes of \(y_{t}\) other than \(\mathbf{x}_{t}\). The true parameters \(\mathbf{\beta}\in\mathbb{R}^{d}\) and \(\mathbf{\Theta}\in\mathbb{R}^{d\times d}\) are unknown to the agent. This is an extension of the classical stochastic linear bandit (Lattimore and Szepesvari, 2020, Ch. 19). Now, we state the protocol of LBE. At each round \(t=1,2,\ldots,T\), the agent 1. Observes samples \(\mathbf{z}_{t,a}\in\mathcal{Z}_{t}\) and \(\mathbf{x}_{t,a}\in\mathcal{X}_{t}\) of contexts for all \(a\in\mathcal{A}_{t}\) 2. Chooses an arm \(A_{t}\in\mathcal{A}_{t}\) 3. Obtains the reward \(y_{t}\) computed from (LBE-second) 4. Updates the parameter estimates \(\widehat{\mathbf{\Theta}}_{t}\) and \(\mathbf{\beta}_{t}\) **OFUL-IV: Algorithm Design.** If the agent had full information in hindsight, she could infer the best arm (or intervention) in \(\mathcal{A}_{t}\) as \[a_{t}^{*}=\operatorname*{argmax}_{a\in\mathcal{A}_{t}}\mathbb{E}[\mathbf{x}_{t,a}^{\top}\mathbf{\beta}]\] We denote the corresponding variables as \(\mathbf{z}_{t}^{*}\) and \(\mathbf{x}_{t}^{*}\). Thus, choosing \(a_{t}^{*}\) can be seen as choosing \(\mathbf{z}_{t}^{*}\) and \(\mathbf{x}_{t}^{*}\). But the agent does not know them and aims to select \(\{A_{t}\}_{t=1}^{T}\) leading to minimum regret (Eqn. (4)). Now, we extend the OFUL algorithm minimising regret in linear bandits with exogeneity (Abbasi-Yadkori et al., 2011a). The core idea is that the algorithm maintains a confidence set \(\mathcal{B}_{t-1}\subseteq\mathbb{R}^{d}\) around the parameter \(\mathbf{\beta}\), which is computed only using the observed data. Then, the algorithm chooses an optimistic estimate \(\widetilde{\mathbf{\beta}}_{t-1}\) from that confidence set: \[\widetilde{\mathbf{\beta}}_{t-1}=\operatorname*{argmax}_{\mathbf{\beta}^{\prime}\in\mathcal{B}_{t-1}}\left(\max_{\mathbf{x}\in\mathcal{X}_{t}}\mathbf{x}^{\top}\mathbf{\beta}^{\prime}\right)\] Then, she chooses the action leading to \(\mathbf{x}_{t}=\operatorname*{argmax}_{\mathbf{x}\in\mathcal{X}_{t}}\mathbf{x}^{\top}\widetilde{\mathbf{\beta}}_{t-1}\), which maximises the reward according to the estimate \(\widetilde{\mathbf{\beta}}_{t-1}\). In brief, the algorithm chooses the pair \((\mathbf{x}_{t},\widetilde{\mathbf{\beta}}_{t-1})=\operatorname*{argmax}_{(\mathbf{x},\mathbf{\beta}^{\prime})\in\mathcal{X}_{t}\times\mathcal{B}_{t-1}}\mathbf{x}^{\top}\mathbf{\beta}^{\prime}\). In order to tackle endogeneity, we choose to use the O2SLS estimate \(\mathbf{\beta}_{t-1}\) computed using the data observed till \(t-1\), as sketched below.
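For concreteness, the following is a minimal sketch of how the O2SLS estimate can be maintained online from running sufficient statistics. The class name, the ridge regularisation of both stages, and the re-solve-per-step strategy are our illustrative choices, not the exact recursion of Appendix J.

```python
import numpy as np

class O2SLS:
    """Online two-stage least squares -- a minimal sketch.

    Keeps running sufficient statistics and re-solves the two
    ridge-regularised stages after each observation (z_t, x_t, y_t).
    """

    def __init__(self, d, lam=1e-3):
        self.d, self.lam = d, lam
        self.Gz = lam * np.eye(d)      # G_{z,t} = sum_s z_s z_s^T + lam * I
        self.Szx = np.zeros((d, d))    # sum_s z_s x_s^T
        self.Szy = np.zeros(d)         # sum_s z_s y_s
        self.Theta = np.zeros((d, d))  # first-stage estimate
        self.beta = np.zeros(d)        # second-stage (O2SLS) estimate

    def update(self, z, x, y):
        self.Gz += np.outer(z, z)
        self.Szx += np.outer(z, x)
        self.Szy += z * y
        # First stage: ridge regression of x on z (d regressions at once).
        self.Theta = np.linalg.solve(self.Gz, self.Szx)
        # Second stage: regress y on x_hat = Theta^T z. Note that
        # sum_s x_hat_s x_hat_s^T = Theta^T (G_z - lam * I) Theta.
        Gx = self.Theta.T @ (self.Gz - self.lam * np.eye(self.d)) @ self.Theta
        self.beta = np.linalg.solve(Gx + self.lam * np.eye(self.d),
                                    self.Theta.T @ self.Szy)
        return self.beta
```

In practice, the first-stage solve admits a rank-one (Sherman-Morrison) update; the second stage is less immediate, since \(\widehat{\mathbf{\Theta}}_{t}\) changes at every step.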
Then, we build an ellipsoid \(\mathcal{B}_{t-1}\) around \(\mathbf{\beta}_{t-1}\), such that \[\mathcal{B}_{t-1}\triangleq\left\{\mathbf{\beta}\in\mathbb{R}^{d}:\|\mathbf{\beta}_{t-1}-\mathbf{\beta}\|_{\widehat{\mathbf{H}}_{t-1}}\leq\sqrt{\mathfrak{b}_{t-1}^{\prime}(\delta)}\right\}\] and \[\mathfrak{b}_{t}^{\prime}(\delta)\triangleq 2\sigma_{\eta}^{2}\log\left(\frac{\det\left(\mathbf{G}_{\mathbf{x},t}\right)^{1/2}\lambda^{-d/2}}{\delta}\right).\] Given this confidence interval, we optimistically choose the arm \[A_{t}=\operatorname*{argmax}_{a\in\mathcal{A}_{t}}\left\langle\mathbf{x}_{t,a},\mathbf{\beta}_{t-1}\right\rangle+\sqrt{\mathfrak{b}_{t-1}^{\prime}(\delta)}\left\|\mathbf{x}_{t,a}\right\|_{\widehat{\mathbf{H}}_{t-1}^{-1}}. \tag{7}\] This arm-selection index, together with the O2SLS estimator yielding \(\mathbf{\beta}_{t-1}\), constructs OFUL-IV (Algorithm 2). **Theorem 5.1**.: _Under the same assumptions and notations as Theorems 4.1 and 4.2, Algorithm 2 incurs a regret_ \[R_{T}\leq 2\sqrt{T}\underbrace{\sqrt{\mathfrak{b}_{T-1}(\delta)}}_{\begin{subarray}{c}\text{Estimation}\\ \mathcal{O}(\sqrt{d\log T})\end{subarray}}\underbrace{\sqrt{(C_{1}^{2}+dC_{2}^{2})f(T)+C_{4}}}_{\begin{subarray}{c}\text{Second Stage Feature Norm}\\ \mathcal{O}(\sqrt{d\log T})\end{subarray}}\] _with probability \(1-\delta\) and for horizon \(T>1\)._ _Proof Sketch._ **Step 1: Optimism.** We observe that \(R_{T}=\sum_{t=1}^{T}\mathbf{\beta}^{\top}\mathbf{x}_{t}^{*}-\mathbf{\beta}^{\top}\mathbf{x}_{t}\triangleq\sum_{t=1}^{T}r_{t}\). Since \((\mathbf{x}_{t},\widetilde{\mathbf{\beta}}_{t-1})\) is optimistic in \(\mathcal{X}_{t}\times\mathcal{B}_{t-1}\), and \(\mathbf{\beta}\in\mathcal{B}_{t-1}\), we obtain \(r_{t}\leq(\widetilde{\mathbf{\beta}}_{t-1}-\mathbf{\beta})^{\top}\mathbf{x}_{t}\). **Step 2: Decomposition.** Now, we decompose the regret as \[(\widetilde{\mathbf{\beta}}_{t-1}-\mathbf{\beta})^{\top}\mathbf{x}_{t}=(\widetilde{\mathbf{\beta}}_{t-1}-\mathbf{\beta}_{t-1})^{\top}\mathbf{x}_{t}+(\mathbf{\beta}_{t-1}-\mathbf{\beta})^{\top}\mathbf{x}_{t}.\] The first term depends on the tightness of the confidence interval, while the second depends on the accuracy of the estimate \(\mathbf{\beta}_{t-1}\). **Step 3: Confidence Bound.** Now, we can decouple the impact of the parameters and the observed data in both terms using \(\left\|\widetilde{\mathbf{\beta}}_{t-1}-\mathbf{\beta}_{t-1}\right\|_{\widehat{\mathbf{H}}_{t-1}}\left\|\mathbf{x}_{t}\right\|_{\widehat{\mathbf{H}}_{t-1}^{-1}}\) and \(\left\|\mathbf{\beta}_{t-1}-\mathbf{\beta}\right\|_{\widehat{\mathbf{H}}_{t-1}}\left\|\mathbf{x}_{t}\right\|_{\widehat{\mathbf{H}}_{t-1}^{-1}}\), respectively. By construction of the optimistic confidence interval and the concentration bound of Lemma 4.1, both \(\left\|\widetilde{\mathbf{\beta}}_{t-1}-\mathbf{\beta}_{t-1}\right\|_{\widehat{\mathbf{H}}_{t-1}}\) and \(\left\|\mathbf{\beta}_{t-1}-\mathbf{\beta}\right\|_{\widehat{\mathbf{H}}_{t-1}}\) are bounded by \(\sqrt{\mathfrak{b}_{t-1}^{\prime}(\delta)}\). By the determinant-trace inequality (Lemma A.1), we get that \(\mathfrak{b}_{T-1}^{\prime}(\delta)\leq\frac{d\sigma_{\eta}^{2}}{4}\log\left(\frac{1+TL_{z}^{2}/\lambda}{\delta}\right)\).
**Final Step.** Since the regret satisfies \(R_{T}\leq\sqrt{T\sum_{t=1}^{T}r_{t}^{2}}\), we obtain \[R_{T}\leq\sigma_{\eta}\sqrt{dT\log\left(\frac{1+TL_{z}^{2}/\lambda}{\delta}\right)\left(\sum_{t=1}^{T}\left\|\mathbf{x}_{t}\right\|_{\widehat{\mathbf{H}}_{t-1}^{-1}}^{2}\right)}.\] Now, we bound the sum of the second-stage feature norms \(\sum_{t=1}^{T}\left\|\mathbf{x}_{t}\right\|_{\widehat{\mathbf{H}}_{t-1}^{-1}}^{2}\) by \((C_{1}^{2}+dC_{2}^{2})f(T)+C_{4}\) (Lemma C.3), which is \(\mathcal{O}(d\log T)\). A detailed proof is in Appendix H. Thus, we conclude that the _regret of OFUL-IV is \(\mathcal{O}(d\sqrt{T}\log T)\)_. OFUL-IV achieves regret of a similar order under endogeneity as OFUL achieves under exogeneity. **Experimental Analysis.** Now, we compare the performance of OFUL-IV and OFUL (Abbasi-Yadkori et al., 2011a) in the LBE setting. OFUL builds a confidence ellipsoid centred at \(\mathbf{\beta}_{\text{Ridge},t}\) to concentrate around \(\mathbf{\beta}\), while OFUL-IV uses O2SLS to build an accurate estimate. We deploy the experiments in Python3 on a single Intel(R) Core(TM) i7-8665U [email protected]. For each algorithm, we report the mean and standard deviation of the instantaneous regret and the mean squared error \(\left\|\mathbf{\beta}_{t}-\mathbf{\beta}\right\|^{2}\) over 100 runs. We run the algorithms with the same regularisation parameter equal to \(10^{-3}\). We denote the normal distribution with mean \(\mu\) and standard deviation \(\sigma\) by \(\mathcal{N}(\mu,\sigma)\); with \(\mathcal{N}_{n}\) we indicate its multivariate extension to \(n\) dimensions. For each experiment, we sample the true parameters in our model once according to \(\mathbf{\beta}\sim\mathcal{N}_{50}(\vec{10},\mathbb{I}_{50})\) and \(\mathbf{\Theta}_{i,j}\sim\mathcal{N}(0,1)\) for each component. Then we sample at each time \(t\) the vectors \(\mathbf{z}_{t,a}\sim\mathcal{N}_{50}(\vec{0},\mathbb{I}_{50})\), \(\mathbf{\epsilon}_{t,a}\sim\mathcal{N}_{50}(\vec{0},\mathbb{I}_{50})\), and the scalar noise \(\eta_{t,a}=\frac{1}{13}\left(\widetilde{\eta}_{t,a}+\sum_{i=1}^{12}\mathbf{\epsilon}_{t,a,i}\right)\), where \(\widetilde{\eta}_{t,a}\sim\mathcal{N}(0,1)\). The estimates obtained by OFUL-IV achieve three orders of magnitude lower error than those of OFUL (Fig. 2(b)). Thus, OFUL-IV leads to lower regret than OFUL for linear bandits with endogeneity (Fig. 2(a)). Further experimental details and results for regression are deferred to Appendix K. Figure 2: We compare the instantaneous regrets (left) of OFUL and OFUL-IV in a linear bandit setting. We show the MSE between the parameters estimated by the two algorithms and the true parameter \(\mathbf{\beta}\) (right). OFUL-IV incurs lower instantaneous regret and MSE. ## 6 Conclusions and Future Works In this paper, we study online IV regression, specifically the online 2SLS algorithm, for unbounded noise and endogenous data. We analyse the finite-time identification and oracle regrets of O2SLS. We observe that O2SLS incurs \(\mathcal{O}(d^{2}\log^{2}T)\) identification regret, which is a factor of \(d\log T\) higher than that of online linear regression under exogeneity. In contrast, O2SLS achieves \(\mathcal{O}(\|\gamma\|_{2}\sqrt{dT\log T})\) oracle regret, as the correlation between the noises in the two stages dominates the identification regret; these two are of the same order in the exogenous setting. Following that, we study stochastic linear bandits with endogeneity. We propose OFUL-IV, which uses O2SLS to estimate the model parameters.
We show that OFUL-IV achieves \(\mathcal{O}(d\sqrt{T}\log T)\) regret. We experimentally show that OFUL-IV yields more accurate estimates of the true parameter and thus lower regret. For simplicity, we consider just-identified IVs. In future work, we would like to extend our algorithms and analysis to weakly or over-identified IVs (Greene, 2003). Additionally, O2SLS and OFUL-IV work only if the IVs are already specified. There has been significant work on identifying IVs in the offline setting (Newey and Powell, 2003; Chen et al., 2020). Still, it is an open question how IVs can be identified optimally online while O2SLS is performed simultaneously.
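As a concrete illustration of the experimental protocol of Section 5, the following sketch simulates LBE data and runs the arm selection of Eq. (7), reusing the `O2SLS` sketch from Section 4. The arm-set size, the simplified confidence width (taking \(\sigma_{\eta}\approx 1\) and \(L_{z}^{2}\approx d\)), and the regret bookkeeping over realised rather than expected rewards are illustrative simplifications, not the exact setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T, lam, delta = 50, 20, 1000, 1e-3, 0.01   # K arms per round: illustrative

beta = rng.normal(10.0, 1.0, d)          # beta ~ N_50(vec(10), I)
Theta = rng.normal(0.0, 1.0, (d, d))     # Theta_ij ~ N(0, 1)

est = O2SLS(d, lam)                      # the sketch from Section 4
Gz = lam * np.eye(d)
regret = []

for t in range(1, T + 1):
    Z = rng.normal(0.0, 1.0, (K, d))                    # z_{t,a}
    E = rng.normal(0.0, 1.0, (K, d))                    # first-stage noise
    X = Z @ Theta + E                                   # (LBE-first)
    eta = (rng.normal(0.0, 1.0, K) + E[:, :12].sum(1)) / 13.0  # correlated noise

    # Index (7): exploitation plus an optimism bonus ||x||_{H^{-1}},
    # with H approximating Theta_hat^T G_z Theta_hat (+ lam*I for stability).
    H = est.Theta.T @ Gz @ est.Theta + lam * np.eye(d)
    width = np.sqrt(0.25 * d * np.log((1 + t * d / lam) / delta))
    bonus = width * np.sqrt(np.einsum('kd,dq,kq->k', X, np.linalg.inv(H), X))
    a = int(np.argmax(X @ est.beta + bonus))

    y = X[a] @ beta + eta[a]                            # (LBE-second)
    regret.append(np.max(X @ beta) - X[a] @ beta)       # instantaneous regret
    Gz += np.outer(Z[a], Z[a])
    est.update(Z[a], X[a], y)
```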
2308.04628
Tietze type extensions for absolutely continuous functions in the plane
It is an open problem whether one can always extend an absolutely continuous function (in the sense of Ashton and Doust) on a compact subset of the plane to a larger compact set. In this paper we show that this can be done for a large family of initial domains whose components consist of polygons and convex curves. An application is given to the spectral theory of $AC(\sigma)$ operators.
Ian Doust, Alan Stoneham
2023-08-08T23:40:57Z
http://arxiv.org/abs/2308.04628v1
# Tietze type extensions for absolutely continuous functions in the plane ###### Abstract It is an open problem whether one can always extend an absolutely continuous function (in the sense of Ashton and Doust) on a compact subset of the plane to a larger compact set. In this paper we show that this can be done for a large family of initial domains whose components consist of polygons and convex curves. An application is given to the spectral theory of \(AC(\sigma)\) operators. 2020 _Mathematics Subject Classification_: Primary 26B30; Secondary 47B40, 54C20. _Key words and phrases_: Functions of bounded variation, Absolutely continuous functions, \(AC(\sigma)\) operators. ## 1 Introduction A Hilbert space operator \(T\) is normal if and only if it admits a \(C(\sigma(T))\) functional calculus. Indeed, if there is any nonempty compact set \(\sigma\subseteq\mathbb{C}\) and a \(C^{*}\)-algebra isomorphism \(\Phi:C(\sigma)\to B(H)\) such that \(\Phi\) maps the identity function on \(\sigma\) to \(T\), then \(T\) is normal. In this case \(\sigma(T)\subseteq\sigma\) and the kernel of \(\Phi\) is just the set of continuous functions which vanish on \(\sigma(T)\). In [4] Doust and Ashton introduced a class of Banach space operators called \(AC(\sigma)\) operators which generalize the classes of well-bounded operators and trigonometrically well-bounded operators. In the setting of Hilbert spaces, all self-adjoint operators are well-bounded and all unitary operators are trigonometrically well-bounded, and in this context \(AC(\sigma)\) operators may be seen as an analogue of normal operators (see, for example, [12] and [5, 6, 7] for more details of these classes of operators). An operator is an \(AC(\sigma)\) operator if it admits a functional calculus for an algebra \(AC(\sigma)\) of 'absolutely continuous' functions on \(\sigma\) which was introduced in [3]. (Details of the spaces \(AC(\sigma)\) for compact sets \(\sigma\subseteq\mathbb{C}\) will be given in Section 2.) If \(T\) is an \(AC(\sigma)\) operator, then certainly \(\sigma(T)\subseteq\sigma\), but at present it remains an open problem as to whether \(T\) must in fact be an \(AC(\sigma(T))\) operator. Such questions are closely related to classical extension theorems. Suppose that \(\sigma_{0}\subseteq\sigma\) are two compact subsets of the plane. The Tietze Extension Theorem ensures that if \(f\in C(\sigma_{0})\) then there exists \(\hat{f}\in C(\sigma)\) such that \(f=\hat{f}|\sigma_{0}\) and such that \(\left\|f\right\|_{\infty}=\left\|\hat{f}\right\|_{\infty}\). The main focus of this paper concerns the open question as to whether a corresponding extension result holds for absolutely continuous functions. In both settings, the restriction map \(g\mapsto g|\sigma_{0}\) is a norm \(1\) algebra homomorphism. The extension question asks whether this map is onto. The above fact about the restriction map means that if one can extend a function on \(\sigma_{0}\) to any suitably large square or disk, then one can extend it to any compact superset. This means that the primary issue in such questions is the nature of the original domain \(\sigma_{0}\), rather than the nature of \(\sigma\). For certain classes of sets, an extension of an absolutely continuous function is always possible. This is easily seen to be the case if, for example, \(\sigma_{0}\) is a compact subset of the real line. On the other hand, as the following example shows, not every 'natural' extension of an absolutely continuous function is absolutely continuous.
**Example 1.1**.: Let \(C:[0,1]\rightarrow[0,1]\) denote the Cantor function. Let \(\sigma_{0}=\{(x,C(x))\,:\,x\in[0,1]\}\), let \(\sigma=[0,1]\times[0,1]\) and let \(f:\sigma_{0}\rightarrow\mathbb{C}\) be defined by \(f(x,y)=y\), \((x,y)\in\sigma_{0}\). Then (by the definitions in the next section) \(f\in AC(\sigma_{0})\). One might consider both the functions \(\hat{f}_{1}(x,y)=y\) and \(\hat{f}_{2}(x,y)=C(x)\) as natural continuous extensions of \(f\) to \(\sigma\), but only the first of these is absolutely continuous. A few particular extension results for absolutely continuous functions have appeared in the literature. For example, a formula which extends an absolutely continuous function on the boundary of a square to its interior was given in [4, Theorem 3.5]. The main result of this paper, Theorem 10.1, gives sufficient conditions on \(\sigma_{0}\) which ensure that absolutely continuous extensions to supersets always exist. The main theorem covers sets such as the one in Figure 1 which are made up of polygonal regions with polygonal holes and convex curves satisfying certain hypotheses. Since the sets may be made up of pieces of a somewhat different nature, the proof essentially progresses by dealing with different parts of the set separately and then patching the pieces together. This will involve proving a number of 'joining theorems'. These theorems, which are of independent interest, examine when one can say that a function which is absolutely continuous on each of two sets is then absolutely continuous on their union. We give some examples to show that even when the two sets are quite simple, such a conclusion may not hold. In the simplest situation, where \(\sigma_{0}\subseteq\sigma\subseteq\mathbb{R}\), one can always extend an absolutely continuous function on \(\sigma_{0}\) to one on \(\sigma\) without an increase in the variation norm. For the plane sets considered in this paper, it is not known whether one can always find an extension with the same norm. Certainly our methods do not produce such an extension. On the other hand we are able to at least show that the norm of the extension is controlled by some constant (depending on \(\sigma_{0}\)) times the norm of the function on \(\sigma_{0}\). In order to keep the proof as simple as possible we have generally made no attempt to optimize the bounds, although in a few places we shall note where our methods require that the constants that appear are greater than one. Figure 1: An example of a plane set \(\sigma_{0}\) which will satisfy the hypotheses of the main extension result, Theorem 10.1. There are a number of relatively simple sets for which it is not known whether extensions always exist. These include closed disks and the graph of the Cantor function considered above. In the next section we shall recall the main definitions and properties of absolutely continuous functions. The following section will introduce the classes of compact sets which appear in the main result, Theorem 10.1. Although the great majority of the paper is purely concerned with the algebras of absolutely continuous functions, we shall return in the final section to revisit the original motivating question concerning \(AC(\sigma)\) operators referred to above. Due to this, we shall work throughout with complex-valued functions. However, all the results remain true for real-valued functions.
## 2 Definitions and properties of \(AC(\sigma)\) Throughout this paper, \(\sigma_{0}\) and \(\sigma\) will denote nonempty compact subsets of the plane. We shall consider the plane to be either \(\mathbb{R}^{2}\) or \(\mathbb{C}\) as is notationally convenient, and identify the real line with the \(x\)-axis in the plane. For the moment then, fix \(\sigma\subseteq\mathbb{R}^{2}\). We shall now briefly recall the definition of the Banach algebra of functions of bounded variation on \(\sigma\) and of its subalgebra, the set of absolutely continuous functions on \(\sigma\). Further details and proofs may be found in [10]. Suppose that \(f:\sigma\to\mathbb{C}\). Given an ordered finite list of (not necessarily distinct) elements \(S=[\vec{x}_{0},\ldots,\vec{x}_{n}]\) in \(\sigma\), we set \[\operatorname{cvar}(f,S)=\sum_{k=1}^{n}|f(\vec{x}_{k})-f(\vec{x}_{k-1})|.\] The variation factor of the list \(S\), denoted \(\operatorname{vf}(S)\), is a positive integer which roughly measures the maximum number of times that the piecewise linear path joining the points in \(S\) in order crosses any line. More formally, let \(\gamma_{S}:[0,1]\to\mathbb{C}\) be a parameterization of the piecewise linear curve joining the elements in \(S\) in order. Given a line \(\ell\) in the plane, let \(L(S,\ell)=\gamma_{S}^{-1}(\ell)\subseteq[0,1]\) and let \(\operatorname{vf}(S,\ell)\) denote the number of connected components of \(L(S,\ell)\). The **variation factor of \(S\)** is then \(\operatorname{vf}(S)=\sup_{\ell}\operatorname{vf}(S,\ell)\). The main fact that we shall need in the later sections is that if \(P\) is a simple \(m\)-gon and \(S=[\vec{x}_{0},\ldots,\vec{x}_{n}]\) has \(k\) line segments \(\overline{\vec{x}_{j-1}\,\vec{x}_{j}}\) with one endpoint inside and one endpoint outside \(P\), then \(\operatorname{vf}(S)\geq\lceil k/m\rceil\). We refer the reader to [10, Section 2.1] and the appendix to [2] for further details of the properties of the variation factor. The **variation of \(f\) over \(\sigma\)** is found by taking a supremum over all such finite lists of points: \[\operatorname{var}(f,\sigma)=\sup_{S}\frac{\operatorname{cvar}(f,S)}{\operatorname{vf}(S)}.\] The space of functions of bounded variation on \(\sigma\), denoted \(BV(\sigma)\), consists of all functions \(f:\sigma\to\mathbb{C}\) for which \[\left\|f\right\|_{BV(\sigma)}=\sup_{\boldsymbol{x}\in\sigma}|f(\boldsymbol{x})|+\operatorname{var}(f,\sigma)\] is finite. Calculating \(\operatorname{var}(f,\sigma)\) precisely is often challenging, so we shall mainly be using the properties of variation in order to obtain good bounds. We recall here the most important of these properties. Further details can be found in [10]. **Theorem 2.1**.: _Suppose that \(\sigma\) is a nonempty compact subset of \(\mathbb{R}^{2}\). Then_ 1. \(BV(\sigma)\) _is a Banach algebra under the norm_ \(\left\|\cdot\right\|_{BV(\sigma)}\)_._ 2. \(BV(\sigma)\) _contains the (restrictions of) complex polynomials in two real variables._ 3. _If_ \(\sigma=[a,b]\subseteq\mathbb{R}\) _then_ \(BV(\sigma)\) _is the usual classical space of functions of bounded variation, and_ \(\operatorname{var}(f,[a,b])\) _is just the usual measure of the variation of a function on the interval_ \([a,b]\)_._ 4. 
_If_ \(\phi\) _is an invertible affine transformation of the plane and_ \(\sigma^{\prime}=\phi(\sigma)\)_, then_ \(\Phi(f)=f\circ\phi^{-1}\) _is an isometric isomorphism from_ \(BV(\sigma)\) _to_ \(BV(\sigma^{\prime})\)_._ **Definition 2.2**.: The space \(AC(\sigma)\) of **absolutely continuous functions** on \(\sigma\) is the closure of the complex polynomials in two real variables in \(BV(\sigma)\). We record some of the most important properties of \(AC(\sigma)\) spaces. **Theorem 2.3**.: _Suppose that \(\sigma\) is a nonempty compact subset of \(\mathbb{R}^{2}\)._ 1. _If_ \(\sigma=[a,b]\subseteq\mathbb{R}\) _then_ \(AC(\sigma)\) _is the usual classical space of absolutely continuous functions._ 2. _(Affine invariance) If_ \(\phi\) _is an invertible affine transformation of the plane and_ \(\sigma^{\prime}=\phi(\sigma)\)_, then_ \(\Phi(f)=f\circ\phi^{-1}\) _is an isometric isomorphism from_ \(AC(\sigma)\) _to_ \(AC(\sigma^{\prime})\)_._ 3. _If_ \(\sigma_{0}\) _is a nonempty compact subset of_ \(\sigma\) _and_ \(f\in AC(\sigma)\)_, then_ \(f|\sigma_{0}\in AC(\sigma_{0})\) _with_ \(\left\|f|\sigma_{0}\right\|_{BV(\sigma_{0})}\leq\left\|f\right\|_{BV(\sigma)}\)_._ 4. _If_ \(\sigma_{x}\) _denotes the projection of_ \(\sigma\) _onto the_ \(x\)_-axis, and_ \(f\in AC(\sigma_{x})\)_, then_ \(\hat{f}:\sigma\to\mathbb{C}\) _defined by_ \(\hat{f}(x,y)=f(x)\) _is in_ \(AC(\sigma)\) _and satisfies_ \(\|\hat{f}\|_{BV(\sigma)}\leq\|f\|_{BV(\sigma_{x})}\)_._ When there is no risk of confusion, when \(f:\sigma\to\mathbb{C}\) and \(\sigma_{0}\subseteq\sigma\) we shall often write \(\left\|f\right\|_{BV(\sigma_{0})}\) for the norm of the restriction of \(f\) to the smaller set. An important fact about this sense of absolute continuity is that it is a 'local property'. We shall say that a set \(U\) is a compact neighbourhood of a point \(\boldsymbol{x}\in\sigma\) if there exists a bounded open set \(V\subseteq\mathbb{R}^{2}\) containing \(\boldsymbol{x}\) such that \(U=\sigma\cap\overline{V}\). **Theorem 2.4** (Patching Lemma [10, Theorem 4.1]).: _Suppose that \(\sigma\) is a nonempty compact subset of \(\mathbb{R}^{2}\) and that \(f:\sigma\to\mathbb{C}\). Then \(f\in AC(\sigma)\) if and only if for every point \(\boldsymbol{x}\in\sigma\) there exists a compact neighbourhood \(U_{\boldsymbol{x}}\) of \(\boldsymbol{x}\) such that \(f|U_{\boldsymbol{x}}\in AC(U_{\boldsymbol{x}})\)._ We note that \(AC(\sigma)\) always contains a relatively rich collection of functions. It is shown in [11, Section 5] that the space of functions which admit a \(C^{1}\) extension to an open neighbourhood of \(\sigma\) is always dense in \(AC(\sigma)\), as is the space \(\mathrm{CTPP}(\sigma)\) of functions which are continuous and triangularly piecewise planar. On the other hand, while Lipschitz functions on a real interval are always absolutely continuous, this result does not extend to arbitrary compact subsets of the plane [10, Example 3.13]. ## 3 Classes of sets The aim of this paper is to identify the largest class of compact sets in the plane for which we can prove that any absolutely continuous function on such a set can always be extended to be absolutely continuous on any larger compact set. This class contains all finite unions of polygonal regions, as well as most sets which are finite unions of convex curves. Given a set \(A\subseteq\mathbb{R}^{2}\), we shall denote its closure by \(\operatorname{cl}(A)\), its interior by \(\operatorname{int}(A)\) and its boundary by \(\partial A\).
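Before describing these classes, we pause for a brief computational aside on the quantities \(\operatorname{cvar}(f,S)\) and \(\operatorname{vf}(S,\ell)\) from Section 2, since they drive all of the variation estimates below. The following is an informal sketch (Python with NumPy assumed; the helper names are ours), not part of the formal development. Computing \(\operatorname{vf}(S)\) itself requires a supremum over all lines, which is not attempted here.

```python
import numpy as np

def cvar(f, S):
    """cvar(f, S): sum of |f(x_k) - f(x_{k-1})| along the list S."""
    vals = [f(x) for x in S]
    return sum(abs(vals[k] - vals[k - 1]) for k in range(1, len(vals)))

def vf_line(S, p, n, tol=1e-9):
    """vf(S, l): number of connected components of the intersection of the
    piecewise linear path through S with the line l = {x : <x - p, n> = 0}."""
    d = [float(np.dot(np.asarray(x, float) - p, n)) for x in S]
    hits = []  # parameter sub-intervals of [0, len(S)-1] lying on the line
    for k in range(len(S) - 1):
        a, b = d[k], d[k + 1]
        if abs(a) < tol and abs(b) < tol:
            hits.append((float(k), k + 1.0))   # whole segment on the line
        elif abs(a) < tol:
            hits.append((float(k), float(k)))
        elif abs(b) < tol:
            hits.append((k + 1.0, k + 1.0))
        elif a * b < 0:
            s = k + a / (a - b)                # transversal crossing
            hits.append((s, s))
    hits.sort()
    comps, end = 0, -np.inf
    for lo, hi in hits:                        # merge overlaps, count components
        if lo > end + tol:
            comps += 1
        end = max(end, hi)
    return comps

S = [(0, 0), (1, 1), (2, 0), (3, 1)]           # a zig-zag list of points
f = lambda q: q[1]                             # f(x, y) = y
print(cvar(f, S))                              # 3
print(vf_line(S, np.zeros(2), np.array([0.0, 1.0])))           # x-axis: 2
print(vf_line(S, np.array([0.0, 0.5]), np.array([0.0, 1.0])))  # y = 1/2: 3
```

For this zig-zag list the supremum over lines is attained by \(y=1/2\), so \(\operatorname{vf}(S)=3\), and the list witnesses \(\operatorname{var}(f,\sigma)\geq\operatorname{cvar}(f,S)/\operatorname{vf}(S)=1\) for any compact \(\sigma\) containing \(S\).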
### Polygonal regions By a **polygonal region** we mean a connected subset of the plane with (simple) polygonal boundary. A set \(W\) is a (polygonal) **window** in a polygonal region \(P\) if it is the interior of a polygon \(P^{\prime}\) which lies in \(P\). Let \(PW^{+}\) denote the collection of all compact subsets of the plane which are finite unions of polygonal regions with finitely many windows removed. We allow here that a window may comprise the entire interior of a polygonal region. Note that for a set in \(PW^{+}\), one may have windows inside polygons inside windows inside polygons, and so forth (see Figure 2). Every set in \(PW^{+}\) is a union of a finite number of polygonal regions and line segments. While not all such sets are in \(PW^{+}\) (for example a single line segment), our main theorem will cover these cases. Let \(R\) be a rectangular region containing \(\sigma_{0}\in PW^{+}\) in its interior. Then \(\hat{\sigma}_{0}=\sigma_{0}\cup\partial R\) is also in \(PW^{+}\). The region \(R\setminus\hat{\sigma}_{0}\) can always be triangulated. Given \(f\in AC(\sigma_{0})\) our aim will be to inductively extend \(f\) to be absolutely continuous, first on the boundary of each of the triangles, and then on the whole triangular region. This will require that we first show that we can extend from a subset of a convex curve to the whole curve, and then (in Section 6) from the boundary of a polygon to its interior. We shall then show that the final function obtained is absolutely continuous on the whole of \(R\). This will require us to show (in Section 7) that we can join absolutely continuous functions defined on polygonal regions. Figure 2: A set \(\sigma\in PW^{+}\) inside a rectangle \(R\). The blue region \(R\setminus(\sigma\cup\partial R)\) can always be triangulated. ### Almost polygonally inscribed curves The class of polygonally inscribed curves, or PIC sets, was introduced in [1]. These are compact connected sets which can be written as a finite union of convex curves with certain constraints on how the curves can meet. For our main theorem we need to relax this condition slightly to deal with the union of a convex curve and a polygonal region. We cannot however completely eliminate all restrictions of how the curves may meet since, as we shall note in Section 8, a function can be absolutely continuous on each of two convex curves, but not on their union. **Definition 3.1**.: A (convex) **polygonal mosaic** in the plane is a finite collection \(\mathcal{M}\) of convex polygonal regions such that 1. \(\bigcup_{P\in\mathcal{M}}P\) is connected; 2. if \(P\) and \(Q\) are distinct elements of \(\mathcal{M}\), then \(P\cap Q\) is either * empty * a single point which is a vertex of both \(P\) and \(Q\), or * a line segment. **Definition 3.2**.: Let \(P\) be a convex polygonal region. A set \(c\subseteq P\) is a **convex curve in \(P\)** if it is a differentiable convex curve joining two vertices of \(P\) which only touches the boundary of \(P\) at those points. **Definition 3.3**.: A nonempty compact connected set \(\sigma\) is a **polygonally inscribed curve** if there exists a polygonal mosaic \(\mathcal{M}=\{P_{i}\}_{i=1}^{M}\) whose union contains \(\sigma\) and such that for each \(i\), \(c_{i}=\sigma\cap P_{i}\) is a convex curve in \(P_{i}\). The curves \(c_{i}\) will be called the **components** of \(\sigma\). The collection of all polygonally inscribed curves in the plane will be denoted by PIC. 
**Definition 3.4**.: We shall say that a connected compact set \(\sigma\subseteq\mathbb{R}^{2}\) is an **almost polygonally inscribed curve**, written \(\sigma\in\text{APIC}\), if there exists a polygonal mosaic \(\mathcal{M}=\{P_{i}\}_{i=1}^{M}\) whose union contains \(\sigma\) and such that for each \(i\), \(C_{i}=\sigma\cap P_{i}\) is nonempty and is the union of at most one convex curve in \(P_{i}\) and a (possibly empty) subset of the sides of \(P_{i}\). Given \(\sigma\in\text{APIC}\) with a polygonal mosaic \(\mathcal{M}\), the **components** of \(\sigma\) determined by \(\mathcal{M}\) are the convex curves and line segments determined by \(\mathcal{M}\) whose union is \(\sigma\). Once split into components, every PIC or APIC set can be considered as the drawing of a planar graph, with the components giving the edges of the graph, and the intersections of the components the vertices. The graph is not uniquely determined by the set, since such a set can be decomposed into components in many different ways. It is worth noting the differences between PIC sets and APIC sets. Two components of a PIC set can meet tangentially, but only with strictly opposite convexity. The new behaviour permitted for an APIC set is that a curve and a line may meet tangentially at a vertex. Such a vertex will be called a **line tangential vertex**. **Definition 3.5**.: Let SPIC denote the class of all nonempty compact subsets of \(\mathbb{R}^{2}\) which are subsets of some almost polygonally inscribed curve. The components of sets in PIC or APIC are differentiable compact convex curves. A differentiable convex curve \(c\) joining \(\boldsymbol{x}\) to \(\boldsymbol{y}\) is said to be **projectable** if the orthogonal projection of \(c\) onto the line through \(\boldsymbol{x}\) and \(\boldsymbol{y}\) is just the line segment joining those points. As is shown in [1], any set in PIC can be broken into projectable component curves. This result can be extended to sets in APIC. **Lemma 3.6**.: _Suppose that \(\sigma\in\mathrm{APIC}\). Then there exists a polygonal mosaic \(\mathcal{M}=\{P_{i}\}_{i=1}^{M}\) for \(\sigma\) such that for all \(i\), if \(\sigma\) contains a convex curve in \(P_{i}\) this curve is projectable._ Proof.: Let \(\mathcal{M}_{0}=\{Q_{i}\}_{i=1}^{M_{0}}\) be a polygonal mosaic for \(\sigma\). Suppose that \(1\leq i\leq M_{0}\) and that \(\sigma\) contains a nonprojectable convex curve \(c\) in \(Q_{i}\). As is shown in [1, Section 5] we can recursively subdivide \(Q_{i}\) into (finitely many) smaller convex polygons so that whenever \(c\) passes through one of the subpolygons, this part of \(c\) is projectable (see Figure 4). We can then replace \(Q_{i}\) in the mosaic with the subpolygons which contain part of \(c\), as well as any subpolygons (such as \(P_{4}\) in Figure 4) with sides containing parts of \(\sigma\). Doing this for each such nonprojectable curve will give the required mosaic. Figure 3: \(\sigma_{1}\in\mathrm{PIC}\), with a polygonal mosaic shown. The sets \(\sigma_{2},\sigma_{3}\) are in \(\mathrm{APIC}\setminus\mathrm{PIC}\) since they contain line tangential vertices. The set \(\sigma_{4}\) is not in APIC due to the curves meeting at a vertex with the same convexity. Note that any finite union of disjoint APIC sets is in SPIC since we can connect the components with line segments. On the other hand, the union of two PIC sets need not be in SPIC since one may get two convex curves meeting tangentially with the same convexity (as in \(\sigma_{4}\) in Figure 3).
The following lemma will at least allow us to add line segments to an SPIC set. **Lemma 3.7**.: _Suppose that \(\sigma_{0}\in\mathrm{SPIC}\) and that \(\ell\) is a line segment in \(\mathbb{R}^{2}\). Then \(\sigma=\sigma_{0}\cup\ell\in\mathrm{SPIC}\)._ Proof.: By definition, there exists \(\tau_{0}\in\mathrm{APIC}\) with \(\sigma_{0}\subseteq\tau_{0}\). Let \(\mathcal{M}=\{P_{i}\}_{i=1}^{n}\) be a polygonal mosaic for \(\tau_{0}\) with \(c_{i}\) denoting the convex curve in \(P_{i}\). Without loss we may assume that \(\tau_{0}\) contains all the sides of all the polygons in \(\mathcal{M}\) since \(\mathcal{M}\) is also a polygonal mosaic for this larger set. Let \(M=\cup_{i=1}^{n}P_{i}\) and let \(\hat{\ell}\) denote the full line in \(\mathbb{R}^{2}\) containing \(\ell\). If \(\hat{\ell}\) does not intersect \(M\), then one can certainly connect \(\ell\) to \(M\) with a line segment \(\ell^{\prime}\), and then find suitable polygons containing \(\ell\) and \(\ell^{\prime}\) so that adding these polygons to \(\mathcal{M}\) gives a polygonal mosaic for \(\tau=\tau_{0}\cup\ell\cup\ell^{\prime}\). Thus \(\tau\) is an APIC set containing \(\sigma\) and hence \(\sigma\in\mathrm{SPIC}\). Otherwise \(\hat{\ell}\) intersects at least one polygon in \(\mathcal{M}\). Let \(\ell^{\prime}\) denote the smallest line segment containing \(\ell\) for which \(\hat{\ell}\cap M=\ell^{\prime}\cap M\), and let \(\tau=\tau_{0}\cup\ell^{\prime}\). Our aim now is to adapt \(\mathcal{M}\) so that it is a polygonal mosaic for \(\tau\). First choose, if necessary, convex polygons \(Q_{1},\ldots,Q_{m}\) around any parts of \(\ell^{\prime}\) which are disjoint from \(M\). Next, suppose that \(1\leq i\leq n\) and that \(\ell_{i}=\ell^{\prime}\cap P_{i}\neq\emptyset\). If \(\ell_{i}\) is a subset of the boundary of \(P_{i}\) there will be nothing to be done as \(\ell_{i}\subset\tau_{0}\). We may suppose then that \(\ell_{i}\) intersects the interior of \(P_{i}\). Let \(c_{i}^{\circ}=c_{i}\cap\mathrm{int}(P_{i})\). There are three cases to consider. Figure 4: Lemma 3.6: Subdividing a polygon \(Q_{i}\) to produce a finer mosaic containing only projectable convex curves and sides of polygons. Here \(\sigma\cap Q_{i}=c\cup\ell_{1}\cup\ell_{2}\). The polygon \(Q_{i}\) would be replaced here with \(P_{1},P_{2},P_{3},P_{4}\). 1. \(\ell_{i}\cap c_{i}^{\circ}=\emptyset\). In this case, \(\ell_{i}\) splits \(P_{i}\) into two convex polygons, and in this case we will replace \(P_{i}\) in \(\mathcal{M}\) with the smaller polygon containing \(c_{i}\). 2. \(\ell_{i}\) intersects \(c_{i}^{\circ}\) at a single point \(\boldsymbol{z}\). In this case, as in the proof of the Partition Lemma [1, Lemma 1], we may replace \(P_{i}\) with two smaller convex polygons \(P_{i,1}\) and \(P_{i,2}\) whose union contains \(c_{i}\), and so that \(\ell_{i}\) lies in the union of the boundaries of these polygons. (See Figure 5.) 3. \(\ell_{i}\) intersects \(c_{i}^{\circ}\) in more than one point. In this case we can partition \(c_{i}\) into two or three parts as in Figure 6. By making these replacements for each polygon in \(\mathcal{M}\) which meets \(\hat{\ell}\), and by adding in \(Q_{1},\ldots,Q_{m}\) we will produce a polygonal mosaic for \(\tau\), which shows that \(\sigma\in\text{SPIC}\). (See Figure 7.) Figure 5: The cases where \(\ell_{i}\) meets \(c_{i}\) once. Figure 6: The cases where \(\ell_{i}\) meets \(c_{i}\) more than once. 
One may also have \(\ell_{i}\) intersecting \(c_{i}\) on a line segment whose endpoints are both in the interior of \(P_{i}\). ## 4 Extending from the real line The first, and rather elementary, step in the proof of the main theorem is to show that if \(\sigma_{0}\) is any compact subset of the real line, and \(f\in AC(\sigma_{0})\), then \(f\) can be extended to any larger compact subset. Suppose then that \(\sigma_{0}\subseteq\sigma\subseteq\mathbb{R}\). Let \(a_{0}=\min\sigma_{0}\) and let \(b_{0}=\max\sigma_{0}\). Let \(J=[a,b]\) denote a compact interval containing \(\sigma\) in its interior. Given \(f\in AC(\sigma_{0})\) define the extension \(\iota(f):J\to\mathbb{C}\) by making it constant on \([a,a_{0}]\) and \([b_{0},b]\) and piecewise linear on the open intervals of \([a_{0},b_{0}]\setminus\sigma_{0}\). **Lemma 4.1** ([3, Corollary 2.14]).: _The map \(\iota\) is a linear isometry from \(AC(\sigma_{0})\) to \(AC(J)\)._ Extending from a subset of the line to a larger subset of the plane is straightforward. By affine invariance, the following result can be applied to any function which is constant on a family of parallel lines. **Theorem 4.2**.: _Suppose that \(\sigma_{0}\) is a nonempty compact subset of \(\mathbb{R}\) and that \(\sigma\subseteq\mathbb{R}^{2}\) is a compact superset of \(\sigma_{0}\). If \(f\in AC(\sigma_{0})\) then there exists an extension \(\hat{f}\in AC(\sigma)\) with \(\|\hat{f}\|_{BV(\sigma)}=\|f\|_{BV(\sigma_{0})}\)._ Proof.: Combining Lemma 4.1 and Theorem 2.3(iv) gives an extension \(\hat{f}\) to \(\sigma\) with norm at most \(\|f\|_{BV(\sigma_{0})}\). That the norms must be equal follows from Theorem 2.3(iii). Figure 7: Lemma 3.7: adapting a mosaic for the addition of an extra line segment. The top diagram shows a line segment \(\ell\) (in green) and a SPIC set \(\sigma_{0}\) (in red). A polygonal mosaic for \(\sigma_{0}\) is shown by the dashed polygons. The lower diagram shows the addition of three new polygons (in green) and splitting of two polygons into smaller pieces to form a polygonal mosaic for \(\sigma=\sigma_{0}\cup\ell\). ## 5 Convex curves The results for subsets of the real line can be extended to subsets of curves as long as these curves are well enough behaved. The following theorem contains the content of Section 8 of [1]. (We remind the reader that a projectable curve is differentiable except at its endpoints.) **Theorem 5.1**.: _Suppose that \(c\) is a projectable convex curve in the plane. Then there exists a Banach algebra isomorphism \(\Phi:AC(c)\to AC[0,1]\) with \(\|\Phi\|\leq 2\) and \(\|\Phi^{-1}\|=1\)._ More specifically, if \(h:[0,1]\to c\) is a differentiable parameterization of \(c\), then we can take the isomorphism to be \(\Phi_{h}(f)=f\circ h\). **Lemma 5.2**.: _Suppose that \(c\) is a projectable convex curve parameterized by a differentiable function \(h:[0,1]\to c\), and that \(\emptyset\neq\sigma_{0}\subseteq c\). Let \(\tau_{0}=h^{-1}(\sigma_{0})\subseteq[0,1]\). Then \(\Phi_{0}(f)=f\circ h\) defines an isomorphism between \(AC(\sigma_{0})\) and \(AC(\tau_{0})\)._ Proof.: First note that since \(\Phi_{h}(f)=f\circ h\) defines an isomorphism between \(AC(c)\) and \(AC[0,1]\), [2, Theorem 4.7] implies that \(h\in AC[0,1]\) and \(h^{-1}\in AC(c)\). But by Theorem 2.3(iii), this means that \(h|\tau_{0}\in AC(\tau_{0})\) and \(h^{-1}|\sigma_{0}\in AC(\sigma_{0})\). It is an easy consequence of [2, Theorem 3.11] that \(\Phi_{0}\) is a Banach algebra isomorphism from \(BV(\sigma_{0})\) to \(BV(\tau_{0})\).
Applying [2, Theorem 4.7] again, we can then deduce that \(\Phi_{0}\) acts as an isomorphism from \(AC(\sigma_{0})\) to \(AC(\tau_{0})\). **Theorem 5.3**.: _Suppose that \(c\) is a projectable convex curve, and that \(\emptyset\neq\sigma_{0}\subseteq c\). If \(f\in AC(\sigma_{0})\) then there exists an extension \(\hat{f}\in AC(c)\) with \(\|\hat{f}\|_{BV(c)}\leq 2\left\|f\right\|_{BV(\sigma_{0})}\)._ Proof.: Suppose that \(f\in AC(\sigma_{0})\). Let \(\Phi_{0}\) and \(h\) be as in Lemma 5.2. Let \(g=\Phi_{0}(f)\in AC(h^{-1}(\sigma_{0}))\). Lemma 4.1 allows us to extend \(g\) to \(\hat{g}\in AC[0,1]\), and so \(\hat{f}=\Phi_{h}^{-1}(\hat{g})\in AC(c)\). The norm bounds follow from the properties of \(\Phi_{h}\). One can extend Theorem 4.2 to cover the case where \(\sigma_{0}\) is a nonempty compact subset of a projectable curve. For the moment however, we shall just give a special case which we shall need in the later sections. In this result, the curve \(c\) need not be projectable. **Lemma 5.4**.: _Suppose that \(T\) is a triangle in \(\mathbb{R}^{2}\) with vertices at \(\boldsymbol{x}\), \(\boldsymbol{y}\) and \(\boldsymbol{z}\). Let \(c\) be a differentiable convex curve joining \(\boldsymbol{x}\) to \(\boldsymbol{y}\). Then every \(f\in AC(c)\) admits an extension \(\hat{f}\in AC(T)\) with \(\|\hat{f}\|_{BV(T)}\leq 2\left\|f\right\|_{BV(c)}\)._ Proof.: Let \(\boldsymbol{x}^{\prime}=(0,0)\), \(\boldsymbol{y}^{\prime}=(1,0)\) and \(\boldsymbol{z}^{\prime}=(\frac{1}{2},1)\) and let \(h\) be the affine transformation which maps \(\boldsymbol{x}\) to \(\boldsymbol{x}^{\prime}\), \(\boldsymbol{y}\) to \(\boldsymbol{y}^{\prime}\) and \(\boldsymbol{z}\) to \(\boldsymbol{z}^{\prime}\). Let \(T^{\prime}=h(T)\) and let \(c^{\prime}=h(c)\). Then \(c^{\prime}\) is a projectable convex curve which we may assume is the graph of the convex function \(k:[0,1]\to\mathbb{R}\). As above, we may define \(g\in AC[0,1]\) by \(g(t)=f(h^{-1}(t,k(t)))\), and then setting \(\hat{g}(x,y)=g(x)\) gives an extension of \(g\) to all of \(T^{\prime}\). Now setting \(\hat{f}=\hat{g}\circ h\) gives an absolutely continuous extension of \(f\) to \(T\). The bounds follow from Theorem 2.3(ii) and Theorem 4.2. ## 6 Filling in polygons The other main building blocks of the sets to which the main theorem will apply are polygonal regions of the plane. The first step is to show that one may always extend an absolutely continuous function defined on the boundary of such a set into its interior. In the case where the polygonal region is the unit square \(S=[0,1]\times[0,1]\), one can write an explicit formula for an extension. This result, with a slightly different bound, is [4, Theorem 3.5]. **Theorem 6.1**.: _Suppose that \(\sigma_{0}\) is the boundary of the unit square \(\sigma=S\) and that \(f\in AC(\sigma_{0})\). Then \(f\) admits an extension \(\hat{f}\in AC(\sigma)\) with_ \[\|\hat{f}\|_{BV(\sigma)}\leq 10\left\|f\right\|_{AC(\sigma_{0})}.\] Proof.: The restrictions of \(f\) to the four sides of \(S\) are absolutely continuous: \[b(x)=f(x,0),\qquad t(x)=f(x,1),\qquad\ell(y)=f(0,y),\qquad r(y)=f(1,y).\] For \((x,y)\in S\) let \[\hat{f}(x,y)=b(x)+\ell(y)-b(0)+(b(0)-b(1)+r(y)-\ell(y))x+(\ell(0)-\ell(1)+t(x)-b(x))y+(b(1)-b(0)+t(0)-t(1))xy. \tag{1}\] It is readily verified that \(\hat{f}\) is indeed an extension of \(f\). Further, using Theorem 4.2, it is clear that \(\hat{f}\in AC(\sigma)\).
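(As an informal numerical aside, one can confirm that (1) restricts correctly to the four edges; the sketch below, with an arbitrary illustrative choice of boundary data and NumPy assumed, checks this for \(f(x,y)=xy+\cos(\pi x)\).)

```python
import numpy as np

# Boundary data on the unit square for f(x, y) = x*y + cos(pi*x); the four
# edge functions automatically agree at the corners.
b = lambda x: x * 0 + np.cos(np.pi * x)   # f(x, 0)
t = lambda x: x + np.cos(np.pi * x)       # f(x, 1)
l = lambda y: y * 0 + 1.0                 # f(0, y) = cos(0)
r = lambda y: y - 1.0                     # f(1, y) = y + cos(pi)

def f_hat(x, y):
    # Formula (1) from the proof of Theorem 6.1.
    return (b(x) + l(y) - b(0)
            + (b(0) - b(1) + r(y) - l(y)) * x
            + (l(0) - l(1) + t(x) - b(x)) * y
            + (b(1) - b(0) + t(0) - t(1)) * x * y)

s = np.linspace(0.0, 1.0, 101)
assert np.allclose(f_hat(s, 0 * s), b(s))      # bottom edge
assert np.allclose(f_hat(s, 0 * s + 1), t(s))  # top edge
assert np.allclose(f_hat(0 * s, s), l(s))      # left edge
assert np.allclose(f_hat(0 * s + 1, s), r(s))  # right edge
```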
Note that \(\hat{f}\) can also be written as \[\hat{f}(x,y)=b(0)(x-1)(1-y)-b(1)x(1-y)+t(0)(x-1)y-t(1)xy+b(x)(1-y)+\ell(y)(1-x)+r(y)x+t(x)y \tag{2}\] and so \(\|\hat{f}\|_{\infty}\leq 8\left\|f\right\|_{\infty}\). Using Lemma 2.7 and Theorem 2.16 from [10], \[\operatorname{var}((b(0)-b(1))x,\sigma)\leq\operatorname{var}(b,[0,1])\operatorname{var}(x,\sigma)=\operatorname{var}(b,[0,1])\] \[\operatorname{var}(r(y)x,\sigma)\leq\operatorname{var}(r(y),\sigma)\left\|x\right\|_{\infty}+\left\|r\right\|_{\infty}\operatorname{var}(x,\sigma)=\left\|r\right\|_{BV[0,1]}\] while \[\operatorname{var}((b(1)-b(0)+t(0)-t(1))xy,\sigma)\leq(\operatorname{var}(b,[0,1])+\operatorname{var}(t,[0,1]))\operatorname{var}(xy,\sigma)\] \[\leq(\operatorname{var}(b,[0,1])+\operatorname{var}(t,[0,1]))(\left\|x\right\|_{\infty}\operatorname{var}(y,\sigma)+\operatorname{var}(x,\sigma)\left\|y\right\|_{\infty})\] \[=2(\operatorname{var}(b,[0,1])+\operatorname{var}(t,[0,1])).\] Using similar estimates for the other terms in (1), this shows that \[\operatorname{var}(\hat{f},\sigma)\leq\operatorname{var}(b,[0,1])+\operatorname{var}(\ell,[0,1])+\left\|r\right\|_{BV[0,1]}+\left\|\ell\right\|_{BV[0,1]}+\operatorname{var}(\ell,[0,1])+\left\|t\right\|_{BV[0,1]}+\left\|b\right\|_{BV[0,1]}+2(\operatorname{var}(b,[0,1])+\operatorname{var}(t,[0,1]))\leq 4\operatorname{var}(f,\sigma_{0})+\left\|b\right\|_{BV[0,1]}+\left\|t\right\|_{BV[0,1]}+\left\|\ell\right\|_{BV[0,1]}+\left\|r\right\|_{BV[0,1]}.\] In the proof of Theorem 3.4 of [4] it was shown that \[\left\|b\right\|_{BV[0,1]}+\left\|t\right\|_{BV[0,1]}+\left\|\ell\right\|_{BV[0,1]}+\left\|r\right\|_{BV[0,1]}\leq 2\left\|f\right\|_{BV(\sigma_{0})}.\] Combining all of this gives \[\left\|\hat{f}\right\|_{BV(\sigma)}=\left\|\hat{f}\right\|_{\infty}+\operatorname{var}(\hat{f},\sigma)\leq 8\left\|f\right\|_{\infty}+4\operatorname{var}(f,\sigma_{0})+2\left\|f\right\|_{BV(\sigma_{0})}\leq 10\left\|f\right\|_{BV(\sigma_{0})}.\qed\] The second author [13] obtained a slightly better bound for an extension from the boundary of a triangle to its interior. As the construction is a little more complicated than the one for a square, we just record the result here. **Theorem 6.2** ([13, Theorem 6.3.6]).: _Suppose that \(\sigma_{0}\) is the boundary of a triangular region \(\sigma\) in the plane. If \(f\in AC(\sigma_{0})\) then \(f\) admits an extension \(\hat{f}\in AC(\sigma)\) with_ \[\left\|\hat{f}\right\|_{BV(\sigma)}\leq 7\left\|f\right\|_{AC(\sigma_{0})}.\] It is likely that the constants in both cases for the bounds on the extensions are not sharp, even for the explicit extensions constructed. It is known that the construction in the proof of Theorem 6.1 may produce an extension with a larger norm than the initial function. **Example 6.3**.: Consider the following example of the construction from Theorem 6.1. If \(b(x)=t(x)=x(1-x)\) and \(\ell(y)=r(y)=y(1-y)\), then \(\hat{f}(x,y)=x(1-x)+y(1-y)\). It is easy to check that \(\|\hat{f}\|_{BV(\sigma)}=6\). Now, as in the proof in Section 2.6 of [10], \(\operatorname{var}(f,\sigma_{0})\) is at most twice the parameterized variation of \(f\) around the edges of the square, so \(\left\|f\right\|_{BV(\sigma_{0})}\leq 1+2=3\). Using the results of [9] (see Theorem 6.4), one can obtain extensions to the region bounded by any polygon. **Theorem 6.4**.: _Suppose that \(\sigma\) is a polygonal region in \(\mathbb{R}^{2}\) with boundary \(\sigma_{0}\). 
Then there exists a homeomorphism \(h:\mathbb{R}^{2}\to\mathbb{R}^{2}\), made up of the composition of locally piecewise affine maps, such that_ * \(h(\sigma)\) _is the square_ \(S=[0,1]\times[0,1]\)_,_ * \(AC(\sigma_{0})\) _is isomorphic to_ \(AC(\partial S)\)_,_ * \(AC(\sigma)\) _is isomorphic to_ \(AC(S)\)_._ _In each case the algebra isomorphism is given by \(\Phi(g)=g\circ h^{-1}\)._ **Theorem 6.5**.: _Suppose that \(\sigma\) is a polygonal region in \(\mathbb{R}^{2}\) with boundary \(\sigma_{0}\), and that \(f\in AC(\sigma_{0})\). Then \(f\) admits an extension \(\hat{f}\in AC(\sigma)\) with_ \[\|\hat{f}\|_{BV(\sigma)}\leq K_{\sigma}\left\|f\right\|_{BV(\sigma_{0})}\] _where \(K_{\sigma}\) only depends on \(\sigma\)._ Proof.: Let \(h\) and \(\Phi\) be the maps guaranteed by Theorem 6.4. Then \(g=\Phi(f)\in AC(\partial S)\) and so by Theorem 6.1, \(g\) admits an extension \(\hat{g}\in AC(S)\). The form of \(\Phi\) then ensures that \(\hat{f}=\Phi^{-1}(\hat{g})\) is an extension of \(f\). Finally \(\|\hat{f}\|\leq\|\Phi^{-1}\|\left\|\hat{g}\right\|\leq 10\left\|\Phi^{-1}\right\|\left\|\Phi\right\|\left\|f\right\|\), so we can take \(K_{\sigma}=10\left\|\Phi^{-1}\right\|\left\|\Phi\right\|\), which only depends on \(\sigma\). Examining the proof of Theorem 6.4 in [9] shows that the constant \(K_{\sigma}\) in the above proof actually only depends on the number of sides of \(\sigma\). ## 7 Joining theorems Many of the later constructions will require that an extension be separately constructed on different parts of the larger set \(\sigma\). The challenge then is to show that the function is absolutely continuous on the whole of the set. In general, if \(\sigma=\sigma_{1}\cup\sigma_{2}\) then knowing that \(f|\sigma_{1}\in AC(\sigma_{1})\) and \(f|\sigma_{2}\in AC(\sigma_{2})\) is not enough to deduce that \(f\in AC(\sigma)\). **Example 7.1**.: Suppose that \(\{x_{k}\}_{k=1}^{\infty}\) is a strictly decreasing sequence of positive numbers with limit \(0\). Let \(\sigma=\{x_{k}\,:\,k=1,2,3,\dots\}\cup\{0\}\), \(\sigma_{1}=\{x_{2k}\,:\,k=1,2,3,\dots\}\cup\{0\}\) and \(\sigma_{2}=\{x_{2k-1}\,:\,k=1,2,3,\dots\}\cup\{0\}\). Define \(f:\sigma\to\mathbb{C}\) by \(f(x_{k})=\frac{(-1)^{k}}{k}\), with \(f(0)=0\). Then the restrictions of \(f\) to \(\sigma_{1}\) and to \(\sigma_{2}\) are absolutely continuous, but \(f\) is not even of bounded variation on \(\sigma\). If \(f_{1}:\sigma_{1}\to\mathbb{C}\) and \(f_{2}:\sigma_{2}\to\mathbb{C}\) agree on \(\sigma_{1}\cap\sigma_{2}\), we shall call the function determined by \(f(\boldsymbol{x})=f_{i}(\boldsymbol{x})\) for \(\boldsymbol{x}\in\sigma_{i}\) the **joined function** of \(f_{1}\) and \(f_{2}\). If \(\sigma_{1}\) and \(\sigma_{2}\) are disjoint, then the Patching Lemma (Theorem 2.4) implies that \(f\in AC(\sigma)\). The difficulty then occurs in the case that \(\sigma_{1}\) and \(\sigma_{2}\) overlap. Under suitable conditions on \(\sigma_{1}\) and \(\sigma_{2}\) one can at least rule out the possibility that the joined function fails to be of bounded variation. We shall say that \(\sigma_{1}\) and \(\sigma_{2}\) **join convexly** if, given any \(\boldsymbol{x}\in\sigma_{1}\setminus\sigma_{2}\) and \(\boldsymbol{y}\in\sigma_{2}\setminus\sigma_{1}\), the line segment joining \(\boldsymbol{x}\) and \(\boldsymbol{y}\) contains a point \(\boldsymbol{w}\in\sigma_{1}\cap\sigma_{2}\).
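Before stating the joining theorems themselves, it is instructive to see Example 7.1 numerically. Taking the hypothetical instance \(x_{k}=1/k\) (any strictly decreasing positive sequence works), a short computation (NumPy assumed) exhibits the divergence:

```python
import numpy as np

K = 100_000
k = np.arange(1, K + 1)
f = (-1.0) ** k / k                  # f(x_k) = (-1)^k / k at x_k = 1/k

# Traverse sigma monotonically from x_1 down towards 0.
print(np.abs(np.diff(f)).sum())      # ~ 2 log K: diverges as K grows

# Restriction to sigma_1 (even k): f(x_{2k}) = 1/(2k), monotone values.
print(np.abs(np.diff(f[1::2])).sum())  # stays bounded (~ 1/2)
```

(For a list of reals traversed monotonically the variation factor is \(1\), so the first sum is a genuine lower bound for \(\operatorname{var}(f,\sigma)\), which therefore diverges, while the restrictions to \(\sigma_{1}\) and \(\sigma_{2}\) behave like the bounded second sum.)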
**Theorem 7.2** ([10, Theorem 3.8]).: _Suppose that \(\sigma_{1},\sigma_{2}\subseteq\mathbb{R}^{2}\) are nonempty compact sets which are disjoint except at their boundaries, and that \(\sigma_{1}\) and \(\sigma_{2}\) join convexly. Let \(\sigma=\sigma_{1}\cup\sigma_{2}\). If \(f_{1}\in BV(\sigma_{1})\) and \(f_{2}\in BV(\sigma_{2})\) agree on \(\sigma_{1}\cap\sigma_{2}\), then their joined function \(f\) lies in \(BV(\sigma)\) and \(\left\|f\right\|_{BV(\sigma)}\leq\left\|f_{1}\right\|_{BV(\sigma_{1})}+\left\|f_{2}\right\|_{BV(\sigma_{2})}\)._ If \(\sigma_{1}\) and \(\sigma_{2}\) lie on either side of a straight line then one can, at the cost of an extra factor, remove the convexity hypothesis. Let \(H^{+}\) and \(H^{-}\) denote the closed upper and lower half-planes in \(\mathbb{R}^{2}\) and let \(L\) be the \(x\)-axis. **Theorem 7.3**.: _Suppose that \(\sigma_{1}\subseteq H^{+}\) and \(\sigma_{2}\subseteq H^{-}\) are nonempty compact sets, that \(\sigma_{2}\cap L\subseteq\sigma_{1}\cap L\), and that \(\sigma=\sigma_{1}\cup\sigma_{2}\). If \(f:\sigma\to\mathbb{C}\) then_ \[\left\|f\right\|_{BV(\sigma)}\leq 2\big{(}\|f|\sigma_{1}\|_{BV(\sigma_{1})}+\|f|\sigma_{2}\|_{BV(\sigma_{2})}\big{)}.\] Proof.: Suppose that \(S=[\boldsymbol{x}_{0},\boldsymbol{x}_{1},\dots,\boldsymbol{x}_{n}]\) is a list of elements of \(\sigma\) and let \(J=\{1,\ldots,n\}\). Let \[J_{1} =\{j\in J\,:\,\mathbf{x}_{j-1},\mathbf{x}_{j}\in\sigma_{1}\},\] \[J_{2} =\{j\in J\,:\,\mathbf{x}_{j-1},\mathbf{x}_{j}\in\sigma_{2}\},\] \[J_{3} =J\setminus(J_{1}\cup J_{2}).\] Then, noting that \(J_{1}\) and \(J_{2}\) need not be disjoint, and considering empty sums as zero, \[\operatorname{cvar}(f,S)=\sum_{j=1}^{n}|f(\mathbf{x}_{j})-f(\mathbf{x}_{j-1})|\leq\sum_{k=1}^{3}\sum_{j\in J_{k}}|f(\mathbf{x}_{j})-f(\mathbf{x}_{j-1})|.\] As in the proof of [10, Lemma 4.2], \(\sum_{j\in J_{1}}|f(\mathbf{x}_{j})-f(\mathbf{x}_{j-1})|\leq\operatorname{var}(f,\sigma_{1})\operatorname{vf}(S)\) and \(\sum_{j\in J_{2}}|f(\mathbf{x}_{j})-f(\mathbf{x}_{j-1})|\leq\operatorname{var}(f,\sigma_{2})\operatorname{vf}(S)\). Note that if \(j\in J_{3}\) then one end of the line segment \(s_{j}=\overline{\mathbf{x}_{j-1}\,\mathbf{x}_{j}}\) lies in \(H^{+}\) while the other end, which we shall denote \(\mathbf{u}_{j}=(u_{j},v_{j})\), lies in the open lower half-plane. Thus \(v=\sup_{j\in J_{3}}v_{j}\) is strictly negative and each of these line segments crosses the horizontal line \(y=v/2\). This implies that \(\operatorname{vf}(S)\) is at least the number of elements in \(J_{3}\). Thus \[\sum_{j\in J_{3}}|f(\mathbf{x}_{j})-f(\mathbf{x}_{j-1})|\leq\big{(}\|f|\sigma_{1}\|_{\infty}+\|f|\sigma_{2}\|_{\infty}\big{)}\operatorname{vf}(S).\] It follows that \[\frac{\operatorname{cvar}(f,S)}{\operatorname{vf}(S)}\leq\|f|\sigma_{1}\|_{BV(\sigma_{1})}+\|f|\sigma_{2}\|_{BV(\sigma_{2})}\,,\] and so, on taking the supremum over all such lists \(S\), \[\|f\|_{BV(\sigma)}=\|f\|_{\infty}+\operatorname{var}(f,\sigma)\leq 2\big{(}\|f|\sigma_{1}\|_{BV(\sigma_{1})}+\|f|\sigma_{2}\|_{BV(\sigma_{2})}\big{)}.\qed\] By considering the case where \(\sigma_{1}\) and \(\sigma_{2}\) are disjoint and \(f\) is the characteristic function of \(\sigma_{1}\), one can observe that the factor \(2\) in the above theorem is necessary. Of course, by affine invariance, the same result holds with the \(x\)-axis replaced by any other line in the plane. An important special case is when a line and a convex curve meet at a line tangential vertex.
**Corollary 7.4**.: _Suppose that the line segment \(\ell=\overline{\mathbf{x}\,\mathbf{z}}\) meets the convex curve \(c\) tangentially at \(\mathbf{x}\), and let \(\sigma=\ell\cup c\). If \(f:\sigma\to\mathbb{C}\) then_

\[\|f\|_{BV(\sigma)}\leq 2\big{(}\|f\|_{BV(\ell)}+\|f\|_{BV(c)}\big{)}.\]

Proof.: After applying an appropriate affine transformation one need only consider the case where \(\ell=[0,1]\) and \(c\) lies in the closed lower half-plane. The result then follows from Theorem 7.3 with \(\sigma_{1}=\ell\) and \(\sigma_{2}=c\).

For the later results we shall piece together functions defined on polygonal pieces. It is not hard to adapt Example 7.1 to show that even in the case of two polygonal sets \(\sigma_{1}\) and \(\sigma_{2}\), there is no constant \(C\) such that \(\left\|f\right\|_{BV(\sigma)}\leq C(\left\|f\right\|_{BV(\sigma_{1})}+\left\|f\right\|_{BV(\sigma_{2})})\). The next result will however allow us some control over the norms if we add triangular pieces one at a time.

Let \(R_{1}\) and \(R_{2}\) be two distinct rays in \(\mathbb{R}^{2}\) starting at the origin and let \(S_{1}\) and \(S_{2}\) be the two closed sectors of the plane with boundary \(B=R_{1}\cup R_{2}\). A half-plane affine map will refer to a homeomorphism of the plane which is affine on each of a pair of complementary half-planes. (Further details can be found in [9, Section 4].)

**Theorem 7.5**.: _Suppose that \(\sigma_{1}\subseteq S_{1}\) and \(\sigma_{2}\subseteq S_{2}\) are nonempty compact subsets of complementary sectors with boundary \(B\), \(\sigma_{1}\cap B=\sigma_{2}\cap B\), and \(\sigma=\sigma_{1}\cup\sigma_{2}\). If \(f:\sigma\to\mathbb{C}\), then_

\[\left\|f\right\|_{BV(\sigma)}\leq 8\big{(}\left\|f|\sigma_{1}\right\|_{BV(\sigma_{1})}+\left\|f|\sigma_{2}\right\|_{BV(\sigma_{2})}\big{)}.\]

Proof.: There exists a half-plane affine map \(\alpha\) which maps \(B\) onto a straight line through the origin. Let \(\hat{\sigma}_{1}\), \(\hat{\sigma}_{2}\) and \(\hat{\sigma}\) be the images of \(\sigma_{1}\), \(\sigma_{2}\) and \(\sigma\) under \(\alpha\), and let \(g:\hat{\sigma}\to\mathbb{C}\) be \(g=f\circ\alpha^{-1}\). Then by [9, Theorem 4.5] and Theorem 7.3

\[\left\|f\right\|_{BV(\sigma)}\leq 2\|g\|_{BV(\hat{\sigma})}\leq 4\big{(}\|g\|_{BV(\hat{\sigma}_{1})}+\|g\|_{BV(\hat{\sigma}_{2})}\big{)}\leq 8\big{(}\|f\|_{BV(\sigma_{1})}+\|f\|_{BV(\sigma_{2})}\big{)}.\qed\]

The factor \(8\) here is unlikely to be sharp, but again by considering characteristic functions, one can see that one needs at least a factor of \(3\) in the bound.

**Theorem 7.6**.: _Suppose that \(\sigma_{1}\subseteq H^{+}\) and \(\sigma_{2}\subseteq H^{-}\) are nonempty compact sets, that \(\sigma_{2}\cap L\subseteq\sigma_{1}\cap L\), and that \(\sigma=\sigma_{1}\cup\sigma_{2}\). Suppose that \(f:\sigma\to\mathbb{C}\). If \(f|\sigma_{1}\in AC(\sigma_{1})\) and \(f|\sigma_{2}\in AC(\sigma_{2})\), then \(f\in AC(\sigma)\) and_

\[\left\|f\right\|_{BV(\sigma)}\leq 2\big{(}\|f|\sigma_{1}\|_{BV(\sigma_{1})}+\|f|\sigma_{2}\|_{BV(\sigma_{2})}\big{)}.\]

Proof.: The bound on the variation norm comes from Theorem 7.3, so the task is to show that \(f\) is absolutely continuous. If \(\sigma_{1}\cap\sigma_{2}=\emptyset\), then the absolute continuity follows immediately from Theorem 2.4. Suppose then that \(\sigma_{1}\cap\sigma_{2}\neq\emptyset\) so that \(\tau=\sigma_{1}\cap L\) is nonempty.
Choose \(a_{0},b_{0}\in\mathbb{R}\) such that \(a_{0}\leq x\leq b_{0}\) for all \((x,y)\in\sigma\) and let \(\tau_{0}=[a_{0},b_{0}]\times\{0\}\). Let \(P=[a_{0},b_{0}]\times[c_{0},d_{0}]\) be a closed rectangle containing \(\sigma\). Let \(P_{1}=[a_{0},b_{0}]\times[0,d_{0}]\) and let \(P_{2}=[a_{0},b_{0}]\times[c_{0},0]\), so that \(\sigma_{1}\subseteq P_{1}\) and \(\sigma_{2}\subseteq P_{2}\).

Now \(f|\tau\in AC(\tau)\) so by Lemma 4.1 and Theorem 2.3(iv) there is a function \(g\in AC(\tau_{0})\) such that \(g|\tau=f|\tau\). By setting \(g(x,y)=g(x,0)\) for all \(y\), we can then extend to \(g\in AC(P)\) with \(\left\|g\right\|_{BV(P)}=\left\|f\right\|_{BV(\tau)}\).

Fix \(\epsilon>0\). Let \(f_{1}=f-g|\sigma\). Clearly \(f_{1}\equiv 0\) on \(\tau\), \(f_{1}|\sigma_{1}\in AC(\sigma_{1})\) and \(f_{1}|\sigma_{2}\in AC(\sigma_{2})\). As \(f_{1}|\sigma_{1}\in AC(\sigma_{1})\), by Lemma 3.2 and Theorem 6.1 of [11] there exists \(h_{1}\in\mathrm{CTPP}(P_{1})\) with \(\left\|f_{1}-h_{1}\right\|_{BV(\sigma_{1})}<\epsilon/8\). Now \(h_{1}|\tau_{0}\) is piecewise linear, and in particular it is absolutely continuous. We can therefore extend \(h_{1}\) to \(P_{2}\) by setting \(h_{1}(x,y)=h_{1}(x,0)\) for all \((x,y)\in P_{2}\). Note that \(h_{1}\in\mathrm{CTPP}(P)\). Now

\[\left\|h_{1}\right\|_{BV(\sigma_{2})}=\left\|h_{1}\right\|_{BV(\tau_{0})}=\left\|f_{1}-h_{1}\right\|_{BV(\tau_{0})}=\left\|f_{1}-h_{1}\right\|_{BV(\tau)}\leq\left\|f_{1}-h_{1}\right\|_{BV(\sigma_{1})}<\epsilon/8.\]

Similarly, construct \(h_{2}\in\mathrm{CTPP}(P_{2})\) with \(\left\|f_{1}-h_{2}\right\|_{BV(\sigma_{2})}<\epsilon/8\) and extend this to all of \(P\) as above, so that \(\left\|h_{2}\right\|_{BV(\sigma_{1})}<\epsilon/8\). Let \(h=h_{1}+h_{2}\in\mathrm{CTPP}(P)\). Then

\[\left\|f_{1}-h\right\|_{BV(\sigma_{1})}\leq\left\|f_{1}-h_{1}\right\|_{BV(\sigma_{1})}+\left\|h_{2}\right\|_{BV(\sigma_{1})}<\epsilon/4.\]

Similarly, \(\left\|f_{1}-h\right\|_{BV(\sigma_{2})}<\epsilon/4\) and so, by Theorem 7.3, \(\left\|f_{1}-h\right\|_{BV(\sigma)}<\epsilon\). Thus, \(f_{1}\in AC(\sigma)\). Since \(g|\sigma\in AC(\sigma)\), this implies that \(f\in AC(\sigma)\).

Figure 8: Diagram for Theorem 7.6.

The next theorem says that we can join functions which are absolutely continuous on polygonal regions.

**Theorem 7.7**.: _Let \(Q_{1},\ldots,Q_{n}\) be disjoint bounded open sets whose boundaries consist of a finite number of line segments. Let \(P_{i}=\operatorname{cl}(Q_{i})\), \(1\leq i\leq n\), and let \(\sigma=\cup_{i=1}^{n}P_{i}\). Suppose that \(f:\sigma\to\mathbb{C}\). Then \(f\in AC(\sigma)\) if and only if \(f|P_{i}\in AC(P_{i})\) for each \(i\)._

Proof.: The forward implication is immediate. Suppose that \(f\in AC(P_{i})\) for each \(i\). We shall use the Patching Lemma to show that \(f\in AC(\sigma)\). Suppose that \(\boldsymbol{x}\in\sigma\). There are three mutually exclusive cases to consider.

1. \(\boldsymbol{x}\) lies in a single set \(P_{i}\). In this case there is certainly a compact neighbourhood of \(\boldsymbol{x}\) in \(\sigma\) on which \(f\) is absolutely continuous.
2. \(\boldsymbol{x}\) lies in the boundary of two regions, say \(P_{1}\) and \(P_{2}\), but not at a corner of either of the regions. In this case one may choose a small closed disk \(D\subseteq P_{1}\cup P_{2}\) centred at \(\boldsymbol{x}\) such that \(f\) is absolutely continuous on \(D\cap P_{1}\) and on \(D\cap P_{2}\).
By Theorem 7.6, \(f\) is then absolutely continuous on \(D\), which is a compact neighbourhood of \(\boldsymbol{x}\) in \(\sigma\).
3. \(\boldsymbol{x}\) lies in \(m\geq 2\) regions, say \(P_{1},\ldots,P_{m}\), and it lies at a corner of at least one of these. Since we are only interested in the local behaviour near \(\boldsymbol{x}\), by intersecting with a small square around \(\boldsymbol{x}\) we may assume that each region is in fact a polygon. If \(\boldsymbol{x}\) lies in \(P_{i}\) but not at a corner, one can split \(P_{i}\) into two smaller polygons each with a corner at \(\boldsymbol{x}\), and of course \(f\) is absolutely continuous on each of the smaller polygons. We may therefore also assume that \(\boldsymbol{x}\) lies at a corner of each of \(P_{1},\ldots,P_{m}\), and by again splitting a polygon if necessary, we can assume that the polygons can be labelled so that
   * each \(P_{i}\) is convex,
   * \(P_{1},\ldots,P_{r}\) are in one closed half-plane, and \(P_{r+1},\ldots,P_{m}\) are in the complementary closed half-plane (see Figure 9).
(Note that it may be that all the polygons lie in a single half-plane.) One may now inductively apply Theorem 7.6 to show that the restrictions of \(f\) to \(\cup_{i=1}^{r}P_{i}\) and to \(\cup_{i=r+1}^{m}P_{i}\) are both absolutely continuous. Applying this theorem one more time shows that \(\cup_{i=1}^{m}P_{i}\) is a compact neighbourhood of \(\boldsymbol{x}\) on which \(f\) is absolutely continuous.

In all cases, there is a compact neighbourhood of \(\boldsymbol{x}\) on which \(f\) is absolutely continuous, and so \(f\in AC(\sigma)\) by the Patching Lemma.

As noted earlier, in the above theorem, it is not possible to bound the norm of \(f\) by some absolute constant times the sum of the norms of the restrictions of \(f\) to the components \(P_{i}\). On the other hand, one can give a bound which will depend on the geometry of the components.

**Theorem 7.8**.: _Let \(P\in PW^{+}\) and let \(T\) be a closed triangular region whose interior is disjoint from \(P\). Let \(\sigma=P\cup T\). Then there is a constant \(K\) such that if \(f:\sigma\to\mathbb{C}\) then_

\[\left\|f\right\|_{BV(\sigma)}\leq K\big{(}\left\|f\right\|_{BV(P)}+\left\|f\right\|_{BV(T)}\big{)}.\]

Proof.: Extending the sides of \(T\) allows us to split the exterior of \(T\) into \(3\) sectors \(R_{1},R_{2},R_{3}\) whose vertices lie at vertices of \(T\), as in Figure 10. Applying Theorem 7.5 shows that

\[\left\|f\right\|_{BV(T\cup(P\cap R_{1}))}\leq 8\big{(}\left\|f\right\|_{BV(T)}+\left\|f\right\|_{BV(R_{1}\cap P)}\big{)}\leq 8\big{(}\left\|f\right\|_{BV(T)}+\left\|f\right\|_{BV(P)}\big{)}.\]

Applying Theorem 7.5 twice more shows that

\[\left\|f\right\|_{BV(\sigma)}\leq 8^{3}\big{(}\left\|f\right\|_{BV(T)}+\left\|f\right\|_{BV(P)}\big{)}\]

as required.

**Corollary 7.9**.: _Suppose that \(P_{1}\in PW^{+}\) and that \(P_{2}\) is a closed polygonal region in the plane whose interior is disjoint from \(P_{1}\). Let \(\sigma=P_{1}\cup P_{2}\). Then there exists a constant \(K(P_{1},P_{2})\) such that if \(f:\sigma\to\mathbb{C}\) then_

\[\left\|f\right\|_{BV(\sigma)}\leq K(P_{1},P_{2})\big{(}\left\|f\right\|_{BV(P_{1})}+\left\|f\right\|_{BV(P_{2})}\big{)}.\]

Figure 9: Diagram for Theorem 7.7. Here the bottom polygon is split in two so that \(P_{1},P_{2},P_{3}\) lie in one half-plane, and \(P_{4}\) lies in the complementary half-plane.

Proof.: Triangulate \(P_{2}\) as \(\cup_{k=1}^{n}T_{k}\).
Applying the previous theorem repeatedly then shows that

\[\left\|f\right\|_{BV(\sigma)}\leq K^{n}\big{(}\left\|f\right\|_{BV(P_{1})}+\left\|f\right\|_{BV(T_{1})}+\cdots+\left\|f\right\|_{BV(T_{n})}\big{)}\leq K^{n}\big{(}\left\|f\right\|_{BV(P_{1})}+n\left\|f\right\|_{BV(P_{2})}\big{)}.\qed\]

One can of course extend this result to deal with any finite number of polygons. As can be seen from the proof, the constant \(K(P_{1},P_{2})\) can be chosen so that it only depends on the minimum number of triangles needed to triangulate one of the polygons. (The constant needs to depend at least linearly on this number, but it seems unlikely that it needs the dependence given in the proof.)

The next result says that if we can extend a function to be absolutely continuous on the 'holes' of a \(PW^{+}\) set, then the function will be absolutely continuous on the filled-in set.

**Theorem 7.10**.: _Suppose that \(\sigma_{0}\in PW^{+}\). Let \(W_{1},\ldots,W_{d}\) denote the bounded components of the complement of \(\sigma_{0}\), and let \(\sigma=\sigma_{0}\cup\left(\cup_{i=1}^{d}\operatorname{cl}(W_{i})\right)\). Then \(f\in AC(\sigma)\) if and only if \(f|\sigma_{0}\in AC(\sigma_{0})\) and \(f|\operatorname{cl}(W_{i})\in AC(\operatorname{cl}(W_{i}))\) for \(1\leq i\leq d\)._

Proof.: One only needs to prove the converse direction. Suppose then that \(f|\sigma_{0}\in AC(\sigma_{0})\) and \(f|\operatorname{cl}(W_{i})\in AC(\operatorname{cl}(W_{i}))\) for \(1\leq i\leq d\). One can triangulate \(\sigma_{0}\) and each of the sets \(W_{i}\). The set \(\sigma\) is then a union of triangles and the restriction of \(f\) to each of these is absolutely continuous, so the result follows from Theorem 7.7.

Figure 10: Joining a triangle to a polygonal region.

At this point we have enough to show that if \(\sigma_{0}\) is a polygonal region with finitely many polygonal windows, then, using Theorem 6.5, we can extend the function to those windows and hence produce an absolutely continuous function on the 'filled-in' set. An alternative proof of this result may be found in [13, Theorem 6.4.2], where an explicit bound on the norm of the extension is given in terms of the total number of edges on the polygonal region and its windows.

It is worth noting that in all the known examples where a joined function fails to be absolutely continuous, the issue has been a failure to control the variation norm. This leads to the following open question.

**Question 7.11**.: If \(f_{1}\in AC(\sigma_{1})\) and \(f_{2}\in AC(\sigma_{2})\) and the joined function \(f\) lies in \(BV(\sigma)\), must we have \(f\in AC(\sigma)\)?

## 8 An APIC joining theorem

Theorem 7.6 is strong enough to prove that a function \(f\) defined on a PIC set \(\sigma=\cup_{k=1}^{n}c_{k}\) is absolutely continuous if and only if it is absolutely continuous on each of its component curves. Our next task is to extend this result to the class of APIC sets. Note that, as is shown in Theorem 6.2.1 of [13], if \(c_{1}=\{(x,x^{2})\,:\,0\leq x\leq 1\}\) and \(c_{2}=\{(x,2x^{2})\,:\,0\leq x\leq 1\}\), and \(\sigma=c_{1}\cup c_{2}\), then one can find a function which is absolutely continuous on each of the components, but which is not absolutely continuous on \(\sigma\), so one cannot extend this result to arbitrary unions of convex curves. (Of course, this doesn't imply that one can't extend absolutely continuous functions from such sets.) We begin by showing that one can join functions of bounded variation on different components of an APIC set.
**Lemma 8.1**.: _Let \(P\) be a convex \(N\)-gon with sides \(\ell_{1},\ldots,\ell_{N}\) and let \(c\) be a convex curve in \(P\). Suppose that \(\mathcal{C}\) is a nonempty subset of \(\{c,\ell_{1},\ldots,\ell_{N}\}\) and let \(\sigma=\cup\{\tau\,:\,\tau\in\mathcal{C}\}\). If \(f:\sigma\to\mathbb{C}\) then_

\[\max\bigl{\{}\left\|f\right\|_{BV(\tau)}\,:\,\tau\in\mathcal{C}\bigr{\}}\leq\left\|f\right\|_{BV(\sigma)}\leq K_{N}\Bigl{(}\sum_{\tau\in\mathcal{C}}\left\|f\right\|_{BV(\tau)}\Bigr{)}.\]

Proof.: The left hand inequality is clear, so it remains to establish the inequality on the right. Label the sets in \(\mathcal{C}\) as \(\tau_{0},\ldots,\tau_{r}\), where, if \(c\in\mathcal{C}\), we choose \(\tau_{0}=c\). Let \(\sigma_{0}=\tau_{0}\) and for \(j=1,\ldots,r\) let \(\sigma_{j}=\sigma_{j-1}\cup\tau_{j}\). Since \(P\) is convex, one may apply Theorem 7.3 to show that for each \(j\)

\[\left\|f\right\|_{BV(\sigma_{j})}\leq 2\bigl{(}\left\|f\right\|_{BV(\sigma_{j-1})}+\left\|f\right\|_{BV(\tau_{j})}\bigr{)}\]

from which the result follows easily.

**Theorem 8.2**.: _Suppose that \(\sigma\in\mathrm{APIC}\). Let \(\mathcal{C}\) denote the components of \(\sigma\) with respect to a polygonal mosaic \(\mathcal{M}\). Then there exists \(K_{\sigma,\mathcal{M}}\) such that if \(f:\sigma\to\mathbb{C}\) then_

\[\left\|f\right\|_{BV(\sigma)}\leq K_{\sigma,\mathcal{M}}\sum_{\tau\in\mathcal{C}}\left\|f\right\|_{BV(\tau)}.\]

Proof.: Let \(\mathcal{M}=\{P_{i}\}_{i=1}^{n}\) be the polygonal mosaic. As above, for \(j=1,\ldots,n\), let \(\sigma_{j}=\cup_{i=1}^{j}(P_{i}\cap\sigma)\) and let \(\mathcal{C}_{j}\) denote the components of \(\sigma\) which lie in \(\sigma_{j}\). The previous lemma implies that \(\left\|f\right\|_{BV(\sigma_{1})}\) is bounded by a constant \(K_{1}\) (depending on the number of sides of \(P_{1}\)) times the sum of the norms of \(f\) on the components of \(\sigma\) in \(\mathcal{C}_{1}\). Suppose now that \(1\leq j<n\) and that

\[\left\|f\right\|_{BV(\sigma_{j})}\leq K_{j}\sum_{s\in\mathcal{C}_{j}}\left\|f\right\|_{BV(s)}. \tag{3}\]

Let \(r\) denote the number of sides of \(P_{j+1}\). Extending the sides of \(P_{j+1}\) one can split the exterior of \(P_{j+1}\) into \(r\) regions \(R_{1},\ldots,R_{r}\), each contained in a sector whose vertex is one of the vertices of the polygon, as in Figure 11. As in the proof of Theorem 7.8, one can now repeatedly apply Theorem 7.5 to show that

\[\left\|f\right\|_{BV(\sigma_{j+1})}\leq 8^{r}\big{(}\left\|f\right\|_{BV(\sigma_{j})}+\left\|f\right\|_{BV(\sigma\cap P_{j+1})}\big{)}.\]

Applying Lemma 8.1 and using (3) gives the result.

Our next step is to show that a function \(f\) defined on an \(\mathrm{APIC}\) set is absolutely continuous if and only if it is absolutely continuous on each component curve. Recall from the example at the start of this section that this result fails if one considers more general finite unions of convex curves.

Figure 11: Splitting the exterior of a convex polygon into sectors \(R_{1},\ldots,R_{r}\).

**Lemma 8.3**.: _Suppose that \(T\) is a triangle in \(\mathbb{R}^{2}\) with vertices at \(\boldsymbol{x}\), \(\boldsymbol{y}\) and \(\boldsymbol{z}\). Let \(c\) be a differentiable convex curve joining \(\boldsymbol{x}\) to \(\boldsymbol{y}\) through the interior of \(T\) which meets the side \(\ell=\overline{\boldsymbol{x}\,\boldsymbol{z}}\) tangentially at \(\boldsymbol{x}\). Let \(\sigma=c\cup\ell\) and suppose that \(f:\sigma\to\mathbb{C}\). If \(f|c\in AC(c)\) and \(f|\ell\in AC(\ell)\) then \(f\in AC(\sigma)\).
Furthermore \(f\) has an extension \(\hat{f}\in AC(T)\) with_

\[\|\hat{f}\|_{BV(T)}\leq 3\,\|f\|_{BV(\sigma)}\,.\]

Proof.: By applying a suitable affine transformation we may assume that \(\boldsymbol{x}=(0,0)\), that \(\ell=\{(0,y)\,:\,0\leq y\leq 1\}\), and that \(c\) joins \((0,0)\) to \((1,1)\). We may assume that \(c\) is the graph of the convex function \(k:[0,1]\to\mathbb{R}\). Let \(R=[0,1]\times[0,1]\). Let \(g(x,y)=f(0,y)\), \((x,y)\in R\). Then, as \(f|\ell\in AC(\ell)\), Theorem 2.3(iv) implies that \(g\in AC(R)\). Define \(f_{1}:c\to\mathbb{C}\) by \(f_{1}(x,y)=f(x,y)-g(x,y)\), \((x,y)\in c\). Now \(f_{1}\in AC(c)\), so as \(c\) is a convex curve, the function \(h:[0,1]\to\mathbb{C}\), \(h(x)=f_{1}(x,k(x))\) is also absolutely continuous. Indeed \(\hat{f}_{1}(x,y)=h(x)\) is then absolutely continuous on all of \(R\). It follows that

\[\hat{f}(x,y)=\hat{f}_{1}(x,y)+g(x,y),\qquad(x,y)\in R\]

is also absolutely continuous on \(R\). But of course \(\hat{f}\) is an extension of \(f\) and so \(f=\hat{f}|\sigma\in AC(\sigma)\). The estimate for \(\|\hat{f}\|_{BV(T)}\) simply follows from the triangle inequality and the facts that \(\|\hat{f}_{1}\|_{BV(T)}\leq 2\left\|f\right\|_{BV(\sigma)}\) (by Lemma 5.4) and \(\|g\|_{BV(T)}\leq\|f\|_{BV(\sigma)}\) (by Theorem 2.3(iv)).

The following theorem is a generalization of [1, Theorem 6] for PIC sets.

**Theorem 8.4**.: _Suppose that \(\sigma\in\mathrm{APIC}\) is represented as above as a union of convex curves and lines meeting these curves tangentially, and that \(f:\sigma\to\mathbb{C}\). Then \(f\in AC(\sigma)\) if and only if_

1. \(f|c_{k}\in AC(c_{k})\) _for_ \(k=1,\ldots,n\)_, and_
2. \(f|\ell_{j}\in AC(\ell_{j})\) _for_ \(j=1,\ldots,m\)_._

Proof.: The forward direction follows from the general results about restrictions of absolutely continuous functions. Suppose then that (1) and (2) hold. By the Patching Lemma, it suffices to show that \(f\) is absolutely continuous on a compact neighbourhood of each point in \(\sigma\). Note that if \(\boldsymbol{x}\in\sigma\) is not a line tangential vertex, then, by the PIC joining theorem [1, Corollary 3], \(f\) is absolutely continuous on a compact neighbourhood of \(\boldsymbol{x}\). So suppose that \(\boldsymbol{x}\) is a line tangential vertex. Fix a small polygonal region \(P\) centred at \(\boldsymbol{x}\) and split this into triangles \(T_{1},\ldots,T_{r}\) so that (the closure of) each triangle contains at most one curve \(c_{k}\) (and necessarily any line which meets \(c_{k}\) tangentially at \(\boldsymbol{x}\)). (See Figure 12.) Using Lemma 8.3, one can deduce that \(f\) is absolutely continuous on \(T_{j}\cap\sigma\) for all \(j\). Again, as in the proof of Theorem 7.7, one may inductively use Theorem 7.6 to show that \(f\) is absolutely continuous on the compact neighbourhood \(P\cap\sigma\) of \(\boldsymbol{x}\). Thus \(f\) is absolutely continuous on all of \(\sigma\).

A consequence of the preceding results is that APIC is a 'Gelfand-Kolmogorov family' of compact sets, which generalizes [1, Theorem 7].

**Corollary 8.5**.: _Suppose that \(\sigma,\tau\in\mathrm{APIC}\). Then \(AC(\sigma)\) is isomorphic (as a Banach algebra) to \(AC(\tau)\) if and only if \(\sigma\) is homeomorphic to \(\tau\)._

A final result which we will need is that one can extend an absolutely continuous function defined on an APIC set to each polygon of its polygonal mosaic.
**Theorem 8.6**.: _Suppose that \(c\) is a projectable convex curve joining two vertices \(\boldsymbol{x},\boldsymbol{y}\) of a convex polygon \(P\) to each other through the interior of \(P\). Let \(\sigma_{0}\) be a compact set which is the union of \(c\) and a (possibly empty) subset of the boundary of \(P\), and suppose that \(f\in AC(\sigma_{0})\). Then \(f\) admits an extension \(\hat{f}\in AC(P)\)._

Proof.: Note that \(f\) is initially specified at at least two points of the boundary of \(P\), namely the endpoints of \(c\). First extend \(f\) to all of the sides of \(P\) by making it continuous and affine on any parts of \(\partial P\) which were not part of \(\sigma_{0}\) (and hence, by Lemma 4.1 and Theorem 8.4, it is in \(AC(\partial P)\)). If \(c\) is a straight line, then the conclusion follows by applying Theorems 6.5 and 7.7.

Figure 12: Triangulating around a line tangential vertex.

Suppose then that \(c\) is not a straight line. Let \(P_{1}\subseteq P\) be the convex polygon containing \(c\) which has the diagonal \(\overline{\boldsymbol{x}\,\boldsymbol{y}}\) as a side. (If \(\overline{\boldsymbol{x}\,\boldsymbol{y}}\) was already a side, then \(P_{1}=P\).) Since \(c\) is convex, one can find a line segment \(\ell_{1}\) in \(P_{1}\) which is tangential to \(c\) at \(\boldsymbol{x}\). Choose a point \(\boldsymbol{z}\) on \(c\) so that the orthogonal projection \(\boldsymbol{w}\) of \(\boldsymbol{z}\) lies on \(\ell_{1}\), and consider the triangle \(T_{1}\) with vertices \(\boldsymbol{x},\boldsymbol{z},\boldsymbol{w}\). If \(\ell_{1}\) forms part of a side of \(P\), and \(f\) has been specified on that side, then extend \(f\) to all of \(T_{1}\) using Lemma 8.3. Otherwise extend \(f\) to \(T_{1}\) using Lemma 5.4. In either case one has an absolutely continuous extension of \(f\) to this triangle. The same procedure can be repeated choosing a small right-angle triangle \(T_{2}\) with vertices \(\boldsymbol{y},\boldsymbol{z}^{\prime},\boldsymbol{w}^{\prime}\) to define an extension of \(f\) on \(T_{2}\).

The points \(\boldsymbol{z},\boldsymbol{z}^{\prime}\in c\) now lie in the interior of \(P\). Let \(R\) be a rectangle with side \(\overline{\boldsymbol{z}\,\boldsymbol{z}^{\prime}}\) which contains \(c^{\prime}\), the part of \(c\) between \(\boldsymbol{z}\) and \(\boldsymbol{z}^{\prime}\). Choose a polygon \(Q\subseteq R\cap P_{1}\) which contains \(c^{\prime}\). Since \(c\) is projectable we may use Theorem 2.3(iv) to extend \(f\) to all of \(Q\) by making it constant on lines which are orthogonal to \(\overline{\boldsymbol{z}\,\boldsymbol{z}^{\prime}}\).

We have now specified the extension \(\hat{f}\) on the green regions in Figure 13, as well as on the blue boundary of \(P\). This leaves a finite number of polygonal regions on which \(\hat{f}\) is given on the boundary, but not in the interior. Using Theorem 6.5 one can now extend \(\hat{f}\) so that it is absolutely continuous on each of these regions. Finally, we can apply Theorem 7.7 to deduce that \(\hat{f}\) is absolutely continuous on the whole set \(P\).

Figure 13: The construction in the proof of Theorem 8.6. The value of \(f\) is initially given on the red curves.

## 9 The \(\mathrm{SPIC}\) extension theorem

Suppose that \(\sigma_{0}\in\mathrm{SPIC}\). Then there exists a set \(\sigma_{1}\in\mathrm{APIC}\) with \(\sigma_{0}\subseteq\sigma_{1}\).
By Lemma 3.6, we may choose a polygonal mosaic so that each of the components of \(\sigma_{1}\) is projectable, and we shall assume throughout this section that this is the case. We will see below that we can always construct extensions of \(AC\) functions defined on \(\mathrm{APIC}\) sets, and then extend this result to the case of \(\mathrm{SPIC}\) sets.

**Lemma 9.1**.: _Suppose that \(\emptyset\neq\sigma_{0}\subseteq\sigma_{1}\) with \(\sigma_{1}\in\mathrm{APIC}\). If \(f\in AC(\sigma_{0})\), then there exists an extension \(\hat{f}\in AC(\sigma_{1})\) with \(\|\hat{f}\|_{BV(\sigma_{1})}\leq C_{\sigma_{1}}\left\|f\right\|_{BV(\sigma_{0})}\) for some \(C_{\sigma_{1}}>0\)._

Proof.: Let \(\mathcal{C}\) denote the set of components of \(\sigma_{1}\) with respect to a suitable polygonal mosaic. Suppose that \(f\in AC(\sigma_{0})\). Let \(\sigma_{0}^{\prime}\) denote the union of \(\sigma_{0}\) and all of the (finitely many) endpoints of the components of \(\sigma_{1}\). Any endpoint which is not in \(\sigma_{0}\) is an isolated point of \(\sigma_{0}^{\prime}\). Thus we can extend \(f\) to \(\sigma_{0}^{\prime}\) by setting it to be zero at any such point, and this extension is in \(AC(\sigma_{0}^{\prime})\). The norm of the extension may increase by a factor of \(3\) on each component. Let \(\tau\in\mathcal{C}\) be a component of \(\sigma_{1}\). By Theorem 5.3 we can extend \(f\) from \(\sigma_{0}^{\prime}\cap\tau\) to \(\hat{f}\in AC(\tau)\) (with a possible doubling of the norm). Since \(\sigma_{0}^{\prime}\) contains all the endpoints of the components of \(\sigma_{1}\), doing this for each \(\tau\in\mathcal{C}\) produces a well-defined function \(\hat{f}:\sigma_{1}\rightarrow\mathbb{C}\). By Theorem 8.4, \(\hat{f}\in AC(\sigma_{1})\). The existence of the constant \(C_{\sigma_{1}}\) follows from Theorem 8.2.

**Theorem 9.2**.: _Suppose that \(\sigma_{0}\in\mathrm{SPIC}\) and that \(\sigma\) is a compact superset of \(\sigma_{0}\). If \(f\in AC(\sigma_{0})\) then there exists an extension \(\hat{f}\in AC(\sigma)\) with \(\|\hat{f}\|_{BV(\sigma)}\leq K_{\sigma_{0}}\left\|f\right\|_{BV(\sigma_{0})}\) for some \(K_{\sigma_{0}}>0\)._

Proof.: Suppose that \(f\in AC(\sigma_{0})\). As \(\sigma_{0}\in\mathrm{SPIC}\), there exists \(\sigma_{1}\in\mathrm{APIC}\) with \(\sigma_{0}\subseteq\sigma_{1}\). By Lemma 9.1 we can find an absolutely continuous extension of \(f\) to \(\sigma_{1}\). Let \(\mathcal{M}=\{P_{i}\}_{i=1}^{n}\) be a polygonal mosaic for \(\sigma_{1}\) chosen so that each component convex curve is projectable. By Theorem 8.6 we can extend \(f\) from \(\sigma_{1}\cap P_{1}\) to be absolutely continuous on \(P_{1}\). We may now inductively further extend \(f\) to each polygon \(P_{i}\) in turn, taking into account the values which have already been determined on the boundary of the polygon at each stage. By Theorem 7.7, the extension \(\hat{f}\) is absolutely continuous on \(M=\cup_{i=1}^{n}P_{i}\) (the blue region in Figure 14).

Let \(R_{0}\) be a large rectangle which contains \(M\) in its interior. Let \(\sigma_{2}=M\cup\partial R_{0}\) and extend \(\hat{f}\) to \(\sigma_{2}\) by making it zero on \(\partial R_{0}\). As in [10, Lemma 4.2], \(\hat{f}\in AC(\sigma_{2})\) with \(\|\hat{f}\|_{BV(\sigma_{2})}\leq 5\left\|f\right\|_{BV(M)}\). The region \(R_{0}\setminus M\) can be triangulated by \(\{T_{i}\}_{i=1}^{m}\) with each triangle having vertices in \(\sigma_{2}\). If necessary, use Lemma 9.1 to extend \(\hat{f}\) to the whole boundary of \(T_{1}\).
One can then use Theorem 6.5 to extend \(\hat{f}\) to an absolutely continuous function on \(T_{1}\). As above, one can repeat this procedure with each triangle in turn, taking into account the values of \(\hat{f}\) already determined. If necessary, one now chooses an even larger rectangle \(R\) containing \(R_{0}\cup\sigma\) and extends \(\hat{f}\) further by setting \(\hat{f}=0\) on \(R\setminus R_{0}\). Applying Theorem 7.7 again shows that \(\hat{f}\in AC(R)\). At each step we have control over the norm of the extension. This provides a bound on \(\|\hat{f}\|_{BV(R)}\) which depends only on the choice of \(R_{0}\), and not on \(\sigma\). Finally we can restrict \(\hat{f}\) to \(\sigma\) to give the conclusion of the theorem.

## 10 The main theorem

**Theorem 10.1**.: _Suppose that \(\emptyset\neq\sigma_{0}=\sigma_{P}\cup\sigma_{S}\) with \(\sigma_{P}\in PW^{+}\) and \(\sigma_{S}\in\mathrm{SPIC}\) and that \(\sigma\) is a compact superset of \(\sigma_{0}\). Then there exists a constant \(C_{\sigma_{0}}\) such that for all \(f\in AC(\sigma_{0})\) there exists an extension \(\hat{f}\in AC(\sigma)\) with \(\|\hat{f}\|_{BV(\sigma)}\leq C_{\sigma_{0}}\|f\|_{BV(\sigma_{0})}\)._

Figure 14: Theorem 9.2. The function \(f\) is first extended to \(\sigma_{1}\in\mathrm{APIC}\), then to the blue polygons whose union is \(M\), and then, one triangle at a time, to the rectangle \(R_{0}\). The final step is to restrict to the set \(\sigma\).

Proof.: Choose a large rectangle \(R\) which contains \(\sigma\) in its interior. Let \(\hat{\sigma}_{P}=\sigma_{P}\cup\partial R\in PW^{+}\). As in the proof of Theorem 9.2, at the cost of a factor \(5\) in the norm, we can extend \(f\) by setting it to be zero on \(\partial R\). Let \(\mathcal{W}=\{W_{1},\ldots,W_{d}\}\) denote the connected components of \(R\setminus\hat{\sigma}_{P}\). Fix \(W\in\mathcal{W}\). Since the boundary of \(W\) consists of a finite number of line segments, we can use Lemma 3.7 to show that \(\sigma_{W}=\partial W\cup(\sigma_{S}\cap W)\in\mathrm{SPIC}\). Thus, by Theorem 9.2, there is a constant \(K_{W}\) and an extension \(f_{W}\) of \(f|\sigma_{W}\) to an absolutely continuous function on all of \(\mathrm{cl}(W)\) with \(\left\|f_{W}\right\|_{BV(W)}\leq K_{W}\left\|f\right\|_{BV(\sigma_{0})}\). Having done this for each \(W\in\mathcal{W}\), setting

\[\hat{f}(\boldsymbol{x})=\begin{cases}f(\boldsymbol{x}),&\boldsymbol{x}\in\sigma_{0},\\ f_{W}(\boldsymbol{x}),&\boldsymbol{x}\notin\sigma_{0},\ \boldsymbol{x}\in W\in\mathcal{W}\end{cases}\]

gives a function which is absolutely continuous on all of \(R\) by Theorem 7.10. Finally we can restrict to \(\sigma\) to obtain the required extension in \(AC(\sigma)\). The bound follows from applying Corollary 7.9 repeatedly.

Theorem 10.1 covers almost all the sets \(\sigma_{0}\) for which it is known that one can always find an absolutely continuous extension to a larger compact set. It is likely that the differentiability restriction on the convex curves considered can be relaxed. It is not too difficult, for example, to use the results of this paper to prove an extension result for a single convex curve consisting of an infinite collection of line segments. It would of course be of interest to know whether one can always extend an absolutely continuous function on a disk to larger sets.
## 11 An application to operator theory

The motivation for this work is to be able to show that if a Banach space operator \(T\) is an \(AC(\sigma)\) operator, then it is in fact an \(AC(\sigma(T))\) operator. That is, it has an \(AC(\sigma(T))\) functional calculus.

Suppose then that \(T\) is a bounded operator on a Banach space \(X\) with \(\sigma(T)=\sigma_{P}\cup\sigma_{S}\), where \(\sigma_{P}\in PW^{+}\) and \(\sigma_{S}\in\mathrm{SPIC}\). Suppose further that \(T\) has an \(AC(\sigma)\) functional calculus, which implies that \(\sigma(T)\subseteq\sigma\). That is, there is a Banach algebra homomorphism \(\Psi:AC(\sigma)\to B(X)\) which extends the natural polynomial functional calculus. By Theorem 10.1 there is a map \(e:AC(\sigma(T))\to AC(\sigma)\) such that \(e(f)\) is an extension of \(f\) to \(\sigma\). Thus the map \(\Phi:AC(\sigma(T))\to B(X)\) given by \(\Phi(f)=\Psi(e(f))\) is well-defined. It is of course not immediately clear that this map is a functional calculus map for \(T\) since the map \(e\) has not been shown to have any algebraic properties.

Suppose then that \(f,g\in AC(\sigma(T))\). Then \(e(f+g)-(e(f)+e(g))\) is identically zero on \(\sigma(T)\). Theorem 3.1.6 of [8] ensures that the support of the functional calculus map \(\Psi\) is \(\sigma(T)\), which implies that

\[0=\Psi\big{(}e(f+g)-(e(f)+e(g))\big{)}=\Psi(e(f+g))-\Psi(e(f))-\Psi(e(g))\]

or \(\Phi(f+g)=\Phi(f)+\Phi(g)\). A similar proof shows that \(\Phi\) is multiplicative. Since \(\|\Phi(f)\|\leq\|\Psi\|\,\|e(f)\|_{BV(\sigma)}\leq C_{\sigma(T)}\,\|\Psi\|\,\|f\|_{BV(\sigma(T))}\) we now have that \(\Phi\) is a Banach algebra homomorphism. Finally, let \(\lambda(z)=z\) be the identity function on the complex plane. Since \(e(\lambda)-\lambda\) is identically zero on \(\sigma(T)\) we must have that \(\Phi(\lambda)=\Psi(e(\lambda))=\Psi(\lambda)=T\), and so \(\Phi\) is an \(AC(\sigma(T))\) functional calculus for \(T\). We note in particular that this conclusion holds whenever \(\sigma(T)\subset\mathbb{R}\) or \(\sigma(T)\subseteq\mathbb{T}\), which covers the cases of well-bounded and trigonometrically well-bounded operators. The remaining challenge is to be able to remove the above hypotheses on the spectrum of \(T\).

### Acknowledgements

The work of the second author was supported by the Research Training Program of the Department of Education and Training of the Australian Government.

### Rights Retention Statement

This research was produced in whole or part by UNSW Sydney researchers and is subject to the UNSW Intellectual property policy. For the purposes of Open Access, the authors have applied a Creative Commons Attribution CC-BY licence to any Author Accepted Manuscript (AAM) version arising from this submission.
2310.11709
Live Graph Lab: Towards Open, Dynamic and Real Transaction Graphs with NFT
Numerous studies have been conducted to investigate the properties of large-scale temporal graphs. Despite the ubiquity of these graphs in real-world scenarios, it's usually impractical for us to obtain the whole real-time graphs due to privacy concerns and technical limitations. In this paper, we introduce the concept of {\it Live Graph Lab} for temporal graphs, which enables open, dynamic and real transaction graphs from blockchains. Among them, Non-fungible tokens (NFTs) have become one of the most prominent parts of blockchain over the past several years. With more than \$40 billion market capitalization, this decentralized ecosystem produces massive, anonymous and real transaction activities, which naturally forms a complicated transaction network. However, there is limited understanding about the characteristics of this emerging NFT ecosystem from a temporal graph analysis perspective. To mitigate this gap, we instantiate a live graph with NFT transaction network and investigate its dynamics to provide new observations and insights. Specifically, through downloading and parsing the NFT transaction activities, we obtain a temporal graph with more than 4.5 million nodes and 124 million edges. Then, a series of measurements are presented to understand the properties of the NFT ecosystem. Through comparisons with social, citation, and web networks, our analyses give intriguing findings and point out potential directions for future exploration. Finally, we also study machine learning models in this live graph to enrich the current datasets and provide new opportunities for the graph community. The source codes and dataset are available at https://livegraphlab.github.io.
Zhen Zhang, Bingqiao Luo, Shengliang Lu, Bingsheng He
2023-10-18T04:54:22Z
http://arxiv.org/abs/2310.11709v2
# Live Graph Lab: Towards Open, Dynamic and Real Transaction Graphs with NFT

###### Abstract

Numerous studies have been conducted to investigate the properties of large-scale temporal graphs. Despite the ubiquity of these graphs in real-world scenarios, it's usually impractical for us to obtain the whole real-time graphs due to privacy concerns and technical limitations. In this paper, we introduce the concept of _Live Graph Lab_ for temporal graphs, which enables open, dynamic and real transaction graphs from blockchains. Among them, Non-fungible tokens (NFTs) have become one of the most prominent parts of blockchain over the past several years. With more than $40 billion market capitalization, this decentralized ecosystem produces massive, anonymous and real transaction activities, which naturally forms a complicated transaction network. However, there is limited understanding about the characteristics of this emerging NFT ecosystem from a temporal graph analysis perspective. To mitigate this gap, we instantiate a live graph with NFT transaction network and investigate its dynamics to provide new observations and insights. Specifically, through downloading and parsing the NFT transaction activities, we obtain a temporal graph with more than 4.5 million nodes and 124 million edges. Then, a series of measurements are presented to understand the properties of the NFT ecosystem. Through comparisons with social, citation, and web networks, our analyses give intriguing findings and point out potential directions for future exploration. Finally, we also study machine learning models in this live graph to enrich the current datasets and provide new opportunities for the graph community. The source codes and dataset are available at [https://livegraphlab.github.io](https://livegraphlab.github.io).

## 1 Introduction

Temporal graphs provide an accurate representation of real-world systems, including social networks, transaction networks and the Web [5; 38; 39; 41], etc. By investigating temporal graphs, we can gain insights into the temporal dynamics and understand how these systems evolve and function [86; 6]. Notably, a growing number of graph mining algorithms [36; 25; 85] and graph systems [69; 35; 13] have been developed. However, with this continuously growing trend, several severe issues emerge, which limit the further development of the graph community.

In the current literature, studies are usually conducted on a set of outdated and incomplete graphs. The majority of the aforementioned graphs are either not easily available or their graph structures are incomplete since they cannot record all the interactions in the graph. Moreover, even though all the interactions are recorded, they might not be shareable in a public and timely evolving manner, such as social networks in companies like Meta and Tencent. Thus, for meaningful temporal graph analysis and benchmarks, we need open graph datasets that evolve dynamically and are easily accessible in a timely manner.

To bridge this gap, we propose the concept of Live Graph Lab, which provides live graphs according to blockchain transactions. Specifically, we offer a set of tools for _downloading_, _parsing_, _cleaning_, and _analyzing_ blockchain transactions to empower the analyses of transaction graphs. It not only alleviates the researchers' burden of accessing massive raw transaction data, but also brings a considerable number of opportunities to conduct experiments in real-world scenarios for temporal graph studies.
Our introduced live graphs have several unique characteristics, such as _open availability_, _dynamic evolution_ and _real transactions_, due to the inherent characteristics of the decentralized blockchain. Therefore, it is of great importance to investigate the properties of these live graphs to provide new insights for graph algorithms and systems.

Today, as blockchain technology becomes more widespread, the token economy is gradually emerging. Non-fungible tokens (NFTs) have seen tremendous growth with their market capitalization reaching over $40 billion. Notably, one digital work named "Everydays: The First 5000 Days" 1 by the artist _Beeple_ was sold for $69 million, which made NFTs the center of attention. This phenomenon also leads to an increasing number of enthusiasts participating in this emerging concept. At the same time, it generates massive, anonymous and real transaction activities, which naturally form a complex NFT transaction network. Although traditional networks like social networks and citation networks have been extensively studied, they are often outdated due to _their lack of constant updating_. To enrich the current datasets and overcome their limitations, we instantiate a live graph with the NFT transaction network by synchronizing a full Ethereum node, so that it continuously keeps up with the latest Ethereum block and includes all the transaction data. To investigate the characteristics of this live NFT transaction network, we present a temporal graph extracted from a specific time period spanning from 2017 to 2022, which comprises over 4.5 million nodes and 124 million edges. Then, comprehensive analyses are performed, and the results demonstrate that our presented live graph exhibits a variety of characteristics, offering exciting opportunities for the graph community. To summarize, the main contributions and findings are as follows:

Footnote 1: [https://en.wikipedia.org/wiki/Everydays:_the_First_5000_Days](https://en.wikipedia.org/wiki/Everydays:_the_First_5000_Days)

* We introduce the concept of live graph lab, which focuses on open, dynamic and real graphs.
* We instantiate a live graph with the NFT transaction network and provide a systematic analysis, which demonstrates interesting properties such as fast growth and high activity.
* Graph machine learning models are investigated in the live graph, and the experimental results indicate that live graphs pose new challenges and opportunities for the graph community.

## 2 Related Datasets

Graphs have gained significant attention from both academic and industrial communities. A wide range of benchmark datasets have been proposed to facilitate research in the graph community. Among them, SNAP [43] and Network Data Repository [60] provide diverse types of graphs including social networks, web networks, etc. AMiner [70] offers comprehensive citation networks extracted from DBLP, ACM and Microsoft Academic Graph. Chartalist [63] presents a set of blockchain datasets to enable machine learning model development. Although these datasets are publicly available, they are not constantly updated in a nearly real-time manner. Take social networks as an example: the ego-Twitter dataset [45] in the SNAP project was released more than 10 years ago. Thus, the graph properties presented in these benchmarks may no longer hold in the current context. Meanwhile, their graph structures are usually incomplete due to privacy policy constraints or technical limitations. However, these characteristics are important for various downstream applications.
For instance, if the graph is incomplete or its characteristics have changed significantly, the learning outcomes could be ineffective or even misleading in graph learning tasks. To enrich the current datasets and overcome their limitations, we propose the concept of Live Graph Lab, which supports various experiments for temporal graph algorithms. We provide a detailed comparison of these datasets in Table 1. Specifically, the proposed live graph lab has the following properties: (1) it is open and publicly available; (2) it is constantly evolving in a nearly real-time manner; (3) it is complete (i.e., all interactions are fully recorded); (4) it has realistic timestamps. Moreover, previous blockchain datasets mainly focus on fungible token transactions. However, Non-Fungible Tokens (NFTs), which are a vital component of Ethereum, have been overlooked by existing works. Our work fills this gap by delving into this emerging NFT ecosystem.

\begin{table} \begin{tabular}{c|c|c c c c} \hline Categories & Datasets & Open & Timely Evolving & Complete Structure & Timestamp \\ \hline Social Network & ego-Twitter [45] & ✓ & ✗ & ✗ & ✗ \\ Citation Network & DBLP [70] & ✓ & ✗ & ✗ & ✓ \\ The Web & web-Google [44] & ✓ & ✗ & ✗ & ✗ \\ Blockchain & Live Graph Lab & ✓ & ✓ & ✓ & ✓ \\ \hline \end{tabular} \end{table} Table 1: Comparisons among different types of graph datasets.

## 3 Dataset Details

In this paper, we instantiate a live graph with the NFT transaction network in the Ethereum blockchain. We first provide an overview of the blockchain background. Then, we give a comprehensive explanation of the graph construction process. Note that our methodologies are applicable to other blockchains such as Solana and Polygon.

### Background

**Blockchain and Ethereum.** Blockchain, a distributed ledger technology, has attracted continuous attention in recent years. A blockchain is made up of blocks securely linked with cryptography techniques [53], where each block contains information about the previous block (e.g., its cryptographic hash). Ethereum is a decentralized, programmable blockchain, which means users can construct various decentralized applications on the blockchain. Ether (ETH) is the native cryptocurrency of Ethereum, and every transaction on Ethereum incurs a fee paid in ETH.

**Smart Contract and Non-Fungible Token.** A smart contract is an important feature of the Ethereum blockchain [88]: it is a computer program that runs on Ethereum to automatically execute or control relevant events and actions according to its logic. Smart contracts have largely reduced the requirements for trusted intermediaries, as well as fraud losses and arbitration costs. Non-Fungible Token (NFT) is one of the most successful applications on Ethereum. NFTs are tokens that can be used to represent the ownership of any unique asset, such as an image, video or audio clip. Different from fungible items, where one dollar is exchangeable for another dollar, NFTs are not interchangeable with each other, since they all have unique properties and are not divisible.

### Raw Data

The statistics of our dataset are summarized in Table 2. To access the transaction information in Ethereum, Geth, a Golang implementation of the Ethereum client software, is launched to facilitate the synchronization of the ledger on the Ethereum mainnet. When the client synchronizes to the latest block, we extract all the blocks before Aug 1st, 2022 (i.e., from block #0 to block #15,255,104).
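As a quick sanity check on the synchronized node, one can query it directly; the following is a minimal sketch using the web3.py library (the HTTP endpoint is an assumption, and attribute names follow recent web3.py versions):

```python
from datetime import datetime, timezone

from web3 import Web3

# Connect to the locally synchronized Geth node; the endpoint is an assumption.
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

# The chain head should be at or beyond the cutoff block #15,255,104.
print("current head:", w3.eth.block_number)

# The cutoff block's timestamp should be close to the end date in Table 2.
cutoff = w3.eth.get_block(15_255_104)
print("cutoff block mined at:",
      datetime.fromtimestamp(cutoff.timestamp, tz=timezone.utc))
```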
Then, we parse all the transaction data and log data via the toolkit Ethereum ETL2. Thanks to the well-defined standards in NFT communities, it is convenient for us to extract the information we need. Specifically, according to the EIP-721 standard, every NFT smart contract must implement the standard interfaces including _transfer_, _approval_, _ownerOf_ and _balanceOf_, etc. For those smart contracts that do not strictly follow the EIP-721 standard, we remove them and their relevant transactions from our data. For the remaining data, we filter all the _transfer_ events triggered by the smart contracts to extract the NFT transactions. This is because any change of ownership of an NFT will emit a _transfer_ event identified by the event topic Keccak256 hash _0xddf...3ef_. Each NFT _transfer_ event contains four key fields: the Keccak256 hash, the sender's address, the receiver's address, and the transferred NFT token ID. The time when the transaction occurred can be retrieved from the block in which the event was found. Once we have obtained all of this key information, we know when and which NFTs are transferred among different wallet addresses and the prices at which they are sold.

\begin{table} \begin{tabular}{l l} \hline Descriptions & Statistics \\ \hline Start date (mm-dd-yyyy, UTC) & 07-12-2017 13:49 \\ End date (mm-dd-yyyy, UTC) & 08-01-2022 06:50 \\ Number of NFT collections & 97,667 \\ Number of NFT tokens & 77,991,885 \\ Number of account addresses & 4,531,020 \\ Number of transactions & 124,660,813 \\ \hline \end{tabular} \end{table} Table 2: Statistics of the dataset.

By parsing the transfer events, we find that there are 97,667 NFT collections with 77,991,885 NFT tokens, where each collection contains a different number of NFT tokens (e.g., varying from thousands to millions). Meanwhile, 124,660,813 transactions are extracted, in which 4,531,020 users (i.e., we regard each wallet address as a user) participate. It is worth noting that there are over 100,000 NFT collections according to Etherscan3. However, as we have mentioned, some of them are not standard NFT tokens (i.e., they do not strictly follow the EIP-721 standard), and we remove them from our data. Thus, our method extracts almost all the NFT collections.

Footnote 3: [https://etherscan.io/tokens-nft](https://etherscan.io/tokens-nft)

### Graph Construction

We investigate the structure and dynamics of all these transactions by constructing a directed temporal graph \(\mathcal{G}=(\mathcal{N},\mathcal{E},\mathcal{T})\), where \(\mathcal{N}\) and \(\mathcal{E}\) denote the node set and edge set, respectively. That is to say, repeated interactions between the same pair of addresses are counted only once in \(\mathcal{E}\). \(\mathcal{T}\) is a set of timestamps at which the interactions happen. We use \(t(e)\) to denote the timestamp when edge \(e\) is formed, and \(t(u)\) represents when the node \(u\) is added into the graph. Thus, \(a_{t}(u)=t-t(u)\) reflects node \(u\)'s age at time \(t\). For any given time \(t\), graph \(\mathcal{G}_{t}\) consists of all the nodes as well as edges until time \(t\). Note that, since we have accurate timestamps for the arrival of each node and edge, our investigation of graph dynamics is at a much finer granularity compared with the majority of existing studies [5; 41].
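To make the extraction and graph construction concrete, the following is a minimal sketch (not the authors' released tooling) of how EIP-721 _transfer_ events can be pulled from a synchronized node with web3.py and turned into the temporal node, edge and timestamp sets; the endpoint and block range are illustrative assumptions:

```python
from web3 import Web3

# Local synchronized node; the endpoint is an assumption.
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

# Keccak256 hash of the Transfer event signature (the 0xddf...3ef topic above).
TRANSFER_TOPIC = w3.keccak(text="Transfer(address,address,uint256)")

def erc721_transfers(start_block: int, end_block: int):
    """Yield (sender, receiver, token_id, block_number) for ERC-721 transfers."""
    logs = w3.eth.get_logs({
        "fromBlock": start_block,
        "toBlock": end_block,
        "topics": [TRANSFER_TOPIC],
    })
    for log in logs:
        # EIP-721 indexes all three event parameters, so its Transfer logs
        # carry 4 topics; ERC-20 Transfer logs carry only 3 (amount in data).
        if len(log["topics"]) != 4:
            continue
        sender = "0x" + log["topics"][1].hex()[-40:]
        receiver = "0x" + log["topics"][2].hex()[-40:]
        token_id = int.from_bytes(log["topics"][3], "big")
        yield sender, receiver, token_id, log["blockNumber"]

# Temporal graph G = (N, E, T): node set, de-duplicated directed edge set,
# and the timestamps at which the underlying transfers occurred.
nodes, edges, timestamps = set(), set(), []
for u, v, token_id, block_num in erc721_transfers(15_000_000, 15_000_010):
    nodes.update((u, v))
    edges.add((u, v))
    timestamps.append(w3.eth.get_block(block_num).timestamp)
```

In practice one would batch the log queries and cache block timestamps, since fetching one block per log, as in this sketch, is slow at the scale of 124 million transactions.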
## 4 Observations and Analyses

To have a better understanding of the instantiated live graph, we start the analyses from the following perspectives: (1) Structural properties, which investigate how its nodes and edges change as time goes on; (2) Dynamic behaviors, which are graph-specific properties such as how hub nodes and bi-directional edges are formed. More comprehensive analyses are given in Appendices C and D.

### Structural Properties

To fully understand how active the NFT transaction network is, we measure the evolution of nodes and edges over time. Specifically, we set the time granularity to one year. Since our data start in July 2017 and end in August 2022, the statistics for 2017 and 2022 only cover half a year's data. Figures 1(a) and 1(b) show the annual growth of nodes and edges on a log scale. As observed, there is a rapid increase in the number of nodes and edges, which become 28K and 165K times larger within the six years. This means the NFT transaction network is highly active and growing at a fast speed. To further figure out what leads to this growth, we analyze the newly added nodes in each pair of consecutive years. New nodes can join the network in different ways (i.e., by minting, buying or being airdropped NFTs). Among them, minting is the most common and easiest way to obtain NFTs, since only a small gas fee needs to be paid. Figure 1(a) presents the trend of newly added mint nodes and non-mint nodes. We notice that the number of mint nodes added into the network is approximately of the same magnitude as the total number of nodes at that time. Therefore, minting NFTs dominates the growth of nodes in the network at present, while the number of non-mint nodes is also increasing at a high velocity. These two phenomena result in the expansion of nodes.

Figure 1: Evolution of nodes and edges.

Figure 1(b) also shows the trend of added bidirectional edges and self-edges. We can see that the bidirectional edges account for only a small volume. Unlike social networks, where the edges are highly mutual, the NFT market is an anonymous ecosystem, and the probability of mutually interacting with each other is relatively low. These bidirectional edges might arise in scenarios of swap or wash trading activities [73]. We also notice that there exist a few self-loops in the network, which means these addresses transact with themselves. This abnormal phenomenon might be caused by a mistaken input of the receiving address, or the transactions might have been executed for testing. Furthermore, to characterize how active these addresses are, we compute the percentage of edges in which new nodes have links from or to old nodes, and the percentage of edges which only contain links between new nodes. Here, we refer to new nodes as the nodes created in the current year, and old nodes are the nodes that already existed in previous years. Figure 1(c) demonstrates that more than 50% of newly added edges are constituted by the connections between new nodes and old nodes, except for the year of 2021. This indicates that most of the addresses remain active as the NFT ecosystem becomes mature. It is also very interesting to note that the newly created addresses were highly active in 2021, and at the same time, the whole NFT market capitalization reached over $40 billion in that year. For completeness, we also show the degree distribution of the NFT transaction network in Figure 1(d) on a log-log scale.
As expected, it follows a power-law distribution. We observe that some nodes have a degree of 1, which indicates that they did not conduct any other transactions after the NFTs were minted or transferred. There also exist several high-degree hubs that are extremely active and interact with thousands of different addresses. Among them, a special Null address (i.e., _0x000...000_)4 has the highest degree, since every NFT mint activity creates a link from the Null address to the mint address.

Footnote 4: 0x0000000000000000000000000000000000

### Dynamic Behaviors

**Evolution of Hub Nodes.** The node degree shows a heavily long-tailed distribution (i.e., Figure 1(d)), and the assortativity rises year by year (i.e., Figure 4(a) in Appendix C). Both of these measurements are highly relevant to the evolution of hub nodes in the transaction network. Figures 2(a) and 2(b) illustrate the correlation between a node's degree and its number of new connections in two consecutive years. We use blue to present the node degree distribution in the previous year, and red to indicate the number of connections from new nodes in the current year. As expected, we find that if a node had a high degree in the previous year, it would have a high probability of gaining more new node connections in the current year. Moreover, this observation is also validated by measuring the Pearson correlation coefficients. We evaluate the correlation coefficients between node degrees and numbers of new node connections (in years 2018-2019 and 2019-2020), which are 0.2266 and 0.5476, respectively. The results reveal that these two factors are positively correlated. Thus, we can draw the conclusion that the NFT transaction network follows the preferential attachment growth model [54; 32], i.e., "the rich get richer".

Next, we analyze the distribution of the hub nodes. Specifically, we note that half of the top-100 largest hub nodes are smart contract addresses. This is understandable, since smart contracts play an important role in providing various services (e.g., NFT fractionalization and staking) to other users in the network.

Figure 2: Correlation between hub nodes, new connections and mutual edges.

Thus, they tend to have high degrees once they are frequently used by others. The transactions with such smart contracts result in the increase of assortativity. Furthermore, there also exist some hub nodes that are not smart contracts. For instance, we observe that address _0x8a0...700_5 is the fifth largest hub node with a degree of 69,458. After checking its transactions manually, we find that all its transactions are about HyperNFT tokens, and it holds more than 180,000 such tokens. Since this is strange, we check its transactions in detail and discover that this address creates a transaction every few minutes, where the IDs of the transacted NFT tokens are in continuous, increasing order. According to these clues, it is almost certain that this address is a bot, which makes frequent transactions to appear prosperous and attract more users to join. Hub nodes are responsible for the spreading of information in the network, thus it is necessary to investigate their properties.

Footnote 5: 0x8a01fa5a77311bcf29e293d8ecb48707cfdb700
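A minimal sketch of this correlation measurement follows (the function name is hypothetical, and `edges` is assumed to be an iterable of `(sender, receiver, year)` tuples derived from the transfer records):

```python
from collections import Counter

from scipy.stats import pearsonr

def degree_vs_new_connections(edges, prev_year, curr_year):
    """Correlate node degree up to prev_year with the number of
    connections gained from nodes that first appear in curr_year."""
    # Degree of every node over all edges up to and including prev_year.
    deg_prev = Counter()
    for u, v, year in edges:
        if year <= prev_year:
            deg_prev[u] += 1
            deg_prev[v] += 1
    old_nodes = set(deg_prev)

    # For each old node, count its connections coming from new nodes.
    new_links = Counter()
    for u, v, year in edges:
        if year == curr_year:
            if u in old_nodes and v not in old_nodes:
                new_links[u] += 1
            elif v in old_nodes and u not in old_nodes:
                new_links[v] += 1

    nodes = sorted(old_nodes)
    xs = [deg_prev[n] for n in nodes]
    ys = [new_links[n] for n in nodes]
    return pearsonr(xs, ys)

# Example usage: r, p = degree_vs_new_connections(edges, 2018, 2019)
# The paper reports r = 0.2266 for 2018-2019 and r = 0.5476 for 2019-2020.
```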
This is quite different from the property in the social networks, where the reciprocity is very high and the value is above 0.7 [39]. One possible reason is that people are prone to mutually following and interacting with their friends in the social networks, since it does not cost anything. However, it becomes different in the NFT transaction network, because we do not know who we are actually interacting with and every action costs in the blockchain. To uncover the reciprocity's characteristics, we focus on the mutual interaction edges. Specifically, we are interested in if two nodes are mutually interacting with each other, i.e., reciprocal edges \(\langle u,v\rangle_{t}\) and \(\langle v,u\rangle_{\hat{t}}\), what their maximum time interval distribution \(|t-\hat{t}|\) looks like, i.e., the maximum delay in days of the reciprocity. Figure 1(c) shows that mutual interaction edges forming within one day account for the highest proportion, and the maximum time interval can reach to more than 1,000 days. According to Figure 1(d), we can observe that about 15% of the reciprocal edges are formed almost simultaneously among those bi-directional edges, and more than 70% of them are formed within 90 days. From these observations, we can conclude that the NFT transaction network can not be regarded as an undirected network due to its low reciprocity value, and the simultaneously formed bi-directional edges can be a good indicator to judge the abnormal activities. For instance, it may be caused by the token transfers among the accounts controlled by a same person. To further investigate this phenomenon, we want to know how many address pairs are suspicious in these bi-directional links. We first conduct some statistics for all NFT transaction addresses, which involves the following two factors: the number of transactions (including from transactions and to transactions) and the number of distinctly interacted addresses. Then, if one address has very limited transactions and it only interacts with a specific address, it is highly possible that these two addresses are of same person's wallets. This is because this address is actually not active, but it responses promptly in the bi-directional links. Similarly, another situation is although one address has a lot of transactions, it interacts with one specific address frequently and the specific address accounts for a large ratio of the total transactions. They may also belong to same person's wallets. Based on these two observations, we define the following two rules to identify the suspicious address pairs: 1) one of them makes less than 5 transactions or 2) the transaction ratios between them is larger than 0.8. If they satisfy at least one of the rules, we then believe these two addresses are likely to be same person's wallets. According to these rules, we discover that 33.72% of those simultaneously formed bi-directional links have high probability of being suspicious. This observation provides a new perspective for detecting anomaly transactions. ## 5 Downstream Applications The above analyses provide a general overview of this live graph's properties. Our results demonstrate that the transaction network is highly active and evolving at a fast speed. These properties put forward new challenges for various downstream tasks. Next, we investigate three widely studied tasks. ### Temporal Link Prediction Link prediction lies at the core of graph analysis and mining. 
## 5 Downstream Applications

The above analyses provide a general overview of this live graph's properties. Our results demonstrate that the transaction network is highly active and evolves at a fast pace. These properties pose new challenges for various downstream tasks. Next, we investigate three widely studied tasks.

### Temporal Link Prediction

Link prediction lies at the core of graph analysis and mining. Its goal is to predict whether a pair of nodes will form a link, and it has been used extensively in diverse real-world scenarios such as recommendation [79, 7] and knowledge graph completion [55], to name a few. In this section, we focus on temporal link prediction, i.e., we try to forecast future interactions based on historical transactions. Specifically, we investigate the snapshot-based representation for temporal link prediction via graph neural networks [36, 25]. Since each edge \(e\) has a timestamp \(\tau_{e}\), we utilize a sequence of graphs to depict the temporal transaction network \(G=\{\mathcal{G}_{t}\}_{t=1}^{T}\), where each snapshot is a static graph \(\mathcal{G}_{t}=(\mathcal{N}_{t},\mathcal{E}_{t},\mathcal{T}_{t})\) with \(\mathcal{E}_{t}=\{e\in\mathcal{E}|\tau_{e}\leq t\}\). By modeling the sequential information across graph snapshots, we can leverage the information available up to time \(t\) to forecast possible edges at time \(t+1\). Although various approaches have been proposed for temporal link prediction, it is unclear whether they can achieve satisfactory performance on this highly active and large-scale temporal transaction network.

**Graph Neural Networks.** GNNs have achieved great success in numerous learning tasks on graphs, including node classification [36, 25], graph classification [84, 83] and link prediction [82, 75]. The objective of a GNN is to acquire effective node representations by iteratively aggregating messages from each node's local neighborhood. Specifically, the \(l\)-th layer of a GNN model can be formulated as follows (a minimal code sketch is given after the model list below):

\[\mathbf{h}_{v}^{l}=\textsc{Agg}^{l}\left(\left\{\textsc{Msg}^{l}(\mathbf{h}_{u}^{l-1},\mathbf{h}_{v}^{l-1})|u\in\mathcal{N}(v)\right\},\mathbf{h}_{v}^{l-1}\right)\]

where \(\mathbf{h}_{v}^{l}\) is the representation of node \(v\) at layer \(l\) and \(\mathcal{N}(v)\) denotes node \(v\)'s neighborhood. \(\textsc{Msg}(\cdot)\) is the message-passing function, which propagates information from a node's neighbors, and \(\textsc{Agg}(\cdot)\) is the aggregation function, which updates a node's representation using its neighborhood representations. An \(L\)-layer GNN thus aggregates information from the \(L\)-hop neighborhood. Different GNN architectures use different message-passing and aggregation functions. The original GNN model is designed for static graphs, so it cannot capture the temporal information underlying the graph. To incorporate the evolving property, existing works utilize RNNs [28, 14] to aggregate information from different graph snapshots. As a result, dynamic GNN models have many more parameters than vanilla GNNs and are more difficult to scale to large graphs due to the constraints of backpropagation through time.

**Models.** We compare a number of recent state-of-the-art dynamic GNN models. (1) Dyngraph2vec [21] captures the temporal transitions with a deep architecture comprising dense and recurrent layers. (2) TGCN [87] integrates GCN with GRU to learn complex spatial and temporal dependencies. (3) EvolveGCN [56] uses an RNN to adapt the graph convolutional network parameters. (4) GCRN [62] generalizes TGCN with either GRU or LSTM; moreover, it uses ChebNet [15] to encode the graph structure, and separate GNNs are applied to compute the different gates of the RNNs. (5) DynGEM [22] utilizes a deep autoencoder to perform graph embedding, with local and global constraints employed to keep node representations stable over time. (6) Roland [81] is an efficient learning framework designed for temporal GNNs, which updates its parameters through a combination of incremental training and meta-learning strategies [19, 20].
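As a concrete instance of the message-passing scheme formalized above, the sketch below implements one mean-aggregation GNN layer in plain NumPy. It is illustrative only: the two weight matrices and the mean aggregator are one simple choice among the many Msg/Agg combinations the surveyed architectures use.

```python
import numpy as np

def gnn_layer(H, neighbors, W_msg, W_self):
    """One layer: h_v = ReLU(W_self @ h_v + mean of W_msg @ h_u over neighbors u)."""
    H_new = np.zeros((H.shape[0], W_self.shape[0]))
    for v in range(H.shape[0]):
        nbrs = neighbors[v]  # list of neighbor indices of node v
        if nbrs:
            # Msg: linear map of each neighbor state; Agg: mean over the neighborhood.
            msg = np.mean([W_msg @ H[u] for u in nbrs], axis=0)
        else:
            msg = np.zeros(W_msg.shape[0])
        H_new[v] = np.maximum(0.0, W_self @ H[v] + msg)  # ReLU update
    return H_new
```

Here `H` is the `(num_nodes, d_in)` matrix of node states and `neighbors` a list of adjacency lists; stacking \(L\) such layers aggregates the \(L\)-hop neighborhood, as noted above.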
**Settings.** For the dataset, we remove all transactions associated with the Null address, which results in 3.13 million nodes and 23.13 million edges in the directed graph. We set the node feature to 1 for all nodes and use the area under the curve (AUC) as well as the mean reciprocal rank (MRR) as evaluation metrics. For every node \(u\) connected by a positive edge \((u,v)\) at time \(t\), we randomly select 100 negative edges originating from node \(u\). Subsequently, we determine the rank of the score of edge \((u,v)\) among all the sampled negative edges. AUC characterizes the probability of ranking a positive node more highly than the negative nodes, while MRR denotes the average of the reciprocal ranks computed across all nodes. Following the settings in Roland [81], we utilize two different train-test splits: fixed-split and live-update. _The fixed-split setting assesses the models using all the edges from the last \(20\%\) of graph snapshots._ Although it is widely used in existing works, the fixed split might produce misleading results based on edges merely from the last few graph snapshots, since the graph structure constantly evolves in real-world scenarios. _To eliminate this bias, we also use the live-update split, which evaluates performance over all the available graph snapshots._ In this setting, \(10\%\) of the edges are used to determine the early-stopping condition.

**Results.** We present the link prediction results in Table 3. Specifically, we use three different time granularities (i.e., days, weeks and months) to construct the graph snapshot sequences, which results in 1,657, 253 and 60 graph snapshots, respectively. All models are evaluated under the two settings, i.e., fixed-split and live-update. We notice that as the time granularity becomes coarser, the models' performance drops. This is because the NFT transaction network is highly active, with an average transaction volume of about 68 million per day in 2021; it is therefore more difficult to predict all possible links over a long time period. Meanwhile, we observe very high AUC scores at the daily time granularity, which indicates that the models are better at distinguishing positive from negative edges in such short-time scenarios. Also, performance in the live-update setting is slightly lower than in the fixed-split setting. This is caused by pattern shifts across graph snapshots, which implies that the patterns indeed evolve over time. Therefore, we conclude that it is necessary to model the NFT transaction network at a finer time granularity; at the same time, this results in longer graph snapshot sequences, which makes modeling the temporal information more challenging.
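To make the ranked-evaluation protocol described in the settings above concrete, here is a minimal sketch of the per-edge ranking step with 100 sampled negatives. The callable `score(u, v)` is a hypothetical stand-in for the trained model's edge scorer, and `all_nodes` is assumed to be a list of candidate node IDs.

```python
import random

def rank_metrics(pos_edges, all_nodes, score, num_neg=100, seed=0):
    """Compute MRR and a sampled AUC over a set of positive edges."""
    rng = random.Random(seed)
    mrr, auc, n = 0.0, 0.0, 0
    for u, v in pos_edges:
        negs = rng.sample(all_nodes, num_neg)  # 100 negatives originating from u
        pos = score(u, v)
        neg_scores = [score(u, w) for w in negs if w != v]
        # Rank of the positive edge among the sampled negative edges.
        rank = 1 + sum(s >= pos for s in neg_scores)
        mrr += 1.0 / rank
        # Fraction of negatives ranked below the positive edge.
        auc += sum(s < pos for s in neg_scores) / len(neg_scores)
        n += 1
    return mrr / n, auc / n
```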
### Temporal Node Classification

Node classification plays a crucial role in understanding the attributes, behaviors, and relationships of nodes in temporal graphs, offering valuable insights in diverse applications. By predicting the label of nodes at a particular time \(t\), we gain a temporal perspective that deepens our understanding of how nodes evolve over time and of their dynamic characteristics. Specifically, we focus on categorizing nodes according to their transaction behaviors. They can be generally classified into five distinct classes: daily traders, weekly traders, monthly traders, yearly traders and the remaining traders. We first filter out nodes that have only one transaction. Then, each node's maximum transaction interval is calculated. If the maximum interval is within one day, we call it a daily trader; likewise, if the maximum interval is within one week and larger than one day, we call it a weekly trader, and so forth. This process results in a large-scale directed graph with about 1.80 million nodes and 21.83 million edges. Following the setting of EvolveGCN [56], we use the first 80% of graph snapshots as the training set, the following 10% as the validation set and the last 10% as the test set. As in the temporal link prediction task, we use the same set of GNN models and add a multi-class classification layer. Furthermore, node degrees are encoded as node features, and the number of transactions between two nodes together with their latest interaction timestamp are transformed into edge features. Two commonly used metrics (i.e., accuracy and recall) are employed to assess the models' performance.
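The labeling rule above can be sketched as follows. This is one plausible reading of "maximum transaction interval" (the largest gap between consecutive transactions of an address), with timestamps assumed to be given in seconds; the thresholds are the calendar intervals named in the text.

```python
def trader_class(timestamps):
    """Label an address by its maximum gap between consecutive transactions."""
    if len(timestamps) < 2:
        return None  # addresses with a single transaction are filtered out
    ts = sorted(timestamps)
    max_gap = max(b - a for a, b in zip(ts, ts[1:]))
    day, week, month, year = 86400, 7 * 86400, 30 * 86400, 365 * 86400
    if max_gap <= day:
        return "daily"
    if max_gap <= week:
        return "weekly"
    if max_gap <= month:
        return "monthly"
    if max_gap <= year:
        return "yearly"
    return "remaining"
```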
**Results.** The node classification results are illustrated in Table 4, and the key observations are as follows. Three granularities, i.e., days, weeks and months, are utilized to generate the temporal graph snapshot sequences. In accordance with the temporal link prediction task, we can draw similar conclusions. Roland consistently outperforms the other baselines, and its variant Roland-MA (moving-average) stands out due to its parameter-free design and remarkable capability to capture evolving patterns. On the other hand, TGCN fails to achieve satisfactory performance because of the gradient vanishing problem in longer sequences. GCRN-GRU and Roland-GRU address this issue by utilizing separate GNNs or by leveraging both previous and current snapshots.

\begin{table}
\begin{tabular}{l|c c|c c|c c}
\hline \hline
\multirow{2}{*}{Models} & \multicolumn{6}{c}{Fixed Split} \\
\cline{2-7} & \multicolumn{2}{c|}{Snapshot Days} & \multicolumn{2}{c|}{Snapshot Weeks} & \multicolumn{2}{c}{Snapshot Months} \\
\cline{2-7} & AUC & MRR & AUC & MRR & AUC & MRR \\
\hline
Dyngraph2vec & OOM & OOM & OOM & OOM & OOM & OOM \\
TGCN & 53.64\(\pm\)1.60 & 14.24\(\pm\)2.60 & 61.55\(\pm\)6.24 & 36.16\(\pm\)6.60 & 74.87\(\pm\)4.99 & 45.97\(\pm\)4.46 \\
EvolveGCN & OOM & OOM & OOM & OOM & OOM & OOM \\
GCRN-GRU & 95.86\(\pm\)0.03 & **71.48\(\pm\)0.49** & 93.14\(\pm\)0.18 & **68.44\(\pm\)0.05** & **86.74\(\pm\)0.80** & 58.23\(\pm\)1.23 \\
GCRN-LSTM & 94.12\(\pm\)0.92 & 68.51\(\pm\)5.26 & 92.90\(\pm\)0.43 & 6.76\(\pm\)0.31 & 86.44\(\pm\)0.92 & 58.71\(\pm\)0.84 \\
DynGEM & OOM & OOM & OOM & OOM & OOM & OOM \\
Roland-MA & **95.93\(\pm\)0.15** & 66.34\(\pm\)0.23 & **93.53\(\pm\)0.13** & 65.06\(\pm\)0.43 & 86.23\(\pm\)0.85 & 54.93\(\pm\)1.42 \\
Roland-MLP & 65.46\(\pm\)6.10 & 43.76\(\pm\)5.94 & 73.34\(\pm\)7.87 & 42.04\(\pm\)16.8 & 85.88\(\pm\)2.22 & 57.58\(\pm\)4.12 \\
Roland-GRU & 73.33\(\pm\)11.5 & 49.45\(\pm\)8.91 & 91.48\(\pm\)1.67 & 66.28\(\pm\)1.86 & 86.59\(\pm\)0.88 & **59.37\(\pm\)1.54** \\
\hline \hline
\multicolumn{7}{c}{Live Update} \\
\hline
Dyngraph2vec & OOM & OOM & OOM & OOM & OOM & OOM \\
TGCN & 58.22\(\pm\)5.76 & 17.77\(\pm\)10.7 & 59.94\(\pm\)8.44 & 22.67\(\pm\)19.3 & 75.01\(\pm\)2.66 & 43.16\(\pm\)0.75 \\
EvolveGCN & OOM & OOM & OOM & OOM & OOM & OOM \\
GCRN-GRU & 80.95\(\pm\)1.92 & 39.13\(\pm\)0.39 & 85.34\(\pm\)0.26 & 46.08\(\pm\)1.43 & 81.40\(\pm\)0.34 & 43.68\(\pm\)0.39 \\
GCRN-LSTM & 79.12\(\pm\)2.14 & 37.83\(\pm\)1.16 & 84.73\(\pm\)0.34 & 42.89\(\pm\)3.34 & 81.24\(\pm\)0.80 & 41.47\(\pm\)3.00 \\
DynGEM & OOM & OOM & OOM & OOM & OOM & OOM \\
Roland-MA & **90.47\(\pm\)0.66** & **49.79\(\pm\)0.95** & **88.74\(\pm\)0.37** & **50.77\(\pm\)1.13** & 83.93\(\pm\)0.95 & 47.11\(\pm\)1.16 \\
Roland-MLP & 56.32\(\pm\)8.06 & 22.16\(\pm\)15.4 & 70.88\(\pm\)10.5 & 40.91\(\pm\)10.8 & 79.38\(\pm\)4.47 & 46.51\(\pm\)2.61 \\
Roland-GRU & 60.04\(\pm\)5.08 & 27.29\(\pm\)6.26 & 75.93\(\pm\)19.2 & 48.66\(\pm\)13.7 & **84.36\(\pm\)0.46** & **50.75\(\pm\)0.83** \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Temporal link prediction performance in the fixed-split and live-update settings. We repeat experiments with three different seeds to report the mean as well as the standard deviation of AUC and MRR. We also present the results under different time-snapshot granularities, i.e., days, weeks and months. OOM means out-of-memory.

Furthermore, the scalability of several baselines is limited, and they experience out-of-memory (OOM) issues. Among the three time granularities, the month granularity is generally the most difficult scenario. This is because, with a month granularity, the time period between snapshots is longer than with the day and week granularities. As a result, it becomes more challenging to accurately predict the temporal patterns and changes in node behaviors. The longer time gap between snapshots increases the complexity of modeling the evolving dynamics of the network and introduces more uncertainty, leading to decreased classification accuracy and recall. Therefore, we conclude that, given the highly active NFT transaction network, it is essential for the model to use a finer time granularity.
Nonetheless, this choice results in longer graph snapshot sequences, presenting additional difficulties in accurately capturing the temporal information.

\begin{table}
\begin{tabular}{l|c c|c c|c c}
\hline
\multirow{2}{*}{Models} & \multicolumn{2}{c|}{Snapshot Days} & \multicolumn{2}{c|}{Snapshot Weeks} & \multicolumn{2}{c}{Snapshot Months} \\
\cline{2-7} & Accuracy & Recall & Accuracy & Recall & Accuracy & Recall \\
\hline
Dyngraph2vec & OOM & OOM & OOM & OOM & OOM & OOM \\
TGCN & 18.48\(\pm\)2.66 & 31.15\(\pm\)3.16 & 43.97\(\pm\)4.57 & 32.99\(\pm\)2.34 & 47.45\(\pm\)3.49 & **32.53\(\pm\)2.95** \\
EvolveGCN & OOM & OOM & OOM & OOM & OOM & OOM \\
GCRN-GRU & 41.06\(\pm\)3.30 & 34.75\(\pm\)2.93 & 46.78\(\pm\)0.72 & **34.79\(\pm\)0.42** & 47.42\(\pm\)2.16 & 28.97\(\pm\)3.22 \\
GCRN-LSTM & 46.14\(\pm\)3.29 & **35.19\(\pm\)3.61** & 48.04\(\pm\)2.37 & 31.58\(\pm\)1.75 & 49.32\(\pm\)2.01 & 35.39\(\pm\)1.49 \\
DynGEM & OOM & OOM & OOM & OOM & OOM & OOM \\
Roland-MA & **51.02\(\pm\)2.01** & 28.77\(\pm\)3.23 & **50.39\(\pm\)0.45** & 26.33\(\pm\)3.95 & 47.96\(\pm\)2.69 & 22.33\(\pm\)3.07 \\
Roland-MLP & 48.46\(\pm\)3.18 & 30.62\(\pm\)3.94 & 47.59\(\pm\)3.39 & 31.67\(\pm\)3.62 & 45.74\(\pm\)4.75 & 35.04\(\pm\)3.47 \\
Roland-GRU & 49.88\(\pm\)2.15 & 33.38\(\pm\)3.91 & 46.63\(\pm\)3.07 & 33.85\(\pm\)1.48 & **50.04\(\pm\)0.37** & 32.17\(\pm\)2.72 \\
\hline
\end{tabular}
\end{table}
Table 4: Node classification performance in the fixed-split setting. We repeat experiments with three different seeds to report the mean as well as the standard deviation of Accuracy and Recall.

### Continuous Subgraph Matching

Continuous Subgraph Matching (CSM) plays a vital role in numerous real-time graph applications. The goal of CSM is to identify and report the occurrences of a given query graph \(Q\) within a temporal graph stream \(G\). It can be utilized in various scenarios. For example, when setting the query graph \(Q\) to wash trading patterns in e-commerce, CSM can identify anomalous transaction patterns in the graph via exact matching [58]. Similarly, representing rumor patterns in a social network as query graphs can help to detect and prevent the spread of rumors [74]. In this subsection, we set the query graphs to the most frequent wash trading patterns in NFT transactions. According to the definition in [73; 47], wash trading is an activity where the seller is on both sides of the trade. The goal of wash trading is to influence the price or to create the illusion that an item is very popular, which produces artificial activity in the marketplace. Wash trading has been prohibited by many countries. However, due to the anonymous nature of the blockchain, wash trading has become a severe issue in the NFT market6, accounting for a large share of the overall NFT transaction volume. For instance, the 9,998-th CryptoPunks NFT was traded between two wallets for 124,457 ETH (about $534 million USD), in which the buyer paid the seller, and the seller then transferred the money back to the buyer. Thus, it is important to detect wash trading transactions to uncover anomalous behaviors and reduce risks in the market. We resort to CSM to identify wash trading transactions. The five most common wash trading patterns are shown in Figure 3. As can be seen, all of them contain at least one cycle.

Figure 3: Five most common wash trading patterns.

Here is a toy example that exemplifies Pattern 1: Address A (_0x744...282_) initiated the sale of Azuki token 1,215 to Address B (_0xd39...263_).
Subsequently, Address B sold the token to Address C (_0xeaa...c0f_), and eventually, Address C sold it back to Address A. These transactions happened within half an hour, and interestingly, the token's price surged from 8.98 ETH to 11.99 ETH. Such activity raises suspicions of wash trading. We will use these patterns as query graphs in the experiments to simulate the wash trading detection procedure. More details are given in Appendix E.

**Results.** Table 5 shows the key results of continuous subgraph matching. As we can see, RapidFlow [68] is the most efficient algorithm, demonstrating roughly 18-724x speedups compared with the remaining frameworks. This is because its query reduction technique can significantly expedite the query procedure through an optimized matching order. We also observe that, except for RapidFlow, no algorithm dominates the others across the different query patterns. Among them, SymBi [50] and Graphflow [33] perform worst on pattern 5, needing 10x more time to produce the results. SJ-Tree [13] and IEDyn [30; 31] cannot output results within the time limit. The reason is that SJ-Tree encounters memory exhaustion in the majority of cases, and IEDyn's index update incurs too much overhead because of the maintenance required for constant-delay enumeration. Furthermore, although the number of matched subgraphs increases from pattern 1 to pattern 5, the query time does not differ much across almost all frameworks. We conclude that these four frameworks are applicable to large-scale graphs with dense structures. They can effectively serve as a bridge to generate ground truth for continuous subgraph matching, providing valuable support for the training of deep graph learning models.

## 6 Conclusion

In this paper, we propose the concept of Live Graph Lab, which provides blockchain-based temporal graphs that are openly accessible, fully recorded, and dynamically evolving over time. Specifically, we instantiate a live graph using the NFT transaction network and investigate its dynamic properties from a temporal graph analysis perspective. Our findings reveal both similar and distinct characteristics compared to traditional networks such as social networks, citation networks, and the Web. Through comprehensive experiments on the live graph, we uncover numerous insightful discoveries. The proposed live graph overcomes the limitations of existing datasets and enhances the diversity of graph research. We believe that the Live Graph Lab will become an indispensable resource for the graph community and open up new opportunities. There are also various other potential use cases, such as identifying whether an account holds a specific type of token or predicting the range of tokens that an account possesses.

## 7 Broader Impact and Limitations

The Live Graph Lab can facilitate researchers from the graph community by offering comprehensive blockchain-based graphs in an easily accessible manner. Insights from this research can be directly applied to improve the design, security, and user experience of NFT platforms, leading to sustainable growth of the ecosystem. Meanwhile, as the live graph constantly records all NFT transactions on the Ethereum blockchain, the possibility of encountering malicious activities, such as bot transactions or wash trading, could become a concern. It might also cause potential negative societal impacts.
Since the dataset consists of the complete transactions associated with wallet addresses, it could enable the tracking of each wallet's behaviors, habits, and financial activities. This kind of tracking could be exploited for targeted advertising, manipulation, or surveillance. The dataset could also make it possible for malicious actors to analyze transaction patterns and manipulate the NFT market, which could lead to unfair practices, price manipulation, and market instability.

\begin{table}
\begin{tabular}{c c|c c c c}
\hline \hline
\multicolumn{2}{c|}{Query Patterns} & \multicolumn{4}{c}{Query Time (ms)} \\
\hline
Queries & Counts & SymBi & Graphflow & TurboFlux & RapidFlow \\
\hline
_p1_ & 19,338 & 1.22\(\times\)10\({}^{4}\) & 1.11\(\times\)10\({}^{4}\) & 1.11\(\times\)10\({}^{4}\) & **5.93\(\times\)10\({}^{2}\)** \\
_p2_ & 2,243,232 & 1.19\(\times\)10\({}^{4}\) & 1.44\(\times\)10\({}^{4}\) & 1.51\(\times\)10\({}^{4}\) & **6.13\(\times\)10\({}^{2}\)** \\
_p3_ & 3,012,738 & 1.11\(\times\)10\({}^{4}\) & 1.13\(\times\)10\({}^{4}\) & 1.08\(\times\)10\({}^{4}\) & **5.83\(\times\)10\({}^{2}\)** \\
_p4_ & 9,472,960 & 1.24\(\times\)10\({}^{4}\) & 1.43\(\times\)10\({}^{4}\) & 6.06\(\times\)10\({}^{4}\) & **6.45\(\times\)10\({}^{2}\)** \\
_p5_ & 3,154,355,868 & 1.34\(\times\)10\({}^{5}\) & 4.24\(\times\)10\({}^{5}\) & 7.72\(\times\)10\({}^{4}\) & **5.84\(\times\)10\({}^{2}\)** \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Comparison of different frameworks on query time.

## Acknowledgments and Disclosure of Funding

This research is supported by the National Research Foundation, Singapore under its Industry Alignment Fund - Pre-positioning (IAF-PP) Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore. The authors would like to thank the reviewers for their helpful comments. The authors would also like to thank Zhengtao Jiang for the development of the website.
2303.11631
Squeezing of the quantum electromagnetic vacuum
It is commonly agreed that the electromagnetic vacuum is not empty but filled with virtual photons. This leads to effects like Lamb shift and spontaneous emission. Here we argue that if the vacuum has virtual photons it might mean that it is very weakly squeezed and therefore the electromagnetic field is not in its ground state (vacuum) but in an excited dark state. We suggest a stringent test relying on measuring various properties of the electromagnetic field to exclude this yet-untested squeezing hypothesis. This could be done by measuring the number of photons as a function of frequency and comparing it with the spectrum of electric (or magnetic) field fluctuations. If such squeezing exists, it might shed new light on cosmological phase transitions and give complementary information to the observed microwave background radiation as well as be a possible candidate for dark energy.
Karol Gietka
2023-03-21T06:57:20Z
http://arxiv.org/abs/2303.11631v1
# Squeezing of the quantum electromagnetic vacuum

###### Abstract

It is commonly agreed that the electromagnetic vacuum is not empty but filled with virtual photons. This leads to effects like Lamb shift and spontaneous emission. Here we argue that if the vacuum has virtual photons it might mean that it is very weakly squeezed and therefore the electromagnetic field is not in its ground state (vacuum) but in an excited dark state. We suggest a stringent test relying on measuring various properties of the electromagnetic field to exclude this yet-untested squeezing hypothesis. This could be done by measuring the number of photons as a function of frequency and comparing it with the spectrum of electric (or magnetic) field fluctuations. If such squeezing exists, it might shed new light on cosmological phase transitions and give complementary information to the observed microwave background radiation as well as be a possible candidate for dark energy.

## I Introduction

Squeezing [1; 2] is predicted to play a key role in various quantum technologies, in particular, in quantum-enhanced measurements [3]. It relies on redistributing quantum uncertainties between two non-commuting observables. The primary example is the squeezing of light [4], where the uncertainties are redistributed between the strengths of the electric and magnetic fields, relative to a coherent state in which the uncertainties are equal. The first squeezed light was observed by Slusher _et al._ in 1985 [5]. Since that time, a number of experiments have reported generating squeezed light [6; 7; 8] using various platforms and characterized by larger and larger squeezing [9]. The current record for the direct measurement of squeezing is the measurement of a 15 dB squeezed vacuum state of light [10]. In this manuscript, however, we suggest that instead of measuring ever more squeezed states of light, one should focus on measuring the tiniest amount of squeezing of the electromagnetic field.

The quantum electromagnetic vacuum is the lowest energy state of the quantized electromagnetic field [11]

\[\hat{H}=\sum_{\omega}\omega\,\hat{a}_{\omega}^{\dagger}\hat{a}_{\omega}, \tag{1}\]

where the sum goes over all the frequencies \(\omega\), directions, and polarizations. It is commonly agreed that the electromagnetic vacuum is not really empty but filled with virtual particles which give rise to vacuum fluctuations [12]. These fluctuations in turn affect the energy levels of atoms, leading to effects such as spontaneous emission [13], the Casimir force [14; 15], and the Lamb shift [16; 17; 18]. The intuitive picture provided to explain the virtual particles is based on the Heisenberg uncertainty principle

\[\Delta E\Delta t\geq\frac{\hbar}{2}, \tag{2}\]

where \(\Delta E\) and \(\Delta t\) are the uncertainties of energy and time, respectively. It is sometimes argued that because the lifetime of a virtual particle is very short, it can, in a certain sense, _borrow_ energy from the vacuum and pop into existence [19]. Such an explanation is, however, _ad hoc_ and is often criticized [20]. In particular, there is no time operator in quantum mechanics; therefore, energy and time do not satisfy a canonical commutation relation [21]. An alternative explanation of the virtual particles and the fluctuations of the electromagnetic field that does not rely on the energy-time uncertainty relation is squeezing of the electromagnetic vacuum.
We suggest in this manuscript that the state of the electromagnetic field might not be in its ground state but in an excited squeezed dark state [22], which simply contains virtual particles by definition. In order to understand why this might be the case, we consider a cavity quantum electrodynamics setup [23] with only one mode of the electromagnetic field coupled to a two-level atom. We show that the vacuum state can be squeezed by exploiting light-matter interactions. If these interactions are weak and subsequently turned off (non-adiabatically), the state of the system can remain in an excited squeezed dark state. We suggest ways in which squeezing of the electromagnetic vacuum can be measured in order to exclude this yet-untested hypothesis. Finally, we attempt to give a plausible explanation of why the state of the electromagnetic vacuum might be squeezed in the first place.

## II Squeezing of a single-mode electromagnetic vacuum

Let us consider an atom interacting with a single mode of a cavity in which it resides. The Hamiltonian of such a system is given by the paradigmatic quantum Rabi model

\[\hat{H}=\omega\hat{a}^{\dagger}\hat{a}+\frac{\Omega}{2}\hat{\sigma}_{z}+\frac{g}{2}\left(\hat{a}+\hat{a}^{\dagger}\right)\hat{\sigma}_{x}, \tag{3}\]

where \(\hat{a}\) describes the electromagnetic field with frequency \(\omega\), the Pauli matrices describe the two-level atom with frequency \(\Omega\), and \(g\) is the coupling between the atom and the electromagnetic field mode. If the frequency of the atom \(\Omega\) is larger than the frequency of the field \(\omega\) and the coupling strength is not too strong, \(1-g^{2}/g_{c}^{2}>(\omega/\Omega)^{2/3}\)[24], where \(g_{c}=\sqrt{\omega\Omega}\) is the critical coupling strength, the Schrieffer-Wolff transformation [25] can be used to eliminate the two-level atom from the Hamiltonian, leading to an effective description [26; 27] (note that squeezing can also be obtained for \(\omega\geq\Omega\))

\[\hat{H}_{\rm eff}\approx\omega\hat{a}^{\dagger}\hat{a}-\frac{g^{2}}{4\Omega}\left(\hat{a}+\hat{a}^{\dagger}\right)^{2}, \tag{4}\]

which can be rewritten using the abstract position and momentum operators \(\hat{x}=(\hat{a}+\hat{a}^{\dagger})/\sqrt{2\omega}\) and \(\hat{p}=i\sqrt{\omega}\,(\hat{a}^{\dagger}-\hat{a})/\sqrt{2}\) in the form of a harmonic oscillator

\[\hat{H}_{\rm eff}=\frac{\hat{p}^{2}}{2}+\frac{\omega^{2}}{2}\left(1-\frac{g^{2}}{g_{c}^{2}}\right)\hat{x}^{2}. \tag{5}\]

The above Hamiltonian describes an abstract harmonic oscillator with unit mass and frequency \(\omega\sqrt{1-g^{2}/g_{c}^{2}}\). Once \(g>0\), the abstract harmonic oscillator ground state is squeezed with respect to the ground state of the physical harmonic oscillator \((\hat{a}^{\dagger}\hat{a})\). In terms of the physical harmonic oscillator operators, the ground-state wavefunction (also known as a polariton [28; 29; 30]) can be described as

\[|\psi_{0}\rangle=\exp\left\{\frac{1}{2}\left(\xi^{*}\hat{a}^{2}-\xi\hat{a}^{\dagger 2}\right)\right\}|0\rangle, \tag{6}\]

where \(\xi=\frac{1}{4}\ln\left\{1-g^{2}/g_{c}^{2}\right\}\) is the squeezing parameter and \(|0\rangle\) is the ground state of the physical harmonic oscillator. The number of photons in such a state can be easily calculated to be

\[\langle\hat{n}\rangle=\sinh^{2}\xi\geq 0. \tag{7}\]

Although the number of photons is greater than \(0\) for a non-zero \(\xi\), these photons do not correspond to real photons (radiation) but to virtual photons [31; 32].
This happens because, for a non-zero coupling, the ground state of the electromagnetic field is squeezed, but the ground state cannot lose energy (otherwise it would constitute a _perpetuum mobile_). If the coupling is suddenly turned off, these virtual photons can become real photons because the squeezed vacuum is no longer the ground state of the system. However, if the squeezing is very weak, the detection of such squeezing might be quite difficult [33]. In order to understand this, let us look at the squeezed single-mode vacuum state from Eq. (6) expressed in the Fock basis

\[|\psi_{0}\rangle=\frac{1}{\sqrt{\cosh|\xi|}}\sum_{n=0}^{\infty}\left(-\frac{\xi}{|\xi|}\tanh|\xi|\right)^{n}\frac{\sqrt{(2n)!}}{2^{n}n!}|2n\rangle. \tag{8}\]

If the squeezing is very weak, the above equation can be approximated as

\[|\psi_{0}\rangle\approx\frac{1}{\sqrt{\cosh|\xi|}}|0\rangle-\frac{\xi}{|\xi|}\sqrt{\frac{\cosh|\xi|-1}{\cosh|\xi|}}|2\rangle. \tag{9}\]

Figure 1: The rotating squeezed single-mode electromagnetic vacuum as a function of time. The top row shows the Husimi function of the squeezed electromagnetic vacuum at various times. The bottom row shows the time-dependent fluctuations of the quadratures \(\hat{X}\) (related to the electric field) and \(\hat{P}\) (related to the magnetic field). Although \(\langle\hat{n}\rangle>0\), these excitations might be almost impossible to measure if the squeezing is extremely weak (virtual photons). For the sake of illustration, the amount of squeezing is exaggerated.

The quantum electromagnetic vacuum might consist of infinitely many squeezed single-mode vacua, each rotating with its own frequency at every point in space. For weak squeezing \(|\xi|\ll 1\), it might be virtually impossible to detect the photons; thus, measuring no photons for a given time only pushes the limit for \(\xi\) towards lower values (not measuring photons does not project the system onto a state with no photons). The photons would have to be measured in total darkness, so essentially the dark counts would be measured in the experiment. Also, measuring these photons by itself does not necessarily indicate squeezing; there might simply be photons traveling around and hitting the detectors. Therefore, measuring only the number of photons in order to characterize the weak squeezing would not lead to any meaningful conclusions. However, there is one more effect that such a state leads to. Since a squeezed vacuum is not an eigenstate of the electromagnetic field Hamiltonian, it rotates in phase space, leading to time-dependent fluctuations of the electric and magnetic fields. This is illustrated in Fig. 1, where we show the Husimi function of the squeezed vacuum field at various times and the corresponding variances of the quadrature operators.
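As a numerical check of Eqs. (7)-(9), and of the rotating variances shown in Fig. 1 (cf. Eq. (10) in the next section), the short sketch below evaluates the virtual-photon number, the truncated Fock amplitudes, and the time-dependent quadrature variance for a weak, real squeezing parameter. It is a plain-Python illustration of the formulas above, not the simulation code used for the figures (which relied on QuantumOptics.jl).

```python
import numpy as np

xi = 0.01  # weak, real squeezing parameter, |xi| << 1 (sign chosen for illustration)

# Eq. (7): mean number of virtual photons in the squeezed vacuum.
n_virtual = np.sinh(xi) ** 2

# Eq. (9): magnitudes of the |0> and |2> amplitudes for weak squeezing.
c0 = 1.0 / np.sqrt(np.cosh(xi))
c2 = np.sqrt((np.cosh(xi) - 1.0) / np.cosh(xi))
print(n_virtual, c0**2 + c2**2)  # the two-term truncation is normalized by construction

# Quadrature variance of the freely rotating squeezed vacuum, cf. Eq. (10):
t = np.linspace(0.0, 2 * np.pi, 200)  # one period, taking omega = 1
var_X = 0.5 * np.exp(2 * xi) * np.cos(t) ** 2 + 0.5 * np.exp(-2 * xi) * np.sin(t) ** 2
# var_X oscillates between exp(2*xi)/2 and exp(-2*xi)/2 at twice the mode frequency,
# while its time average remains essentially indistinguishable from the bare vacuum
# value of 1/2 -- which is why correlation measurements are needed.
```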
## III Squeezing of the quantum electromagnetic vacuum

Once we understand how a rotating single-mode squeezed vacuum leads to time-dependent fluctuations of the electric and magnetic fields, we can proceed to propose a measurement capable of quantifying the amount of squeezing for each mode of the electromagnetic field. In particular, it should show whether the electromagnetic vacuum is squeezed at all. Measuring all the modes at once might be too complicated to analyze. Ideally, one should measure how each mode of the electric (or magnetic) field fluctuates in time. Then the amplitude of the fluctuations will be related to the squeezing parameter \(\xi(\omega)\) of that mode (we choose the electric field because it is stronger than the magnetic field)

\[\begin{split}\Delta^{2}\hat{E}(\omega)\sim\Delta^{2}\hat{X}(\omega)=&\frac{1}{2}\exp(2\xi)\cos^{2}(\omega t)\\ &+\frac{1}{2}\exp(-2\xi)\sin^{2}(\omega t).\end{split} \tag{10}\]

However, a direct measurement of the electric field fluctuations [34; 35] would not reveal squeezing, as the fluctuations would average out over time; it would only indicate the amplitude of the possible squeezing. Although the uncertainty might be larger than predicted by the Heisenberg uncertainty principle for the electric and magnetic fields [36], it would be very hard to show this because the fluctuations would have to be measured at a single point or in a precisely known volume. Only an electric field correlation measurement at various space-time coordinates would reveal that the electric field fluctuations behave as in Eq. (10) (for a time-independent ground state there would be no correlations). Such a measurement was reported in Ref. [37], where the authors measured the fluctuations in the terahertz frequency range. This was achieved by using electro-optic detection [38] in a nonlinear crystal placed in a cryogenic environment. In particular, in Ref. [37] it was shown experimentally that the electro-optic field correlation measurement on a vacuum state is non-zero. Furthermore, in a recent experiment [39], it was shown that the quantum vacuum field fluctuations are correlated outside the light cone--a signal that a uniformly squeezed vacuum would generate. These two results alone, however, would not yet confirm that the quantum electromagnetic vacuum is squeezed. Once the amplitude and correlations are measured for a large part of the spectrum, these measurements should subsequently be compared with the results of photon detection. If all these results show no correlations, this would exclude the possibility of squeezing of the quantum electromagnetic vacuum. In other words, measuring single photons (dark counts) from the alleged squeezed vacuum for a sufficiently long time will eventually yield enough data to create a histogram of photons as a function of frequency. If the electric field fluctuations and their amplitude as a function of frequency resemble the created histogram, then squeezing of the quantum electromagnetic vacuum seems to be the simplest possible explanation for all the observed phenomena. In principle, one might come up with even more tests based on measuring the fluctuations of the magnetic field. However, since quantum mechanically the electric and magnetic fields are conjugate variables satisfying canonical commutation relations, the previously described tests seem sufficient to test the squeezing hypothesis. The possible measurement outcomes are schematically illustrated in Fig. 2.

## IV Plausible explanation of vacuum squeezing

One may wonder why the electromagnetic vacuum might be squeezed in the first place; therefore, we now give a plausible explanation. According to cosmology, a long time ago, approximately until around 370,000 years after the Big Bang [40], matter was strongly interacting with the electromagnetic field.
Due to Thomson scattering [41; 42] by free electrons, the mean distance that a photon could travel before interacting with an electron was very short, similar to the mean distance that a photon can travel in a cavity before interacting with an atom if the light-matter coupling \(g\) is strong enough. This suggests that, in analogy to the squeezing of the electromagnetic field in the cavity, the field ground state of the coupled light-matter system in the early Universe might have been squeezed as well. The subsequent recombination due to cooling caused by the expanding Universe reduced the number of free electrons (they started forming hydrogen atoms with protons), and thus _released_ the photons. In other words, the cooling of the Universe might have effectively non-adiabatically decreased the coupling between light and matter. The real photons started to roam around the Universe and are observable today through the cosmic microwave background radiation [43]. However, if the electromagnetic field was indeed squeezed, this squeezing might have survived until today, giving rise to a number of physical phenomena. If this is the case, such squeezing could contain the missing information about the early stage of the Universe that the microwave background radiation does not carry, because these photons were not there at that time. Moreover, if the state of the electromagnetic field is excited, this excess energy with respect to the true vacuum might be a candidate for dark energy [44; 45], because the electromagnetic vacuum should be squeezed uniformly across the entire space.

In order to understand why weak squeezing of the electromagnetic vacuum might lead to measurable effects, let us consider a phenomenon from special relativity. If a small mass (of the order of a gram) is moving with a small velocity with respect to the speed of light, its change of mass will be negligible and barely measurable. If a big mass (a massive star) starts to move with a small velocity with respect to the speed of light, its absolute change of mass will be large and capable of affecting other phenomena, in particular, the local gravitational field. For a single-mode electromagnetic vacuum, very weak squeezing will likewise be negligible and barely measurable. However, as the electromagnetic field consists of infinitely many modes at every point in space (and two polarizations in each of the three dimensions), the effect of very weak squeezing might affect other phenomena, similarly to the example with a massive star (which is essentially composed of many small masses).

## V Conclusions

In conclusion, we have presented an alternative explanation for the fluctuations of the electromagnetic field and the virtual particles. This explanation assumes that the electromagnetic vacuum is not in its ground state but in a weakly excited squeezed dark state, which contains virtual particles by definition. We proposed an experiment able to confirm or refute whether the electromagnetic vacuum is squeezed at all, thus testing the squeezing hypothesis, and we provided a plausible explanation of why the electromagnetic vacuum might be squeezed in the first place. To this end, we used a simple example known from cavity quantum electrodynamics and established an analogy with the early stage of the Universe. Finally, we suggested that such weak squeezing of the electromagnetic vacuum might be a candidate for dark energy.

## VI Acknowledgements

Simulations were performed using the open-source QuantumOptics.jl [46] framework in Julia.
This work was supported by the Lise-Meitner Fellowship M3304-N of the Austrian Science Fund (FWF).
2308.13354
On the Impact of Language Selection for Training and Evaluating Programming Language Models
The recent advancements in Transformer-based Language Models have demonstrated significant potential in enhancing the multilingual capabilities of these models. The remarkable progress made in this domain not only applies to natural language tasks but also extends to the domain of programming languages. Despite the ability of these models to learn from multiple languages, evaluations typically focus on particular combinations of the same languages. In this study, we evaluate the similarity of programming languages by analyzing their representations using a CodeBERT-based model. Our experiments reveal that token representation in languages such as C++, Python, and Java exhibit proximity to one another, whereas the same tokens in languages such as Mathematica and R display significant dissimilarity. Our findings suggest that this phenomenon can potentially result in performance challenges when dealing with diverse languages. Thus, we recommend using our similarity measure to select a diverse set of programming languages when training and evaluating future models.
Jonathan Katzy, Maliheh Izadi, Arie van Deursen
2023-08-25T12:57:59Z
http://arxiv.org/abs/2308.13354v1
# On the Impact of Language Selection for Training and Evaluating Programming Language Models ###### Abstract The recent advancements in Transformer-based Language Models have demonstrated significant potential in enhancing the multilingual capabilities of these models. The remarkable progress made in this domain not only applies to natural language tasks but also extends to the domain of programming languages. Despite the ability of these models to learn from multiple languages, evaluations typically focus on particular combinations of the same languages. In this study, we evaluate the similarity of programming languages by analyzing their representations using a CodeBERT-based model. Our experiments reveal that token representation in languages such as C++, Python, and Java exhibit proximity to one another, whereas the same tokens in languages such as Mathematica and R display significant dissimilarity. Our findings suggest that this phenomenon can potentially result in performance challenges when dealing with diverse languages. Thus, we recommend using our similarity measure to select a diverse set of programming languages when training and evaluating future models. language model, code representation, programming language, multilingual model, Transformer, pretrained model, transfer learning ## I Introduction Language Models (LMs) have shown great capabilities in both Natural Language Processing (NLP) and source code domains [1, 2, 3, 4, 5, 6, 7, 8]. Initially, studies focused on the performance of LMs on a single language, whereas currently, the emphasis is on achieving optimal performance across various languages simultaneously [9, 10, 11, 12, 13, 14, 15]. The current direction of research suggests that pretraining LMs on multiple languages benefits per-language performance in many cases [16]. However, there is a lack of clear rationale or justification for the specific selection of languages used in these studies. Furthermore, research has shown that models perform better on certain programming languages and worse on others [10, 11, 13]. In the realm of programming, source code tokens are predominantly written in English. These tokens, especially user-defined ones like function and variable names, contain critical information about the code's functionality. Thus, it is reasonable to anticipate that machine learning models should learn from and perform equivalently across different languages. However, significant performance discrepancies persist among LMs working on different languages. This suggests that these performance variations might originate from differing token representations learned by the models. In this investigation, we aim to answer the following question: **How similar are the representations of various programming languages learned by Programming Language Models (PLMs)?** To answer this question, we first identify a set of prevalent languages typically employed in the training and evaluation of PLMs, as well as a number of frequently disregarded ones in the same context. This enables us to include a diverse set of programming languages in our study. We then leverage a prominent PLM, the CodeBERT model [4], to obtain representations of code tokens. Next, we calculate language similarity by examining the set of tokens shared across all evaluated languages. This approach allows us to establish a quantitative basis for language selection for PLM research. 
Our findings reveal a persistent difference in representations across both the multilingual pretrained setting and the non-pretrained monolingual setting. To validate our results, we compare the identified similarities against the reported performance of other models. We find that models evaluated on Scala and Ruby perform worse than when they are evaluated on Java, Python, C, C++, Go, and JavaScript [11]. This difference in performance overlaps with the relative similarities we have found between these languages.

The implications of this work are twofold. We have shown that there is an intrinsic difference in the representations of programming languages learned by CodeBERT, implying that there is an inherent difference in how PLMs use inputs from different languages. This finding can have a large impact on areas such as multilingual models and transfer learning. Theoretically, we have proposed a method for validating language selection in the context of multilingual model outcomes, as well as a foundational understanding to guide language choice for multilingual investigations. From a practical standpoint, we provide a list of languages with akin representations, indicating similar performance potential. This enables researchers to make informed choices when selecting programming languages aligned with their specific experimental design, especially when aiming to leverage transfer learning techniques across languages. Finally, we plan to extend this investigation to include more languages to gain a more comprehensive mapping of available languages. We also want to extend this investigation to other architectures and training goals. In this work, we used an encoder-based architecture trained on the masked language modeling task [4]; we want to extend the work to other tasks such as code infilling [10, 14, 17] and architectures such as the T5 models [15]. Lastly, we release our replication package [18].

## II Approach

In our approach, we first train a set of BERT-style models according to the methodology of CodeBERTScore [19]. These models are used to calculate the representations of tokens. Then we calculate the similarity of languages using the tokens that are present in all languages.

**Language Selection and the Dataset.** The training of our models utilizes "The Stack" [20], a dataset comprising \(358\) programming languages, with code availability governed by permissive licenses. We aim to include a diverse set of languages to encompass as many different scenarios as possible. There are four main criteria for selection. First, we look at which languages are commonly used in Machine Learning for Software Engineering research. Second, we look for a variety of grammars and programming paradigms. Third, we consider the expected use case of the languages. Finally, we include both high- and low-resource languages. From these selection criteria, we curated a list of \(20\) languages. We include the full list of selected languages, together with the reason for inclusion, in Table I. Note that we do not include some mainstream languages, such as C#, Matlab, and Rust, as their niche within the selection criteria has been sufficiently filled. Conversely, we include some languages that are not often used in the training of such models, i.e., Emacs Lisp (ELisp), Lisp, WebAssembly, COBOL, and Fortran, because their grammar is very dissimilar to popular languages. This makes them good subjects for analyzing potential differences in representations.
The dataset that we use for training our models and evaluating the language similarities consists of the first \(100{,}000\) files from "The Stack" [20] for each language. Note that for low-resource languages where data is scarce (fewer than \(100{,}000\) files), we used all available files in The Stack dataset. This information is included in Table I. When training the CodeBERT models, we use the first \(90\%\) of the files as training data, and the last \(10\%\) as a test set. We calculate the final similarities from representations of samples in the test set. Note that when selecting common tokens across languages, we exclude comments from consideration. This ensures that we only compare tokens integral to the source code. However, while training the models and inferring the representations of tokens, we do include comments in the input, as they can provide valuable information for the model.

**Language Representation.** Across the individual languages, the vocabulary sizes range from a minimum of \(5\)K to a maximum of \(47\)K tokens, with an average of \(27\)K. To represent a language, we identify tokens that are present in _all_ languages (excluding comments). This yields \(\mathbf{N}=2{,}718\) common tokens, for each of which we generate a vector representation. We select common tokens by tokenizing all files and counting the presence of each token in a language. We only use tokens that are present in all languages when making a comparison, because using a token that is not present in a particular language would not allow us to calculate a similarity for that token. We transform each token into a numerical vector that captures its semantic meaning or contextual information. For every token, we have a set of samples, \(\mathbf{M}\). Samples correspond to the same token but are obtained from different occurrences within a given language. This enables us to generate the token's representation within a language as a set of encodings, denoted as \(T\). Finally, we represent each language \(\mathcal{L}\) as a set of token sets \(T\), as shown in Equation 1, where \(l\) is the maximum left context and \(r\) is the maximum right context.

\[\begin{split} T&=\{CodeBERT(t_{n-l},...,t_{n}^{m},...,t_{n+r})|m\in\mathbf{M}\}\\ \mathcal{L}&=\{T_{N}^{\mathcal{L}}|N\in 0...\mathbf{N}\} \end{split} \tag{1}\]

**Language Similarity.** We use a similarity score to quantify the degree of similarity between two languages, serving as a measure of their relatedness or resemblance. To that end, we employ the cosine similarity between the embeddings of each token in the respective languages. Next, we define a similarity function for two sets of vectors. This allows us to assess the similarity between the vector representations of tokens in different languages. We order a vector pair such that the vectors with the highest cosine similarity are matched up, which gives the highest similarity score possible for the chosen tokens. Once we have the definition of the similarity between two sets of vectors, we extend this to the similarity between two languages, as shown in Equation 2. We use the mean to aggregate the similarities of the sets of vectors and arrive at a single value that gives the similarity between two languages.
\[\begin{split} sim(t_{x}^{a},T^{b})&=max\{cosim(t_{x}^{a},t_{y}^{b})|t_{y}^{b}\in T^{b}\}\\ sim(T^{a},T^{b})&=\sum_{x=0}^{X}\frac{sim(t_{x}^{a},T^{b})}{X}\\ sim(\mathcal{L}^{a},\mathcal{L}^{b})&=\sum_{n=0}^{\mathbf{N}}\frac{sim(T_{n}^{a},T_{n}^{b})}{\mathbf{N}}\end{split} \tag{2}\]

The ordering of a vector pair is essential to the collected results. First, a form of ranking is necessary, as it allows for the reproduction of the results, since the cosine similarity is only defined for two vectors. Furthermore, using the maximum similarity as the ordering criterion gives us an upper bound on the similarities of languages. This means there is no combination of tokens that would give a higher similarity score, so all results shown are the best-case scenario for similarity. Additionally, it minimizes the effect of potential synonyms within the data: assuming there are two distinct representations for a given token, this ranking compares the most similar representations between languages.

**Model Training.** All the models we train for the experiments have the same architecture and training setup, i.e., the CodeBERT base architecture.1 We train the models in two settings. In the first setting, we reinitialize all the weights from scratch for each language. In the second setting, we use the provided pretrained model as a starting point and finetune it on the target languages. The pretrained version is trained on Python, Java, JavaScript, PHP, Ruby, and Go. The models were trained using the CodeBERT training setup [4], for 100,000 steps.

Footnote 1: [https://huggingface.co/microsoft/codebert-base-mlm](https://huggingface.co/microsoft/codebert-base-mlm)

**Experiments.** To evaluate the similarity of the representations learned by the CodeBERT model, we conduct two experimental setups. In the first experiment, we employ a distance metric to calculate the pairwise distance between all languages. This analysis enables us to identify the languages that exhibit the greatest similarity in terms of how they represent the same tokens. Next, we examine the similarity of a language to itself. This verifies that the representations are consistent within a language. We use the same metric as mentioned earlier but exclude each token from being compared to itself in Equation 2. Computing the similarity of a token's representations within one language, and comparing it to the similarity of the same token's representations across languages, serves a dual purpose. Firstly, it investigates the consistency of token representations computed by CodeBERT for identical tokens; maintaining this internal consistency within a single language is crucial when evaluating its correlations with other languages. Secondly, it enables us to distinguish whether a subset of highly similar tokens is responsible for the entirety of the linguistic correlations, or whether these similarities across languages can be attributed to a more distributed pattern of resemblance among numerous tokens. Finally, we conduct all mentioned experiments in both a multilingual pretrained setting and a non-pretrained (monolingual) setting to ensure that the differences in representations are consistent.
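A minimal sketch of the similarity computation in Equation 2 is given below. It assumes each language is already represented as a mapping from a shared token to a matrix of its contextual CodeBERT encodings (one row per sampled occurrence); preparing these inputs is a separate step not shown here.

```python
import numpy as np

def set_similarity(Ta, Tb):
    """sim(T^a, T^b): mean over rows of T^a of the best cosine match in T^b."""
    A = Ta / np.linalg.norm(Ta, axis=1, keepdims=True)
    B = Tb / np.linalg.norm(Tb, axis=1, keepdims=True)
    cos = A @ B.T                  # pairwise cosine similarities
    return cos.max(axis=1).mean()  # match each vector to its closest counterpart

def language_similarity(La, Lb):
    """sim(L^a, L^b): average the set similarity over all shared tokens."""
    tokens = La.keys() & Lb.keys()
    return np.mean([set_similarity(La[t], Lb[t]) for t in tokens])
```

Note that, as in Equation 2, this score is directional: matching each vector in \(T^{a}\) to its best counterpart in \(T^{b}\) yields an upper bound on the attainable similarity.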
\begin{table}
\begin{tabular}{l|l|r|r}
\hline \hline
**Language** & **Inclusion criteria** & **Files** & **Total Tokens** \\
\hline
Assembly & Unique syntax with a limited vocabulary & 100,000 & 364,776,405 \\
C & Widely used general-purpose programming language & 100,000 & 326,871,237 \\
COBOL & Language often present in legacy systems, with a very unique syntax & 2,978 & 106,613,233 \\
C++ & Widely used general-purpose programming language, close to Java and C & 100,000 & 86,090,173 \\
Cuda & Domain-specific application of C++ & 58,355 & 283,624,967 \\
Emacs Lisp & Domain-specific application of Lisp & 54,768 & 188,661,262 \\
Fortran & Scientific computing language, with similar syntax to Julia and Ruby & 100,000 & 607,478,891 \\
Go & Domain-specific language with elements from C, C++, Python, and Ruby & 100,000 & 232,054,204 \\
HTML & Domain-specific language, with unique syntax & 100,000 & 7,232,963,945 \\
Java & Widely used general-purpose programming language & 100,000 & 183,040,204 \\
JavaScript & Widely used domain-specific programming language & 100,000 & 325,109,387 \\
Julia & New emerging scientific computing language & 100,000 & 242,836,338 \\
Kotlin & Mixture of Java and JS elements but less verbose & 100,000 & 111,578,961 \\
Lisp & General-purpose list-based programming language & 100,000 & 832,184,089 \\
Mathematica & Mathematical computing language with unique features & 26,895 & 10,350,100,885 \\
Python & General-purpose programming language, with semantic whitespace & 100,000 & 237,441,388 \\
R & Scientific computing language & 39,194 & 154,180,798 \\
Ruby & General-purpose language with syntax similar to Python and Julia & 100,000 & 93,200,451 \\
Scala & JVM-based language with syntactic elements from JavaScript and C++ & 100,000 & 141,672,916 \\
WebAssembly & Domain-specific emerging list-based language & 5,359 & 59,809,452 \\
\hline \hline
\end{tabular}
\end{table}
TABLE I: Selected languages and inclusion criteria

## III Results

Initially, we analyze the outcomes obtained from the distance metric. Figure 1(a) presents the pairwise similarity of all languages using the non-pretrained models, while Figure 1(b) shows the same metric for the pretrained models. In both figures, the languages are sorted by average similarity to all other languages. A darker shade of blue indicates that the languages are more similar.

### _Cross-lingual Similarity Results_

Upon examining the non-pretrained representations of common languages such as C++, Python, Go, Java, and JavaScript, we find that these languages exhibit the closest proximity to one another in terms of their representations. Notably, this similarity is observed without any transfer learning from other languages. When considering derivative languages such as Cuda and Scala, we notice that their representations remain relatively close, but a slight increase in dissimilarity becomes apparent. When examining the most dissimilar languages, we find that both R and Mathematica exhibit significant differences compared to all other languages. This observation is intriguing, since R and Mathematica are primarily utilized for mathematical and statistical purposes, which distinguishes them from the more commonly used programming languages. Upon comparison with other languages, we observe that list-based languages, namely WebAssembly, Lisp, and ELisp, exhibit distinct dissimilarities from other languages. However, they do not demonstrate a significantly higher degree of similarity among themselves compared to other languages; this is a characteristic previously observed only in derivative languages like C++ and Cuda. Additionally, we observe that both COBOL and Assembly (languages with unique syntax) also stand significantly apart from all other languages.

For pretrained models, we detect the same trends as previously described. Although the languages, in general, exhibit greater overall similarity, as evident from the change in the scale of Figure 1(b), there remains a discernible difference in the representation of tokens. It is worth noting that in the non-pretrained setting, the difference between the highest and lowest similarity score is \(0.19\), while in the pretrained setting, this difference is \(0.23\). This indicates that although the similarities are closer in the pretrained setting, the differences in similarities remain comparable. Note that due to the pretraining on languages such as Python, Java, JavaScript, PHP, Ruby, and Go, we observe that these languages become more similar to each other after fine-tuning. However, COBOL, Assembly, ELisp, Lisp, Mathematica, and WebAssembly still maintain their distinctiveness and remain distant from all other languages. Another noteworthy observation is that Go, as a result of the pretraining, exhibits increased proximity to a greater number of languages compared to the non-pretrained setting. This indicates that the pretraining process has influenced the language representation, making Go more similar to other languages in the fine-tuned model.

### _Self-Similarity Results_

Next, we examine the self-similarity of tokens within each language. The self-similarity of non-pretrained representations can be seen in Figure 2(a), while the self-similarity of pretrained representations is shown in Figure 2(b). Upon analyzing the results, it is evident that when working with a pretrained model, the representation of languages becomes more consistent. This observation is supported by two key indicators: the increase in similarity scores and the decrease in variance. Both of these trends reflect the improved consistency and reduced variability in language representations when utilizing a pretrained model. Our analysis reveals marginally greater token similarity within individual languages compared to inter-language similarities. Notably, Mathematica shows the highest self-similarity in both the pretrained and non-pretrained settings. It is also the least similar to all other languages in the non-pretrained setting, and the second least in the pretrained setting. These observations emphasize a consistent divergence between Mathematica's representations and those of other languages. When examining the representations of R, Mathematica, and ELisp, we find that they exhibit internally consistent patterns, similar to other languages, in both the monolingual and pretrained settings. This confirms the notion that these languages' representations have been learned to be distinct from the languages more commonly used in large language models. Additionally, we observe that both COBOL and WebAssembly exhibit inconsistency in their representations. We attribute this inconsistency to a potential lack of sufficient training data for these languages, which can result in the models underfitting their representations and failing to capture their inherent linguistic patterns effectively. The underfitting of these two languages provides further insight into inter-language similarities and reinforces the need for meticulous language selection. Reviewing the non-pretrained model outcomes, Mathematica exhibits greater dissimilarity to other languages than both WebAssembly and COBOL, with R and ELisp displaying more dissimilarity than COBOL. In the pretrained setting, no language surpasses COBOL in dissimilarity between languages, yet ELisp, R, Assembly, and Mathematica show greater differences compared to WebAssembly.
This suggests that certain languages produce representations so different from those of other languages that they surpass even not-yet-converged representations in dissimilarity.

## IV Discussion

### _Implications and Future Work_

The implications of our findings lie in the wider field of PLMs. We compare the similarities we found between languages to the multilingual performance of a set of PLMs evaluated by Xu et al. [11] in their study. We see that the trends seen in the similarities are mirrored in the performance of the models. Ruby and Scala, the most dissimilar of the languages evaluated by Xu et al. [11], consistently performed worst, while C++, C, Java, Python, and JavaScript had closer and better performances.

Fig. 1: Pairwise similarity between the representations of all languages

Furthermore, we note that the languages used in the evaluation of many models are the same languages which we have found to be the most similar to one another. This can have grave implications for research in this field. First, when looking at the cross-lingual performance of PLMs, the choice of language will affect the reported performance. Our results help to select representative languages in future studies. Second, when investigating the contributions of the language-neutral and language-specific elements of the representations learned by PLMs, it will be beneficial to choose a set of languages that are most dissimilar. This should show the greatest difference in the language-specific representations and make the language-neutral elements more evident.

We have shown that there is an inherent difference in the representations of programming languages for an encoder model trained on the masked language modeling task. In the future, we aim to extend the investigation to other models such as StarCoder [10] or InCoder [14], which use a generative training setup, allowing us to work at a higher granularity than the token level, while also including architectures other than encoder-only models. Finally, an extension to T5 [15] would allow us to evaluate tasks focused more on code understanding, while maintaining the code generation task also present in StarCoder [10] and InCoder [14].

### _Threats to Validity_

Concerning the architecture of the models used, prevalent PLMs generally fall into different categories. Recent research has indicated the significance of incorporating future context, particularly in models that employ span-masking techniques [14, 15, 21]. In our study, we focus on single-token representations, which restricts models that mask spans to operate as BERT-style masked LMs. Concerning the size of the model, using large models is prohibitively expensive, as we need to train two models for every language; the current training procedure for \(20\) languages took \(300\) hours of computing time. This also limits the number of languages analyzed. Lastly, the selection of the similarity metric potentially affects our findings as well. We employed cosine similarity, a widely accepted metric for token representations [19, 22]; however, other choices could be investigated as well.

## V Related Work

NLP researchers have extensively investigated the multilingual and knowledge-transfer performance of encoder models such as Multilingual-BERT [23]. As for similarity measurement, the CodeBERTScore model [19] has demonstrated the potential of using a language model for accuracy assessment and code translation, proposing a new accuracy metric to address limitations in the existing BLEU and ROUGE metrics.
The authors fine-tune a CodeBERT model [4] to represent code, enabling the use of the representations as an accuracy metric for code-to-code tasks. Additionally, CodeBERTScore aligns better with human annotators' ratings of code quality, and high-scoring snippets are more likely to be programmatically correct.

## VI Conclusion

In this study, we showed that there is a consistent difference in the representations of the same tokens learned in different languages for BERT-style models. The difference existed in both the monolingual and multilingual (finetuned) settings. We identified several languages that show a high degree of similarity, namely C, C++, Java, and Python. These languages also correspond to the languages commonly used when evaluating code PLMs [11]. Furthermore, we identified the languages that learn a consistently different representation for the same tokens, namely Mathematica and R. This raises inquiries into the performance of PLMs on dissimilar languages, and it calls into question how much these dissimilar languages could benefit from transfer learning. In the future, we will investigate the effects of these differing representations on downstream tasks, as well as investigate the representations found by different model architectures.

Fig. 2: Self-similarity of languages
2308.03147
An Updated Reference Frame for the Galactic Inner Parsec
Infrared observations of stellar orbits about Sgr A* probe the mass distribution in the inner parsec of the Galaxy and provide definitive evidence for the existence of a massive black hole. However, the infrared astrometry is relative and is tied to the radio emission from Sgr A* using stellar SiO masers that coincide with infrared-bright stars. To support and improve this two-step astrometry, we present new astrometric observations of 15 stellar SiO masers within 2 pc of Sgr A*. Combined with legacy observations spanning 25.8 years, we re-analyze the relative offsets of these masers from Sgr A* and measure positions and proper motions that are significantly improved compared to the previously published reference frame. Maser positions are corrected for epoch-specific differential aberration, precession, nutation, and solar gravitational deflection. Omitting the supergiant IRS 7, the mean position uncertainties are 0.46 mas and 0.84 mas in RA and Dec., and the mean proper motion uncertainties are 0.07 mas yr$^{-1}$ and 0.12 mas yr$^{-1}$, respectively. At a distance of 8.2 kpc, these correspond to position uncertainties of 3.7 AU and 6.9 AU and proper motion uncertainties of 2.7 km s$^{-1}$ and 4.6 km s$^{-1}$. The reference frame stability, the uncertainty in the variance-weighted mean proper motion of the maser ensemble, is 8 $\mu$as yr$^{-1}$ (0.30 km s$^{-1}$) in RA and 11 $\mu$as yr$^{-1}$ (0.44 km s$^{-1}$) in Dec., which represents a 2.3-fold improvement over previous work and a new benchmark for the maser-based reference frame.
Jeremy Darling, Jennie Paine, Mark J. Reid, Karl M. Menten, Shoko Sakai, Andrea Ghez
2023-08-06T15:49:03Z
http://arxiv.org/abs/2308.03147v1
# An Updated Reference Frame for the Galactic Inner Parsec ###### Abstract Infrared observations of stellar orbits about Sgr A* probe the mass distribution in the inner parsec of the Galaxy and provide definitive evidence for the existence of a massive black hole. However, the infrared astrometry is relative and is tied to the radio emission from Sgr A* using stellar SiO masers that coincide with infrared-bright stars. To support and improve this two-step astrometry, we present new astrometric observations of 15 stellar SiO masers within 2 pc of Sgr A*. Combined with legacy observations spanning 25.8 years, we re-analyze the relative offsets of these masers from Sgr A* and measure positions and proper motions that are significantly improved compared to the previously published reference frame. Maser positions are corrected for epoch-specific differential aberration, precession, nutation, and solar gravitational deflection. Omitting the supergiant IRS 7, the mean position uncertainties are 0.46 mas and 0.84 mas in RA and Dec., and the mean proper motion uncertainties are 0.07 mas yr\({}^{-1}\) and 0.12 mas yr\({}^{-1}\), respectively. At a distance of 8.2 kpc, these correspond to position uncertainties of 3.7 AU and 6.9 AU and proper motion uncertainties of 2.7 km s\({}^{-1}\) and 4.6 km s\({}^{-1}\). The reference frame stability, the uncertainty in the variance-weighted mean proper motion of the maser ensemble, is 8 \(\mu\)as yr\({}^{-1}\) (0.30 km s\({}^{-1}\)) in RA and 11 \(\mu\)as yr\({}^{-1}\) (0.44 km s\({}^{-1}\)) in Dec., which represents a 2.3-fold improvement over previous work and a new benchmark for the maser-based reference frame. ## 1 Introduction Infrared observations of stellar orbits in the vicinity of Sgr A* spanning nearly three decades have demonstrated the presence of a massive black hole in the Galactic Center (e.g., Ghez et al., 2008; Genzel et al., 2010). These observations can also probe the mass distribution in the inner parsec, including that of the dark matter and other unseen material (Lacroix, 2018; Nampalliwar et al., 2021; Heissel et al., 2022; Yuan et al., 2022). The infrared astrometry has historically relied on a radio-based astrometric reference frame that ties IR-bright stars to the location of Sgr A* via simultaneous observation of SiO maser-emitting stars and the Sgr A* 43 GHz radio continuum (e.g., Menten et al., 1997; Reid et al., 2003, 2007; Yelda et al., 2010; Plewa et al., 2015; Sakai et al., 2019). The predicted positions of these jointly-detected stars degrade over time, and the maser-based reference frame must therefore be regularly monitored and updated. It has now been 16 years since the last published maser observations used for the Galactic Center reference frame (Reid et al., 2007, but see Sakai et al., 2019). Here we present an updated radio reference frame for the Galactic Center that incorporates new and legacy Karl G. Jansky Very Large Array (VLA1) data (Section 2). We employ new astrometric methods (Section 3) to obtain unprecedented position and proper motion measurements and reference frame stability (Section 4). We examine the error budgets, systematic effects, and possible intrinsic scatter in the astrometry (Section 5), examine trends in the 3D stellar velocities (Section 6), and discuss future work (Section 7). 
The Appendices discuss time-dependent differential astrometric corrections, provide the complete maser time series, examine alternative proper motion fitting methods, and assess the possibility of under-estimated astrometric uncertainties. Footnote 1: The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Calculations that convert angular offsets to projected physical distances or proper motions to transverse velocities assume a distance to Sgr A* of 8.2 kpc, which is consistent with most recent distance measurements (e.g., Do et al., 2019; Reid et al., 2019; Gravity Collaboration et al., 2021; Leung et al., 2023). ## 2 Data Table 1 lists the epochs, observing programs, observed SiO transitions, and beam properties of the legacy and new data sources used to derive astrometric solutions for the stellar SiO masers near Sgr A*. There are additional masers in the field of view, such as those detected by Li et al. (2010) and Paine & Darling (2022), as well as additional maser transitions that are not included in this study because they do not have legacy astrometry (Reid et al., 2003, 2007). ### Legacy Data We employ the VLA and VLBA astrometric measurements of 15 SiO masers presented in Reid et al. (2003) and Reid et al. (2007). These span 1996-2006. In addition, we use the measurements obtained from VLA programs in 2008, 2011, and 2014 by Reid (2022, private communication). We did not use the 1995 VLA data presented in Menten et al. (1997) because the uncertainties in the measured coordinates are an order of magnitude larger than subsequent epochs due to larger synthesized beams. ### New Observations and Data Reduction New VLA observations were conducted in programs 19A-310 (27 Dec 2020 or 2020.988) and 22A-328 (21, 24, and 28 March 2022; mean epoch 2022.227). Both used the most extended A configuration and set Sgr A* as the phase center because the SiO masers of interest fall within the primary beam. Both used 3C286 for flux calibration, but 19A-310 used J1733\(-\)1304 for bandpass and delay calibration while 22A-328 used J1924\(-\)2914. Rather than switch between the science target field and a complex gain calibrator, the \(1.0\pm 0.1\) Jy Sgr A* compact continuum was used for in-beam gain calibration in both programs. While the 19A-310 program has been analyzed in Paine & Darling (2022), we reprocess and reanalyze it here in a manner that is consistent with the legacy measurements, particularly 2014.249 (see Table 1), and the treatment of 22A-328 observations described below. VLA 19A-310 observations spanned 2.25 hours (1.16 hours on-source) using a recording time of 2 s and two circular polarizations. The \(v=0\), \(v=1\), and \(v=2\)\(J=1-0\) transitions of SiO and the \(v=1\)\(J=1-0\) transition of \({}^{29}\)SiO were observed with 62.5 kHz channels, but only the \(v=1\) and \(v=2\) transitions of SiO at 43.1221 GHz and 42.8206 GHz were detected. Bandwidths were 128 MHz each, except for the \(v=2\) spectral window, which spanned 64 MHz. SiO-16 was not detected in either transition in the 2020 epoch, and IRS 7, IRS 9, and IRS 28 were only detected at \(>5\sigma\) in the \(v=1\) transition (IRS 9 \(v=2\) was not observed). VLA 22A-328 observations spanned 5 hours (3.77 hours on-source) in each of three observing sessions. The \(v=1\) and \(v=2\)\(J=1-0\) transitions of SiO were observed using 3 s integrations, two circular polarizations, and 100 kHz channels spanning 128 MHz. 
Only IRS 7 lacked a \(>5\sigma\) detection in one of the transitions (\(v=2\)).

We used CASA (McMullin et al., 2007) for calibration, imaging, and coordinate measurements. Prior to calibration, the data were averaged in frequency from 62.5 kHz and 100 kHz channels to 187.5 kHz (1.3 km s\({}^{-1}\)) and 200 kHz (1.4 km s\({}^{-1}\)) channels and in time from 2 s and 3 s records to 6 s records for the 2020 and 2022 observations, respectively. Using Sgr A* for the complex gain calibration provides in-beam calibration of the masers and forces the Sgr A* continuum to be the phase center. The absolute astrometry is therefore lost, but the reference frame and dynamical quantities of interest can be obtained from the relative coordinate offsets of the masers compared to Sgr A*, so relative astrometry is adequate for our science goals. Sgr A* shows an apparent 6.4 mas yr\({}^{-1}\) proper motion when compared to background quasars due to the Solar orbit about the Galactic Center (Reid & Brunthaler, 2020; Xu et al., 2022), and its position was updated for each observation.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Mean Epoch & Program & \(v\) & Beam (mas) & Ref. \\ \hline 1996.413 & VLBA BM060 & 1 & \(1.2\times 0.9\) & R03 \\ 1998.410 & VLA AM592 & 1 & \(70\times 30\) & R03 \\ 2000.850 & VLA AR451 & 1 & \(80\times 40\) & R03 \\ 2006.200 & VLA AR588 & 1 & \(86\times 33\) & R07 \\ 2008.860 & VLA AR678 & 1 & \(82\times 35\) & R22 \\ 2011.470 & VLA 11A-101 & 1 & \(97\times 42\) & R22 \\ 2014.249 & VLA 14A-168 & 1,2 & \(66\times 30\) & R22 \\ 2020.988 & VLA 19A-310 & 1,2 & \(93\times 36\) & P22,D22 \\ 2022.227 & VLA 22A-328 & 1,2 & \(82\times 39\) & D22 \\ \hline \end{tabular} \end{table} Table 1: SiO Maser Data and Observations. References: R03 = Reid et al. (2003); R07 = Reid et al. (2007); R22 = Reid (2022, private communication); P22 = Paine & Darling (2022); D22 = this work.

Imaging used the CASA tclean algorithm centered on Sgr A* with postage-stamp image cubes of all maser locations as "outlier" fields. The outlier fields are cleaned simultaneously with Sgr A*. We did not subtract the continuum from the spectral line data. Cleaning was performed down to five times the per-channel rms noise in the dirty cubes. All three sessions of 22A-328 were incorporated into a single spectral cube for each SiO transition for each maser. Figure 1 shows all spectra for the \(v=1\) transition. The rms noise per channel was \(\sim\)2-3 mJy beam\({}^{-1}\) in 2020 and \(\sim\)1 mJy beam\({}^{-1}\) in 2022.

To be consistent with previous work by Reid et al. (2003, 2007), and contrary to the _uv_-based fitting used by Paine & Darling (2022), we measure maser positions in the image plane. We used the CASA routine imfit to fit 2D Gaussians to measure the centroid of the Sgr A* continuum and each maser in every channel in each transition independently. Maser coordinates were obtained from a variance-weighted average of the channel-by-channel centroids with peak fluxes \(>5\sigma\), incorporating both transitions. Sgr A* coordinates were calculated from the variance-weighted channel centroids over the entire continuum. Typical maser coordinate uncertainties are 0.2 milliarcsec (1.6 AU at 8.2 kpc), which is a substantial improvement over most legacy measurements by a factor of roughly 2-4. We combined the newly-measured maser coordinates with those from the legacy observations listed in Table 1 to form time series spanning up to 25.8 years. After the astrometric corrections described below are applied to the time series, linear fits provide proper motions.
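The variance-weighted averaging of channel centroids is standard; as a minimal sketch (ours, not the CASA-based pipeline), it reduces to inverse-variance weights, with the mas-to-AU conversion at an assumed 8.2 kpc appended for orientation.

```python
import numpy as np

def weighted_mean(x, sigma):
    # Variance-weighted mean of centroids x with 1-sigma errors sigma,
    # plus the formal uncertainty of the mean.
    w = 1.0 / np.asarray(sigma) ** 2
    mean = np.sum(w * np.asarray(x)) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    return mean, err

# Toy per-channel RA centroids (mas offsets from Sgr A*) and 1-sigma errors.
ra = np.array([12.31, 12.28, 12.35, 12.30])
err = np.array([0.5, 0.4, 0.6, 0.5])
mean_mas, err_mas = weighted_mean(ra, err)

# 1 mas = 1e-3 arcsec; at a distance of d pc, 1 arcsec subtends d AU,
# so 0.2 mas at 8.2 kpc corresponds to ~1.6 AU, as quoted in the text.
d_pc = 8200.0
print(mean_mas, err_mas, err_mas * 1e-3 * d_pc, "AU")
```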
## 3 Astrometric Methods Masers (and stars) in the vicinity of Sgr A* may not appear to be exactly where they physically lie. Light propagation and observer-induced effects such as solar gravitational deflection and aberration can cause the entire field of view to shift, which is not a problem for relative coordinate measurements, but these effects are also differential, causing relative astrometric offsets between objects as observed. In general, any phenomenon that deflects or appears to deflect light rays and depends on direction will be differential and therefore stretch, shear, or rotate the observed field of view. It is important to differentiate between relative astrometric offsets from Sgr A* that depend on the observation epoch and those that are stable over time. Epoch-dependent relative offsets must be determined and removed from astrometric time series to obtain proper motions. Time-stable offsets must be quantified in order to determine the actual physical locations of stars for kinematic or dynamical modeling, such as characterizing the metric around the Sgr A* black hole or the mass distribution of the inner parsec (e.g., GRAVITY Collaboration et al., 2022). Time-dependent differential astrometric offsets include aberration, terrestrial precession-nutation, and solar gravitational deflection. Aberration caused by an observer's motion will be differential because its amplitude depends on direction (e.g. the CMB and galaxies show a dipole; Smoot et al., 1977; Ellis and Baldwin, 1984; Darling, 2022). The dominant contribution is the Solar motion within the Galaxy, which produces a steady apparent motion of Sgr A* (Reid and Brunthaler, 2020), but the Earth's orbit adds an aberration epicycle that does depend on the observation epoch. Terrestrial precession-nutation involves the secular precession of the celestial pole plus epicycles about this pole, which are necessarily time-dependent. Finally, the solar mass causes measurable gravitational deflection, even at large angular offsets, and the solar-Sgr A* angular separation depends on the observation epoch. Figure 1: Composite spatially-integrated SiO \(v=1\), \(J=1-0\) maser spectra from the 2022.227 epoch image cubes. The colors indicate the projected distance from Sgr A*, assuming a Galactic Center distance of 8.2 kpc. Corrections for time-dependent differential offsets were applied to all data, new and legacy, using astropy.coordinates tools (Astropy Collaboration et al., 2013; Price-Whelan et al., 2018). Starting with the observed maser offsets and the Sgr A* J2000 coordinates, we calculate the "mean" maser J2000 coordinates. These coordinates include a precession correction from the epoch of observation to J2000 but do not include the above effects from aberration, nutation, or gravitational light deflection. Next, we precess the maser and Sgr A* coordinates from J2000 to the equinox of each observed epoch ("apparent" coordinates) and then transform to a precessed geocentric J2000 coordinate system. The geometric transformation includes the effects of aberration, the precession and nutation of the Earth's rotation axis, and gravitational deflection of incoming rays (Kaplan, 2005). Finally, we subtract the precessed and transformed Sgr A* coordinates from the precessed and transformed maser coordinates to obtain a relative maser offset. 
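As a rough illustration of this transformation chain, the sketch below (ours; the paper's actual pipeline additionally works in a precessed geocentric J2000 system) uses astropy to compare ICRS offsets with offsets in a geocentric frame of date, which folds in precession-nutation, annual aberration, and solar light deflection. The coordinates are placeholders, not the measured positions.

```python
import astropy.units as u
from astropy.coordinates import CIRS, SkyCoord
from astropy.time import Time

# Placeholder J2000 coordinates: Sgr A* and a maser roughly 10 arcsec away.
sgra = SkyCoord(ra="17h45m40.04s", dec="-29d00m28.1s", frame="icrs")
maser = sgra.spherical_offsets_by(7.0 * u.arcsec, -7.0 * u.arcsec)

# Geocentric frame of date: includes precession-nutation (via the CIP),
# annual aberration, and gravitational light deflection by the Sun.
frame = CIRS(obstime=Time(2022.227, format="decimalyear"))

# Apparent-frame offsets minus mean (ICRS) offsets give the differential
# correction to subtract from the observed offsets.
dl_app, db_app = sgra.transform_to(frame).spherical_offsets_to(maser.transform_to(frame))
dl_mean, db_mean = sgra.spherical_offsets_to(maser)
print((dl_app - dl_mean).to(u.mas), (db_app - db_mean).to(u.mas))
```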
To correct for time-dependent differential astrometric offsets, we find the difference between the coordinate offsets obtained from the above transformations and the J2000 coordinate offsets as observed. This difference is then subtracted from the observed coordinate offsets. Figure 11 in Appendix A shows an example of the corrections for one epoch. Time-independent offsets, such as the solar-Galactic Center aberration, are not removed by this process. A similar process, using different software, was applied to the SiO maser astrometry used in Sakai et al. (2019) but not to the Reid et al. (2007) results.

These differential corrections generally slightly reduce the scatter in the residual time series after fitting for proper motions (described below). This is encouraging, and suggests that the process is providing reasonable time-dependent astrometry. However, the corrections are typically smaller than the variation in the astrometry, and the proper motions and reference frame stability are not significantly altered compared to the no-corrections case. The magnitude of the differential corrections scales linearly with angular separation from Sgr A* in a given epoch and varies from epoch to epoch. The corrections range from \(\sim\)0.1 mas to 4.3 mas in absolute value and are similar to the astrometric uncertainty in each coordinate in each epoch, except for masers with large offsets from Sgr A*. The latter have corrections larger than centroid uncertainties due to the linear scaling of the corrections. Table A1 in Appendix A lists the full astrometric time series and the differential corrections for all masers in all epochs.

\begin{table} \begin{tabular}{l r r r r r r r r} \hline \hline Name & \(v_{\rm LSR}\) (km s\({}^{-1}\)) & RA Offset (arcsec) & Dec. Offset (arcsec) & PM RA (mas yr\({}^{-1}\)) & PM Dec. (mas yr\({}^{-1}\)) & \(\chi^{2}_{\nu}\) & Ref. Epoch & \(N_{\rm Obs}\) \\ \hline IRS 9 & \(-341\) & \(+5.71043\pm 0.00009\) & \(-6.30688\pm 0.00023\) & \(+3.080\pm 0.016\) & \(+2.291\pm 0.033\) & 1.0 & 2017.946 & 8 \\ IRS 7 & \(-114\) & \(+0.03330\pm 0.00500\) & \(+5.49028\pm 0.00500\) & \(-0.002\pm 0.044\) & \(-4.665\pm 0.093\) & 1.2 & 2013.582 & 6 \\ SiO-14 & \(-111\) & \(-7.62578\pm 0.00032\) & \(-28.46850\pm 0.00046\) & \(+2.073\pm 0.041\) & \(-0.969\pm 0.064\) & 4.3 & 2017.153 & 8 \\ IRS 12N & \(-65\) & \(-3.27773\pm 0.00013\) & \(-6.94708\pm 0.00015\) & \(-1.122\pm 0.021\) & \(-2.834\pm 0.024\) & 2.8 & 2019.686 & 9 \\ IRS 28 & \(-54\) & \(+10.49199\pm 0.00030\) & \(-5.86884\pm 0.00050\) & \(+1.548\pm 0.050\) & \(-5.493\pm 0.088\) & 2.9 & 2014.235 & 8 \\ SiO-15 & \(-35\) & \(-12.46900\pm 0.00029\) & \(-11.06769\pm 0.00038\) & \(-2.562\pm 0.058\) & \(+0.738\pm 0.068\) & 1.1 & 2017.505 & 6 \\ IRS 10EE & \(-28\) & \(+7.68504\pm 0.00011\) & \(+4.17765\pm 0.00017\) & \(+0.070\pm 0.017\) & \(-1.984\pm 0.020\) & 2.3 & 2017.308 & 9 \\ IRS 15NE & \(-11\) & \(+1.20422\pm 0.00019\) & \(+11.25164\pm 0.00028\) & \(-1.925\pm 0.019\) & \(-5.802\pm 0.028\) & 1.6 & 2010.230 & 9 \\ SiO-16 & \(+7\) & \(-26.42046\pm 0.00067\) & \(-34.47238\pm 0.00124\) & \(-0.002\pm 0.093\) & \(-1.989\pm 0.170\) & 16.9 & 2017.089 & 7 \\ SiO-6 & \(+52\) & \(+35.25587\pm 0.00106\) & \(+30.68278\pm 0.00227\) & \(+2.719\pm 0.113\) & \(+2.507\pm 0.248\) & 11.4 & 2009.959 & 8 \\ SiO-17 & \(+53\) & \(+8.08338\pm 0.00035\) & \(-27.66156\pm 0.00065\) & \(+2.468\pm 0.052\) & \(+2.492\pm 0.108\) & 7.8 & 2014.935 & 7 \\ SiO-11 & \(+71\) & \(+1.76111\pm 0.00078\) & \(+40.30709\pm 0.00151\) & \(+1.704\pm 0.131\) & \(+1.904\pm 0.230\) & 46.5 & 2014.160 & 8 \\ IRS 17 & \(+74\) & \(+13.14134\pm 0.00090\) & \(+5.55666\pm 0.00148\) & \(-1.073\pm 0.165\) & \(-1.059\pm 0.240\) & 1.8 & 2009.404 & 6 \\ SiO-12 & \(+82\) & \(-18.80861\pm 0.00086\) & \(+42.48144\pm 0.00177\) & \(+1.086\pm 0.177\) & \(+1.458\pm 0.310\) & 6.9 & 2015.756 & 7 \\ IRS 19NW & \(+84\) & \(+14.57819\pm 0.00033\) & \(-18.47510\pm 0.00068\) & \(+1.414\pm 0.074\) & \(-0.702\pm 0.124\) & 15.0 & 2019.938 & 8 \\ \hline \end{tabular} Note. – Coordinate offsets are with respect to the Sgr A* radio centroid at the reference epoch, which is the position variance-weighted date of the time series (see Section 3). The LSR velocity is approximate; the masers show variability in their spectral peaks and velocity centroids (Reid et al., 2003, 2007; Paine & Darling, 2022). The \(\chi^{2}_{\nu}\) statistic characterizes the joint weighted least-squares proper motion fit in both coordinates. The coordinate offset uncertainties in IRS 7 have been manually adjusted to \(\pm\)5 mas (see Section 3). \end{table} Table 2: SiO Maser Angular Offsets and Proper Motions

IRS 7 requires special treatment: it has supergiant luminosity, its SiO maser emission distribution may span 10 mas, and its \(v=1\)\(J=1-0\) maser shows substantial variability, both in flux density and in velocity (Reid et al., 2003, 2007). To wit, the \(J=1-0\) maser decreased in brightness by a factor of 8, and the \(-124\) km s\({}^{-1}\) maser component faded below the flux density of the \(-103\) km s\({}^{-1}\) component from 1995.49 to 2000.85. The dominant component at \(-103\) km s\({}^{-1}\) persists through the current epoch (Figure 1), but the \(J=2-1\) transition resembles the pre-2000 \(J=1-0\) spectrum: it is about 10 times brighter and peaks at roughly \(-123\) km s\({}^{-1}\) (Paine & Darling, 2022). The ALMA \(J=2-1\) astrometry in 2015.27 and 2017.72, however, is statistically consistent with the temporally bracketing VLA \(J=1-0\) astrometry in 2014.18 and 2020.99, which is at odds with the possibility of a shift of the \(v=1\)\(J=1-0\) maser emission from one side of the supergiant to the other. We conclude that for the purposes of a current and near-future reference frame determination, the proper motion and position of IRS 7 should rely on the last 20 years of observations and omit those made before the dramatic change in the \(J=1-0\) emission. The coordinates and proper motions presented in Table 2 and Figure 3 rely on epochs 2006.200-2022.227, and the coordinate uncertainties in the astrometric solution have been set to \(\pm 5\) mas to allow for the likely maser offsets from the stellar photocenter, following Reid et al. (2007) and Paine & Darling (2022). Table A1 includes the omitted epochs for posterity.

Figure 3: Left and center columns: Time series in each coordinate for each stellar maser. The lower panel for each shows the coordinate offset from Sgr A* and the weighted least-squares linear proper motion fit. The upper panel shows the best-fit residual versus epoch. The shaded region indicates \(\pm\)4 AU. RA offsets are true angular offsets (i.e., they are corrected for cos(Dec.)). Right column: Sky tracks for the masers. The arrows indicate the direction of the 2D proper motion.
Most masers have 7-9 epochs in their time series (SiO-15 and IRS 17 have 6, and for IRS 7 we use 6), and the post-2000 epochs tend to include more masers. The per-epoch uncertainty in coordinates ranges from 0.1 mas to 8.1 mas, with uncertainties uniformly smaller for RA than for Dec. due to a north-south elongated synthesized beam. Uncertainties tend to be larger in earlier epochs (less sensitivity and shorter integration times) and for fainter masers (lower signal-to-noise). Table A1 lists the coordinates for each maser in every epoch.

The mean uncertainty in the offset coordinate of the masers in the reference epoch is 0.46 mas in RA and 0.84 mas in Dec., excluding IRS 7 (see Section 3). These correspond to 3.7 AU and 6.9 AU, respectively, at 8.2 kpc. The formal uncertainties in the measured position of Sgr A* are 1 \(\mu\)as and 2 \(\mu\)as in RA and Dec. in epoch 2022.227 and are therefore negligible compared to the uncertainties in the maser coordinates. It is important to note that these very small uncertainties in the Sgr A* position are strictly statistical, and the coordinates of Sgr A* are assigned _a priori_ because it is the phase center used for complex gain calibration. However, recent work by Xu et al. (2022) found a \(\sim\)30 mas offset from the canonical absolute position of Sgr A* (Reid and Brunthaler, 2020). This causes second-order astrometric offsets in the maser positions of order 30 mas\(\times\Delta\theta\) where \(\Delta\theta\), expressed in radians, is the angular offset from Sgr A*. For the masers presented here, the error is roughly 1-7 \(\mu\)as. This is negligible compared to other uncertainties and systematics, and therefore no correction was applied to the astrometry.

The mean maser proper motion uncertainties are 0.07 mas yr\({}^{-1}\) in RA and 0.12 mas yr\({}^{-1}\) in Dec., corresponding to 2.7 km s\({}^{-1}\) and 4.6 km s\({}^{-1}\). As seen in Figure 3, the residuals from the linear time series fits often have significant outliers. These outliers are not consistent across all masers at a fixed epoch (i.e., there do not seem to be epochs with bad astrometry), and they are not restricted to particular stars or coordinate directions. Residuals are often within roughly 1-2 mas (Figure 3); 1 mas corresponds to \(\sim\)8 AU, which is the typical size of SiO maser distributions around evolved stars (Cotton et al., 2008). Residuals can, however, be as large as \(\sim\)5 mas or \(\sim\)40 AU (Figure 3). The nature of the variation in residuals remains unknown, but suggests a systematic effect that should be addressed in future work. It is clear, however, from these long time baselines that astrometric trends (i.e., proper motions) can be measured despite substantial single-epoch departures.

Among the best-measured masers are the bright ones: IRS 9, IRS 12N, and IRS 10EE, which reach coordinate uncertainties of 0.09-0.13 mas (0.7-1.1 AU) in RA and 0.15-0.23 mas (1.2-1.9 AU) in Dec. These uncertainties are smaller than the expected size of the maser-emitting regions in the stellar atmospheres. The proper motions of these masers have uncertainties of 0.016-0.021 mas yr\({}^{-1}\) (0.6-0.8 km s\({}^{-1}\)) in RA and 0.021-0.033 mas yr\({}^{-1}\) (0.8-1.3 km s\({}^{-1}\)) in Dec., showing that it is possible to reach sub-km s\({}^{-1}\) precision in measurements of transverse velocity (also demonstrated by Paine and Darling, 2022).

The reference frame stability, the uncertainty in the variance-weighted mean proper motion of the maser ensemble, is 8 \(\mu\)as yr\({}^{-1}\) in RA and 11 \(\mu\)as yr\({}^{-1}\) in Dec. (0.30 km s\({}^{-1}\) and 0.44 km s\({}^{-1}\), respectively). This sub-km s\({}^{-1}\) measurement is 2.3 times smaller than the previous value (Sakai et al., 2019) and represents a new benchmark for the maser-based reference frame. This new reference frame stability is in agreement with the predictions made by Yelda et al. (2010) and Sakai et al. (2019) and should enable observation of the apocenter shift of the star S0-2 caused by relativistic prograde precession (Schwarzschild precession; Weinberg et al., 2005). It should be noted that this general relativistic effect was detected for S0-2 (aka S2) by the GRAVITY Collaboration et al. (2020), but the precision-limiting factor in the measurement was the radio-to-infrared reference frame conversion of Plewa et al. (2015).

## 5 Discussion

Figure 4: Three-dimensional velocity of stellar masers versus projected distance from Sgr A*. Point color indicates the sign of the radial velocities (red is redshifted, blue is blueshifted). The size of the points scales linearly with the transverse velocity, spanning 59-241 km s\({}^{-1}\). Velocity error bars are uniformly smaller than the data points. The blue line indicates the upper bound on 3D velocity based on the enclosed stellar and black hole masses (see text and Equation 3). Projected distances assume a Galactic Center distance of 8.2 kpc.
The reference frame stability, the uncertainty in the variance-weighted mean proper motion of the maser ensemble, is 8 \(\mu\)as yr\({}^{-1}\) in RA and 11 \(\mu\)as yr\({}^{-1}\) in Dec. or 0.30 km s\({}^{-1}\) in RA, 0.44 km s\({}^{-1}\) and in Dec. This sub-km s\({}^{-1}\) measurement is 2.3 times smaller than the previous value (Sakai et al., 2019) and represents a new benchmark for the maser-based reference frame. This new reference frame stability is in agreement with the predictions made by Yelda et al. (2010) and Sakai et al. (2019) and should enable observation of the apocenter shift of the star S0-2 caused by relativistic prograde precession (Schwarzschild precession; Weinberg et al., 2005). It should be noted that this general relativistic effect was detected for S0-2 (aka S2) by the GRAVITY Collaboration et al. (2020), but the precision-limiting factor in the measurement was the radio-to-infrared reference frame conversion of Plewa et al. (2015). ## 5 Discussion Figure 4: Three-dimensional velocity of stellar masers versus projected distance from Sgr A*. Point color indicates the sign of the radial velocities (red is redshifted, blue is blueshifted). The size of the points scales linearly with the transverse velocity, spanning 59–241 km s\({}^{-1}\). Velocity errorbars are uniformly smaller than the data points. The blue line indicates the upper bound on 3D velocity based on the enclosed stellar and black hole masses (see text and Equation 3). Projected distances assume a Galactic Center distance of 8.2 kpc. The astrometry in this and previous work relies on fitting Gaussian brightness distributions to planes in maser image-velocity cubes. In contrast, Paine & Darling (2022) uses _uv_-based fitting, often of several masers simultaneously. The per-epoch astrometry is generally in agreement between the two methods, including for the 2020.988 epoch that is included in both studies. The derived proper motions also show good agreement, although the Paine & Darling (2022) time baseline is shorter. However, this study utilized additional epochs, some of which provided 86 GHz maser-based positions. For many masers -- but not all -- the scatter about the linear proper motion fit is larger than the formal uncertainties would suggest; i.e., \(\chi^{2}_{\nu}\gg 1\) (Table 2). That this is not consistently true for all masers suggests that there is no consistent systematic effect influencing the astrometry, and there do not appear to be specific outlier epochs. Possible explanations for the offsets include physical and instrumental effects, but it is difficult to identify systematics that could produce the observed magnitude of the offsets that are not consistent across all masers or limited to specific epochs. Stellar winds, pulsation, maser variability, and stellar companions are possible sources of real offsets in SiO masing regions, but these are unlikely to produce the few-mas single-epoch departures from the observed proper motion trends. One mas is equivalent to 8.2 AU, roughly the diameter of the stellar maser-emitting regions. Instrumental or calibration systematics should generally affect all masers in a given epoch, and might scale with distance from Sgr A*. It is noteworthy that the masers with the highest \(\chi^{2}_{\nu}\) values are all redshifted and generally at the largest separations from Sgr A*. Regardless of the source of the astrometric variation, one could examine the magnitude and impact of an intrinsic scatter added in quadrature to the measurement uncertainties. 
In Appendix C, we examine expanded uncertainties in the time series and find that while larger uncertainties are favored to fit a linear secular trend model in maser offsets from Sgr A* for 60% of the maser coordinates, the resultant proper motions are formally consistent with those obtained from the weighted least-squares fits using the original measurement uncertainties.

## 6 Analysis

Given the mass interior to the projected distance from Sgr A*, \(M_{\rm encl}\), the 3D velocity of a bound orbit has an upper limit

\[v\leq\left(\frac{2GM_{\rm encl}}{r_{\rm proj}}\right)^{1/2}. \tag{3}\]

The enclosed mass is the sum of the black hole mass, combined stellar mass, and any other constituents such as gas and dark matter. In Figure 4 we compare the measured 3D velocities to this upper bound assuming \(M_{\rm BH}=4.3\times 10^{6}\,M_{\odot}\) (GRAVITY Collaboration et al., 2022) and the maximal stellar mass at 1 pc described by Schödel et al. (2018). All stars except IRS 9, which may be unbound (Reid et al., 2007), lie below this locus, in agreement with the mass limits obtained by Paine & Darling (2022). It is interesting that the blueshifted masers tend to be closer in projection to Sgr A* than the redshifted masers, although the transverse velocity vectors do not show preferential radial or azimuthal trends (Figure 2). 3D velocities trend larger with smaller projected radius, as one would expect.

## 7 Conclusions

Using new and legacy VLA observations, we have updated the SiO stellar maser astrometric reference frame relative to the Sgr A* 43 GHz radio continuum. Much of the astrometry represents new benchmarks in precision, including sub-km s\({}^{-1}\) measurements of transverse velocity for some masers and \(\sim\)10 \(\mu\)as yr\({}^{-1}\) reference frame stability. There are, however, significant single-epoch coordinate outliers from proper motion trends for many masers that remain unexplained but provide opportunities to further improve the astrometry if the systematic effects can be quantified and corrected. We have also demonstrated the value of continued and higher cadence maser monitoring.

JD and JP acknowledge support from NSF grant AST-1908122. SS and AG acknowledge support from the Gordon & Betty Moore Foundation and NSF grant AST-1909554. We thank the anonymous referee for helpful feedback. This research made use of NumPy (van der Walt et al., 2011), Matplotlib (Hunter, 2007), and Astropy2, a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2013; Price-Whelan et al., 2018). Facilities: VLA, VLBA.

Footnote 2: [http://www.astropy.org](http://www.astropy.org)
2306.16958
Identifiability of Direct Effects from Summary Causal Graphs
Dynamic structural causal models (SCMs) are a powerful framework for reasoning in dynamic systems about direct effects, which measure how a change in one variable affects another variable while holding all other variables constant. The causal relations in a dynamic structural causal model can be qualitatively represented with an acyclic full-time causal graph. Assuming linearity and no hidden confounding, and given the full-time causal graph, the direct causal effect is always identifiable. However, in many applications such a graph is not available for various reasons, but experts nevertheless have access to the summary causal graph of the full-time causal graph, which represents causal relations between time series while omitting temporal information and allowing cycles. This paper presents a complete identifiability result which characterizes all cases for which the direct effect is graphically identifiable from a summary causal graph and gives two sound finite adjustment sets that can be used to estimate the direct effect whenever it is identifiable.
Simon Ferreira, Charles K. Assaad
2023-06-29T14:05:35Z
http://arxiv.org/abs/2306.16958v4
# Identifiability of Direct Effects from Summary Causal Graphs

###### Abstract

Dynamic structural causal models (SCMs) are a powerful framework for reasoning in dynamic systems about direct effects, which measure how a change in one variable affects another variable while holding all other variables constant. The causal relations in a dynamic structural causal model can be qualitatively represented with a full-time causal graph. Assuming linearity and causal sufficiency, and given the full-time causal graph, the direct causal effect is always identifiable and can be estimated from data by adjusting on any set of variables given by the so-called single-door criterion. However, in many applications such a graph is not available for various reasons, but experts nevertheless have access to an abstraction of the full-time causal graph which represents causal relations between time series while omitting temporal information. This paper presents a complete identifiability result which characterizes all cases for which the direct effect is graphically identifiable from summary causal graphs and gives two sound finite adjustment sets that can be used to estimate the direct effect whenever it is identifiable.

Structural causal models (SCMs) are a powerful framework for representing and reasoning about causal relations between variables, with a long history in many fields such as genetics (Wright, 1920, 1921), econometrics (Haavelmo, 1943), social sciences (Duncan, 1975; Goldberger, 1972), and artificial intelligence (Pearl, 2000). In particular, SCMs are useful for reasoning about direct effects, which measure how a change in one variable affects another variable while holding all other variables constant (Pearl, 2012). The identification and estimation of direct effects are important in many applications: _e.g._, epidemiologists are interested in measuring how smoking affects lung cancer risk without being mediated by genetic susceptibility (Zhou et al., 2021); ecologists usually focus on understanding direct effects such as competition, herbivory, and predation (Connell, 1961); IT monitoring experts can localize the root cause of a system failure or a performance issue by comparing the direct causal impact of different components on each other before and after the failure (Assaad et al., 2023). In the framework of (non-dynamic) SCMs, assuming linearity and causal sufficiency, and given a causal graph which qualitatively represents the causal relations between the different variables, the direct effect between two variables is always identifiable, and there exists a complete graphical tool, called the single-door criterion (Pearl, 1998; Spirtes et al., 1998; Pearl, 2000), that finds all possible adjustment sets that can be used to estimate the direct effect from data. These results are directly applicable in dynamic SCMs given the full-time causal graph--which qualitatively represents all causal relations between different temporal instants of the dynamic SCM--and assuming consistency throughout time and that the full-time causal graph is acyclic. However, in many dynamic systems, experts have difficulties in building a full-time causal graph (Ait-Bachir et al., 2023), while they can usually build a summary causal graph (Assaad et al., 2022), which is an abstraction of the full-time causal graph where temporal information is omitted.
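To fix ideas, here is a small illustration (our own toy construction, not an example from the paper) of estimating a direct effect in a linear SCM by adjusting on a set satisfying the single-door criterion: regressing \(Y\) on \(X\) alone returns the total effect, while adjusting on the mediator \(M\) recovers the direct effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy linear SCM: X -> Y directly (coefficient 0.5) and X -> M -> Y (0.8 * 0.7).
x = rng.normal(size=n)
m = 0.8 * x + rng.normal(size=n)
y = 0.5 * x + 0.7 * m + rng.normal(size=n)

def ols_coef(target, regressors):
    # Least-squares fit with an intercept; returns the slope coefficients.
    X = np.column_stack([np.ones(len(target))] + regressors)
    return np.linalg.lstsq(X, target, rcond=None)[0][1:]

print(ols_coef(y, [x]))     # total effect: about 0.5 + 0.8*0.7 = 1.06
print(ols_coef(y, [x, m]))  # adjusting on {M} (a single-door set): about (0.5, 0.7)
```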
So far, the problem of identifying direct effects has only been addressed for summary causal graphs under the assumption that the summary causal graph is acyclic (while allowing self-loops). In particular, it has been shown that direct effects are always identifiable in an acyclic summary causal graph with loops, and a non-complete extension of the single-door criterion has been proposed to find some adjustment sets that can be used to estimate the direct effect from data (Assaad et al., 2023). In this work, we focus on the identifiability of direct effects from summary causal graphs without assuming that the given summary causal graph is acyclic. Our main contribution is twofold. First, we give a complete identifiability
2304.00943
Almost sure upper bound for random multiplicative functions
Let $\varepsilon >0$. Let $f$ be a Steinhaus or Rademacher random multiplicative function. We prove that we have almost surely, as $x \to +\infty$, $$ \sum_{n \leqslant x} f(n) \ll \sqrt{x} (\log_2 x)^{\frac{3}{4}+ \varepsilon}. $$
Rachid Caich
2023-04-03T13:00:31Z
http://arxiv.org/abs/2304.00943v2
# Almost sure upper bound for random multiplicative functions

###### Abstract

Let \(\varepsilon>0\). Let \(f\) be a Steinhaus or Rademacher random multiplicative function. We prove that we have almost surely, as \(x\to+\infty\),

\[\sum_{n\leqslant x}f(n)\ll\sqrt{x}(\log_{2}x)^{\frac{1}{4}+\varepsilon}.\]

Thanks to Harper's lower bound, this gives a sharp upper bound on the largest fluctuations of the quantity \(\sum_{n\leqslant x}f(n)\).

**Keywords:** Random multiplicative functions, large fluctuations, law of the iterated logarithm, mean values of multiplicative functions, Rademacher functions, Steinhaus functions, Doob's inequality, Hoeffding's inequality, martingales.

**2000 Mathematics Subject Classification:** 11N37 (11K99, 60F15).

## 1 Introduction

The aim of this article is to study the large fluctuations of random multiplicative functions, which have been a very active topic in recent years. There are at least two models of random multiplicative functions that have been frequently studied in number theory and probability (see, for example, Harper [11], [10], [12], Lau-Tenenbaum-Wu [13], Chatterjee-Soundararajan [4], Benatar-Nishry-Rodgers [2]). Let \(\mathcal{P}\) be the set of prime numbers. _A Steinhaus random multiplicative function_ is obtained by letting \((f(p))_{p\in\mathcal{P}}\) be a sequence of independent Steinhaus random variables (i.e., distributed uniformly on the unit circle \(\{|z|=1\}\)), and then setting

\[f(n):=\prod_{p^{a}\parallel n}f(p)^{a}\text{ for all }n\in\mathbb{N},\]

where \(p^{a}\parallel n\) means that \(p^{a}\) is the highest power of \(p\) dividing \(n\). _A Rademacher random multiplicative function_ is obtained by letting \((f(p))_{p\in\mathcal{P}}\) be independent Rademacher random variables (i.e., taking values \(\pm 1\) with probability \(1/2\) each), and setting

\[f(n)=\begin{cases}\prod_{p\mid n}f(p),&\text{if }n\text{ is squarefree},\\ 0,&\text{otherwise}.\end{cases}\]

The Rademacher model was introduced by Wintner [17] in 1944 as a heuristic model for the Möbius function \(\mu\) (see the introduction in [13]). With a small change, one can obtain a probabilistic model for a real primitive Dirichlet character (see Granville-Soundararajan [7]). Steinhaus random multiplicative functions model a randomly chosen Dirichlet character \(\chi\) or the continuous characters \(n\mapsto n^{it}\); see for example Section 2 of Granville-Soundararajan [6].

A classical result in the study of sums of independent random variables is the Law of the Iterated Logarithm, which predicts the almost sure size of the largest fluctuations of those sums. Let \((\xi_{k})_{k\in\mathbb{N}}\) be an independent sequence of real random variables taking values \(\pm 1\) with probability \(1/2\) each. Khinchine's Law of the Iterated Logarithm is the almost sure statement

\[\limsup_{N\to+\infty}\frac{|\sum_{k\leqslant N}\xi_{k}|}{\sqrt{2N\log_{2}N}}=1.\]

Here and in the sequel, \(\log_{k}\) denotes the \(k\)-fold iterated logarithm. See for instance Gut [8], Chapter 8, Theorem 1.1. Note that the largest fluctuations as \(N\) varies (of size \(\sqrt{N\log_{2}N}\)) are significantly larger than the random fluctuations one expects at a fixed point (\(\mathbb{E}\big{[}|\sum_{k\leqslant N}\xi_{k}|\big{]}\asymp\sqrt{N}\)). Khinchine's theorem cannot be applied in the case of random multiplicative functions, because their values are not independent. However, following Harper (see the end of the introduction in [12]), we believe that a suitable version of the law of the iterated logarithm might hold.
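As a purely illustrative sketch (ours, not from the paper), one can simulate a Rademacher random multiplicative function and its partial sums \(M_{f}(x)=\sum_{n\leqslant x}f(n)\) directly from the definition; `sympy` is assumed available for factorization.

```python
import numpy as np
from sympy import factorint, primerange

rng = np.random.default_rng(1)
x = 10**4

# Independent Rademacher values f(p) = +/-1 on the primes up to x.
f_p = {p: int(rng.choice((-1, 1))) for p in primerange(2, x + 1)}

def f(n):
    # Supported on squarefree n: f(n) = prod_{p | n} f(p), and 0 otherwise.
    fac = factorint(n)
    if any(a >= 2 for a in fac.values()):
        return 0
    out = 1
    for p in fac:
        out *= f_p[p]
    return out

M = np.cumsum([f(n) for n in range(1, x + 1)])
# Compare the endpoint with the conjectured almost-sure scale sqrt(x)(log_2 x)^{1/4}.
print(M[-1], (x ** 0.5) * (np.log(np.log(x)) ** 0.25))
```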
For \(f\) a multiplicative function, we denote \(M_{f}(x):=\sum_{n\leqslant x}f(n)\). Wintner [17] studied the case where \(f\) is a Rademacher random multiplicative function, and he was able to prove that, for any fixed \(\varepsilon>0\), we have almost surely

\[M_{f}(x)=O(x^{1/2+\varepsilon})\]

and

\[M_{f}(x)=\Omega(x^{1/2-\varepsilon}).\]

This has been further improved by Erdős [5], who proved that almost surely one has the bound \(O(\sqrt{x}\log^{A}x)\) and one almost surely does not have the bound \(O(\sqrt{x}\log^{-B}x)\) for some nonnegative real numbers \(A\) and \(B\). In the 1980s, Halász [9] introduced several novel ideas which led to further progress. By conditioning, coupled with hypercontractive inequalities, he proved in the Rademacher case that

\[M_{f}(x)=O\big{(}\sqrt{x}\exp(A\sqrt{\log_{2}x\log_{3}x})\big{)}\]

for some nonnegative \(A\). Recently, Lau-Tenenbaum-Wu [13] (see also Basquin [1]) improved the analysis of the hypercontractive inequalities in Halász's argument, establishing, in the Rademacher case, the almost sure upper bound \(O(\sqrt{x}(\log_{2}x)^{2+\varepsilon})\). For the lower bound, Harper [12] proved, using _multiplicative chaos_ techniques, for both Rademacher and Steinhaus random multiplicative functions, that for any function \(V(x)\) tending to infinity with \(x\), there almost surely exist arbitrarily large values of \(x\) for which

\[\big{|}M_{f}(x)\big{|}\gg\frac{\sqrt{x}(\log_{2}x)^{1/4}}{V(x)}.\]

Moreover, Mastrostefano [14] recently proved an upper bound for the sum restricted to integers that possess a large prime factor. We denote by \(P(n)\) the largest prime factor of \(n\), with the convention \(P(1)=1\). In [14], it was proved that we have almost surely, as \(x\to+\infty\),

\[\sum_{\begin{subarray}{c}n\leqslant x\\ P(n)>\sqrt{x}\end{subarray}}f(n)\ll\sqrt{x}(\log_{2}x)^{1/4+\varepsilon}.\]

This bound should be compared with the first moment: Harper [11] proved that, as \(x\to+\infty\),

\[\mathbb{E}\bigg{[}\big{|}M_{f}(x)\big{|}\bigg{]}\asymp\frac{\sqrt{x}}{(\log_{2}x)^{1/4}}.\]

This discrepancy of a factor \(\sqrt{\log_{2}x}\) between the first moment and the almost sure behaviour is similar to the Law of the Iterated Logarithm for independent random variables. For this reason, Harper conjectured that for any fixed \(\varepsilon>0\), we might have almost surely, as \(x\to+\infty\),

\[M_{f}(x)\ll\sqrt{x}(\log_{2}x)^{1/4+\varepsilon}\]

in both the Steinhaus and Rademacher cases (see the introduction in [12] for more details). The main goal of this work is to prove that this is indeed the case.

**Theorem 1.1**.: _Let \(\varepsilon>0\). Let \(f\) be a Steinhaus or Rademacher random multiplicative function. We have almost surely, as \(x\to+\infty\),_

\[M_{f}(x)\ll\sqrt{x}(\log_{2}x)^{\frac{1}{4}+\varepsilon}. \tag{1}\]

## 2 Sketch of the proof

Let \(f\) be a Steinhaus or Rademacher multiplicative function. First we reduce our analysis to a sequence of "test points" (say \(x_{i}\)), with the property of being sparse (see Lemma 4.1). The second step is to split \(M_{f}(x_{i})\) according to the largest prime factor \(P(n)\) of the summation variable \(n\). Recall that \(v_{p}(n)\) denotes the \(p\)-adic valuation of an integer \(n\) (i.e., the exponent of \(p\) in the prime factorization of \(n\)). Let \((y_{j})_{0\leqslant j\leqslant J}\) be a nondecreasing sequence such that \(\log y_{j-1}\sim\log y_{j}\) and \(J\asymp\log_{2}x_{i}\).
\[M_{f}(x_{i})=\sum_{\begin{subarray}{c}n\leqslant x_{i}\\ P(n)\leqslant y_{0}\end{subarray}}f(n)+M_{f}^{(1)}(x_{i})+M_{f}^{(2)}(x_{i})\]

where

\[M_{f}^{(1)}(x_{i}):=\sum_{j=1}^{J}\sum_{y_{j-1}<p\leqslant y_{j}}f(p)\sum_{\begin{subarray}{c}n\leqslant x_{i}/p\\ P(n)<p\end{subarray}}f(n) \tag{2}\]

and

\[M_{f}^{(2)}(x_{i}):=\sum_{j=1}^{J}\sum_{\begin{subarray}{c}y_{j-1}^{2}<d\leqslant x_{i}\\ v_{P(d)}(d)\geqslant 2\end{subarray}}^{y_{j-1},y_{j}}f(d)\sum_{\begin{subarray}{c}n\leqslant x_{i}/d\\ P(n)\leqslant y_{j-1}\end{subarray}}f(n), \tag{3}\]

where the symbol \(\sum^{y,z}\) indicates a sum restricted to integers all of whose prime factors belong to the interval \(]y,z]\). Note that \(M_{f}^{(2)}(x_{i})\) is equal to \(0\) in the Rademacher case. By choosing \(y_{0}\) small enough, the sum \(\sum_{\begin{subarray}{c}n\leqslant x_{i}\\ P(n)\leqslant y_{0}\end{subarray}}f(n)\) is "small" and can thus be neglected. We first show that \(M_{f}^{(2)}(x_{i})\) is "small"; this can be done in a few steps. Moreover, the expectation of the quantity \(f(p)\sum_{\begin{subarray}{c}n\leqslant x_{i}/p\\ P(n)<p\end{subarray}}f(n)\), conditionally on the values \(f(q)\) for primes \(q<p\), is equal to \(0\). Therefore, \(M_{f}^{(1)}(x_{i})\) is a sum of martingale differences with variance

\[V(x_{i})=\sum_{y_{0}<p\leqslant y_{J}}\left|\sum_{\begin{subarray}{c}n\leqslant x_{i}/p\\ P(n)<p\end{subarray}}f(n)\right|^{2}. \tag{4}\]

A developed version of Hoeffding's inequality (see Lemma 3.12) allows us to reduce the problem to the study of the variance (4). The use of this inequality is exactly what gives a strong exponential upper bound. More specifically, the goal becomes to prove that we have almost surely, as \(x_{i}\) tends to infinity, \(V(x_{i})\leqslant\frac{x_{i}}{(\log_{2}x_{i})^{1/2}}\). The key point here is to notice that the terms in the sum of \(V(x_{i})\) that contribute significantly are the \(\sum_{y_{j-1}<p\leqslant y_{j}}\bigg{|}\sum_{\begin{subarray}{c}n\leqslant x_{i}/p\\ P(n)<p\end{subarray}}f(n)\bigg{|}^{2}\) for which \(y_{j}\) is very "close" to \(x_{i}\) (see Section 6.5). In fact, the number of \(y_{j}\) close to \(x_{i}\) that contribute substantially to the sum is less than \((\log_{2}x_{i})^{\varepsilon/50}\). Thus we are reduced again to studying

\[\widetilde{V}(y_{j},x_{i}):=\sum_{y_{j-1}<p\leqslant y_{j}}\bigg{|}\sum_{\begin{subarray}{c}n\leqslant x_{i}/p\\ P(n)<p\end{subarray}}f(n)\bigg{|}^{2}. \tag{5}\]

By smoothing and slightly adjusting \(\widetilde{V}(y_{j},x_{i})\), we obtain, for each \(x_{i}\), a supermartingale sequence \((U_{j,i})_{j\geqslant 0}\) where \(U_{0,i}=I_{0}\) does not depend on \(x_{i}\). Using an adjusted version of Doob's inequality (Lemma 3.11) helps us deal with the supremum over \(j\) and \(x_{i}\). Finally, we use Harper's low-moments result, for which we give a detailed proof in Subsection 6.3 with some modifications to make it suitable for our case. Roughly speaking, we get almost surely \(V(x_{i})\leqslant\frac{x_{i}}{(\log_{2}x_{i})^{1/2}}\) as \(x_{i}\) tends to infinity.

## 3 Preliminary results

### Notation

Let us start with some definitions. Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space. A _filtration_ is any increasing sequence \((\mathcal{F}_{n})_{n\geqslant 1}\) of sub-\(\sigma\)-algebras of \(\mathcal{F}\). We say that a sequence of real random variables \((Z_{n})_{n\geqslant 1}\) is a _submartingale_ (resp.
_supermartingale_) sequence with respect to the filtration \((\mathcal{F}_{n})_{n\geqslant 1}\) if the following properties are satisfied:

- \(Z_{n}\) is \(\mathcal{F}_{n}\)-measurable,
- \(\mathbb{E}[|Z_{n}|]<+\infty\),
- \(\mathbb{E}[Z_{n+1}\,|\,\mathcal{F}_{n}]\geqslant Z_{n}\) (resp. \(\mathbb{E}[Z_{n+1}\,|\,\mathcal{F}_{n}]\leqslant Z_{n}\)) almost surely.

We say that \((Z_{n})_{n\geqslant 1}\) is a martingale difference sequence with respect to the same filtration \((\mathcal{F}_{n})_{n\geqslant 1}\) if

- \(Z_{n}\) is \(\mathcal{F}_{n}\)-measurable,
- \(\mathbb{E}[|Z_{n}|]<+\infty\),
- \(\mathbb{E}[Z_{n+1}\,|\,\mathcal{F}_{n}]=0\) almost surely.

An event \(E\in\mathcal{F}\) happens _almost surely_ if \(\mathbb{P}[E]=1\). Let \(Z\) be a random variable and let \(\mathcal{H}_{1}\subset\mathcal{H}_{2}\subset\mathcal{F}\) be sub-\(\sigma\)-algebras; we then have the tower property \[\mathbb{E}\big{[}\mathbb{E}\big{[}Z\,\big{|}\,\mathcal{H}_{2}\big{]}\,\big{|}\,\mathcal{H}_{1}\big{]}=\mathbb{E}\big{[}Z\,\big{|}\,\mathcal{H}_{1}\big{]}.\]

### Known Tools

**Lemma 3.1**.: _Let \(f\) be a Steinhaus or Rademacher random multiplicative function. For every sequence \((a_{n})_{n\geqslant 1}\) of complex numbers and every positive integer \(m\geqslant 1\), we have_ \[\mathbb{E}\left[\left|\sum_{n\geqslant 1}a_{n}f(n)\right|^{2m}\right]\leqslant\left(\sum_{n\geqslant 1}|a_{n}|^{2}\,\tau_{2m-1}(n)\right)^{m},\] _where \(\tau_{m}(n)\) is the \(m\)-fold divisor function._

Proof.: See Bonami [3], Chapter III. Refer also to lemma 2 of Halász [9] for another proof. This inequality was already used by Lau-Tenenbaum-Wu in [13], by Harper [10] and recently by Mastrostefano [14].

**Lemma 3.2**.: _Let \(m\geqslant 2\) be an integer. Then, uniformly for \(x\geqslant 3\), we have_ \[\sum_{n\leqslant x}\tau_{m}(n)\leqslant x(2\log x)^{m-1}.\]

Proof.: See lemma 3.1 in [2].

**Lemma 3.3**.: _(Parseval's identity). Let \((a_{n})_{n\in\mathbb{N}^{*}}\) be a sequence of complex numbers, let \(A(s):=\sum_{n=1}^{+\infty}\frac{a_{n}}{n^{s}}\) denote the corresponding Dirichlet series, and let \(\sigma_{c}\) denote its abscissa of convergence. Then for any \(\sigma>\max(0,\sigma_{c})\), we have_ \[\int_{0}^{+\infty}\frac{\big{|}\sum_{n\leqslant x}a_{n}\big{|}^{2}}{x^{1+2\sigma}}dx=\frac{1}{2\pi}\int_{-\infty}^{+\infty}\bigg{|}\frac{A(\sigma+it)}{\sigma+it}\bigg{|}^{2}\mathrm{d}t.\]

Proof.: See Eq (5.26) in [15].

We define the following parameter \[a_{f}=\begin{cases}1&\text{ if $f$ is a Rademacher multiplicative function}\\ -1&\text{ if $f$ is a Steinhaus multiplicative function.}\end{cases} \tag{6}\]

**Lemma 3.4**.: _(Euler product result.) Let \(f\) be a Rademacher or Steinhaus random multiplicative function. Let \(t\) be a real number and \(2\leqslant x\leqslant y\); we have_ \[\mathbb{E}\Bigg{[}\prod_{x<p\leqslant y}\bigg{|}1+a_{f}\frac{f(p)}{p^{1/2+it}}\bigg{|}^{2a_{f}}\Bigg{]}=\prod_{x<p\leqslant y}\bigg{(}1+\frac{a_{f}}{p}\bigg{)}^{a_{f}}.\]

Proof.: See Mastrostefano [14], lemma 2.4.

**Lemma 3.5**.: _(Borel-Cantelli's First Lemma). Let \((A_{n})_{n\geqslant 1}\) be a sequence of events. If \(\sum_{n=1}^{+\infty}\mathbb{P}[A_{n}]<+\infty\), then \(\mathbb{P}[\limsup_{n\to+\infty}A_{n}]=0\)._

Proof.: See theorem 18.1 in [8].

**Lemma 3.6**.: _Let \(b_{0},b_{1},...,b_{n}\) be any complex numbers. We have_ \[\int_{0}^{1}\big{|}b_{0}+\sum_{k=1}^{n}\mathrm{e}^{\mathrm{i}2\pi k\vartheta}b_{k}\big{|}\mathrm{d}\vartheta\geqslant|b_{0}|. \tag{7}\]
Proof.: Directly, we have \[\int_{0}^{1}\big{|}b_{0}+\sum_{k=1}^{n}\mathrm{e}^{\mathrm{i}2\pi k\vartheta}b_{k}\big{|}\mathrm{d}\vartheta\geqslant\bigg{|}\int_{0}^{1}\bigg{(}b_{0}+\sum_{k=1}^{n}\mathrm{e}^{\mathrm{i}2\pi k\vartheta}b_{k}\bigg{)}\mathrm{d}\vartheta\bigg{|}=|b_{0}|.\]

**Lemma 3.7**.: _Suppose that the sequence of real random variables and \(\sigma\)-algebras \(\{(Z_{n},\mathcal{F}_{n})\}_{n\geqslant 0}\) is a supermartingale. Then, for all integers \(n\geqslant m\geqslant 0\), we have_ \[\mathbb{E}[Z_{n}\,|\,\mathcal{F}_{m}]\leqslant Z_{m}.\]

Proof.: See theorem 10.2.1 in [8].

**Lemma 3.8**.: _(Doob's inequality). Let \(a>0\). Suppose that the sequence of real random variables and \(\sigma\)-algebras \(\{(Z_{n},\mathcal{F}_{n})\}_{n\geqslant 0}\) is a nonnegative submartingale (resp. supermartingale). Then_ \[\mathbb{P}\bigg{[}\max_{0\leqslant j\leqslant n}Z_{j}>a\bigg{]}\leqslant\frac{\mathbb{E}\big{[}Z_{n}\big{]}}{a}\,\bigg{(}\text{resp. }\mathbb{P}\bigg{[}\max_{0\leqslant j\leqslant n}Z_{j}>a\bigg{]}\leqslant\frac{\mathbb{E}[Z_{0}]}{a}\bigg{)}.\]

Proof.: See theorem 9.1 in [8].

**Lemma 3.9**.: _(Doob's \(L^{r}\)-inequality). Let \(r>1\). Suppose that the sequence of real random variables and \(\sigma\)-algebras \(\{(Z_{n},\mathcal{F}_{n})\}_{0\leqslant n\leqslant N}\) is a nonnegative submartingale bounded in \(L^{r}\). Then_ \[\mathbb{E}\bigg{[}\big{(}\max_{0\leqslant n\leqslant N}Z_{n}\big{)}^{r}\bigg{]}\leqslant\bigg{(}\frac{r}{r-1}\bigg{)}^{r}\max_{0\leqslant n\leqslant N}\mathbb{E}\big{[}Z_{n}^{r}\big{]}.\]

Proof.: See theorem 9.4 in [8].

**Lemma 3.10**.: _Let \(a\geqslant 1\). For any integer \(n\geqslant 1\), let \(G_{1},...,G_{n}\) be independent real Gaussian random variables, each having mean \(0\) and variance between \(\frac{1}{20}\) and \(20\). Then_ \[\mathbb{P}\bigg{[}\sum_{m=1}^{k}G_{m}\leqslant a+2\log k+O(1)\text{ for all }k\leqslant n\bigg{]}\asymp\min\bigg{\{}1,\frac{a}{\sqrt{n}}\bigg{\}}.\]

Proof.: This is Probability Result 1 in [11].

### New Tools

**Lemma 3.11**.: _(2-dimensional Doob inequality.) Let \((X_{n,k})_{n\geqslant 0,\,0\leqslant k\leqslant K}\) be a nonnegative family of random variables. Let \((\mathcal{F}_{n})_{n\geqslant 0}\) be a filtration and let \(\mathcal{S}_{0}\) be an \(\mathcal{F}_{0}\)-measurable event. Assume that, for each \(0\leqslant k\leqslant K\), the sequence \((X_{n,k})_{n\geqslant 0}\) is a supermartingale with respect to \((\mathcal{F}_{n})_{n\geqslant 0}\), and that for all \(0\leqslant k\leqslant K\), \(X_{0,k}=X_{0}\), where \(X_{0}\) is a random variable which does not depend on \(k\). Then for any \(\lambda>0\) and \(N\geqslant 0\), we have_ \[\lambda\mathbb{P}\bigg{[}\sup_{\begin{subarray}{c}0\leqslant k\leqslant K\\ 0\leqslant n\leqslant N\end{subarray}}X_{n,k}>\lambda\,\big{|}\,\mathcal{S}_{0}\bigg{]}\leqslant 2\mathbb{E}[X_{0}\,|\,\mathcal{S}_{0}].\]

Proof.: Let \(\lambda>0\). By taking the lexicographic order on \(\{(n,k):n\geqslant 0\text{ and }0\leqslant k\leqslant K\}\) (i.e. \((n_{1},k_{1})>(n_{2},k_{2})\) if and only if \(n_{1}>n_{2}\) or (\(n_{1}=n_{2}\) and \(k_{1}>k_{2}\))), we define \[T=\inf\big{\{}(n,k)\geqslant(0,0):X_{n,k}>\lambda\big{\}}.\] Observe that, for \(n\geqslant 0\) and \(0\leqslant k\leqslant K\), the event \(\big{\{}T\leqslant(n,k)\big{\}}\) is \(\mathcal{F}_{n}\)-measurable. For the sake of readability, we denote, only in this proof, by \(a\wedge b:=\inf\{a,b\}\) the minimum of two indices with respect to the lexicographic order.
Let \(n\geqslant 0\); we set \[Y_{n}:=X_{T\wedge(n,K)}+\sum_{0\leqslant k\leqslant K}X_{(n,k)}\mathbb{1}_{T\in\{(m,k)\,:\,m>n\}}.\] Note that \(X_{T\wedge(n,K)}\leqslant Y_{n}\). It is clear that \[\mathbb{E}\big{[}Y_{n+1}\,\big{|}\,\mathcal{F}_{n}\big{]}=\mathbb{E}\bigg{[}X_{T}\mathbb{1}_{T\leqslant(n,K)}+X_{T\wedge(n+1,K)}\mathbb{1}_{T>(n,K)}+\sum_{0\leqslant k\leqslant K}X_{(n+1,k)}\mathbb{1}_{T\in\{(m,k)\,:\,m>n+1\}}\,\bigg{|}\,\mathcal{F}_{n}\bigg{]}\leqslant Y_{n},\] the last inequality following from the supermartingale property of each column \((X_{n,k})_{n\geqslant 0}\). Hence \((Y_{n})_{n\geqslant 0}\) is a nonnegative supermartingale. Moreover \(Y_{0}\leqslant 2X_{0}\), since \(X_{T\wedge(0,K)}=X_{0}\) and the sum in the definition of \(Y_{0}\) contains at most one nonzero term, equal to \(X_{0}\). By Lemma 3.7, we get then, for every \(N\geqslant 0\), \[\mathbb{E}\big{[}X_{T\wedge(N,K)}\,\big{|}\,\mathcal{S}_{0}\big{]}\leqslant\mathbb{E}\big{[}Y_{N}\,\big{|}\,\mathcal{S}_{0}\big{]}\leqslant\mathbb{E}\big{[}Y_{0}\,\big{|}\,\mathcal{S}_{0}\big{]}\leqslant 2\,\mathbb{E}\big{[}X_{0}\,\big{|}\,\mathcal{S}_{0}\big{]}. \tag{8}\] Note that under the event \(\{T=(n,k)\}\), we have \(\lambda<X_{n,k}\). Thus \[\lambda\mathbb{P}\bigg{[}\sup_{\begin{subarray}{c}0\leqslant k\leqslant K\\ 0\leqslant n\leqslant N\end{subarray}}X_{n,k}>\lambda\,\big{|}\,\mathcal{S}_{0}\bigg{]}=\lambda\mathbb{P}\big{[}T\leqslant(N,K)\,|\,\mathcal{S}_{0}\big{]}=\sum_{\begin{subarray}{c}0\leqslant n\leqslant N\\ 0\leqslant k\leqslant K\end{subarray}}\mathbb{E}\big{[}\lambda\mathbb{1}_{T=(n,k)}\,|\,\mathcal{S}_{0}\big{]}\leqslant\sum_{\begin{subarray}{c}0\leqslant n\leqslant N\\ 0\leqslant k\leqslant K\end{subarray}}\mathbb{E}\big{[}X_{n,k}\mathbb{1}_{T=(n,k)}\,|\,\mathcal{S}_{0}\big{]}=\mathbb{E}\big{[}X_{T}\mathbb{1}_{T\leqslant(N,K)}\,|\,\mathcal{S}_{0}\big{]}.\] Since \(X_{T\wedge(N,K)}=X_{T}\mathbb{1}_{T\leqslant(N,K)}+X_{(N,K)}\mathbb{1}_{T>(N,K)}\geqslant X_{T}\mathbb{1}_{T\leqslant(N,K)}\), we get then \[\lambda\mathbb{P}\bigg{[}\sup_{\begin{subarray}{c}0\leqslant k\leqslant K\\ 0\leqslant n\leqslant N\end{subarray}}X_{n,k}>\lambda\,\big{|}\,\mathcal{S}_{0}\bigg{]}\leqslant\mathbb{E}\big{[}X_{T\wedge(N,K)}\,|\,\mathcal{S}_{0}\big{]}.\] By using the previous inequality (8), we get \[\lambda\mathbb{P}\bigg{[}\sup_{\begin{subarray}{c}0\leqslant k\leqslant K\\ 0\leqslant n\leqslant N\end{subarray}}X_{n,k}>\lambda\,\big{|}\,\mathcal{S}_{0}\bigg{]}\leqslant 2\mathbb{E}[X_{0}\,|\,\mathcal{S}_{0}].\] This ends the proof.

We have the following version of Azuma/Hoeffding's inequality.

**Lemma 3.12**.: _Let \(Z=(Z_{n})_{1\leqslant n\leqslant N}\) be a complex martingale difference sequence with respect to a filtration \((\mathcal{F}_{n})_{1\leqslant n\leqslant N}\). We assume that for each \(n\), \(Z_{n}\) is bounded almost surely (say \(|Z_{n}|\leqslant b_{n}\) almost surely, where \(b_{n}\) is some real number). Furthermore, assume that we have \(|Z_{n}|\leqslant S_{n}\) almost surely, where \((S_{n})_{1\leqslant n\leqslant N}\) is a real predictable process with respect to the same filtration (i.e. for each \(n\), \(S_{n}\) is \(\mathcal{F}_{n-1}\)-measurable).
Assume that \(\sum_{1\leqslant n\leqslant N}S_{n}^{2}\leqslant T\) almost surely, where \(T\) is a deterministic constant. Then, for any \(\varepsilon>0\), we have_ \[\mathbb{P}\bigg{[}\bigg{|}\sum_{1\leqslant n\leqslant N}Z_{n}\bigg{|}\geqslant\varepsilon\bigg{]}\leqslant 2\exp\bigg{(}\frac{-\varepsilon^{2}}{10T}\bigg{)}.\]

Proof.: We define the conditional expectation \(\mathbb{E}_{n}[\,.\,]:=\mathbb{E}[\,.\,|\,\mathcal{F}_{n}]\). Following the proof of theorem 3 in [16], we define \(g_{n}:=\sum_{k=1}^{n}Z_{k}\) with \(g_{0}:=0\). We set \(Z_{0}:=0\) and \(S_{0}:=0\). Let us now define the following function, for each \(n\geqslant 1\), \(\lambda>0\) and \(t\geqslant 0\), \[H_{n}(t):=\mathbb{E}_{n-1}\Big{[}\cosh\big{(}\lambda|g_{n-1}+tZ_{n}|\big{)}\Big{]},\] where \(\cosh x:=\frac{\mathrm{e}^{x}+\mathrm{e}^{-x}}{2}\). Note that \(H_{n}\geqslant 0\). We have, for any positive differentiable function \(u\), \[(\cosh u)^{\prime\prime}=u^{\prime 2}\cosh u+u^{\prime\prime}\sinh u\leqslant(\cosh u)(u^{\prime 2}+|u^{\prime\prime}|u).\] Here, we used the following inequality \[\sinh u\leqslant u\cosh u.\] Thus, by taking \(u\) to be \(\lambda|g_{n-1}+tZ_{n}|\), we get that \(u^{\prime 2}+|u^{\prime\prime}|u\) is at most \[\lambda^{2}\Bigg{(}2\bigg{(}\frac{\Re(Z_{n})\big{(}\Re(g_{n-1})+t\Re(Z_{n})\big{)}+\Im(Z_{n})\big{(}\Im(g_{n-1})+t\Im(Z_{n})\big{)}}{|g_{n-1}+tZ_{n}|}\bigg{)}^{2}+|Z_{n}|^{2}\Bigg{)}. \tag{9}\] Using \(\frac{|\Re(g_{n-1})+t\Re(Z_{n})|}{|g_{n-1}+tZ_{n}|}\leqslant 1\) and \(\frac{|\Im(g_{n-1})+t\Im(Z_{n})|}{|g_{n-1}+tZ_{n}|}\leqslant 1\), we get that (9) is at most \[\lambda^{2}\bigg{(}2\Big{(}|\Re(Z_{n})|+|\Im(Z_{n})|\Big{)}^{2}+|Z_{n}|^{2}\bigg{)} \tag{10}\] and since \(|\Re(Z_{n})|+|\Im(Z_{n})|\leqslant\sqrt{2}|Z_{n}|\), we get at the end \[\begin{split}H_{n}^{\prime\prime}(t)&\leqslant\mathbb{E}_{n-1}\Big{[}5\lambda^{2}|Z_{n}|^{2}\cosh\big{(}\lambda|g_{n-1}+tZ_{n}|\big{)}\Big{]}\\ &\leqslant 5\lambda^{2}S_{n}^{2}\mathbb{E}_{n-1}\Big{[}\cosh\big{(}\lambda|g_{n-1}+tZ_{n}|\big{)}\Big{]}\\ &=5\lambda^{2}S_{n}^{2}H_{n}(t).\end{split} \tag{11}\] Since \(\mathbb{E}_{n-1}\big{[}Z_{n}\big{]}=0\) and \(g_{n-1}\) is \(\mathcal{F}_{n-1}\)-measurable, we have \[\begin{split}H_{n}^{\prime}(0)&=\lambda\mathbb{E}_{n-1}\bigg{[}\big{(}\Re(Z_{n})\Re(g_{n-1})+\Im(Z_{n})\Im(g_{n-1})\big{)}\frac{\sinh\big{(}\lambda|g_{n-1}|\big{)}}{|g_{n-1}|}\bigg{]}\\ &=\lambda\bigg{(}\mathbb{E}_{n-1}\big{[}\Re(Z_{n})\big{]}\Re(g_{n-1})+\mathbb{E}_{n-1}\big{[}\Im(Z_{n})\big{]}\Im(g_{n-1})\bigg{)}\frac{\sinh\big{(}\lambda|g_{n-1}|\big{)}}{|g_{n-1}|}\\ &=0.\end{split}\] Now by lemma 3 in [16], we get then \[H_{n}(t)\leqslant H_{n}(0)\exp\bigg{(}\frac{5}{2}t^{2}\lambda^{2}S_{n}^{2}\bigg{)}.\] In particular \[\begin{split}H_{n}(1)=\mathbb{E}_{n-1}\big{[}\cosh\big{(}\lambda|g_{n}|\big{)}\big{]}&\leqslant\mathbb{E}_{n-1}\big{[}\cosh\big{(}\lambda|g_{n-1}|\big{)}\big{]}\exp\bigg{(}\frac{5}{2}\lambda^{2}S_{n}^{2}\bigg{)}\\ &=\cosh\big{(}\lambda|g_{n-1}|\big{)}\exp\bigg{(}\frac{5}{2}\lambda^{2}S_{n}^{2}\bigg{)}.\end{split}\] Let us now define the following sequence, for each \(n\geqslant 0\), \[G_{n}:=\exp\bigg{(}\frac{-5\lambda^{2}}{2}\sum_{k=0}^{n}S_{k}^{2}\bigg{)}\cosh\big{(}\lambda|g_{n}|\big{)}.\] Since by assumption \(S_{n}\) is \(\mathcal{F}_{n-1}\)-measurable, we get then
\[\begin{split}\mathbb{E}_{n-1}\big{[}G_{n}\big{]}&=\exp\bigg{(}\frac{-5\lambda^{2}}{2}\sum_{k=0}^{n}S_{k}^{2}\bigg{)}\mathbb{E}_{n-1}\big{[}\cosh\big{(}\lambda|g_{n}|\big{)}\big{]}\\ &\leqslant\exp\bigg{(}\frac{-5\lambda^{2}}{2}\sum_{k=0}^{n}S_{k}^{2}\bigg{)}\exp\bigg{(}\frac{5\lambda^{2}}{2}S_{n}^{2}\bigg{)}\cosh\big{(}\lambda|g_{n-1}|\big{)}\\ &=G_{n-1}.\end{split}\] We deduce then that \((G_{n})_{n\geqslant 0}\) is a supermartingale with \(\mathbb{E}[G_{0}]=1\). Thus, by Doob's inequality (Lemma 3.8), we have \[\mathbb{P}\bigg{[}\big{|}g_{N}\big{|}\geqslant\varepsilon\bigg{]}\leqslant\mathbb{P}\bigg{[}\sup_{0\leqslant n\leqslant N}|g_{n}|\geqslant\varepsilon\bigg{]}\leqslant\mathbb{P}\bigg{[}\sup_{0\leqslant n\leqslant N}G_{n}\geqslant\exp\bigg{(}\frac{-5\lambda^{2}T}{2}\bigg{)}\cosh\lambda\varepsilon\bigg{]}\leqslant\frac{\exp\big{(}\frac{5\lambda^{2}T}{2}\big{)}}{\cosh\lambda\varepsilon}\mathbb{E}[G_{0}]\leqslant 2\exp\bigg{(}\frac{5\lambda^{2}T}{2}-\lambda\varepsilon\bigg{)}.\] By choosing \(\lambda=\frac{\varepsilon}{5T}\), we get the result.

**Lemma 3.13**.: _Let \(Z=(Z_{n})_{1\leqslant n\leqslant N}\) be a complex martingale difference sequence with respect to a filtration \(\mathcal{F}=(\mathcal{F}_{n})_{1\leqslant n\leqslant N}\). We assume that for each \(n\), \(Z_{n}\) is bounded almost surely (say \(|Z_{n}|\leqslant b_{n}\) almost surely, where \(b_{n}\) is some real number). Furthermore, assume that we have \(|Z_{n}|\leqslant S_{n}\) almost surely, where \((S_{n})_{1\leqslant n\leqslant N}\) is a real predictable process with respect to the same filtration. We set the event \(\Sigma:=\bigg{\{}\sum_{1\leqslant n\leqslant N}S_{n}^{2}\leqslant T\bigg{\}}\) where \(T\) is a deterministic constant. Then, for any \(\varepsilon>0\),_ \[\mathbb{P}\bigg{[}\bigg{\{}\bigg{|}\sum_{1\leqslant n\leqslant N}Z_{n}\bigg{|}\geqslant\varepsilon\bigg{\}}\bigcap\Sigma\bigg{]}\leqslant 2\exp\bigg{(}\frac{-\varepsilon^{2}}{10T}\bigg{)}.\]

Proof.: We define \[\widetilde{S}_{n}:=S_{n}\mathbb{1}_{\sum_{k=1}^{n}S_{k}^{2}\leqslant T}\] and \[\widetilde{Z}_{n}:=Z_{n}\mathbb{1}_{\sum_{k=1}^{n}S_{k}^{2}\leqslant T}.\] It is clear that \((\widetilde{Z}_{n})_{1\leqslant n\leqslant N}\) is an almost surely bounded martingale difference sequence. Note that \((\widetilde{S}_{n})_{1\leqslant n\leqslant N}\) is a predictable process with respect to the filtration \(\mathcal{F}\) and \(\sum_{1\leqslant n\leqslant N}\widetilde{S}_{n}^{2}\leqslant T\). Thus, all assumptions of Lemma 3.12 are satisfied for \((\widetilde{Z}_{n})_{1\leqslant n\leqslant N}\) and \((\widetilde{S}_{n})_{1\leqslant n\leqslant N}\). Note that under the event \(\Sigma\), we have \[\sum_{1\leqslant n\leqslant N}Z_{n}=\sum_{1\leqslant n\leqslant N}\widetilde{Z}_{n}.\] Thus, by Lemma 3.12, \[\mathbb{P}\bigg{[}\bigg{\{}\bigg{|}\sum_{1\leqslant n\leqslant N}Z_{n}\bigg{|}\geqslant\varepsilon\bigg{\}}\bigcap\Sigma\bigg{]}\leqslant\mathbb{P}\bigg{[}\bigg{|}\sum_{1\leqslant n\leqslant N}\widetilde{Z}_{n}\bigg{|}\geqslant\varepsilon\bigg{]}\leqslant 2\exp\bigg{(}\frac{-\varepsilon^{2}}{10T}\bigg{)}.\]

## 4 Reduction of the problem

From now on, we denote by \(f\) a Steinhaus or Rademacher random multiplicative function. The goal of this section is to reduce the problem to something simpler to deal with. We want to prove that the event \[\mathcal{A}:=\big{\{}|M_{f}(x)|>4\sqrt{x}(\log_{2}x)^{1/4+\varepsilon},\,\text{for infinitely many $x$}\big{\}}\] has probability \(0\). As in Lau-Tenenbaum-Wu [13], Basquin [1] and Mastrostefano [14], and, more generally, as in some proofs of the Law of the Iterated Logarithm (theorem 8.1 in [8] for example), the idea is to test the event \(\mathcal{A}\) on a suitable sequence of test points. Without introducing any change, we keep the same test points as Lau-Tenenbaum-Wu in [13], lemma 2.3.
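Before defining the test points, it may help to see the quantity controlled by the event \(\mathcal{A}\) concretely. The following Python sketch is purely illustrative and no part of the argument; the cutoff \(N\), the random seed and the printed checkpoints are arbitrary choices of ours. It samples a Rademacher random multiplicative function and prints the normalized partial sums \(M_{f}(x)/\big{(}\sqrt{x}(\log_{2}x)^{1/4}\big{)}\).

```python
import math
import random

# Illustration only: sample a Rademacher random multiplicative function f
# (f(p) = +-1 i.i.d. over primes, f supported on squarefree integers) and
# inspect M_f(x) / (sqrt(x) (log log x)^{1/4}).  N and the seed are arbitrary.
N = 10**5
spf = list(range(N + 1))                      # smallest-prime-factor sieve
for p in range(2, math.isqrt(N) + 1):
    if spf[p] == p:
        for n in range(p * p, N + 1, p):
            if spf[n] == n:
                spf[n] = p

rng = random.Random(0)
eps = {}                                      # the random signs f(p)
f = [0] * (N + 1)
f[1] = 1
for n in range(2, N + 1):
    p = spf[n]
    m = n // p
    # f vanishes on non-squarefree n; otherwise f(n) = f(m) * f(p).
    f[n] = 0 if m % p == 0 else f[m] * eps.setdefault(p, rng.choice((-1, 1)))

M = 0
for n in range(1, N + 1):
    M += f[n]
    if n in (10**3, 10**4, 10**5):
        norm = math.sqrt(n) * math.log(math.log(n)) ** 0.25
        print(f"x = {n:>6}  M_f(x) = {M:>6}  normalized = {M / norm:+.3f}")
```

Of course, at such small scales the iterated logarithm is essentially constant; the sketch only makes the object \(M_{f}(x)\) and its square-root normalization tangible.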
We take \(x_{i}:=\lfloor\mathrm{e}^{i^{c_{0}}}\rfloor\), where \(c_{0}\) is a "small constant" in \(\left]0,1\right[\) that will be chosen in the coming Lemma 4.1. As Mastrostefano does in [14] at the end of Section 2, we choose on the other hand \(X_{\ell}:=\mathrm{e}^{2^{\ell^{K}}}\), where \(K:=\frac{25}{\varepsilon}\). Set \[\mathcal{A}_{\ell}:=\bigg{\{}\sup_{X_{\ell-1}<x_{i-1}\leqslant X_{\ell}}\sup_{x_{i-1}<x\leqslant x_{i}}\frac{|M_{f}(x)|}{\sqrt{x}R(x)}>4\bigg{\}}\] where \(R(x):=(\log_{2}x)^{1/4+\varepsilon}\). One can easily see that \(\mathcal{A}\subset\cup_{\ell\geqslant 1}\mathcal{A}_{\ell}\). Let \(\overline{\mathcal{A}}_{\ell}\) be the complement of \(\mathcal{A}_{\ell}\) in the sample space. We have the following upper bound for the increments between consecutive test points.

**Lemma 4.1**.: _Let \(A>0\) be a fixed constant. Recall that \(x_{i}=\lfloor\mathrm{e}^{i^{c_{0}}}\rfloor\). There exists \(c_{0}=c_{0}(A)\) small enough such that we have almost surely, as \(x_{i}\) tends to infinity,_ \[\max_{x_{i-1}<x\leqslant x_{i}}|M_{f}(x)-M_{f}(x_{i-1})|\ll_{A}\frac{\sqrt{x_{i}}}{(\log x_{i})^{A}}. \tag{12}\]

**Remark 1**.: _For \(A=1\) we can take \(c_{0}=\frac{1}{350}\). From now on, we take \(c_{0}\leqslant\frac{1}{10^{3}}\)._

Proof.: See lemma 2.3 in [13]. Lau-Tenenbaum-Wu state the result in the Rademacher case, but it extends easily to the Steinhaus case by following the same arguments of the proof.

Thus it suffices to prove that \(\sum_{\ell\geqslant 1}\mathbb{P}[\mathcal{B}_{\ell}]<+\infty\) where \[\mathcal{B}_{\ell}:=\bigg{\{}\sup_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\frac{|M_{f}(x_{i})|}{\sqrt{x_{i}}R(x_{i})}>3\bigg{\}}.\]

**Remark 2**.: _Indeed, if we assume that \(\sum_{\ell\geqslant 1}\mathbb{P}[\mathcal{B}_{\ell}]<+\infty\), then by Borel-Cantelli's First Lemma 3.5, we get almost surely_ \[|M_{f}(x_{i})|\ll\sqrt{x_{i}}R(x_{i})\text{ as }x_{i}\to+\infty.\] _Since we have_ \[|M_{f}(x)|\leqslant|M_{f}(x)-M_{f}(x_{i-1})|+|M_{f}(x_{i-1})|\] _where \(x_{i}\) is chosen such that \(x\in]x_{i-1},x_{i}]\), we then have by (12)_ \[M_{f}(x)\ll\frac{\sqrt{x_{i}}}{(\log x_{i})^{A}}+\sqrt{x_{i-1}}R(x_{i-1})\ll\sqrt{x_{i}}R(x_{i})\ll\sqrt{x}R(x),\] _the last step using that \(x_{i}\asymp x\) for these test points. Thus, we get almost surely_ \[M_{f}(x)\ll\sqrt{x}R(x)\text{ as }x\to+\infty.\]

## 5 Upper bound of \(\mathbb{P}[\mathcal{B}_{\ell}]\)

### Setting up the model

In this subsection, we give the basic idea of the approach that we are going to follow. Arguing as Lau-Tenenbaum-Wu in [13] in the proof of lemma 3.1, with a small change of variables, we consider \(x_{i}\in]X_{\ell-1},X_{\ell}]\) and we take \[y_{0}=\exp\left(2^{\ell^{K}(1-K/\ell)}\right)=\exp\bigg{\{}(\log X_{\ell})^{1-K/\ell}\bigg{\}}\text{ and }y_{j}=\exp\bigg{\{}\mathrm{e}^{j/\ell}(\log X_{\ell})^{1-K/\ell}\bigg{\}}.\] Let \(J\) be minimal under the constraint \(y_{J}\geqslant X_{\ell}\), which means \(J_{\ell}=J:=\lceil K\ell^{K}\log 2\rceil\ll\ell^{K}\). Note that \(\ell^{K}=\frac{1}{\log 2}\log_{2}X_{\ell}\asymp\log_{2}x_{i}\asymp\log_{2}y_{j}\) for any \(x_{i}\in]X_{\ell-1},X_{\ell}]\) and \(1\leqslant j\leqslant J\). We start by splitting \(M_{f}(x_{i})\) according to the size of the largest prime factor \(P(n)\) of \(n\).
Let \[\Psi_{f}(x,y):=\sum_{\begin{subarray}{c}n\leqslant x\\ P(n)\leqslant y\end{subarray}}f(n)\ \ \text{and}\ \ \Psi_{f}^{\prime}(x,y):=\sum_{\begin{subarray}{c}n\leqslant x\\ P(n)<y\end{subarray}}f(n).\] We have \[M_{f}(x_{i})=\Psi_{f}(x_{i},y_{0})+M_{f}^{(1)}(x_{i})+M_{f}^{(2)}(x_{i})\] with \[M_{f}^{(1)}(x_{i}):=\sum_{y_{0}<p\leqslant y_{J}}Y_{p}\] where \[Y_{p}:=f(p)\Psi_{f}^{\prime}(x_{i}/p,p)\] and \[M_{f}^{(2)}(x_{i}):=\sum_{j=1}^{J}\sum_{\begin{subarray}{c}y_{j-1}^{2}<d\leqslant x_{i}\\ v_{P(d)}(d)\geqslant 2\end{subarray}}^{y_{j-1},y_{j}}f(d)\Psi_{f}(x_{i}/d,y_{j-1}).\] One can see that \(\mathcal{B}_{\ell}\subset\mathcal{B}_{\ell}^{(0)}\cup\mathcal{B}_{\ell}^{(1)}\cup\mathcal{B}_{\ell}^{(2)}\) where \[\mathcal{B}_{\ell}^{(0)}:=\bigcup_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\bigg{\{}\big{|}\Psi_{f}(x_{i},y_{0})\big{|}>\sqrt{x_{i}}R(x_{i})\bigg{\}},\] \[\mathcal{B}_{\ell}^{(1)}:=\bigcup_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\bigg{\{}\big{|}M_{f}^{(1)}(x_{i})\big{|}>\sqrt{x_{i}}R(x_{i})\bigg{\}}\] and \[\mathcal{B}_{\ell}^{(2)}:=\bigcup_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\bigg{\{}\big{|}M_{f}^{(2)}(x_{i})\big{|}>\sqrt{x_{i}}R(x_{i})\bigg{\}}.\]

Proof of Theorem 1.1 assuming that \(\sum_{\ell\geqslant 1}\mathbb{P}[\mathcal{B}_{\ell}^{(0)}]\), \(\sum_{\ell\geqslant 1}\mathbb{P}[\mathcal{B}_{\ell}^{(1)}]\) and \(\sum_{\ell\geqslant 1}\mathbb{P}[\mathcal{B}_{\ell}^{(2)}]\) converge.: If these three sums converge, then \(\sum_{\ell\geqslant 1}\mathbb{P}[\mathcal{B}_{\ell}]\) converges. By Borel-Cantelli's First Lemma 3.5 and by Remark 2, we get Theorem 1.1. \(\square\)

Let us first deal with \(\mathcal{B}_{\ell}^{(0)}\).

**Lemma 5.1**.: _The sum \(\sum_{\ell\geqslant 1}\mathbb{P}[\mathcal{B}_{\ell}^{(0)}]\) converges._

Proof.: Note that \(\Psi_{1}(x,y)=\#\{n\leqslant x:P(n)\leqslant y\}\). We have by Markov's inequality \[\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(0)}\big{]}\leqslant\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\mathbb{P}\bigg{[}\big{|}\Psi_{f}(x_{i},y_{0})\big{|}>\sqrt{x_{i}}R(x_{i})\bigg{]}\leqslant\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\frac{\mathbb{E}\big{[}\Psi_{f}(x_{i},y_{0})^{2}\big{]}}{x_{i}R(x_{i})^{2}}.\] However, we know that \(y_{0}\leqslant x_{i}^{1/\log_{2}x_{i}}\) for \(\ell\) large enough. We then have \[\mathbb{E}\big{[}\Psi_{f}(x_{i},y_{0})^{2}\big{]}\leqslant\Psi_{1}\big{(}x_{i},y_{0}\big{)}\leqslant\Psi_{1}\big{(}x_{i},x_{i}^{1/\log_{2}x_{i}}\big{)}\ll x_{i}(\log x_{i})^{-c\log_{3}x_{i}}\] where \(c\) is an absolute constant. Thus, the sum \[\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(0)}\big{]}\leqslant\sum_{\ell\geqslant 1}\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}(\log x_{i})^{-c\log_{3}x_{i}}\] converges.

## 6 Bounding \(\mathbb{P}[\mathcal{B}_{\ell}^{(1)}]\)

The goal of this section is to give a bound on \(\mathbb{P}[\mathcal{B}_{\ell}^{(1)}]\). From now on, we denote by \(p,q\) prime numbers. We consider the filtration \(\big{\{}\mathcal{F}_{p}\big{\}}_{p\in\mathcal{P}}\), where \(\mathcal{F}_{p}\) denotes the \(\sigma\)-algebra generated by the random variables \(f(q)\) for \(q<p\). One can see that the expectation of the random variable \(Y_{p}\) conditioned on \(\mathcal{F}_{p}\) satisfies \(\mathbb{E}\big{[}Y_{p}\,|\,\mathcal{F}_{p}\big{]}=0\). Thus the sequence \((Y_{p})_{p\in\mathcal{P}}\) is a martingale difference sequence.
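As a concrete sanity check of this decomposition, the following Python sketch (illustration only; the sizes \(x\), \(y_{0}\) and the random seed are arbitrary choices of ours, and we use that \(M_{f}^{(2)}\) vanishes in the Rademacher case) verifies numerically the exact identity \(M_{f}(x)=\Psi_{f}(x,y_{0})+\sum_{y_{0}<p\leqslant x}f(p)\Psi_{f}^{\prime}(x/p,p)\) for a sampled Rademacher \(f\).

```python
import math
import random

# Illustration only: for a sampled Rademacher f, check numerically the exact
# identity  M_f(x) = Psi_f(x, y0) + sum_{y0 < p <= x} f(p) Psi'_f(x/p, p),
# the term M^{(2)} being zero in the Rademacher case.  The parameters x, y0
# and the seed are arbitrary choices made for this check.
x, y0 = 20000, 20
rng = random.Random(1)

spf = list(range(x + 1))                      # smallest-prime-factor sieve
for p in range(2, math.isqrt(x) + 1):
    if spf[p] == p:
        for n in range(p * p, x + 1, p):
            if spf[n] == n:
                spf[n] = p

eps = {}                                      # the random signs f(p)
f, largest = [0] * (x + 1), [1] * (x + 1)     # largest[n] = P(n), P(1) = 1
f[1] = 1
for n in range(2, x + 1):
    p, m = spf[n], n // spf[n]
    largest[n] = max(p, largest[m])
    f[n] = 0 if m % p == 0 else f[m] * eps.setdefault(p, rng.choice((-1, 1)))

M = sum(f[1:])                                # M_f(x)
psi_smooth = sum(f[n] for n in range(1, x + 1) if largest[n] <= y0)
martingale_part = sum(
    eps[p] * sum(f[n] for n in range(1, x // p + 1) if largest[n] < p)
    for p in eps if p > y0)
print(M, psi_smooth + martingale_part)        # the two numbers agree exactly
```

On this example the two printed values coincide, which is exactly the splitting \(M_{f}(x_{i})=\Psi_{f}(x_{i},y_{0})+M_{f}^{(1)}(x_{i})\) used above.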
Set \[V_{\ell}(x_{i};f):=\sum_{y_{0}<p\leqslant y_{J}}\big{|}\Psi_{f}^{\prime}(x_{i}/p,p)\big{|}^{2}. \tag{13}\]

### Simplifying and smoothing \(V_{\ell}(x_{i};f)\)

The goal of this subsection is to simplify \(V_{\ell}(x_{i};f)\). Let \(X\) be a large real number such that \(\log X\asymp\ell^{K}\). Let \(p\) be a prime and \(p<t\leqslant p(1+1/X)\). Let \[\Psi_{f}^{\prime}(x,z,y):=\sum_{\begin{subarray}{c}z<n\leqslant x\\ P(n)<y\end{subarray}}f(n).\] Using the bound \[\big{|}\Psi_{f}^{\prime}(x_{i}/p,p)\big{|}^{2}\leqslant 2\big{|}\Psi_{f}^{\prime}(x_{i}/t,p)\big{|}^{2}+2\big{|}\Psi_{f}^{\prime}(x_{i}/p,x_{i}/t,p)\big{|}^{2},\] we have \(V_{\ell}(x_{i};f)\leqslant 2L_{\ell}(x_{i};f)+2W_{\ell}(x_{i};f)\) with \[L_{\ell}(x_{i};f):=\sum_{y_{0}<p\leqslant y_{J}}\frac{X}{p}\int_{p}^{p(1+1/X)}\big{|}\Psi_{f}^{\prime}(x_{i}/t,p)\big{|}^{2}\mathrm{d}t \tag{14}\] and \[W_{\ell}(x_{i};f):=\sum_{y_{0}<p\leqslant y_{J}}\frac{X}{p}\int_{p}^{p(1+1/X)}\big{|}\Psi_{f}^{\prime}(x_{i}/p,x_{i}/t,p)\big{|}^{2}\mathrm{d}t. \tag{15}\] Let us start by cleaning up \(L_{\ell}(x_{i};f)\). We have \[L_{\ell}(x_{i};f)=\sum_{y_{0}<p\leqslant y_{J}}\frac{X}{p}\int_{p}^{p(1+1/X)}\big{|}\Psi_{f}^{\prime}(x_{i}/t,p)\big{|}^{2}\mathrm{d}t=\sum_{\begin{subarray}{c}j=1\\ x_{i}\geqslant y_{j-1}\end{subarray}}^{J}\sum_{y_{j-1}<p\leqslant y_{j}}\frac{X}{p}\int_{p}^{p(1+1/X)}\big{|}\Psi_{f}^{\prime}(x_{i}/t,p)\big{|}^{2}\mathrm{d}t.\] By swapping integral and summation, we have \[L_{\ell}(x_{i};f)=\sum_{\begin{subarray}{c}j=1\\ x_{i}\geqslant y_{j-1}\end{subarray}}^{J}\int_{y_{j-1}}^{y_{j}(1+1/X)}X\sum_{y_{j-1}<p\leqslant y_{j}}\frac{1}{p}\mathbb{1}_{p<t<p(1+1/X)}|\Psi_{f}^{\prime}(x_{i}/t,p)|^{2}\mathrm{d}t\leqslant\sum_{\begin{subarray}{c}j=1\\ x_{i}\geqslant y_{j-1}\end{subarray}}^{J}\int_{y_{j-1}}^{y_{j}(1+1/X)}X\sum_{\max\{t/(1+1/X),y_{j-1}\}<p\leqslant\min\{t,y_{j}\}}\frac{1}{p}\big{|}\Psi_{f}^{\prime}(x_{i}/t,p)\big{|}^{2}\mathrm{d}t\leqslant x_{i}L_{\ell}^{(1)}(x_{i};f)+x_{i}L_{\ell}^{(2)}(x_{i};f),\] where \[L_{\ell}^{(1)}(x_{i};f):=\frac{1}{x_{i}}\sum_{\begin{subarray}{c}j=1\\ x_{i}\geqslant y_{j-1}\end{subarray}}^{J}\int_{y_{j-1}}^{y_{j}}X\sum_{t/(1+1/X)<p\leqslant t}\frac{1}{p}\big{|}\Psi_{f}^{\prime}(x_{i}/t,p)\big{|}^{2}\mathrm{d}t \tag{16}\] and \[L_{\ell}^{(2)}(x_{i};f):=\frac{1}{x_{i}}\sum_{\begin{subarray}{c}j=1\\ x_{i}\geqslant y_{j-1}\end{subarray}}^{J}\int_{y_{j}}^{y_{j}(1+1/X)}X\sum_{\max\{t/(1+1/X),y_{j-1}\}<p\leqslant y_{j}}\frac{1}{p}\big{|}\Psi_{f}^{\prime}(x_{i}/t,p)\big{|}^{2}\mathrm{d}t. \tag{17}\] By changing the variable \(z:=x_{i}/t\) inside the integral and cancelling the factor \(x_{i}\), we find \[L_{\ell}^{(1)}(x_{i};f)=\sum_{\begin{subarray}{c}j=1\\ x_{i}\geqslant y_{j-1}\end{subarray}}^{J}\int_{x_{i}/y_{j}}^{x_{i}/y_{j-1}}X\sum_{\frac{x_{i}}{z(1+1/X)}<p\leqslant\frac{x_{i}}{z}}\frac{1}{p}|\Psi_{f}^{\prime}(z,p)|^{2}\frac{\mathrm{d}z}{z^{2}}\] and \[L_{\ell}^{(2)}(x_{i};f)=\sum_{\begin{subarray}{c}j=1\\ x_{i}\geqslant y_{j-1}\end{subarray}}^{J}\int_{\frac{x_{i}}{y_{j}(1+1/X)}}^{\frac{x_{i}}{y_{j}}}X\sum_{\max\{\frac{x_{i}}{z(1+1/X)},y_{j-1}\}<p\leqslant y_{j}}\frac{1}{p}|\Psi_{f}^{\prime}(z,p)|^{2}\frac{\mathrm{d}z}{z^{2}}.\] Let us first focus on the indices \(j\) such that \(1\leqslant\frac{\log x_{i}}{\log y_{j-1}}\leqslant\ell^{100K}\) in the sum defining \(L_{\ell}^{(1)}(x_{i};f)\).
By the strong form of Mertens' theorem (with error term given by the Prime Number Theorem) we have \[\sum_{\frac{x_{i}}{z(1+1/X)}<p\leqslant x_{i}/z}\frac{1}{p}=\log\left(\frac{\log(x_{i}/z)}{\log\left(\frac{x_{i}}{z(1+1/X)}\right)}\right)+O\bigg{(}\mathrm{e}^{-C\sqrt{\log\left(\frac{x_{i}}{z(1+1/X)}\right)}}\bigg{)}\] where \(C\) is an absolute constant. Since \(x_{i}/y_{j}<z\leqslant x_{i}/y_{j-1}\) and \(\log y_{j}=\mathrm{e}^{1/\ell}\log y_{j-1}\), we get \(\log(x_{i}/z)\asymp\log y_{j-1}\). On the other hand, since by assumption \(\log X\asymp\log_{2}y_{j-1}\asymp\ell^{K}\), we then have \[\sum_{\frac{x_{i}}{z(1+1/X)}<p\leqslant\frac{x_{i}}{z}}\frac{1}{p}\ll\frac{1}{X\log y_{j-1}}. \tag{18}\] Thus, we have \[\sum_{\begin{subarray}{c}j=1\\ 1\leqslant\frac{\log x_{i}}{\log y_{j-1}}\leqslant\ell^{100K}\end{subarray}}^{J}\int_{x_{i}/y_{j}}^{x_{i}/y_{j-1}}X\sum_{\frac{x_{i}}{z(1+1/X)}<p\leqslant\frac{x_{i}}{z}}\frac{1}{p}|\Psi_{f}^{\prime}(z,p)|^{2}\frac{\mathrm{d}z}{z^{2}}\leqslant\sum_{\begin{subarray}{c}j=1\\ 1\leqslant\frac{\log x_{i}}{\log y_{j-1}}\leqslant\ell^{100K}\end{subarray}}^{J}\int_{x_{i}/y_{j}}^{x_{i}/y_{j-1}}X\sum_{\frac{x_{i}}{z(1+1/X)}<p\leqslant\frac{x_{i}}{z}}\frac{1}{p}\sup_{\frac{x_{i}}{z(1+1/X)}<q\leqslant\frac{x_{i}}{z}}|\Psi_{f}^{\prime}(z,q)|^{2}\frac{\mathrm{d}z}{z^{2}}\ll\sum_{\begin{subarray}{c}j=1\\ 1\leqslant\frac{\log x_{i}}{\log y_{j-1}}\leqslant\ell^{100K}\end{subarray}}^{J}\frac{1}{\log y_{j}}\int_{x_{i}/y_{j}}^{x_{i}/y_{j-1}}\sup_{\frac{x_{i}}{z(1+1/X)}<q\leqslant\frac{x_{i}}{z}}|\Psi_{f}^{\prime}(z,q)|^{2}\frac{\mathrm{d}z}{z^{2}}.\] Now, since \[|\Psi_{f}^{\prime}(z,q)|^{2}=\bigg{|}\Psi_{f}(z,x_{i}/z)-\sum_{\begin{subarray}{c}n\leqslant z\\ \frac{x_{i}}{z(1+1/X)}<P(n)\leqslant x_{i}/z\end{subarray}}f(n)+\sum_{\begin{subarray}{c}n\leqslant z\\ \frac{x_{i}}{z(1+1/X)}<P(n)<q\end{subarray}}f(n)\bigg{|}^{2}\leqslant 4\big{|}\Psi_{f}(z,x_{i}/z)\big{|}^{2}+4\bigg{|}\sum_{\begin{subarray}{c}n\leqslant z\\ \frac{x_{i}}{z(1+1/X)}<P(n)\leqslant x_{i}/z\end{subarray}}f(n)\bigg{|}^{2}+4\bigg{|}\sum_{\begin{subarray}{c}n\leqslant z\\ \frac{x_{i}}{z(1+1/X)}<P(n)<q\end{subarray}}f(n)\bigg{|}^{2},\] we define \[\lambda_{\ell}^{(1)}(x_{i},y_{j};f):=\frac{1}{\log y_{j}}\int_{x_{i}/y_{j}}^{x_{i}/y_{j-1}}|\Psi_{f}(z,x_{i}/z)|^{2}\frac{\mathrm{d}z}{z^{2}},\] \[\lambda_{\ell}^{(2)}(x_{i},y_{j};f):=\frac{1}{\log y_{j}}\int_{x_{i}/y_{j}}^{x_{i}/y_{j-1}}\sup_{\frac{x_{i}}{z(1+1/X)}\leqslant q\leqslant\frac{x_{i}}{z}}\bigg{|}\sum_{\begin{subarray}{c}n\leqslant z\\ \frac{x_{i}}{z(1+1/X)}<P(n)<q\end{subarray}}f(n)\bigg{|}^{2}\frac{\mathrm{d}z}{z^{2}},\] \[\lambda_{\ell}^{(3)}(x_{i},y_{j};f):=\frac{1}{\log y_{j}}\int_{x_{i}/y_{j}}^{x_{i}/y_{j-1}}\bigg{|}\sum_{\begin{subarray}{c}n\leqslant z\\ \frac{x_{i}}{z(1+1/X)}<P(n)\leqslant\frac{x_{i}}{z}\end{subarray}}f(n)\bigg{|}^{2}\frac{\mathrm{d}z}{z^{2}},\] \[L_{\ell}^{(12)}(x_{i};f):=\sum_{\begin{subarray}{c}j=1\\ \frac{\log x_{i}}{\log y_{j-1}}>\ell^{100K}\end{subarray}}^{J}\int_{x_{i}/y_{j}}^{x_{i}/y_{j-1}}X\sum_{\frac{x_{i}}{z(1+1/X)}<p\leqslant\frac{x_{i}}{z}}\frac{1}{p}|\Psi_{f}^{\prime}(z,p)|^{2}\frac{\mathrm{d}z}{z^{2}}.\] Since the number of \(y_{j}\) such that \(1\leqslant\frac{\log x_{i}}{\log y_{j-1}}\leqslant\ell^{100K}\) is less than \(100K\ell\log\ell\), we then have \[\frac{L_{\ell}(x_{i};f)}{x_{i}}\ll\ell\log\ell\bigg{(}\sum_{k=1}^{3}\sup_{\begin{subarray}{c}1\leqslant j\leqslant J\\ 1\leqslant\frac{\log x_{i}}{\log y_{j-1}}\leqslant\ell^{100K}\end{subarray}}\lambda_{\ell}^{(k)}(x_{i},y_{j};f)\bigg{)}+L_{\ell}^{(12)}(x_{i};f)+L_{\ell}^{(2)}(x_{i};f)\]
\[\leqslant\ell\log\ell\bigg{(}\sum_{k=1}^{3}\sup_{0\leqslant j\leqslant J}\lambda_{\ell}^{(k)}(x_{i},y_{j};f)\bigg{)}+L_{\ell}^{(12)}(x_{i};f)+L_{\ell}^{(2)}(x_{i};f).\] Thus \[\frac{V_{\ell}(x_{i};f)}{x_{i}}\ll\ell\log\ell\bigg{(}\sum_{k=1}^{3}\sup_{0\leqslant j\leqslant J}\lambda_{\ell}^{(k)}(x_{i},y_{j};f)\bigg{)}+L_{\ell}^{(12)}(x_{i};f)+L_{\ell}^{(2)}(x_{i};f)+\frac{W_{\ell}(x_{i};f)}{x_{i}}. \tag{19}\] It turns out that \(\lambda_{\ell}^{(1)}(x_{i},y_{j};f)\) makes the main contribution to the right-hand side above. The other terms of the sum will be bounded straightforwardly. Let \(T(\ell)\geqslant\ell^{6}\) be a positive real parameter depending on \(\ell\). We define, for each \(k\in\{1,2,3\}\), the following probabilities \[\mathbb{P}_{\ell}^{\lambda,k}:=\mathbb{P}\bigg{[}\sup_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\sup_{0\leqslant j\leqslant J}\lambda_{\ell}^{(k)}(x_{i},y_{j},f)>\frac{T(\ell)}{\ell^{K/2}\ell\log\ell}\bigg{]} \tag{20}\] and we define as well \[\mathbb{P}_{\ell}^{(12)}:=\mathbb{P}\bigg{[}\sup_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}L_{\ell}^{(12)}(x_{i};f)>\frac{T(\ell)}{\ell^{K/2}}\bigg{]},\] \[\mathbb{P}_{\ell}^{(2)}:=\mathbb{P}\bigg{[}\sup_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}L_{\ell}^{(2)}(x_{i};f)>\frac{T(\ell)}{\ell^{K/2}}\bigg{]}\] and \[\mathbb{P}_{\ell}^{W}:=\mathbb{P}\bigg{[}\sup_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\frac{\ell^{K/2}W_{\ell}(x_{i};f)}{x_{i}}>T(\ell)\bigg{]}.\]

### Bounding \(\mathbb{P}_{\ell}^{W}\)

The goal of this subsection is to prove the convergence of the sum \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{W}\).

**Proposition 6.1**.: _The sum \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{W}\) converges._

To prove the above proposition, we need the following lemma.

**Lemma 6.2**.: _Let \(r>1\) be an integer. We have_ \[\mathbb{P}_{\ell}^{W}\ll_{r}\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\bigg{(}\frac{\ell^{K/2}}{\log x_{i}}\bigg{)}^{r}. \tag{21}\]

Proof.: Using Markov's inequality for the power \(r>1\), and since \(T(\ell)\geqslant 1\), we have \[\mathbb{P}_{\ell}^{W}\leqslant\frac{1}{T(\ell)^{r}}\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\bigg{(}\frac{\ell^{K/2}}{x_{i}}\bigg{)}^{r}\mathbb{E}\big{[}W_{\ell}(x_{i};f)^{r}\big{]}\leqslant\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\bigg{(}\frac{\ell^{K/2}}{x_{i}}\bigg{)}^{r}\mathbb{E}\big{[}W_{\ell}(x_{i};f)^{r}\big{]}.\] Following the steps of Harper's work [10] and, more recently, of Mastrostefano [14] in section 6, we first apply Minkowski's inequality, bounding the above expectation by \[\mathbb{E}\big{[}W_{\ell}(x_{i};f)^{r}\big{]}\leqslant\bigg{(}\sum_{\begin{subarray}{c}y_{0}<p\leqslant y_{J}\\ p\leqslant x_{i}\end{subarray}}\bigg{(}\mathbb{E}\bigg{[}\bigg{(}\frac{X}{p}\int_{p}^{p(1+1/X)}\big{|}\Psi_{f}^{\prime}(x_{i}/p,x_{i}/t,p)\big{|}^{2}\mathrm{d}t\bigg{)}^{r}\bigg{]}\bigg{)}^{\frac{1}{r}}\bigg{)}^{r}.\] Then, by applying Hölder's inequality to the normalized integral \(\frac{X}{p}\int_{p}^{p(1+1/X)}\mathrm{d}t\) with parameters \(1/r\) and \((r-1)/r\), we can bound the above sum by \[\leqslant\bigg{(}\sum_{\begin{subarray}{c}y_{0}<p\leqslant y_{J}\\ p\leqslant x_{i}\end{subarray}}\bigg{(}\frac{X}{p}\int_{p}^{p(1+1/X)}\mathbb{E}\bigg{[}\big{|}\Psi_{f}^{\prime}(x_{i}/p,x_{i}/t,p)\big{|}^{2r}\bigg{]}\mathrm{d}t\bigg{)}^{\frac{1}{r}}\bigg{)}^{r}. \tag{22}\]
Now let us focus on bounding the \(2r\)-th moment of a partial sum of \(f\) over a short interval. Arguing as Harper [10] in the proof of proposition 2, we observe that when \(\frac{x_{i}}{p}-\frac{x_{i}}{p(1+1/X)}<1\), the interval \(]x_{i}/(p(1+1/X)),x_{i}/p]\) contains at most one integer. Hence \[\frac{X}{p}\int_{p}^{p(1+1/X)}\mathbb{E}\bigg{[}\big{|}\Psi_{f}^{\prime}(x_{i}/p,x_{i}/t,p)\big{|}^{2r}\bigg{]}\mathrm{d}t\leqslant 1.\] Otherwise we have \(p\leqslant\frac{x_{i}}{1+X}\), and by applying the Cauchy-Schwarz inequality, we get \[\mathbb{E}\bigg{[}\big{|}\Psi_{f}^{\prime}(x_{i}/p,x_{i}/t,p)\big{|}^{2r}\bigg{]}\leqslant\sqrt{\mathbb{E}\bigg{[}\big{|}\Psi_{f}^{\prime}(x_{i}/p,x_{i}/t,p)\big{|}^{2}\bigg{]}\mathbb{E}\bigg{[}\big{|}\Psi_{f}^{\prime}(x_{i}/p,x_{i}/t,p)\big{|}^{2(2r-1)}\bigg{]}}.\] Since \(t\leqslant p(1+1/X)\), we have \(\frac{x_{i}}{p}-\frac{x_{i}}{t}\leqslant\frac{x_{i}}{p(1+X)}\). We get then \[\mathbb{E}\bigg{[}\big{|}\Psi_{f}^{\prime}(x_{i}/p,x_{i}/t,p)\big{|}^{2}\bigg{]}\leqslant\sum_{\begin{subarray}{c}\frac{x_{i}}{t}<n\leqslant\frac{x_{i}}{p}\\ P(n)<p\end{subarray}}1\ll\frac{x_{i}}{pX}.\] For the second expectation, we apply Lemma 3.1: \[\mathbb{E}\bigg{[}\big{|}\Psi_{f}^{\prime}(x_{i}/p,x_{i}/t,p)\big{|}^{2(2r-1)}\bigg{]}\ll\bigg{(}\sum_{\begin{subarray}{c}\frac{x_{i}}{t}<n\leqslant\frac{x_{i}}{p}\\ P(n)<p\end{subarray}}\tau_{4r-3}(n)\bigg{)}^{2r-1}.\] By applying Lemma 3.2, we deduce that this is \[\ll\bigg{(}\frac{x_{i}}{p}(\log x_{i})^{4r-4}\bigg{)}^{2r-1}.\] We get at the end, in the case where \(p\leqslant\frac{x_{i}}{1+X}\), \[\mathbb{E}\bigg{[}\big{|}\Psi_{f}^{\prime}(x_{i}/p,x_{i}/t,p)\big{|}^{2r}\bigg{]}\ll_{r}\bigg{(}\frac{x_{i}}{p}\bigg{)}^{r}\frac{(\log x_{i})^{(2r-2)(2r-1)}}{\sqrt{X}}. \tag{23}\] By choosing \(X=(\log x_{i})^{8r^{2}-8r+4}\), we conclude that \[\begin{split}\mathbb{E}\big{[}W_{\ell}(x_{i};f)^{r}\big{]}&\ll_{r}\bigg{(}\sum_{\frac{x_{i}}{1+X}<p\leqslant x_{i}}1+x_{i}\frac{(\log x_{i})^{\frac{(2r-2)(2r-1)}{r}}}{X^{\frac{1}{2r}}}\sum_{p\leqslant\frac{x_{i}}{1+X}}\frac{1}{p}\bigg{)}^{r}\\ &\ll_{r}\bigg{(}\frac{x_{i}}{\log x_{i}}\bigg{)}^{r}.\end{split} \tag{24}\] Then \[\mathbb{P}_{\ell}^{W}\ll_{r}\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\bigg{(}\frac{\ell^{K/2}}{\log x_{i}}\bigg{)}^{r},\] which ends the proof.

Proof of Proposition 6.1.: By choosing \(r>1/c_{0}\), where \(c_{0}\) is the constant chosen in Lemma 4.1, we get the convergence of \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{W}\).

### Bounding \(\mathbb{P}_{\ell}^{\lambda,1}\)

The goal of this subsection is to prove the convergence of the sum \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{\lambda,1}\). We start by modifying \(\lambda_{\ell}^{(1)}\) so as to obtain, for each \(x_{i}\), a supermartingale. Indeed, we have \(\lambda_{\ell}^{(1)}(x_{i},y_{j};f)\ll U_{j,i}\), where \[U_{j,i}:=\frac{1}{\log y_{j}}\bigg{(}\frac{\log y_{j}}{\log y_{0}}\bigg{)}^{-1/\ell^{K}}\int_{0}^{+\infty}\big{|}\Psi_{f}(z,g_{j,i}(z))\big{|}^{2}\frac{\mathrm{d}z}{z^{2}}\] and \(g_{j,i}\) is the function defined on \([0,+\infty[\) by \[\begin{cases}g_{j,i}(z)=y_{j}\text{ for }z\leqslant\frac{x_{i}}{y_{j}},\\ g_{j,i}(z)=\frac{x_{i}}{z}\text{ for }\frac{x_{i}}{y_{j}}<z\leqslant\frac{x_{i}}{y_{0}},\\ g_{j,i}(z)=y_{0}\text{ for }z>\frac{x_{i}}{y_{0}}.\end{cases}\] As in [14], the factor \(\left(\frac{\log y_{j}}{\log y_{0}}\right)^{-1/\ell^{K}}\) is added for technical reasons.
Note that, since \(\log y_{j}/\log y_{0}=\mathrm{e}^{j/\ell}\) and \(0\leqslant j\leqslant J\ll\ell^{K}\), we have \[\bigg{(}\frac{\log y_{j}}{\log y_{0}}\bigg{)}^{-1/\ell^{K}}=\mathrm{e}^{-j/\ell^{K+1}}\asymp 1.\] We will soon see that, for fixed \(x_{i}\), the random variables \(U_{j,i}\) form a nonnegative supermartingale sequence in \(y_{j}\). However, we cannot apply Doob's inequality at this point unless we bound the probability by the sum over \(y_{j}\) of the probabilities of the supremum over \(x_{i}\) of \(U_{j,i}\), which would result in a significant loss (of a factor \(\ell^{K}\)). Nevertheless, by observing that \(U_{0,i}\) is independent of \(x_{i}\) and by using Lemma 3.11, we can provide a robust upper bound for the probability of the supremum over \(x_{i}\) and \(y_{j}\) of \(U_{j,i}\). Unfortunately, the direct application of this result leads to a weak bound for \(\mathbb{P}_{\ell}^{\lambda,1}\), due to the fact that the 2-dimensional Doob inequality (Lemma 3.11) only relates the probability of the supremum of a supermartingale sequence to the expectations of its members, and not to their low moments (which we need here, because of the presence of the factors \(\ell^{K/2}\), which are related to the size of the low moments of the random variables). To overcome this, we will first condition on an event ensuring that the initial variable \(U_{0,i}=I_{0}\) is dominated by the size of its low moments. We denote \[T_{1}(\ell)=\frac{T(\ell)}{\ell\log\ell}. \tag{25}\] We have \[\mathbb{P}_{\ell}^{\lambda,1}\leqslant\widetilde{\mathbb{P}}_{\ell}^{\lambda,1}:=\mathbb{P}\bigg{[}\sup_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\sup_{0\leqslant j\leqslant J}U_{j,i}>\frac{C_{0}T_{1}(\ell)}{\ell^{K/2}}\bigg{]}\] where \(C_{0}\) is an absolute constant accounting for the factor \(\big{(}\frac{\log y_{j}}{\log y_{0}}\big{)}^{-1/\ell^{K}}\). Now we prove that, for each \(x_{i}\), the sequence \((U_{j,i})_{j\geqslant 0}\) is a supermartingale with respect to the filtration \((\mathcal{F}_{y_{j}})_{j\geqslant 0}\).

**Lemma 6.3**.: _For \(\ell\) large enough and for any \(x_{i}\), the sequence \((U_{j,i})_{j\geqslant 0}\) is a supermartingale with respect to the filtration \((\mathcal{F}_{y_{j}})_{j\geqslant 0}\)._

Proof.: Since for \(z\geqslant x_{i}/y_{j-1}\) we have \(g_{j,i}(z)\leqslant y_{j-1}\), it is clear that \(g_{j,i}(z)=g_{j-1,i}(z)\) and that \[\int_{x_{i}/y_{j-1}}^{\infty}\big{|}\Psi_{f}(z,g_{j,i}(z))\big{|}^{2}\frac{\mathrm{d}z}{z^{2}}=\int_{x_{i}/y_{j-1}}^{\infty}\big{|}\Psi_{f}(z,g_{j-1,i}(z))\big{|}^{2}\frac{\mathrm{d}z}{z^{2}}\] is \(\mathcal{F}_{y_{j-1}}\)-measurable. We have \[\mathbb{E}\big{[}U_{j,i}\,\big{|}\mathcal{F}_{y_{j-1}}\,\big{]}=\frac{1}{\log y_{j}}\bigg{(}\frac{\log y_{j}}{\log y_{0}}\bigg{)}^{-1/\ell^{K}}\int_{0}^{x_{i}/y_{j-1}}\mathbb{E}\bigg{[}\big{|}\Psi_{f}(z,g_{j,i}(z))\big{|}^{2}\,\big{|}\,\mathcal{F}_{y_{j-1}}\,\bigg{]}\frac{\mathrm{d}z}{z^{2}}+\frac{1}{\log y_{j}}\bigg{(}\frac{\log y_{j}}{\log y_{0}}\bigg{)}^{-1/\ell^{K}}\int_{x_{i}/y_{j-1}}^{\infty}\big{|}\Psi_{f}(z,g_{j,i}(z))\big{|}^{2}\frac{\mathrm{d}z}{z^{2}}.\] Let us start by giving an upper bound of \[\mathbb{E}\bigg{[}\big{|}\Psi_{f}(z,g_{j,i}(z))\big{|}^{2}\,\big{|}\mathcal{F}_{y_{j-1}}\,\bigg{]}. \tag{26}\] Recall that the symbol \(\sum^{y,z}\) indicates a sum restricted to integers all of whose prime factors belong to the interval \(]y,z]\).
Since \(g_{j,i}(z)\leqslant y_{j}\) for \(z\leqslant x_{i}/y_{j-1}\), we have, in both the Rademacher and Steinhaus cases, \[\begin{split}\mathbb{E}\bigg{[}\big{|}\Psi_{f}(z,g_{j,i}(z))\big{|}^{2}\,\big{|}\mathcal{F}_{y_{j-1}}\bigg{]}&=\mathbb{E}\Bigg{[}\Big{|}\Psi_{f}(z,y_{j-1})+\sum_{\begin{subarray}{c}n\leqslant z\\ y_{j-1}<P(n)\leqslant g_{j,i}(z)\end{subarray}}f(n)\Big{|}^{2}\,\Big{|}\mathcal{F}_{y_{j-1}}\Bigg{]}\\ &=\mathbb{E}\Bigg{[}\Big{|}\Psi_{f}(z,y_{j-1})+\sum_{d>y_{j-1}}^{y_{j-1},g_{j,i}(z)}f(d)\Psi_{f}(z/d,y_{j-1})\Big{|}^{2}\,\Big{|}\mathcal{F}_{y_{j-1}}\Bigg{]}\\ &\leqslant\big{|}\Psi_{f}(z,y_{j-1})\big{|}^{2}+\sum_{d>y_{j-1}}^{y_{j-1},g_{j,i}(z)}\big{|}\Psi_{f}(z/d,y_{j-1})\big{|}^{2}\\ &\leqslant\big{|}\Psi_{f}(z,y_{j-1})\big{|}^{2}+\sum_{d>y_{j-1}}^{y_{j-1},y_{j}}\big{|}\Psi_{f}(z/d,y_{j-1})\big{|}^{2}.\end{split}\] On the other hand, we have \[\int_{0}^{x_{i}/y_{j-1}}\sum_{d>y_{j-1}}^{y_{j-1},y_{j}}\big{|}\Psi_{f}(z/d,y_{j-1})\big{|}^{2}\frac{\mathrm{d}z}{z^{2}}=\sum_{d>y_{j-1}}^{y_{j-1},y_{j}}\frac{1}{d}\int_{0}^{x_{i}/dy_{j-1}}\big{|}\Psi_{f}(u,y_{j-1})\big{|}^{2}\frac{\mathrm{d}u}{u^{2}}\leqslant\bigg{(}\sum_{d>y_{j-1}}^{y_{j-1},y_{j}}\frac{1}{d}\bigg{)}\int_{0}^{x_{i}/y_{j-1}}\big{|}\Psi_{f}(u,y_{j-1})\big{|}^{2}\frac{\mathrm{d}u}{u^{2}},\] where we made the change of variable \(u:=z/d\). For the sake of readability, we set \[b:=\bigg{(}1+\sum_{d>y_{j-1}}^{y_{j-1},y_{j}}\frac{1}{d}\bigg{)}\mathrm{e}^{-1/\ell}\bigg{(}\frac{\log y_{j}}{\log y_{j-1}}\bigg{)}^{-1/\ell^{K}}.\] Note as well that \[\int_{0}^{x_{i}/y_{j-1}}\big{|}\Psi_{f}(z,y_{j-1})\big{|}^{2}\frac{\mathrm{d}z}{z^{2}}+\int_{x_{i}/y_{j-1}}^{\infty}\big{|}\Psi_{f}(z,g_{j-1,i}(z))\big{|}^{2}\frac{\mathrm{d}z}{z^{2}}=\int_{0}^{+\infty}\big{|}\Psi_{f}(z,g_{j-1,i}(z))\big{|}^{2}\frac{\mathrm{d}z}{z^{2}}.\] By collecting the previous computations together, we find \[\mathbb{E}\big{[}U_{j,i}\,\big{|}\mathcal{F}_{y_{j-1}}\,\big{]}\leqslant bU_{j-1,i}.\] To end the proof, it suffices to prove that \(b\leqslant 1\). Note that \(\frac{\log y_{j}}{\log y_{j-1}}=\mathrm{e}^{1/\ell}\) and \[1+\sum_{d>y_{j-1}}^{y_{j-1},y_{j}}\frac{1}{d}=\prod_{y_{j-1}<p\leqslant y_{j}}\bigg{(}1-\frac{1}{p}\bigg{)}^{-1}.\] For \(\ell\) large enough, we have \[\begin{split}b&=\mathrm{e}^{-1/\ell}\big{(}\mathrm{e}^{\frac{1}{\ell}}\big{)}^{-1/\ell^{K}}\prod_{y_{j-1}<p\leqslant y_{j}}\bigg{(}1-\frac{1}{p}\bigg{)}^{-1}\\ &=\exp\bigg{(}-\frac{1}{\ell}-\frac{1}{\ell^{K+1}}-\sum_{y_{j-1}<p\leqslant y_{j}}\log\bigg{(}1-\frac{1}{p}\bigg{)}\bigg{)}\\ &=\exp\bigg{(}\frac{-1}{\ell^{K+1}}+\frac{1}{2}\sum_{y_{j-1}<p\leqslant y_{j}}\frac{1}{p^{2}}+O\bigg{(}\sum_{y_{j-1}<p\leqslant y_{j}}\frac{1}{p^{3}}\bigg{)}+O\bigg{(}\mathrm{e}^{-C\sqrt{\log y_{j-1}}}\bigg{)}\bigg{)}\end{split}\] for some constant \(C\), where we used Mertens' theorem in the form \(\sum_{y_{j-1}<p\leqslant y_{j}}\frac{1}{p}=\frac{1}{\ell}+O\big{(}\mathrm{e}^{-C\sqrt{\log y_{j-1}}}\big{)}\). We have \(\sum_{y_{j-1}<p\leqslant y_{j}}\frac{1}{p^{2}}\ll\frac{1}{y_{j-1}}\leqslant\frac{1}{y_{0}}\). We get \[b=\exp\left(\frac{-1}{\ell^{K+1}}+O\!\left(\mathrm{e}^{-C\sqrt{\log y_{0}}}\right)\right)=\exp\left(\frac{-1}{\ell^{K+1}}+O\!\left(\mathrm{e}^{-2^{\ell^{K}/4}}\right)\right)\!.\] Thus, for large \(\ell\), we have \(b\leqslant 1\). It follows then that \(\mathbb{E}\!\left[U_{j,i}\,\big{|}\,\mathcal{F}_{y_{j-1}}\right]\leqslant U_{j-1,i}\).
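The cancellation that makes \(b\leqslant 1\) can also be seen numerically. The following Python sketch is an illustration only, with artificially small parameters in place of the astronomically large \(y_{j-1},y_{j}\) (the choices of \(\ell\) and \(y_{1}\) below are ours); it compares the Euler product \(\prod_{y_{1}<p\leqslant y_{2}}(1-1/p)^{-1}\) with \(\mathrm{e}^{1/\ell}=\log y_{2}/\log y_{1}\), as predicted by Mertens' theorem.

```python
import math

# Illustration only: by Mertens' theorem, prod_{y1 < p <= y2} (1 - 1/p)^{-1}
# is close to log(y2)/log(y1) = e^{1/ell}; this is exactly what the factor
# e^{-1/ell} in the definition of b compensates.  The parameters ell and y1
# are arbitrary small choices (the actual y_{j-1}, y_j are astronomical).
ell = 5
y1 = 10**4
y2 = int(math.exp(math.exp(1 / ell) * math.log(y1)))  # log y2 ~ e^{1/ell} log y1

is_composite = bytearray(y2 + 1)           # simple sieve of Eratosthenes
for p in range(2, math.isqrt(y2) + 1):
    if not is_composite[p]:
        for n in range(p * p, y2 + 1, p):
            is_composite[n] = 1

product = 1.0
for p in range(y1 + 1, y2 + 1):
    if not is_composite[p]:
        product *= 1 / (1 - 1 / p)

print(product, math.exp(1 / ell))          # the two values are close
```

The two printed values agree to within the Mertens error term, which in the regime of the proof is of size \(O\big{(}\mathrm{e}^{-C\sqrt{\log y_{j-1}}}\big{)}\), far smaller than the decisive term \(\ell^{-(K+1)}\).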
**Remark 3**.: _The reason why we added the factor \(\left(\frac{\log y_{j}}{\log y_{0}}\right)^{-1/\ell^{K}}\) is to make sure that \((U_{j,i})_{0\leqslant j\leqslant J}\) is a supermartingale sequence with respect to the filtration \((\mathcal{F}_{y_{j}})_{0\leqslant j\leqslant J}\)._

We set \[I_{0}:=\frac{1}{\log y_{0}}\int_{-\infty}^{+\infty}\left|\frac{F_{0}(1/2+it)}{1/2+it}\right|^{2}\mathrm{d}t \tag{27}\] where \(F_{0}(s)=\prod_{p\leqslant y_{0}}\left(1+a_{f}\frac{f(p)}{p^{s}}\right)^{a_{f}}\).

**Lemma 6.4**.: _(Low moments estimates.) We have \(\mathbb{E}\!\left[I_{0}^{\frac{2}{3}}\right]\ll 1/\ell^{\frac{K}{3}}\). In particular, for \(\ell\) large enough, we have_ \[\mathbb{P}\!\left[I_{0}>\frac{T_{1}(\ell)^{1/2}}{\ell^{K/2}}\right]\ll\frac{1}{T_{1}(\ell)^{1/3}}. \tag{28}\]

Proof.: From Key Proposition 1 and Key Proposition 2 in [11], as is done in the paragraph entitled "Proof of the upper bound in Theorem 1, assuming Key Propositions 1 and 2", we have in the Steinhaus case, taking \(y_{0}=x^{1/\mathrm{e}}\), \(q=2/3\) and \(k=0\), \[\mathbb{E}\!\left[I_{0}^{2/3}\right]\ll\mathbb{E}\!\left[\left(\frac{1}{\log y_{0}}\int_{-\infty}^{+\infty}\left|\frac{F_{0}(1/2+it)}{1/2+it}\right|^{2}\mathrm{d}t\right)^{2/3}\right]\ll\frac{1}{\ell^{K/3}}. \tag{29}\] In the Rademacher case, the inequality (29) follows directly from Key Propositions 3 and 4 in [12], using the same proof as in the Steinhaus case and taking the same values \(y_{0}=x^{1/\mathrm{e}}\), \(q=2/3\) and \(k=0\). The second part of the lemma follows easily from Markov's inequality: \[\mathbb{P}\!\left[I_{0}>\frac{T_{1}(\ell)^{1/2}}{\ell^{K/2}}\right]\leqslant\frac{\ell^{\frac{K}{3}}\mathbb{E}\!\left[I_{0}^{\frac{2}{3}}\right]}{T_{1}(\ell)^{\frac{1}{3}}}\ll\frac{1}{T_{1}(\ell)^{1/3}}.\]

**Proposition 6.5**.: _For sufficiently large \(\ell\), we have_ \[\mathbb{P}_{\ell}^{\lambda,1}\ll\frac{1}{T_{1}(\ell)^{1/3}}. \tag{30}\] _Furthermore, the sum \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{\lambda,1}\) converges._

Proof.: Define \(\mathcal{S}_{0}\) to be the event \(\left\{I_{0}\leqslant\frac{T_{1}(\ell)^{1/2}}{\ell^{K/2}}\right\}\). We have \[\widetilde{\mathbb{P}}_{\ell}^{\lambda,1}=\mathbb{P}\bigg{[}\sup_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\sup_{0\leqslant j\leqslant J}U_{j,i}>\frac{C_{0}T_{1}(\ell)}{\ell^{K/2}}\bigg{]}\leqslant\mathbb{P}\bigg{[}\sup_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\sup_{0\leqslant j\leqslant J}U_{j,i}>\frac{C_{0}T_{1}(\ell)}{\ell^{K/2}}\,\bigg{|}\,\mathcal{S}_{0}\bigg{]}+\mathbb{P}\big{[}\overline{\mathcal{S}_{0}}\big{]}.\] Note that for each \(X_{\ell-1}<x_{i}\leqslant X_{\ell}\), the sequence \((U_{j,i})_{j\geqslant 0}\) is a nonnegative supermartingale (see Lemma 6.3) and \(U_{0,i}=I_{0}\) for all \(x_{i}\). Note as well that \(\mathcal{S}_{0}\) is \(\mathcal{F}_{y_{0}}\)-measurable. Then all assumptions of Lemma 3.11 are satisfied, and we have \[\mathbb{P}\bigg{[}\sup_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\sup_{0\leqslant j\leqslant J}U_{j,i}>\frac{C_{0}T_{1}(\ell)}{\ell^{K/2}}\,\bigg{|}\,\mathcal{S}_{0}\bigg{]}\ll\frac{\ell^{K/2}}{T_{1}(\ell)}\mathbb{E}\big{[}I_{0}\,\big{|}\,\mathcal{S}_{0}\big{]}.\] By Lemma 6.4, we have \(\mathbb{P}[\overline{\mathcal{S}_{0}}]\ll\frac{1}{T_{1}(\ell)^{1/3}}\).
We get then \[\widetilde{\mathbb{P}}_{\ell}^{\lambda,1}\ll\frac{\ell^{K/2}}{T_{1}(\ell)}\mathbb{E}\big{[}I_{0}\,\big{|}\,\mathcal{S}_{0}\big{]}+\frac{1}{T_{1}(\ell)^{1/3}}.\] Since \(\mathbb{E}\big{[}I_{0}\,\big{|}\,\mathcal{S}_{0}\big{]}\leqslant\frac{T_{1}(\ell)^{1/2}}{\ell^{K/2}}\) (by definition of the event \(\mathcal{S}_{0}\)), we deduce that \[\mathbb{P}_{\ell}^{\lambda,1}\leqslant\widetilde{\mathbb{P}}_{\ell}^{\lambda,1}\ll\frac{1}{T_{1}(\ell)^{1/2}}+\frac{1}{T_{1}(\ell)^{1/3}}\ll\frac{1}{T_{1}(\ell)^{1/3}}.\] Since \(T_{1}(\ell)=\frac{T(\ell)}{\ell\log\ell}\gg\ell^{4}\), it is clear that \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{\lambda,1}\) converges. This ends the proof.

### Bounding \(\mathbb{P}_{\ell}^{\lambda,2}\) and \(\mathbb{P}_{\ell}^{\lambda,3}\)

**Lemma 6.6**.: _For \(k=2,3\), the sum \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{\lambda,k}\) converges._

Proof.: We have \[\mathbb{P}_{\ell}^{\lambda,k}\leqslant\frac{\ell^{K}}{T_{1}(\ell)^{2}}\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\sum_{0\leqslant j\leqslant J}\mathbb{E}\bigg{[}\bigg{(}\lambda_{\ell}^{(k)}(x_{i},y_{j};f)\bigg{)}^{2}\bigg{]}.\] For fixed \(z\), we consider \[X_{q}(z):=\sum_{\begin{subarray}{c}n\leqslant z\\ \frac{x_{i}}{z(1+1/X)}<P(n)\leqslant q\end{subarray}}f(n).\] We claim that \((|X_{q}(z)|)_{q\in\mathcal{P}}\) is a submartingale with respect to the filtration \((\mathcal{F}_{p})\). Indeed, let \(q<p\) be two consecutive prime numbers. In the Rademacher case, we have \[\begin{split}\mathbb{E}\big{[}\big{|}X_{p}(z)\big{|}\,\big{|}\,\mathcal{F}_{p}\big{]}&=\mathbb{E}\bigg{[}\big{|}X_{q}(z)+f(p)X_{q}(z/p)\big{|}\,\bigg{|}\,\mathcal{F}_{p}\bigg{]}\\ &=\frac{1}{2}\big{|}X_{q}(z)+X_{q}(z/p)\big{|}+\frac{1}{2}\big{|}X_{q}(z)-X_{q}(z/p)\big{|}\\ &\geqslant\big{|}X_{q}(z)\big{|}.\end{split}\] In the Steinhaus case, let \(n\) be the smallest integer such that \(z/p^{n}<1\). By applying Lemma 3.6, we have \[\begin{split}\mathbb{E}\big{[}\big{|}X_{p}(z)\big{|}\,\big{|}\,\mathcal{F}_{p}\big{]}&=\mathbb{E}\bigg{[}\big{|}X_{q}(z)+\sum_{k=1}^{n}f(p)^{k}X_{q}(z/p^{k})\big{|}\,\bigg{|}\,\mathcal{F}_{p}\bigg{]}\\ &=\int_{0}^{1}\big{|}X_{q}(z)+\sum_{k=1}^{n}\mathrm{e}^{\mathrm{i}2\pi k\vartheta}X_{q}(z/p^{k})\big{|}\mathrm{d}\vartheta\\ &\geqslant\big{|}X_{q}(z)\big{|}.\end{split}\] Then, in both the Rademacher and Steinhaus cases, we have \[\mathbb{E}\big{[}\big{|}X_{p}(z)\big{|}\,\big{|}\,\mathcal{F}_{p}\big{]}\geqslant\big{|}X_{q}(z)\big{|}.\] By the Cauchy-Schwarz inequality followed by Doob's \(L^{4}\)-inequality (Lemma 3.9) applied to the supremum over \(q\), we get \[\mathbb{E}\bigg{[}\bigg{(}\lambda_{\ell}^{(2)}(x_{i},y_{j};f)\bigg{)}^{2}\bigg{]}\leqslant\frac{1}{(\log y_{j})^{2}}\bigg{(}\int_{x_{i}/y_{j}}^{x_{i}/y_{j-1}}\frac{\mathrm{d}z}{z}\bigg{)}\int_{x_{i}/y_{j}}^{x_{i}/y_{j-1}}\mathbb{E}\bigg{[}\sup_{\frac{x_{i}}{z(1+1/X)}\leqslant q\leqslant\frac{x_{i}}{z}}\bigg{|}\sum_{\begin{subarray}{c}n\leqslant z\\ \frac{x_{i}}{z(1+1/X)}<P(n)<q\end{subarray}}f(n)\bigg{|}^{4}\bigg{]}\frac{\mathrm{d}z}{z^{3}}\ll\frac{1}{\log y_{j}}\int_{x_{i}/y_{j}}^{x_{i}/y_{j-1}}\mathbb{E}\bigg{[}\bigg{|}\sum_{\begin{subarray}{c}n\leqslant z\\ \frac{x_{i}}{z(1+1/X)}<P(n)\leqslant\frac{x_{i}}{z}\end{subarray}}f(n)\bigg{|}^{4}\bigg{]}\frac{\mathrm{d}z}{z^{3}}.\] By the Cauchy-Schwarz inequality, we get in the case of \(\lambda_{\ell}^{(3)}(x_{i},y_{j};f)\) \[\mathbb{E}\bigg{[}\bigg{(}\lambda_{\ell}^{(3)}(x_{i},y_{j};f)\bigg{)}^{2}\bigg{]}\ll\frac{1}{\log y_{j}}\int_{x_{i}/y_{j}}^{x_{i}/y_{j-1}}\mathbb{E}\bigg{[}\bigg{|}\sum_{\begin{subarray}{c}n\leqslant z\\ \frac{x_{i}}{z(1+1/X)}<P(n)\leqslant\frac{x_{i}}{z}\end{subarray}}f(n)\bigg{|}^{4}\bigg{]}\frac{\mathrm{d}z}{z^{3}}.\] To give an upper bound of the fourth moment, we use Lemma 3.1 and Lemma 3.2.
This gives \[\mathbb{E}\Bigg{[}\bigg{|}\sum_{\begin{subarray}{c}n\leqslant z\\ \frac{x_{i}}{z(1+1/X)}<P(n)\leqslant\frac{x_{i}}{z}\end{subarray}}f(n)\bigg{|}^{4}\Bigg{]}\leqslant\bigg{(}\sum_{\begin{subarray}{c}n\leqslant z\\ \frac{x_{i}}{z(1+1/X)}<P(n)\leqslant\frac{x_{i}}{z}\end{subarray}}\tau_{3}(n)\bigg{)}^{2}\leqslant\bigg{(}3\sum_{\frac{x_{i}}{z(1+1/X)}<p\leqslant\frac{x_{i}}{z}}\sum_{k\leqslant\frac{x_{i}}{p}}\tau_{3}(k)\bigg{)}^{2}\ll z^{2}(\log z)^{4}\bigg{(}\sum_{\frac{x_{i}}{z(1+1/X)}<p\leqslant\frac{x_{i}}{z}}\frac{1}{p}\bigg{)}^{2}.\] Since \(x_{i}/y_{j}<z\leqslant x_{i}/y_{j-1}\), from (18) we have \[\sum_{\frac{x_{i}}{z(1+1/X)}<p\leqslant\frac{x_{i}}{z}}\frac{1}{p}\ll\frac{1}{X\log y_{j}}.\] We get then, in both cases (\(k=2\) or \(3\)), \[\mathbb{E}\bigg{[}\bigg{(}\lambda_{\ell}^{(k)}(x_{i},y_{j};f)\bigg{)}^{2}\bigg{]}\ll\frac{(\log x_{i})^{4}}{X^{2}(\log y_{j})^{3}}\int_{x_{i}/y_{j}}^{x_{i}/y_{j-1}}\frac{\mathrm{d}z}{z}\ll\frac{(\log x_{i})^{4}}{X^{2}(\log y_{0})^{2}}.\] Thus, at the end, we get \[\mathbb{P}_{\ell}^{\lambda,k}\ll\frac{\ell^{2K}}{T_{1}(\ell)^{2}(\log y_{0})^{2}}\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\frac{(\log x_{i})^{4}}{X^{2}}.\] Now recall from the proof of Lemma 6.2 in Section 6.1 that \(X=(\log x_{i})^{8r^{2}-8r+4}\), where \(r>1/c_{0}>1\) (\(c_{0}\) is defined in Lemma 4.1). Since \(2\times(8r^{2}-8r+4)-4>r\) for \(r>1\), we then have \(c_{0}(2\times(8r^{2}-8r+4)-4)>1\). Thus \[\mathbb{P}_{\ell}^{\lambda,k}\ll\frac{\ell^{2K}}{T_{1}(\ell)^{2}(\log y_{0})^{2}}\sum_{i\leqslant 2^{\ell^{K}/c_{0}}}\frac{1}{i^{c_{0}(2\times(8r^{2}-8r+4)-4)}}\ll\frac{\ell^{2K}}{T_{1}(\ell)^{2}(\log y_{0})^{2}}.\] Since \(\log y_{0}=2^{\ell^{K}-K\ell^{K-1}}\) and \(T_{1}(\ell)\geqslant\ell^{4}\), the sum \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{\lambda,k}\) converges, which ends the proof.

### Bounding \(\mathbb{P}_{\ell}^{(12)}\)

The goal of this subsection is to give an upper bound of \(\mathbb{P}_{\ell}^{(12)}\).

**Lemma 6.7**.: _For \(\ell\) large enough and \(x_{i}\in]X_{\ell-1},X_{\ell}]\), we have_ \[\mathbb{E}\big{[}L_{\ell}^{(12)}(x_{i};f)\big{]}\ll\ell^{K}\mathrm{e}^{-\ell^{90K}/2}. \tag{31}\]
Proof.: Note first that for \(p\leqslant\frac{x_{i}}{z}\leqslant y_{j}\), \[\mathbb{E}\big{[}|\Psi_{f}^{\prime}(z,p)|^{2}\big{]}\ll z\mathrm{e}^{\frac{-\log z}{2\log p}}\leqslant z\mathrm{e}^{\frac{-\log z}{2\log y_{j}}}.\] By using again the bound \[\sum_{\frac{x_{i}}{z(1+1/X)}<p\leqslant\frac{x_{i}}{z}}\frac{1}{p}\ll\frac{1}{X\log y_{j-1}},\] we get \[\begin{split}\mathbb{E}\big{[}L_{\ell}^{(12)}(x_{i};f)\big{]}&\ll\sum_{\begin{subarray}{c}j=1\\ \frac{\log x_{i}}{\log y_{j-1}}>\ell^{100K}\end{subarray}}^{J}\int_{x_{i}/y_{j}}^{x_{i}/y_{j-1}}X\sum_{\frac{x_{i}}{z(1+1/X)}<p\leqslant\frac{x_{i}}{z}}\frac{1}{p}z\mathrm{e}^{\frac{-\log z}{2\log y_{j}}}\frac{\mathrm{d}z}{z^{2}}\\ &\ll\sum_{\begin{subarray}{c}j=1\\ \frac{\log x_{i}}{\log y_{j-1}}>\ell^{100K}\end{subarray}}^{J}\frac{1}{\log y_{j}}\int_{x_{i}/y_{j}}^{x_{i}/y_{j-1}}\frac{1}{z^{1+\frac{1}{2\log y_{j}}}}\mathrm{d}z\ll\ell^{K}\mathrm{e}^{-\ell^{90K}/2}.\end{split}\]

**Lemma 6.8**.: _For \(T(\ell)\geqslant 1\), the sum \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{(12)}\) converges._

Proof.: For \(\ell\) large, by applying Lemma 6.7 we have \[\begin{split}\mathbb{P}_{\ell}^{(12)}&\leqslant\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\frac{\ell^{K/2}}{T(\ell)}\mathbb{E}\big{[}L_{\ell}^{(12)}(x_{i};f)\big{]}\\ &\ll\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\ell^{K/2}\ell^{K}\mathrm{e}^{-\ell^{90K}/2}\\ &\ll 2^{\ell^{K}/c_{0}}\ell^{3K/2}\mathrm{e}^{-\ell^{90K}/2}\ll\mathrm{e}^{-\ell^{89K}}.\end{split}\] Thus \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{(12)}\) converges.

### Bounding \(\mathbb{P}_{\ell}^{(2)}\)

The goal of this subsection is to bound \(\mathbb{P}_{\ell}^{(2)}\).

**Lemma 6.9**.: _For \(T(\ell)\geqslant 1\), the sum \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{(2)}\) converges._

Proof.: By bounding the expectation, we get \[\begin{split}\mathbb{E}\big{[}L_{\ell}^{(2)}(x_{i};f)\big{]}&\ll\sum_{j=1}^{J}\int_{\frac{x_{i}}{y_{j}(1+1/X)}}^{\frac{x_{i}}{y_{j}}}X\sum_{\max\big{(}\frac{x_{i}}{z(1+1/X)},y_{j-1}\big{)}<p\leqslant y_{j}}\frac{1}{p}\mathrm{e}^{-\frac{\log z}{2\log p}}\frac{\mathrm{d}z}{z}\\ &\leqslant\sum_{j=1}^{J}\int_{\frac{x_{i}}{y_{j}(1+1/X)}}^{\frac{x_{i}}{y_{j}}}X\sum_{\max\big{(}\frac{x_{i}}{z(1+1/X)},y_{j-1}\big{)}<p\leqslant y_{j}}\frac{1}{p}\mathrm{e}^{-\frac{\log z}{2\log y_{j}}}\frac{\mathrm{d}z}{z}.\end{split}\] Note that \(y_{j}\leqslant\frac{x_{i}}{z}\); then, by applying again (18), we have \[\sum_{\max\big{(}\frac{x_{i}}{z(1+1/X)},y_{j-1}\big{)}<p\leqslant y_{j}}\frac{1}{p}\leqslant\sum_{\frac{x_{i}}{z(1+1/X)}<p\leqslant\frac{x_{i}}{z}}\frac{1}{p}\ll\frac{1}{X\log y_{j}}.\] Returning to the expectation, we get \[\begin{split}\mathbb{E}\big{[}L_{\ell}^{(2)}(x_{i};f)\big{]}&\ll\sum_{j=1}^{J}\frac{1}{\log y_{j}}\int_{\frac{x_{i}}{y_{j}(1+1/X)}}^{\frac{x_{i}}{y_{j}}}\frac{1}{z^{1+\frac{1}{2\log y_{j}}}}\mathrm{d}z\\ &\ll\sum_{j=1}^{J}\frac{1}{\mathrm{e}^{\frac{\log x_{i}}{2\log y_{j}}}}\bigg{(}\mathrm{e}^{\frac{\log(1+1/X)}{2\log y_{j}}}-1\bigg{)}\\ &\ll\sum_{j=1}^{J}\frac{1}{X\log y_{j}}.\end{split}\] Recall that \(X\) is chosen to be \((\log x_{i})^{8r^{2}-8r+4}\) at the end of the proof of Lemma 6.2. We denote \(r^{\prime}:=8r^{2}-8r+4\). Note that \(r^{\prime}>1\).
We deduce then that \[\begin{split}\mathbb{P}_{\ell}^{(2)}&\ll\frac{\ell^{K/2}}{T(\ell)}\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\sum_{j=1}^{J}\frac{1}{X\log y_{j}}\\ &\ll\frac{\ell^{K/2}}{T(\ell)}2^{\ell^{K}/c_{0}}\frac{2^{K\ell^{K-1}}}{2^{r^{\prime}(\ell-1)^{K}+\ell^{K}}}.\end{split}\] Since \(r^{\prime}>r>1/c_{0}\), we get the convergence of the quantity \[\sum_{\ell\geqslant 1}2^{\frac{K\log\ell}{2\log 2}+\frac{1}{c_{0}}\ell^{K}+K\ell^{K-1}-r^{\prime}(\ell-1)^{K}-\ell^{K}}.\] Thus, the sum \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{(2)}\) converges.

### Convergence of \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(1)}\big{]}\)

From (19), there exists a constant \(C\) such that \[C\,\frac{V_{\ell}(x_{i};f)}{x_{i}}\leqslant\ell\log\ell\bigg{(}\sum_{k=1}^{3}\sup_{0\leqslant j\leqslant J}\lambda_{\ell}^{(k)}(x_{i},y_{j};f)\bigg{)}+L_{\ell}^{(12)}(x_{i};f)+L_{\ell}^{(2)}(x_{i};f)+\frac{W_{\ell}(x_{i};f)}{x_{i}}.\] We set \[\mathbb{P}_{\ell}^{V}:=\mathbb{P}\bigg{[}\sup_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\frac{\ell^{K/2}V_{\ell}(x_{i};f)}{x_{i}}>\frac{6T(\ell)}{C}\bigg{]}.\]

**Lemma 6.10**.: _The sum \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{V}\) converges._

Proof.: It suffices to observe that \[\mathbb{P}_{\ell}^{V}\leqslant\mathbb{P}_{\ell}^{\lambda,1}+\mathbb{P}_{\ell}^{\lambda,2}+\mathbb{P}_{\ell}^{\lambda,3}+\mathbb{P}_{\ell}^{(12)}+\mathbb{P}_{\ell}^{(2)}+\mathbb{P}_{\ell}^{W}.\] Since \(T_{1}(\ell)\geqslant\ell^{4}\),

* by Proposition 6.5, \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{\lambda,1}\) converges,
* by Lemma 6.6, \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{\lambda,2}\) and \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{\lambda,3}\) converge,
* by Lemma 6.8, \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{(12)}\) converges,
* by Lemma 6.9, \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{(2)}\) converges,
* by Proposition 6.1, \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{W}\) converges.

Now we set the following event \[\Sigma:=\bigg{\{}\forall x_{i}\in]X_{\ell-1},X_{\ell}],\ V_{\ell}(x_{i};f)\leqslant\frac{6T(\ell)x_{i}}{C\ell^{K/2}}\bigg{\}}\] and, for each \(x_{i}\in]X_{\ell-1},X_{\ell}]\), \[\Sigma_{i}:=\bigg{\{}V_{\ell}(x_{i};f)\leqslant\frac{6T(\ell)x_{i}}{C\ell^{K/2}}\bigg{\}}.\] Note that \(\mathbb{P}\big{[}\,\overline{\Sigma}\,\big{]}=\mathbb{P}_{\ell}^{V}\).

**Proposition 6.11**.: _The sum \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(1)}\big{]}\) converges._

Proof.: Let \(T(\ell)=\ell^{6}\); this guarantees the convergence of \(\sum_{\ell\geqslant 1}\mathbb{P}_{\ell}^{V}\) by Lemma 6.10. We have \[\begin{split}\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(1)}\big{]}&=\mathbb{P}\bigg{[}\sup_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\frac{|M_{f}^{(1)}(x_{i})|}{\sqrt{x_{i}}R(x_{i})}>1\bigg{]}\\ &\leqslant\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\mathbb{P}\bigg{[}\bigg{\{}|M_{f}^{(1)}(x_{i})|\geqslant\sqrt{x_{i}}R(x_{i})\bigg{\}}\bigcap\Sigma_{i}\bigg{]}+\mathbb{P}_{\ell}^{V}.\end{split}\] At this stage, we cannot apply Lemma 3.12: under the event \(\Sigma_{i}\), it is hard to say whether \(M_{f}^{(1)}(x_{i})\) is still a sum of a martingale difference sequence. However, Lemma 3.13 allows us to give a strong upper bound under the event \(\Sigma_{i}\). Thus, \[\mathbb{P}\bigg{[}\bigg{\{}|M_{f}^{(1)}(x_{i})|\geqslant\sqrt{x_{i}}R(x_{i})\bigg{\}}\bigcap\Sigma_{i}\bigg{]}\leqslant 2\exp\bigg{(}\frac{-C\,x_{i}R(x_{i})^{2}\ell^{K/2}}{60\,T(\ell)\,x_{i}}\bigg{)}\leqslant 2\exp\bigg{(}-C_{1}\ell^{K+2K\varepsilon-6}\bigg{)}\] where \(C_{1}\) is an absolute constant.
Since by assumption \(K\varepsilon=25\), we then have \[\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\exp\bigg{(}-C_{1}\ell^{K+2K\varepsilon-6}\bigg{)}\leqslant\exp\bigg{(}\frac{\log 2}{c_{0}}\ell^{K}-C_{1}\,\ell^{K}\ell^{44}\bigg{)}.\] Finally, we get \[\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(1)}\big{]}\ll\exp\bigg{(}\frac{\log 2}{c_{0}}\ell^{K}-C_{1}\,\ell^{K}\ell^{44}\bigg{)}+\mathbb{P}_{\ell}^{V}.\] Thus the sum \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(1)}\big{]}\) converges. ## 7 Bounding \(\mathbb{P}[\mathcal{B}_{\ell}^{(2)}]\) The goal of this section is to give an upper bound of \(\mathbb{P}[\mathcal{B}_{\ell}^{(2)}]\). We set \[N_{ij}(f):=\sum_{\begin{subarray}{c}y_{j-1}^{2}<d\leqslant x_{i}\\ v_{P(d)}(d)\geqslant 2\end{subarray}}^{y_{j-1},y_{j}}f(d)\sum_{\begin{subarray}{c}n\leqslant x_{i}/d\\ P(n)\leqslant y_{j-1}\end{subarray}}f(n).\] Following Lau-Tenenbaum-Wu [13], for \(m>1\), we have \[\mathbb{E}(|N_{ij}(f)|^{2m}\,|\,\mathcal{F}_{j-1})\leqslant\bigg{(}\sum_{\begin{subarray}{c}d\leqslant x_{i}\\ v_{P(d)}(d)\geqslant 2\end{subarray}}^{y_{j-1},y_{j}}\tau_{2m-1}(d)|\Psi_{f}(x_{i}/d,y_{j-1})|^{2}\bigg{)}^{m}. \tag{32}\] We set \[X_{i,j}:=\sum_{\begin{subarray}{c}d\leqslant x_{i}\\ v_{P(d)}(d)\geqslant 2\end{subarray}}^{y_{j-1},y_{j}}\tau_{2m-1}(d)|\Psi_{f}(x_{i}/d,y_{j-1})|^{2}.\] Note that \[\mathbb{E}[X_{i,j}]=\sum_{\begin{subarray}{c}d\leqslant x_{i}\\ v_{P(d)}(d)\geqslant 2\end{subarray}}^{y_{j-1},y_{j}}\tau_{2m-1}(d)\mathbb{E}\bigg{[}|\Psi_{f}(x_{i}/d,y_{j-1})|^{2}\bigg{]}\leqslant x_{i}\sum_{\begin{subarray}{c}d\leqslant x_{i}\\ v_{P(d)}(d)\geqslant 2\end{subarray}}^{y_{j-1},y_{j}}\frac{\tau_{2m-1}(d)}{d}.\] By writing \(d=rp^{2}\) where \(p=P(d)\), we have \[\begin{split}\sum_{\begin{subarray}{c}d\leqslant x_{i}\\ v_{P(d)}(d)\geqslant 2\end{subarray}}^{y_{j-1},y_{j}}\frac{\tau_{2m-1}(d)}{d}&\leqslant\bigg{(}\sum_{r\leqslant\frac{x_{i}}{y_{j-1}^{2}}}^{y_{j-1},y_{j}}\frac{\tau_{2m-1}(r)}{r}+1\bigg{)}\sum_{y_{j-1}<p\leqslant y_{j}}\frac{(2m-1)^{2}}{p^{2}}\\ &\ll\frac{m^{2}}{y_{j-1}}\bigg{(}\sum_{r\leqslant\frac{x_{i}}{y_{j-1}^{2}}}^{y_{j-1},y_{j}}\frac{\tau_{2m-1}(r)}{r}+1\bigg{)}\sum_{y_{j-1}<p\leqslant y_{j}}\frac{1}{p}.\end{split} \tag{33}\] Note that \[\sum_{y_{j-1}<p\leqslant y_{j}}\frac{1}{p}\ll\frac{1}{\ell}\leqslant 1.\] Recall that \(\tau_{2m-1}(r)\leqslant(2m-1)^{\Omega(r)}\). It is clear that \[\sum_{\frac{x_{i}}{y_{j}^{2}(1+\frac{1}{X})}\leqslant r\leqslant\frac{x_{i}}{y_{j-1}^{2}}}^{y_{j-1},y_{j}}\frac{\tau_{2m-1}(r)}{r}\leqslant\sum_{r\geqslant 1}^{y_{j-1},y_{j}}\frac{(2m-1)^{\Omega(r)}}{r}=\prod_{y_{j-1}<p\leqslant y_{j}}\bigg{(}1-\frac{2m-1}{p}\bigg{)}^{-1}\leqslant\mathrm{e}^{cm/\ell}\] where \(c\) is an absolute constant. Finally, we get \[\mathbb{E}[X_{i,j}]\ll\frac{x_{i}m^{2}\mathrm{e}^{cm/\ell}}{y_{j-1}}.
\tag{34}\] We define the following events \[\mathcal{X}_{\ell}=\bigg{\{}\sup_{\begin{subarray}{c}X_{\ell-1}<x_{i}\leqslant X_{\ell}\\ 1\leqslant j\leqslant J\end{subarray}}\frac{X_{i,j}}{x_{i}}\leqslant\frac{1}{\ell^{10K}}\bigg{\}}\text{ and }\mathcal{X}_{\ell,i,j}=\bigg{\{}\frac{X_{i,j}}{x_{i}}\leqslant\frac{1}{\ell^{10K}}\bigg{\}}.\] **Lemma 7.1**.: _For \(m\ll\ell^{K}\), the sum \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\,\overline{\mathcal{X}_{\ell}}\,\big{]}\) converges._ Proof.: By using the bound (34), we get \[\mathbb{P}\big{[}\,\overline{\mathcal{X}_{\ell}}\,\big{]}\leqslant\sum_{j=1}^{J}\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\frac{\ell^{10K}}{x_{i}}\mathbb{E}[X_{ij}]\ll\sum_{j=1}^{J}\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\ell^{10K}\frac{m^{2}\mathrm{e}^{cm/\ell}}{y_{j-1}}\ll\ell^{11K}\frac{\mathrm{e}^{c_{1}\ell^{K}}}{2^{c_{2}\mathrm{e}^{\ell^{K}}}}\] where \(c_{1},c_{2}>0\) are absolute constants. Thus, the sum \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\,\overline{\mathcal{X}_{\ell}}\,\big{]}\) converges. **Proposition 7.2**.: _The sum \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(2)}\big{]}\) converges._ Proof.: By the Cauchy-Schwarz inequality, we have \[\bigg{|}\sum_{1\leqslant j\leqslant J}N_{ij}(f)\bigg{|}^{2m}\leqslant J^{2m-1}\sum_{1\leqslant j\leqslant J}|N_{ij}(f)|^{2m}.\] On the other hand, we have \[\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(2)}\big{]}\leqslant\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(2)}\cap\mathcal{X}_{\ell}\big{]}+\mathbb{P}\big{[}\,\overline{\mathcal{X}_{\ell}}\,\big{]} \tag{35}\] and \[\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(2)}\cap\mathcal{X}_{\ell}\big{]} \leqslant\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\sum_{j=1}^{J}\frac{\mathbb{E}\big{[}|N_{ij}(f)|^{2m}\,|\,\mathcal{X}_{\ell,i,j}\big{]}J^{2m-1}}{(x_{i}R(x_{i})^{2})^{m}} \tag{36}\] \[\ll\sum_{X_{\ell-1}<x_{i}\leqslant X_{\ell}}\sum_{j=1}^{J}\frac{1}{J}\bigg{(}\frac{c_{5}mJ^{2}}{R(x_{i})^{2}\ell^{10K}}\bigg{)}^{m}\] \[\ll 2^{\ell^{K}/c_{0}}\bigg{(}\frac{c_{5}mJ^{2}}{R(x_{i})^{2}\ell^{10K}}\bigg{)}^{m},\] where \(c_{5}\) is an absolute constant. By taking \(m=\ell^{K}\) and recalling that \(J\ll\ell^{K}\), we then get \[\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(2)}\cap\mathcal{X}_{\ell}\big{]}\ll 2^{\ell^{K}/c_{0}}\bigg{(}\frac{c_{6}}{\ell^{15K/2}}\bigg{)}^{\ell^{K}}\] where \(c_{6}\) is an absolute constant. Thus, the sum \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(2)}\cap\mathcal{X}_{\ell}\big{]}\) converges. By Lemma 7.1 and the inequality (35), the sum \(\sum_{\ell\geqslant 1}\mathbb{P}\big{[}\mathcal{B}_{\ell}^{(2)}\big{]}\) converges. This ends the proof. ## Acknowledgement The author would like to thank his supervisor Régis de la Bretèche for his patient guidance, encouragement, and the judicious advice he has provided throughout the work that led to this paper. The author would also like to thank Gérald Tenenbaum and Adam Harper for their helpful remarks and useful comments.
2301.04323
Synchronization Lower Bounds the Efficiency of Near-Degenerate Thermal Machines
We study the relationship between quantum synchronization and the thermodynamic performance of a four-level near-degenerate extension of the Scovil-Schulz-DuBois thermal maser. We show how the existence of interacting coherences can potentially modify the relationship between synchronization and the coherent power output of such a maser. In particular, the cooperation and competition between interacting coherences cause the coherent heat and efficiency to be bounded by the synchronization measure in addition to the well-studied power-synchronization bound. Overall, our results highlight the role of quantum synchronization in the working of a thermal machine.
Taufiq Murtadho, Juzar Thingna, Sai Vinjanampathy
2023-01-11T06:17:58Z
http://arxiv.org/abs/2301.04323v1
# Synchronization Lower Bounds the Efficiency of Near-Degenerate Thermal Machines ###### Abstract We study the relationship between quantum synchronization and the thermodynamic performance of a four-level near-degenerate extension of the Scovil-Schulz-DuBois thermal maser. We show how the existence of interacting coherences can potentially modify the relationship between synchronization and the coherent power output of such a maser. In particular, the cooperation and competition between interacting coherences cause the coherent heat and efficiency to be bounded by the synchronization measure in addition to the well-studied power-synchronization bound. Overall, our results highlight the role of quantum synchronization in the working of a thermal machine. ## I Introduction Quantum coherence has been a vital resource [1] in quantum thermodynamics and a topic of intense research in the recent past [2; 3; 4; 5]. The presence of coherence typically boosts the performance metrics of thermal machines such as engines [6], refrigerators [7], and batteries [8; 9; 10; 11]. Moreover, coherences, which can be thought of as phases, have also been subject to resource counting studies from an information-theoretic perspective [12]. In the case of non-degenerate systems, which possess diagonal steady states in the absence of driving, such coherences are usually generated using a coherent drive. Such systems do not allow coherences to interact, and thus a thorough understanding of the performance of quantum thermal machines with interacting coherences is lacking. Classically, interacting phases are well studied in coupled Kuramoto models, where phase pulling can produce a well-known second-order phase transition involving _cooperative_ phase locking called synchronization [13]. Besides this, differences in coupling of the Kuramoto model can lead to _competition_ as well, giving rise to a variety of behaviors such as anti-synchronization and chimeras, a phenomenon explored even in the quantum regime [14]. Specifically, in quantum systems, such coupled phase oscillator models arise due to a variety of different reasons, one of them being degeneracies. Recently, for non-degenerate quantum systems coupled to diagonal baths, it was highlighted that synchronization measures fully account for all coherences in the system [15] and can help explain the performance of quantum thermal machines. In contrast to coherently driven quantum systems, incoherently driven near-degenerate levels can generate bath-induced coherences. Since the energetic cost of generating coherences is (nearly) zero for making transitions between (near-)degenerate levels, such systems cause the thermodynamic analysis to be decoupled from synchronization. Moreover, the (near-)degeneracies cause such coherences to interact with each other and induce cooperation or competition between the different phases. In an accompanying manuscript, we show that synchronization of driven, dissipative _exactly_-degenerate open quantum systems exhibits a crossover from cooperation to competition. In this manuscript, we highlight that this synchronous behavior aids in the understanding of nanoscale thermal machines. We show that while coherent driving of near-degenerate four-level thermal masers has a tendency to entrain the quantum system to the external drive, coupled coherences exert a simultaneous force that can be either competitive or cooperative.
This interplay between cooperation and competition leads to a rich dynamical regime whose implications on the thermal maser are observed in terms of its performance metrics (power, heat, and efficiency) being bounded by the synchronization measure. In Sec. II below, we begin with a discussion of the near-degenerate four-level thermal maser. Then, we present an analysis of competition and cooperation in four-level thermal masers, taking into account near-degeneracy and bath-induced coherence, in Sec. III. Then, in Sec. IV and Sec. V we connect the competition and cooperation to relevant thermodynamic quantities of the thermal machine, specifically its power and heat current. In [16], the coherent power output of a non-degenerate three-level maser was shown to be related to the measures of synchronization, and it was noted in [17] that this relationship does not hold for degenerate systems where there is no energetic cost of generating coherences. Furthermore, such degeneracies were recently related to synchronization blockade [18]. We investigate this relationship in detail and connect synchronization to the steady state heat current. The latter is then used to derive a lower bound on the efficiency of near-degenerate thermal machines in the mutual coupling dominant regime. Finally, we summarize our main results in Sec. VI. ## II Four-level thermal maser: near degeneracy and noise-induced coherence We begin with the analysis of a four-level thermal maser that operates under a temperature difference to create a population inversion as depicted in Fig. 1. The original Scovil-Schulz-DuBois thermal maser is a minimal setup that comprises three levels and is one of the first models of a heat engine [19], which has been recently extended to four levels [20] and beyond [21; 22] to include effects of bath-induced coherence. As a generalization of this, we consider the four-level model consisting of a free Hamiltonian \[H_{0}=\omega_{1}\left|1\right\rangle\left\langle 1\right|+\sum_{j=2}^{3}\omega_{j}\left|j\right\rangle\left\langle j\right|, \tag{1}\] with \(\omega_{3}>\omega_{2}>\omega_{1}>\omega_{0}=0\). The _near-degenerate_ levels \((\omega_{2},\omega_{3})\) are spread over a small interval \(\Delta\equiv\omega_{3}-\omega_{2}\ll(\omega_{1}-\omega_{0})\) and \(\Delta\ll(\omega_{2}-\omega_{1})\). The system is driven out-of-equilibrium by a hot (temperature \(T_{h}\)) and a cold bath (temperature \(T_{c}\)) as well as an external time-periodic drive of strength \(\lambda\) and frequency \(\Omega\approx(\omega_{2}-\omega_{1})\). The role of the baths is to establish population inversion between the near-degenerate manifold \(\{\left|2\right\rangle,\left|3\right\rangle\}\) and the first-excited state \(\left|1\right\rangle\). Specifically, when population inversion is achieved, the external drive can easily trigger stimulated emission, thereby producing power, i.e., the system acts as an engine. On the other hand, if there is no population inversion, the system absorbs power from the external drive, i.e., the system acts as a refrigerator. When the baths are weakly coupled to the machine working fluid \(H_{0}\), the reduced dynamics of the system is governed by a quantum master equation for the reduced density matrix \(\rho\) [20] that reads \[\frac{d\rho}{dt}=-i[H_{0}+V(t),\rho]+\mathcal{D}_{h}[\rho]+\mathcal{D}_{c}[\rho].
\tag{2}\] The operator \(V(t)\) is a collective drive [22], \[V(t)=\lambda e^{i\Omega t}\sum_{j=2}^{3}\left|j\right\rangle\left\langle 1\right|+\text{h.c.}, \tag{3}\] that stimulates transitions between all the near-degenerate energy levels and the first-excited state. The cold-bath dissipator \(\mathcal{D}_{c}[\rho]\) takes the Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) [23; 24] form \[\mathcal{D}_{c}[\rho]=\sum_{\mu=1}^{2}\Gamma_{c_{\mu}}(2c_{\mu}\rho c_{\mu}^{\dagger}-\{c_{\mu}^{\dagger}c_{\mu},\rho\}), \tag{4}\] with jump operators \(c_{1}=c_{2}^{\dagger}=\left|0\right\rangle\left\langle 1\right|\) and the decay rates satisfying local detailed balance \(\Gamma_{c_{1}}=\gamma_{c}(1+n_{c})\) and \(\Gamma_{c_{2}}=\gamma_{c}n_{c}\). Here, \(\gamma_{c}\) is the coupling strength squared between the system and the cold bath. The hot-bath dissipator \(\mathcal{D}_{h}[\rho]\) connects the ground state to the near-degenerate manifold. The near-degeneracy causes a breakdown of the secular approximation [25] and results in a Bloch-Redfield form of the dissipator [26; 27; 28; 29; 20], \[\mathcal{D}_{h}[\rho]=\sum_{\mu=1}^{2}\sum_{i,j=2}^{3}\Gamma_{\mu}^{ij}[h_{\mu}^{i},\rho\,h_{\mu}^{j\dagger}]+\Gamma_{\mu}^{ji}[h_{\mu}^{i}\,\rho,h_{\mu}^{j\dagger}]. \tag{5}\] The operators are \(h_{1}^{i}=h_{2}^{i\dagger}=\left|0\right\rangle\left\langle i\right|\), and the pair-wise decay rates are \(\Gamma_{1}^{ij}=P_{ij}\sqrt{\gamma_{h}^{i}\gamma_{h}^{j}}\left(1+n_{h}^{(j)}\right)\) and \(\Gamma_{2}^{ij}=P_{ij}\sqrt{\gamma_{h}^{i}\gamma_{h}^{j}}\,n_{h}^{(j)}\). Here, \(\gamma_{h}^{i}\) denotes the squared coupling strength of the bath that induces transitions between the ground state \(\left|0\right\rangle\) and the \(i\)th state of the near-degenerate manifold \(\left|i\right\rangle\) (\(i=2,3\)). The mean-bosonic hot-bath distribution \(n_{h}^{(i)}=(\exp[\omega_{i}/T_{h}]-1)^{-1}\) encodes information about the hot bath temperature \(T_{h}\) (a similar definition holds for the cold bath with \(\omega_{i}=\omega_{1}\)). In this paper, we will throughout set \(\gamma_{h}^{(2)}=\gamma_{h}^{(3)}=\gamma_{h}\) and work with the Redfield form that, despite not being positive in all parameter ranges [30; 31], provides accurate results when used appropriately in the weak system-bath coupling regime [32; 33]. Figure 1: Schematic of the four-level Scovil–Schulz-DuBois thermal maser. Here, \(\Delta\equiv\omega_{3}-\omega_{2}\) is the near-degenerate energy gap and \(p\) is the noise-induced coherence strength that arises due to the hot heat bath causing interference between the near-degenerate levels and the ground state (red arrows). The relative phases \(\varphi_{ij}\) between states \(\left|i\right\rangle\) and \(\left|j\right\rangle\) are depicted on the circles on the right and the arrows indicate the direction in which the non-degenerate phases (\(\varphi_{31}\) and \(\varphi_{21}\)) move depending on the strength of mutual coupling (orange arrows) and entrainment (green arrows). In the engine (top circle) we observe both cooperation and competition, whereas in the refrigerator regime (bottom circle) we find only cooperating behavior. The coefficients \(P_{ij}\) are elements of the symmetric _correlation matrix_ [34] \[P_{ij}=\begin{cases}1&\text{if }i=j\\ p_{ij}&\text{for }i\neq j,\end{cases} \tag{6}\] where \(\left|p_{ij}\right|\leq 1\). Above, since we are focused on a four-level thermal maser, the matrix \(P\) is a \(2\times 2\) matrix with its off-diagonal elements \(\left|p\right|\leq 1\).
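Numerically, master equations of this form are conveniently handled by vectorizing \(\rho\). The minimal sketch below (not from the paper; all names and parameter values are illustrative placeholders) shows how a dissipator with the convention of Eq. (4) becomes a matrix acting on \(\mathrm{vec}(\rho)\) via the column-stacking identity \(\mathrm{vec}(A\rho B)=(B^{T}\otimes A)\,\mathrm{vec}(\rho)\); the cross terms of Eq. (5) can be assembled the same way using \(\mathrm{vec}(h_{\mu}^{i}\rho h_{\mu}^{j\dagger})=(\overline{h_{\mu}^{j}}\otimes h_{\mu}^{i})\,\mathrm{vec}(\rho)\).

```python
import numpy as np

d = 4  # Hilbert-space dimension of the four-level maser

def dissipator_matrix(L, rate):
    """Matrix acting on vec(rho) for rate*(2 L rho L^+ - {L^+L, rho}),
    the convention of Eq. (4), using column-stacking vectorization."""
    I = np.eye(d)
    LdL = L.conj().T @ L
    return rate * (2.0 * np.kron(L.conj(), L)
                   - np.kron(I, LdL)
                   - np.kron(LdL.T, I))

# Cold-bath dissipator of Eq. (4): c1 = |0><1|, c2 = c1^+, with rates
# gamma_c (1 + n_c) and gamma_c n_c (placeholder numbers).
gamma_c, n_c = 0.005, 0.5
c1 = np.zeros((d, d), dtype=complex)
c1[0, 1] = 1.0
D_c = dissipator_matrix(c1, gamma_c * (1 + n_c)) \
    + dissipator_matrix(c1.conj().T, gamma_c * n_c)

# Sanity check: any Lindblad dissipator is trace-preserving, i.e.
# Tr(D[rho]) = 0 for all rho, which in this representation reads vec(I)^T D = 0.
vec_I = np.eye(d).flatten(order="F")
assert np.allclose(vec_I @ D_c, 0.0)
```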
The elements \(p\) originate from the dipole-alignment factor [28; 29] with \(p=\mathbf{d_{2}}\cdot\mathbf{d_{3}}/|\mathbf{d_{2}}||\mathbf{d_{3}}|\), where \(\mathbf{d}_{i}=\left\langle 0|\mathbf{d}|i\right\rangle\) and \(\mathbf{d}\) is the dipole operator. Intuitively, \(\left|p\right|\) is the strength of quantum interference between the thermalization pathways \(\left|0\right\rangle\leftrightarrow\left|2\right\rangle\) and \(\left|0\right\rangle\leftrightarrow\left|3\right\rangle\), while its sign encodes whether the interference is constructive or destructive. The parameter \(p\) determines the strength of the noise-induced coherence, with \(p=0\) yielding no coherence due to the hot bath and \(\left|p\right|=1\) yielding maximum coherence. The above Redfield hot-bath dissipator takes the completely positive Lindblad form [35] when \(\Gamma_{\mu}^{ij}=\Gamma_{\mu}^{ji}\)\(\forall\)\(i,j=2,3\), which can be satisfied in the exact degenerate limit (\(\Delta=0\), wherein \(n_{h}^{(i)}=n_{h}\)). In this limit, the Kossakowski matrix becomes proportional to the correlation matrix and thus positivity can be ensured if the correlation matrix is positive definite, i.e., for \(-1\leq p\leq 1\). When the correlation matrix has a zero eigenvalue (\(p=\pm 1\)), the dissipator possesses a dark state [36; 37] that can lead to multiple steady states [38; 39]. The above quantum master equation (2) can be transformed to a rotating frame such that any operator \(\tilde{O}=\exp[i\tilde{H}t]O\exp[-i\tilde{H}t]\) with \[\tilde{H}=\frac{\Omega}{2}\sum_{j=2}^{3}\left|j\right\rangle\left\langle j\right|-\frac{\Omega}{2}\left|1\right\rangle\left\langle 1\right|. \tag{7}\] The above transformation eliminates the explicit time dependency from the coherent evolution and leaves the dissipators invariant, leading to a quantum master equation in the rotating frame, \[\frac{d\tilde{\rho}}{dt}=-i[H_{0}-\tilde{H}+\tilde{V},\tilde{\rho}]+\mathcal{D}_{h}[\tilde{\rho}]+\mathcal{D}_{c}[\tilde{\rho}], \tag{8}\] with \(\tilde{V}=\lambda\sum_{j=2}^{3}\left|j\right\rangle\left\langle 1\right|+\text{h.c.}\). Throughout this work, we will focus our attention on the four-level thermal maser, investigating the effects of near degeneracy \(\Delta\) and the dipole-alignment factor \(p\) (noise-induced coherence strength) on quantum synchronization and the thermodynamic observables of the thermal maser. In particular, unlike Ref. [22], we will not explore the effect of several degenerate levels (generalized Scovil-Schulz-DuBois thermal maser) and restrict ourselves to the easily tractable and physically intuitive four-level heat machine. ## III Coexistence of entrainment and mutual coupling in a near-degenerate thermal maser The four-level thermal maser forms the minimal model in which entrainment and mutual coupling coexist. In the accompanying work [22], we show how competition and cooperation manifest in the synchronization measure for degenerate multilevel thermal masers. Depending on the thermodynamic functionality of the maser, the phases compete when the system behaves as an engine and cooperate when it acts as a refrigerator. The aim of this section is to show that the same phenomena persist even in the presence of noise-induced coherence (\(p\neq 0\)) and near-degeneracy (\(\Delta\neq 0\)) by analytically examining the minimal model. We derive the formula for the phase-space synchronization measure [40] \(S_{max}\) for the _exactly_ degenerate (\(\Delta=0\)) four-level maser in the presence of noise-induced coherence.
This measure not only captures the strength of steady state coherences, but also their phase-matching condition [22]. The general \(SU(D)\) quantum synchronization measure applicable to \(D\)-level systems is presented in Ref. [22]. Here, we apply the general result for our specific case of \(D=4\), to obtain \[S_{max}=\frac{1}{16\pi^{2}}\times\begin{cases}|\tilde{\rho}_{12}^{ss}|+|\tilde{\rho}_{13}^{ss}|+|\tilde{\rho}_{23}^{ss}|&n_{c}\geq n_{h}\\ |\tilde{\rho}_{12}^{ss}|+|\tilde{\rho}_{13}^{ss}|-|\tilde{\rho}_{23}^{ss}|&n_{h}>n_{c}\;\&\;k>2\\ \left(1+\frac{k^{2}}{2}\right)|\tilde{\rho}_{23}^{ss}|&n_{h}>n_{c}\;\&\;k\leq 2,\end{cases} \tag{9}\] where \(k=\gamma_{h}(1+n_{h})(1+p)/\lambda\) is the _dissipation-to-driving ratio_. The superscript of \(\tilde{\rho}\) denotes the steady state, whose analytical formula can be derived for \(\Delta=0\) and \(p\neq 0\) (see Appendix A). The analytical solution for \(\tilde{\rho}^{ss}\) can then be used to derive Eq. (9) by following the recipe in the Supplementary Material of [22]. In fact, Eq. (9) has the same form as the result obtained in [22] with the exception that the dissipation-to-driving ratio \(k\) now depends on the noise-induced coherence strength \(p\). Equation (9) displays cooperation and competition between entrainment and mutual coupling in different thermodynamic regimes. The mutual-coupling (entrainment) contribution is represented by the degenerate (non-degenerate) coherences \(|\tilde{\rho}_{ij}^{ss}|\) (\(|\tilde{\rho}_{1j}^{ss}|\)) for \(i,j=2,3\). In the refrigerator regime (\(n_{c}>n_{h}\)), entrainment and mutual coupling cooperate to increase the overall synchronization of the maser since all steady state coherences contribute positively to \(S_{max}\). Meanwhile, in the engine case, we observe competition between coherences for \(k<2\). The competition and cooperation are due to different phase configurations preferred by the two mechanisms, i.e., both prefer in-phase synchronization in the refrigerator case while one prefers in-phase and the other prefers out-of-phase in the engine case (see Fig. 2). Moreover, in the engine case for \(k<2\), synchronization is dominated by the mutual coupling contribution \(|\tilde{\rho}_{23}^{ss}|>|\tilde{\rho}_{12}^{ss}|+|\tilde{\rho}_{13}^{ss}|\). The \(p\)-dependence of \(k\) allows us to explore different synchronization regimes by tuning the strength of noise-induced coherence. For example, in the absence of noise-induced coherence (\(p=0\)), the deep mutual coupling dominant regime \(k\ll 2\) can only be explored in the strong-driving regime \(\lambda\gg 2\gamma_{h}(1+n_{h})\). However, if the driving is strong, one starts to deform the limit cycle and deviate away from the typical synchronization paradigm [13]. Yet, in the presence of noise-induced coherence (\(p\neq 0\)), the deep mutual coupling dominant regime of \(k\ll 2\) can be easily explored in the presence of total destructive interference (\(p\rightarrow-1\)). Figures 2**a-b** show the quasi-probability phase distribution \(S(\varphi_{21},\varphi_{31})\), where \(\varphi_{ij}\equiv\phi_{i}-\phi_{j}\) is the relative phase between states \(|i\rangle\) and \(|j\rangle\). They are computed in the engine regime for \(p=0.5\) and \(p=-0.99\). Figure 2**a** shows the regime where entrainment is stronger than mutual coupling so that, despite their competition, the phases localize in the rotating frame. On the other hand, Fig. 2**b** shows the deep mutual-coupling dominant regime where entrainment is effectively lost but the relative phase is still fixed.
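For later reference, Eq. (9) translates directly into a small helper; the sketch below is not the authors' code, just a literal transcription of the piecewise measure given the moduli of the steady-state coherences, the bath occupations, and \(k\).

```python
import numpy as np

def sync_measure(r12, r13, r23, n_c, n_h, k):
    """Synchronization measure S_max of Eq. (9).

    r12, r13, r23 are |rho~_12^ss|, |rho~_13^ss|, |rho~_23^ss|.
    """
    if n_c >= n_h:          # refrigerator: all coherences cooperate
        s = r12 + r13 + r23
    elif k > 2:             # engine, entrainment dominant: competition
        s = r12 + r13 - r23
    else:                   # engine, mutual-coupling dominant
        s = (1.0 + 0.5 * k**2) * r23
    return s / (16.0 * np.pi**2)
```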
Note that in both figures, the near-degenerate gap is set to be non-zero (\(\Delta\neq 0\)). ## IV Thermodynamic observables In this section, we will derive the key thermodynamic observables, specifically steady state power and heat current, following the standard recipe for weak-coupling thermodynamics [41]. We follow the standard energy partitioning [42] that defines the internal energy as \[E=\text{Tr}(\tilde{\rho}H_{0}). \tag{10}\] The internal energy here is defined as that of the bare Hamiltonian \(H_{0}\) and differs from the standard definition that involves the full Hamiltonian [41]. Importantly, such a definition remains the same in all pictures (Schrödinger, Heisenberg, and Interaction) and thereby avoids inconsistencies when one moves from one picture to another. Using the quantum master equation (8), the change in internal energy can be expressed as \[\frac{dE}{dt}=-i\text{Tr}([H_{0},\tilde{V}]\tilde{\rho})+\text{Tr}(\mathcal{D}_{h}[\tilde{\rho}]H_{0})+\text{Tr}(\mathcal{D}_{c}[\tilde{\rho}]H_{0}). \tag{11}\] The energy flux is separated into three terms. The first term in (11) is the power and the following terms are heat fluxes from the hot and cold baths, respectively [42; 43]: \[P =-i\text{Tr}([H_{0},\tilde{V}]\tilde{\rho})\] \[=2\lambda\sum_{j=2}^{3}(\omega_{j}-\omega_{1})\text{Im}(\tilde{\rho}_{1j}), \tag{12}\] \[\dot{Q}_{h} =\text{Tr}(\mathcal{D}_{h}[\tilde{\rho}]H_{0})\] \[=2\gamma_{h}\Bigl{(}\sum_{j=2}^{3}\omega_{j}\left[n_{h}^{(j)}\tilde{\rho}_{00}-(1+n_{h}^{(j)})\tilde{\rho}_{jj}\right]\] (13) \[\quad-[(1+n_{h}^{(2)})\omega_{3}+(1+n_{h}^{(3)})\omega_{2}]p\,\text{Re}(\tilde{\rho}_{23})\Bigr{)},\] \[\dot{Q}_{c} =\text{Tr}(\mathcal{D}_{c}[\tilde{\rho}]H_{0})\] \[=2\omega_{1}\gamma_{c}\left[n_{c}\tilde{\rho}_{00}-(1+n_{c})\tilde{\rho}_{11}\right]. \tag{14}\] In the _engine_ regime, heat flows from the hot bath to the cold bath and power is produced in the steady state, i.e., \(\dot{Q}_{h}^{ss}>0,\;\dot{Q}_{c}^{ss}<0,\;P^{ss}<0\). Conversely, in the _refrigerator_ regime, heat flows from the cold bath to the hot bath and power is consumed, i.e., \(\dot{Q}_{h}^{ss}<0,\;\dot{Q}_{c}^{ss}>0,\;P^{ss}>0\). The heat current to the cold bath \(\dot{Q}_{c}\) depends solely on populations whereas the heat from the hot bath \(\dot{Q}_{h}\) depends on populations and coherences. Intuitively, since the hot bath connects the ground state and the near-degenerate manifold, a finite dipole alignment factor \(p\) leads to noise-induced coherence, causing the hot heat current to be dependent on both populations and coherences. Thus, the incoherent and coherent contributions to the hot bath's heat current [44; 45] can be expressed as \[\dot{Q}_{h}^{inc}=2\gamma_{h}\sum_{j=2}^{3}\omega_{j}\left[n_{h}^{(j)}\tilde{\rho}_{00}-(1+n_{h}^{(j)})\tilde{\rho}_{jj}\right] \tag{15}\] \[\dot{Q}_{h}^{coh}=-2\gamma_{h}[(1+n_{h}^{(2)})\omega_{3}+(1+n_{h}^{(3)})\omega_{2}]p\,\text{Re}(\tilde{\rho}_{23}). \tag{16}\] Note that the coherent heat current is proportional to \(p\), which we set to zero in the accompanying work [22]. The coherent heat either suppresses or enhances the natural inclination of heat flow. For example, in the exactly degenerate (\(\Delta=0\)) engine regime, a net heat flow coming from the hot bath to the system creates a population inverted state such that \(\dot{Q}_{h}^{inc,ss}>0\). From Eq. (16), we see that the sign of \(\dot{Q}_{h}^{coh,ss}\) depends on the sign of \(p\) [\(\because\)\(\text{Re}(\tilde{\rho}_{23}^{ss})<0\), see Eq. (100)].
If the noise-induced coherence strength \(p>0\) (\(p<0\)), then the heat flow is enhanced (suppressed). Similarly, in the exactly degenerate (\(\Delta=0\)) refrigerator regime, we observe enhancement or suppression of the heat dumped into the hot bath due to either constructive (\(p>0\)) or destructive (\(p<0\)) interference. ## V Connection between thermodynamic observables and synchronization In general, the synchronization measure (\(S_{max}\)) and thermodynamic observables (\(P\), \(\dot{Q}_{c}\), and \(\dot{Q}_{h}\)) are unrelated. However, in the case of a three-level thermal maser, the steady state power of the maser is bounded by the synchronization measure [16] (the power-synchronization bound), connecting two seemingly distinct quantities. In this section, we build the general framework of connecting the mathematically abstract notion of synchronization to physical thermodynamic observables for a four-level thermal maser. Despite several efforts analyzing the thermodynamic properties of the four-level thermal maser [20; 21; 36], there have so far been no connections between the synchronizing ability of such a thermal maser and its thermodynamic observables. ### Power-Synchronization Bound We begin by revisiting the relationship between quantum synchronization and the power of a near-degenerate thermal four-level maser. In the case of a three-level Scovil-Schulz-DuBois maser, the steady state power is bounded by the synchronization measure [16], i.e., \[|P^{ss}|\leq 2^{4}\pi\lambda(\omega_{2}-\omega_{1})S_{max}, \tag{17}\] also known as the power-synchronization (P-S) bound. The bound above has a different physical interpretation in the engine and refrigerator regimes: in the engine regime, the P-S bound implies that the engine's power is limited by the degree to which the working substance is entrained to the external drive. In the refrigerator regime, by contrast, since power is pumped into the system, the P-S bound suggests that there is a maximum energy cost to ensure that the working substance is entrained to the external drive. In other words, in the engine regime, it can be inferred from the P-S bound that synchronization enhances the power output of the engine, while in the refrigerator regime the P-S bound suggests that the external power supplied to the machine is utilized to synchronize the working substance. We now investigate the P-S bound for a near-degenerate four-level thermal maser. Using Eq. (12), the power of the maser can be bounded as \[|P^{ss}|\leq 2\lambda\omega_{31}(|\tilde{\rho}_{12}^{ss}|+|\tilde{\rho}_{13}^{ss}|), \tag{18}\] where \(\omega_{ij}\equiv\omega_{i}-\omega_{j}\). The steady state power \(|P^{ss}|\) depends only on those coherences that are influenced by the external drive, whereas the synchronization measure \(S_{max}\) is linearly dependent on all coherences. Specifically, while the coherence \(|\tilde{\rho}_{23}^{ss}|\) affects the synchronization measure \(S_{max}\) [see Eq. (9)], it does _not_ appear in the expression for the steady state power [Eq. (18)]. Due to this additional dependence on \(\tilde{\rho}_{23}^{ss}\), we find that the P-S bound is violated in the engine regime, while being respected in the refrigerator case, i.e., \[|P^{ss}|\leq 32\pi^{2}\lambda\omega_{31}S_{max} (n_{c}>n_{h}), \tag{19}\] \[|P^{ss}|\nleq 32\pi^{2}\lambda\omega_{31}S_{max} (n_{h}>n_{c}).
\tag{20}\] Figure 3: Steady state power to synchronization ratio \(|P^{ss}|/(\kappa S_{max})\) with \(\kappa=32\pi^{2}\lambda\omega_{31}\) as a function of \(n_{h}^{(2)}/n_{c}\) (**a**) and noise-induced coherence parameter \(p\) (**b**). In **a**, the near-degenerate energy gap \(\Delta=0.05\omega_{1}\) (red empty circles), \(\Delta=0.2\omega_{1}\) (black crosses), and the noise-induced coherence strength \(p=0.5\). In **b**, the red filled circles are computed in the engine \((n_{h}^{(2)}/n_{c}=5)\) regime, and the black filled circles are computed in the refrigerator \((n_{h}^{(2)}/n_{c}=0.5)\) regime. The vertical dotted (\(\Delta=0.05\omega_{1}\)) and dash-dotted (\(\Delta=0.2\omega_{1}\)) lines in **a** represent the engine-to-refrigerator boundary defined by the change of sign in power. They slightly deviate from the degenerate case \((n_{h}^{(2)}/n_{c}=1)\). Meanwhile, the dotted line in **b** marks the entrainment-dominant to mutual-coupling dominant boundary (\(k=2\)). The power-synchronization bound is satisfied when \(|P^{ss}|/(\kappa S_{max})\leq 1\) (below the dashed horizontal line) in both panels. Other parameter values are the same as Fig. 2. The above expressions are analytically obtained in the exact degeneracy limit \(\Delta=0\). The violation originates due to the presence of the near-degenerate levels that induce competition between entrainment and mutual coupling [22]. The origin of the violation can be clearly seen from the expression for \(S_{max}\) [Eq. (9)], which in the engine regime for \(k>2\) reads \(S_{max}\propto|\tilde{\rho}_{12}^{ss}|+|\tilde{\rho}_{13}^{ss}|-|\tilde{\rho}_{23}^{ss}|<|\tilde{\rho}_{12}^{ss}|+|\tilde{\rho}_{13}^{ss}|\). Although Eqs. (19) and (20) are derived with the assumption \(\Delta=0\) (exact degeneracy), we show in Fig. 3**a-b** that the violation persists for small non-zero values of \(\Delta\), where we plot the steady state power to synchronization ratio \(|P^{ss}|/(\kappa S_{max})\) with \(\kappa=32\pi^{2}\lambda\omega_{31}\) as a function of \(n_{h}^{(2)}/n_{c}\) and noise-induced coherence strength \(p\). The violation occurs if the ratio exceeds unity. In Fig. 3**a**, we observe that other than a sharp discontinuity near the engine-to-refrigerator transition, \(|P^{ss}|/(\kappa S_{max})\) does not strongly depend on \(n_{h}^{(2)}/n_{c}\). We also find that the P-S bound is always satisfied in the refrigerator regime while being violated in the engine regime for small values of \(\Delta\). Interestingly, as the near-degeneracy is further lifted by increasing \(\Delta\), we find that the validity of the P-S bound is restored in both the engine and refrigerator regimes. We also plot \(|P^{ss}|/(\kappa S_{max})\) as a function of noise-induced coherence strength \(p\) in both the engine and refrigerator regimes (see Fig. 3**b**). We again observe that the bound is always satisfied in the refrigerator case while it is violated for most values of \(p\) in the engine case. As \(p\rightarrow-1\), the bound seems to be recovered again. However, this is not guaranteed in general. As \(p\rightarrow-1\), or equivalently \(k\to 0\), the expression for \(S_{max}\) only contains degenerate coherences while that of power only contains _non_-degenerate coherences [see Eq. (9)]. Thus, as \(p\rightarrow-1\), synchronization starts to decouple from power. The violation of the power-synchronization bound implies that in degenerate and near-degenerate four-level maser heat engines, more power can be generated than the upper bound set by synchronization.
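To make the comparison behind Fig. 3 concrete, the self-contained sketch below (again not the paper's code; the parameters are placeholders rather than the values used in the figures) solves the rotating-frame master equation (8) for its steady state in the simplest case \(p=0\), \(\Delta=0\), where the hot dissipator reduces to ordinary thermal jumps with \(h_{2}^{i}=h_{1}^{i\dagger}\), and evaluates \(|P^{ss}|/(\kappa S_{max})\) from Eqs. (9) and (12).

```python
import numpy as np

d = 4
def proj(i, j):  # |i><j|
    m = np.zeros((d, d), dtype=complex)
    m[i, j] = 1.0
    return m

def diss(L, rate):  # Eq. (4)-type dissipator on vec(rho), column stacking
    I, LdL = np.eye(d), L.conj().T @ L
    return rate * (2 * np.kron(L.conj(), L)
                   - np.kron(I, LdL) - np.kron(LdL.T, I))

# Placeholder parameters; p = 0 and Delta = 0 (so omega_3 = omega_2).
w1, w2 = 1.0, 2.0
w3 = w2
lam, gam_h, gam_c = 0.02, 0.01, 0.01
n_h, n_c = 2.0, 0.2          # n_h > n_c: engine regime
Omega = w2 - w1              # resonant drive

H0 = w1 * proj(1, 1) + w2 * proj(2, 2) + w3 * proj(3, 3)
Ht = 0.5 * Omega * (proj(2, 2) + proj(3, 3) - proj(1, 1))      # Eq. (7)
Vt = lam * (proj(2, 1) + proj(3, 1))
H = H0 - Ht + Vt + Vt.conj().T                                  # Eq. (8)

I = np.eye(d)
Liou = -1j * (np.kron(I, H) - np.kron(H.T, I))
Liou += diss(proj(0, 1), gam_c * (1 + n_c)) + diss(proj(1, 0), gam_c * n_c)
for i in (2, 3):             # hot bath; no cross terms since p = 0
    Liou += diss(proj(0, i), gam_h * (1 + n_h)) + diss(proj(i, 0), gam_h * n_h)

# Steady state = null vector of the Liouvillian, normalized to unit trace.
_, _, vh = np.linalg.svd(Liou)
rho = vh[-1].conj().reshape((d, d), order="F")
rho = rho / np.trace(rho)

r12, r13, r23 = abs(rho[1, 2]), abs(rho[1, 3]), abs(rho[2, 3])
k = gam_h * (1 + n_h) * (1 + 0.0) / lam      # dissipation-to-driving ratio
if n_c >= n_h:                                # Eq. (9)
    S = (r12 + r13 + r23) / (16 * np.pi**2)
elif k > 2:
    S = (r12 + r13 - r23) / (16 * np.pi**2)
else:
    S = (1 + 0.5 * k**2) * r23 / (16 * np.pi**2)

P = 2 * lam * ((w2 - w1) * rho[1, 2].imag + (w3 - w1) * rho[1, 3].imag)  # Eq. (12)
kappa = 32 * np.pi**2 * lam * (w3 - w1)
print("P^ss =", P, "  |P^ss|/(kappa S_max) =", abs(P) / (kappa * S))
```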
In contrast, in the refrigerator regime (\(n_{c}>n_{h}\)) the P-S bound is always satisfied but it is never saturated. This implies that synchronization can be generated with an energy cost less than the maximum cost imposed by the P-S bound. ### Coherent-Heat-Synchronization Bound Intuitively, when the working substance of a machine synchronizes, it achieves a low-entropy state. It is therefore natural to expect that, in order to maintain this state, the entropy of the bath should increase. Thus, it is expected that synchronization will be coupled to heat. In this subsection, we will solidify this intuition and rigorously show the connections between heat and synchronization, leading to the heat-synchronization (Q-S) bound. Specifically, we have seen in Sec. IV that in the presence of noise-induced coherence, the heat contribution from the hot bath can be separated into incoherent and coherent terms. We will now connect the coherent heat current to synchronization by noting the inequality \[|\dot{Q}_{h}^{coh,ss}|\leq 4|p|\gamma_{h}(1+n_{h}^{(2)})\omega_{3}|\text{Re}(\tilde{\rho}_{23}^{ss})|. \tag{21}\] The above inequality is derived from Eq. (16) by making use of the fact that \(\omega_{3}\geq\omega_{2}\) and \(n_{h}^{(2)}\geq n_{h}^{(3)}\). Note that, in contrast to the power, which is independent of the _degenerate coherence_ \(|\tilde{\rho}_{23}^{ss}|\), the coherent heat current is _only_ a function of the degenerate coherence. Recall also that the synchronization measure \(S_{max}\) is a function of both degenerate and non-degenerate coherences. One may then relate the steady state coherent heat current to the synchronization measure. In the limit of \(\Delta=0\), we can use Eq. (9) to derive a coherent heat-synchronization (Q-S) bound, \[|\dot{Q}_{h}^{coh,ss}|\leq(8\pi)^{2}|p|\gamma_{h}(1+n_{h}^{(2)})\omega_{3}S_{max}. \tag{22}\] The validity of this bound is trivial in the refrigerator regime, where \(S_{max}\) is given by the \(\ell_{1}\)-norm \(C_{\ell_{1}}\) of the coherences, upon noting that \(|\text{Re}(\tilde{\rho}_{23}^{ss})|\leq|\tilde{\rho}_{23}^{ss}|\leq C_{\ell_{1}}\). Even in the engine regime, when mutual coupling dominates (\(k<2\)), the bound is easily satisfied since \(|\text{Re}(\tilde{\rho}_{23}^{ss})|\leq|\tilde{\rho}_{23}^{ss}|\leq(1+k^{2}/2)|\tilde{\rho}_{23}^{ss}|\). Yet, perhaps more surprisingly, this bound is also valid in the engine entrainment dominant case (\(k>2\)). In this regime, since \(S_{max}\propto(2k-1)|\tilde{\rho}_{23}^{ss}|\geq 3|\tilde{\rho}_{23}^{ss}|\) and \(|\dot{Q}_{h}^{coh,ss}|\propto|\text{Re}(\tilde{\rho}_{23}^{ss})|\leq|\tilde{\rho}_{23}^{ss}|\), the Q-S bound is always satisfied. In other words, unlike the P-S bound, which can be violated in the presence of degeneracy, the Q-S bound is _always_ satisfied for \(\Delta=0\). Although we derived the bound using the exact degeneracy assumption (\(\Delta=0\)), we show numerically in Fig. 4 that the bound is still valid for small values of \(\Delta\neq 0\). Figure 4: The coherent heat current to synchronization ratio \(|\dot{Q}_{h}^{coh,ss}|/(\alpha S_{max})\) with \(\alpha=(8\pi)^{2}\gamma_{h}\omega_{3}(1+n_{h}^{(2)})|p|\) as a function of \(n_{h}^{(2)}/n_{c}\) (**a**) and noise-induced coherence parameter \(p\) (**b**). The squares in **a** are computed for \(p=0.5\) and the triangles are computed for \(p=-0.99\). The red filled circles in **b** are computed in the engine regime (\(n_{h}^{(2)}/n_{c}=5\)) and the black filled circles are in the refrigerator regime (\(n_{h}^{(2)}/n_{c}=0.5\)).
All the data points lie below the dashed horizontal line, which means the Q-S bound is always satisfied. The dotted vertical line in **a** represents the engine-to-refrigerator boundary and in **b** it represents the transition from constructive (positive \(p\)) to destructive (negative \(p\)) interference. The near-degenerate gap \(\Delta=0.05\omega_{1}\). Other parameter values are the same as Fig. 2. In Fig. 4**a**, we plot the coherent heat to synchronization ratio \(|\dot{Q}_{h}^{coh,ss}|/(\alpha S_{max})\) with \(\alpha=(8\pi)^{2}\gamma_{h}\omega_{3}|p|(1+n_{h}^{(2)})\) for different values of \(n_{h}^{(2)}/n_{c}\) and noise-induced coherence strength \(p\). We observe that \(|\dot{Q}_{h}^{coh,ss}|/(\alpha S_{max})\) does not depend strongly on \(n_{h}^{(2)}/n_{c}\) except for a discontinuity near the engine-to-refrigerator boundary. Furthermore, we find that the bound is always satisfied and is not strongly dependent on \(\Delta\). However, the bound is tight only deep in the mutual coupling regime, e.g., when \(p\rightarrow-1\). ### Lower Bound on Efficiency and Coefficient of Performance (COP) Efficiency is one of the most well-known performance metrics for thermal engines. It is defined by \(\eta=-P^{ss}/\dot{Q}_{h}^{ss}\). Previously, we have demonstrated that both the steady state power \(P^{ss}\) and the coherent part of the heat current \(\dot{Q}_{h}^{coh,ss}\) are connected to the synchronization measure \(S_{max}\). It is then natural to ask whether efficiency is also connected to synchronization. The efficiency can be computed analytically in the degenerate limit (\(\Delta=0\)) and it is given by the standard expression \(\eta=1-(\omega_{1}/\omega_{2})\) (see Appendix B), which is also obtained for the three-level thermal maser. Thus, the efficiency in the exact degenerate limit depends only on the system's energy scale and it is independent of the synchronization measure \(S_{max}\) or any other parameters. However, this is no longer guaranteed in the near-degenerate case (\(\Delta\neq 0\)), where the steady state is not analytically solvable. In such cases, the efficiency is generally a function of all the parameters of the system, baths, and drive. It is well known that the efficiency of a heat engine is upper-bounded by the Carnot efficiency. Here, we show that synchronization sets a _lower_ bound on the efficiency of a _near_-degenerate thermal maser. We will use the violation of the P-S bound and the validity of the Q-S bound, which have been demonstrated for the near-degenerate (\(\Delta\neq 0\)) case in subsections V.1-V.2. Below, we state the efficiency-synchronization (E-S) bound as \[\eta\geq\frac{\kappa S_{max}}{\dot{Q}_{h}^{inc,ss}+\alpha S_{max}}. \tag{23}\] The above inequality is derived under the assumption that the P-S bound is violated, \(|P^{ss}|>\kappa S_{max}\), and the Q-S bound is respected, so that \(\dot{Q}_{h}^{ss}=\dot{Q}_{h}^{inc,ss}+\dot{Q}_{h}^{coh,ss}\leq\dot{Q}_{h}^{inc,ss}+|\dot{Q}_{h}^{coh,ss}|\leq\dot{Q}_{h}^{inc,ss}+\alpha S_{max}\), with \(\kappa=32\pi^{2}\lambda\omega_{31}\) and \(\alpha=(8\pi)^{2}|p|\gamma_{h}(1+n_{h}^{(2)})\omega_{3}\). Note that the incoherent heat current \(\dot{Q}_{h}^{inc,ss}\) is only a function of populations [see Eq. (15)] and so it is unrelated to synchronization. We check the validity of the E-S bound Eq. (23) for various parameter values in Fig. 5**a-b**.
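Written out, the chain of inequalities behind Eq. (23) (a restatement of the argument above, not an extra assumption) is \[\eta=\frac{|P^{ss}|}{\dot{Q}_{h}^{ss}}>\frac{\kappa S_{max}}{\dot{Q}_{h}^{inc,ss}+|\dot{Q}_{h}^{coh,ss}|}\geq\frac{\kappa S_{max}}{\dot{Q}_{h}^{inc,ss}+\alpha S_{max}},\] where the first inequality uses the violated P-S bound in the numerator together with \(\dot{Q}_{h}^{ss}\leq\dot{Q}_{h}^{inc,ss}+|\dot{Q}_{h}^{coh,ss}|\) in the denominator, and the second uses the Q-S bound (22).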
Similar to the P-S and Q-S bounds, we find that the ratio \(\eta_{S}/\eta\) with \[\eta_{S}=\frac{\kappa S_{max}}{\dot{Q}_{h}^{inc,ss}+\alpha S_{max}}, \tag{24}\] does not strongly depend on the baths' mean occupation number ratio \(n_{h}^{(2)}/n_{c}\) (Fig. 5**a**). Instead, it strongly depends on the noise-induced coherence parameter \(p\) (Fig. 5**b**). In the limit when there is no interference effect from the hot bath, i.e., \(p\to 0\), we find that the bound is saturated. Similarly, the coefficient of performance (COP) in the _exact_ degenerate (\(\Delta=0\)) refrigerator regime can be analytically computed to be \(\chi=\omega_{1}/(\omega_{2}-\omega_{1})\) (see Appendix B). In the near-degenerate (\(\Delta\neq 0\)) case, the COP is lower-bounded by a quantity inversely proportional to the synchronization measure \(S_{max}\), \[\chi=\frac{\dot{Q}_{c}^{ss}}{P^{ss}}\geq\frac{\dot{Q}_{c}^{ss}}{\kappa S_{max}}. \tag{25}\] The lower bound is valid because the consumed power is always upper-bounded by \(\kappa S_{max}\) in the refrigerator regime [see Eq. (19)]. We also note that \(\dot{Q}_{c}^{ss}\) is only a function of populations [see Eq. (14)] and hence is unrelated to synchronization. We check the validity of the bound Eq. (25) for various parameter values in Fig. 5**c-d**. Similar to the E-S bound, we find that the ratio \(\chi_{S}/\chi\), with \[\chi_{S}=\frac{\dot{Q}_{c}^{ss}}{\kappa S_{max}}, \tag{26}\] does not strongly depend on the baths' mean occupation number ratio \(n_{h}^{(2)}/n_{c}\) (Fig. 5**c**), but depends strongly on the noise-induced coherence parameter \(p\) (Fig. 5**d**). However, different from the E-S bound, the bound (25) is saturated in the limit \(p\to 1\), i.e., when the hot bath causes constructive interference. ## VI Summary and Discussion There is intense debate regarding the optimal design of quantum thermal machines. Almost all design paradigms assume the systems are well modeled by Markovian master equations and furthermore pursue coherence from an information-theoretic resource cost point of view or a non-equilibrium cost point of view. While such analyses are definitely sufficient for non-degenerate systems, the secular approximation begins to fail for near-degenerate systems interacting with a bath. Furthermore, interacting coherences can be well understood under the paradigm of coupled oscillator models, an analysis that has tremendously benefited classical dynamical systems. In this manuscript, we argue fundamentally that such a coupled oscillator model is indispensable to understanding the subtle dynamics of interacting coherences, which arise due to a variety of situations like those described in this manuscript. We developed the synchronization dynamics for a four-level atom coupled to two baths, generalizing the canonical thermal maser analysis by Scovil and Schulz-DuBois. We show that coherences interact with each other in two distinct qualitative ways. The first of these is a cooperative phase-locking phenomenon understood as entrainment dynamics of coherences being driven by an external field. Such entrainment dynamics and their contribution to the performance of a thermal maser were discussed earlier [16]. In this manuscript, we add to this entrainment dynamics and highlight the competition between coherences that can arise due to the bath coupling in our model. This coexistence of cooperation and competition gives rise to a richer tapestry of dynamics, uncoupling the power output of the thermal maser from the synchronization measure.
The bound on power from the synchronization measure of the standard maser is modified to include coherent contributions to heat. In the presence of noise-induced coherence, in addition to the power-synchronization bound, we also find that the coherent heat is bounded by the synchronization measure, showing that synchronization not only influences the useful work [46; 47] but can also bound the wasteful heat. The results of this work can be summarized in the following table: \begin{tabular}{|l|c|c|} \hline & Engine & Refrigerator \\ \hline P-S bound & x & ✓ \\ \hline Q-S bound & ✓ & ✓ \\ \hline E-S bound & ✓ & ✓ \\ \hline \end{tabular} where the check mark indicates that the bound is satisfied, whereas the cross means the bound is violated. The lower bound on the COP is included in the refrigerator column of the E-S bound for compactness. Unlike standard upper bounds such as the Carnot efficiency, our analysis shows that the efficiency can be bounded from _below_ in the presence of quantum synchronization. In other words, a quantum synchronous working substance can boost the efficiency of the machine. Even though our analysis was limited to a four-level thermal maser, it would be an interesting future avenue to investigate other coupled oscillator models for interacting coherences to inform the future designs of quantum thermal machines. ###### Acknowledgements. This research was supported by the Institute for Basic Science in South Korea (IBS-R024-Y2). S.V. acknowledges support from a Government of India DST-QUEST grant number DST/ICPS/QuST/Theme-4/2019. The authors would like to thank V. Singh for useful discussions.
2310.12200
LATIS: The Stellar Mass-Metallicity Relation of Star-forming Galaxies at $z\sim 2.5$
We present the stellar mass - stellar metallicity relation for 3491 star-forming galaxies at $2 \lesssim z \lesssim 3$ using rest-frame far-ultraviolet (FUV) spectra from the Ly$\alpha$ Tomography IMACS Survey (LATIS). We fit stellar population synthesis models from the Binary Population And Spectral Synthesis code (BPASS v$2.2.1$) to medium resolution (R $\sim 1000$) and high signal-to-noise ($>30$ per 100 km/s over a wavelength range of 1221 - 1800 \r{A}) composite spectra of galaxies in bins of stellar mass to determine their stellar metallicity, primarily tracing $\rm Fe/H$. We find a strong correlation between stellar mass and stellar metallicity, with stellar metallicity monotonically increasing with stellar mass at low masses and flattening at high masses ($M_* \gtrsim 10^{10.3} M_\odot$). Additionally, we compare our stellar metallicity measurements with the gas-phase oxygen abundance of galaxies at similar redshift and estimate the average $\rm [\alpha/Fe] \sim 0.6$. Such high $\alpha$-enhancement indicates that high-redshift galaxies have not yet undergone significant iron enrichment through Type Ia supernovae. Moreover, we utilize an analytic chemical evolution model to constrain the mass loading parameter of galactic winds as a function of stellar mass. We find that as the stellar mass increases, the mass loading parameter decreases. The parameter then flattens or reaches a turning point at around $M_* \sim 10^{10.5} M_\odot$. Our findings may signal the onset of black hole-driven outflows at $z \sim 2.5$ for galaxies with $M_* \gtrsim 10^{10.5} M_\odot$.
Nima Chartab, Andrew B. Newman, Gwen C. Rudie, Guillermo A. Blanc, Daniel D. Kelson
2023-10-18T18:00:02Z
http://arxiv.org/abs/2310.12200v1
# LATIS: The Stellar Mass-Metallicity Relation of Star-forming Galaxies at \(z\sim 2.5\) ###### Abstract We present the stellar mass - stellar metallicity relation for 3491 star-forming galaxies at \(2\lesssim z\lesssim 3\) using rest-frame far-ultraviolet (FUV) spectra from the Ly\(\alpha\) Tomography IMACS Survey (LATIS). We fit stellar population synthesis models from the Binary Population And Spectral Synthesis code (BPASS v2.2.1) to medium resolution (R \(\sim 1000\)) and high signal-to-noise (\(>30\) per 100 km/s over a wavelength range of 1221 - 1800 A) composite spectra of galaxies in bins of stellar mass to determine their stellar metallicity, primarily tracing Fe/H. We find a strong correlation between stellar mass and stellar metallicity, with stellar metallicity monotonically increasing with stellar mass at low masses and flattening at high masses (\(M_{*}\gtrsim 10^{10.3}M_{\odot}\)). Additionally, we compare our stellar metallicity measurements with the gas-phase oxygen abundance of galaxies at similar redshift and estimate the average \([\alpha/\mathrm{Fe}]\sim 0.6\). Such high \(\alpha\)-enhancement indicates that high-redshift galaxies have not yet undergone significant iron enrichment through Type Ia supernovae. Moreover, we utilize an analytic chemical evolution model to constrain the mass loading parameter of galactic winds as a function of stellar mass. We find that as the stellar mass increases, the mass loading parameter decreases. The parameter then flattens or reaches a turning point at around \(M_{*}\sim 10^{10.5}M_{\odot}\). Our findings may signal the onset of black hole-driven outflows at \(z\sim 2.5\) for galaxies with \(M_{*}\gtrsim 10^{10.5}M_{\odot}\). Metallicity (1031); Galaxy evolution (594); High-redshift galaxies (734) ## 1 Introduction Cosmic primordial gas is predominantly composed of hydrogen and helium, with small amounts of other light elements such as lithium. The gas accreted from the intergalactic medium (IGM) and circumgalactic medium (CGM) enables a galaxy to form stars, which produce heavy elements. Feedback processes such as stellar winds and supernova explosions expel some of these heavy elements into the interstellar medium (ISM) where new stars are born. Moreover, galactic outflows driven by supernovae and black hole feedback can transfer some enriched material to the IGM and CGM (e.g., Tremonti et al., 2004; Chisholm et al., 2018). Therefore, the metal content of galaxies is linked to their fundamental evolutionary processes (e.g., star formation and inflow/outflow), and determining its relationship to global properties, such as stellar mass (\(M_{*}\)), provides useful information for constraining galaxy evolution models (see review by Maiolino and Mannucci, 2019). The metal content of high redshift galaxies (\(2\lesssim z\lesssim 3\)) is typically measured in the gas phase (metallicity of ISM) using strong rest-frame optical emission line diagnostics (e.g., Pettini and Pagel, 2004) that are calibrated locally, suffering significant uncertainties. High-redshift galaxies have been observed to exhibit distinct physical conditions in their H ii regions compared to those at \(z=0\)(e.g., Erb et al., 2006; Steidel et al., 2014; Shapley et al., 2015), implying a potential evolution of metallicity calibrations with redshift. 
For direct estimates of gas-phase metallicities, faint auroral lines (e.g., [Oiii]\(\lambda 4363\)) need to be detected to determine the electron temperature of the ionized gas, which is now possible for a statistically significant number of high redshift galaxies thanks to the James Webb Space Telescope (e.g., Sanders et al., 2023, Curti et al., 2023, CECILIA survey; Strom et al., 2021, AURORA survey; Shapley et al., 2021). Alternatively, stellar continuum emission can be used to measure the metal content of galaxies. Deep optical spectroscopy of \(z\sim 2-3\) galaxies allows us to access the rest-frame far-ultraviolet (FUV) part of the spectrum that contains important information about their underlying stellar population. Most of the emission in the FUV originates from short-lived O and B stars, and the inferred metallicities are expected to be similar to those derived for the ISM, out of which these young stars have recently formed. The photospheres of hot O and B stars, metal-dependent stellar winds, and interstellar lines all contribute to the FUV absorption features. These features usually have complex dependencies on age, metallicity, and initial mass function (IMF). A number of indices (e.g., 1425 A and 1978 A indices) have been identified that are optimized to depend only or mostly on metallicity (e.g., Leitherer et al., 2001; Rix et al., 2004; Sommariva et al., 2012). In recent studies, full spectral fitting has been used to measure the stellar metallicity of high-redshift galaxies (e.g., Steidel et al., 2016; Cullen et al., 2019; Kriek et al., 2019; Topping et al., 2020; Kashino et al., 2022; Carnall et al., 2022). This method has the advantage of using all of the information in the spectra. Steidel et al. (2016) used a composite rest-frame UV and optical spectrum of 30 star-forming galaxies at \(z=2.4\) and found them to be \(\alpha\)-enhanced relative to the solar abundances by a factor of 4-5. While Type Ia supernovae (SNe Ia) and core-collapse supernovae (CCSNe) both produce iron peak elements, \(\alpha\)-elements are only produced by massive, short-lived stars that explode as CCSNe. Thus, the \(\alpha\)/Fe abundance ratio is a powerful tool for constraining the relative contribution of SNe Ia and CCSNe. Based on [\(\alpha\)/Fe] \(\sim 0.6\), they conclude that CCSNe dominate the enrichment of \(z\sim 2\) star-forming galaxies. SNe Ia provide enrichment over long timescales (1-3 Gyr) (Maoz & Mannucci, 2012), while young high redshift galaxies with ages \(\lesssim 1\) Gyr are not sufficiently old for iron enrichment (Strom et al., 2017). Although it is now well established that there is a strong positive correlation between the gas-phase metallicity and stellar mass of galaxies out to \(z\sim 3.5\)(e.g., Erb et al., 2006; Steidel et al., 2014; Sanders et al., 2020; Strom et al., 2022), there are only a few studies on the relationship between stellar metallicity and stellar mass, especially at high redshift (Cullen et al., 2019; Calabro et al., 2021; Kashino et al., 2022). Cullen et al. (2019) used stacks of 681 star-forming galaxies at \(z=2.5\)-5 from the VANDELS survey (Pentericci et al., 2018) with a spectral resolution of \(R\sim 580\), and Kashino et al. (2022) utilized 1336 star-forming galaxies at \(z=1.6\)-3 drawn from zCOSMOS-deep survey (\(R\sim 200\)) (Lilly et al., 2007) to study the stellar mass - stellar metallicity relation (hereafter stellar MZR) around cosmic noon. 
Recently, the Ly\(\alpha\) Tomography IMACS Survey (LATIS; Newman et al., 2020) has obtained deep optical spectroscopy of \(\sim 3800\) star-forming galaxies at \(2\lesssim z\lesssim 3\) with a spectral resolution of \(R\sim 1000\). Compared to earlier studies of the stellar MZR, this data set is substantially larger and benefits from significantly higher resolution. In this paper, we utilize the Binary Population And Spectral Synthesis code (BPASS v2.2.1; Eldridge et al., 2017; Stanway & Eldridge, 2018) models to constrain the \(z\sim 2.5\) stellar MZR by fitting composite rest-frame FUV spectra of 3491 galaxies spanning a wide range of stellar masses, \(10^{9}M_{\odot}\leq M_{*}\leq 10^{11.5}M_{\odot}\). We do not employ the latest release of BPASS models (v2.3; Byrne et al., 2022) with \(\alpha\)-enhanced spectra. Although these \(\alpha\)-enhanced models are highly desired for applications at high redshift, they are limited to Main Sequence/Giant Branch stars while the spectra of OB and Wolf-Rayet stars remain unchanged from BPASS v2.2.1 and are not \(\alpha\)-enhanced (private communication, C. Byrne), making them unsuitable for fitting FUV spectra dominated by OB stars. The paper is organized as follows. In Section 2, we present an overview of the LATIS survey and details of the sample used in this work. We then describe our spectral analysis to estimate the stellar metallicities in Sections 3 and 4. Our results are presented in Section 5. We discuss our results in Section 6 and summarize them in Section 7. Throughout this work, we assume a flat \(\Lambda\)CDM cosmology with \(H_{0}=70\) kms\({}^{-1}\)Mpc\({}^{-1}\), \(\Omega_{m_{0}}=0.3\) and \(\Omega_{\Lambda_{0}}=0.7\). All magnitudes are expressed in the AB system, and the physical parameters are measured assuming a Chabrier (2003) IMF. We adopt the solar metallicity values of \(Z_{\odot}=0.0142\) and \(12+\log(\rm O/H)_{\odot}=8.69\)(Asplund et al., 2009). In this abundance scale, the solar oxygen and iron mass fractions are 0.00561 and 0.00128, respectively. ## 2 Data The LATIS survey is a five-year program (2017-2022) conducted using the Inamori-Magellan Areal Camera and Spectrograph (IMACS; Dressler et al., 2011) at the Magellan Baade telescope. The primary goal of LATIS is producing three-dimensional maps of the \(z\sim 2.5\) IGM at Mpc resolution, as traced by Lyman-\(\alpha\) absorption. LATIS densely sampled Lyman-break galaxies (LBGs) in three legacy fields, the Cosmic Evolution Survey (COSMOS; Scoville et al., 2007) and the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) D1/D4 fields. The final release catalog of CFHTLS (T0007)1 and the Ilbert et al. (2009) catalog of COSMOS were used for selection of LBGs in the CFHTLS D1/D4 and COSMOS fields, respectively. Galaxies were selected based on either their photometric redshifts or \(ugr\) optical colors with \(r-\)band magnitudes \(23.0<r<24.8\). The observations were performed using the custom grism and filter installed on IMACS resulting in a spectral resolving power of \(R=990\) and a spectral coverage of 3890-5830 A. This spectral resolution is estimated at the midpoint of the wavelength coverage and is derived from the average size of targets, which are not point-like, in the two-dimensional spectra. For a full description of the survey strategy, observation and data reduction, we refer readers to Newman et al. (2020). 
Footnote 1: http://terapix.calet.org/terapix.iap.fr/cplt/T0007/doc/T0007-doc.html

In the present paper, we use a sample of \(z>1.5\) galaxies with high-confidence spectroscopic redshifts based on multiple lines and a well-modeled spectrum (i.e., zqual=3 or zqual=4 as defined in Newman et al., 2020). Out of the 7408 galaxies that were initially targeted with LATIS, 6568 received deep exposures as part of the main survey while 840 were observed on backup masks of bright candidates intended for poor conditions. A total of 5443 received a redshift quality flag (zqual) of 3 or 4 and had no severe data reduction problems. After further exclusion of 45 AGNs, 105 QSOs, and 44 spectra comprised of two heavily blended sources, we are left with 5249 galaxies, of which 3888 have a redshift \(z>1.5\). In the following sections we refine our sample further by applying stellar mass limits and requiring near-IR photometry, which narrows our sample to 3491 galaxies with \(10^{9}\mathrm{M}_{\odot}\leq\mathrm{M}_{*}\leq 10^{11.5}\mathrm{M}_{\odot}\).

### Photometry

Multi-wavelength coverage from the UV to mid-infrared is essential for fitting the spectral energy distributions (SEDs) of our sample and measuring physical parameters, such as stellar mass and star formation rate (SFR) (e.g., Chartab et al., 2023). The photometry of the LATIS galaxies in the COSMOS field is taken from the latest version of the COSMOS catalog (COSMOS2020; Weaver et al., 2022). In the CFHTLS D1 and D4 fields, we construct new photometric catalogs by combining optical to near-infrared (NIR) photometry from multiple surveys. In both fields, we use \(ugriz\) images from the CFHTLS release T0007 and Spitzer/IRAC channel 1 and 2 mosaics from Anunziatella et al. (2018). In the D1 field, we use \(zYJHK_{s}\) images from the VIDEO survey (Jarvis et al., 2013), while in the D4 field we use \(JHK_{s}\) images from the WIRDS survey (Bielby et al., 2012). Images are registered and point spread function (PSF)-matched following the procedures described by Newman et al. (2012). We use SWarp (Bertin, 2010) to construct a \(\chi^{2}\) detection image from the NIR (VIDEO or WIRDS) images available in each field, and run SExtractor (Bertin & Arnouts, 1996) in dual-image mode to construct catalogs from the PSF-matched images. We measure colors in 2\({}^{\prime\prime}\) diameter apertures and scale the fluxes to match FLUX_AUTO in the \(r\) band. For the distinct treatment of the IRAC bands, see Newman et al. (2012).

When using the zeropoints provided by each survey, we find that the optical-NIR and NIR-IRAC colors of D1/D4 LATIS galaxies are inconsistent with those in the COSMOS field. This seems to arise, at least in part, from a flux-dependent PSF in the NIR images of the D1 and D4 fields; this effect means that matching the PSFs measured with stars does not match the PSFs applicable to faint galaxies. We therefore adjust the NIR and IRAC zeropoints to match the median optical-NIR and NIR-IRAC colors of faint galaxies to those measured in the COSMOS2015 catalog (Laigle et al., 2016; using the COSMOS2020 catalog instead results in insignificant differences). These zeropoint corrections are usually \(\approx 0.1\) mag and are fairly insensitive to the flux limits. After applying this procedure, we find that the average SEDs of the LATIS galaxies, as well as the derived distributions of stellar masses and SFRs (Section 2.2), are quite consistent across all fields.
We find that 93-97% of LATIS galaxies in the COSMOS and D1 fields have clear matches in the COSMOS2020 and our newly compiled (optical to NIR) D1 field catalogs. These matches are identified based on the criteria of having separations \(<0\farcs 5\) and \(\Delta r<0.4\) mag. The fraction is lower (60%) in the D4 field, because the WIRDS imaging does not cover the entire area. To ensure that the stellar mass measurements are robust, we require NIR coverage, which is particularly important in the D1/D4 fields where the IRAC data are not as deep. Based on these criteria, we are left with approximately 3500 galaxies, with 90% of the sample at \(2.0\leq z\leq 3.1\) (Figure 1).

Figure 1: Spectroscopic redshift distribution for the entire LATIS sample. The vertical dashed lines indicate the median value along with the 5\({}^{\mathrm{th}}\) and 95\({}^{\mathrm{th}}\) percentiles.

### Stellar Mass & Star Formation Rates

We fit the UV to mid-infrared SED of galaxies to derive their physical parameters. We use the C++ version of the LePhare code (Arnouts et al., 1999; Ilbert et al., 2006) combined with a library of synthetic spectra generated by the Bruzual & Charlot (2003) population synthesis code. The redshifts are fixed to the spectroscopic redshifts from LATIS. Similar to the configuration employed in the COSMOS catalogs (Laigle et al., 2016; Weaver et al., 2022), the models incorporate exponentially declining star formation histories with nine \(e\)-folding times in the range of \(0.01<\tau<30\) Gyr and two delayed exponentially declining models (SFR \(\propto te^{-t/\tau}\)) with \(\tau=3\) and 5 Gyr. The delayed models are included since high-redshift galaxies likely exhibit star formation histories that differ substantially from simple exponential decays (e.g., Reddy et al., 2012; Kelson & Holden, 2010). We adopt the Chabrier (2003) IMF, truncated at 0.1 and 100 M\({}_{\odot}\), and the Calzetti et al. (2000) attenuation law to apply dust extinction (\(\rm E(B-V)\leq 1.1\)). The code also includes emission lines using the Kennicutt (1998) relation between SFR and UV luminosity, as described in Ilbert et al. (2009). Three different stellar metallicities are considered: \(Z_{*}=0.02,0.008\), and 0.004. For each template, LePhare computes fluxes in all bands, then determines the template with a minimum \(\chi^{2}\) based on the model and observed fluxes. It also provides percentiles of the marginalized posterior probability distributions (probability \(\propto\rm e^{-\chi^{2}/2}\)). In this work, we use the median posterior values for stellar mass, SFR, and sSFR (SFR/\(M_{*}\)).

Figure 2 shows the distribution of SED-derived stellar masses and SFRs. The main sequence (MS) relationship from Speagle et al. (2014) is also shown for comparison. Overall, our sample is in agreement with the \(z\sim 2.5\) MS, although its average SFRs are slightly higher. We further compare to galaxies in the COSMOS2020 catalog, whose masses and star-formation rates were derived using similar techniques, thereby minimizing systematics. The green line in Figure 2 represents the median trend for the star-forming galaxies selected by sSFR \(>10^{-10.1}\rm yr^{-1}\) (Pacifici et al., 2016) and \(z=2-3\). At fixed stellar mass, our sample exhibits a \(\sim 0.3\) dex average enhancement in SFR compared to these galaxies. As a result, our sample predominantly represents star-forming galaxies located on and above the main sequence at cosmic noon.
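To make the last step concrete, the following minimal sketch (a toy illustration only: the \(\chi^{2}\) surface and template grid below are hypothetical stand-ins, not the actual LePhare internals) shows how a median posterior stellar mass follows from per-template \(\chi^{2}\) values via probability \(\propto e^{-\chi^{2}/2}\):

```python
import numpy as np

# Hypothetical template grid: chi2[i] and log stellar mass logM[i] per template,
# standing in for a LePhare-like scan over SFH, age, dust, and metallicity.
rng = np.random.default_rng(0)
logM = rng.uniform(9.0, 11.5, size=5000)
chi2 = 20.0 + 50.0 * (logM - 10.2) ** 2         # toy chi-squared surface

w = np.exp(-0.5 * (chi2 - chi2.min()))          # posterior weight ~ exp(-chi^2 / 2)
order = np.argsort(logM)
cdf = np.cumsum(w[order]) / w.sum()             # marginalized posterior CDF over log M*
median_logM = np.interp(0.5, cdf, logM[order])  # posterior median, as adopted here
print(f"posterior median log(M*/Msun) = {median_logM:.2f}")
```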
### Composite spectra

Most of the prominent absorption and emission features in the FUV are interstellar in origin. Typically, stellar features in the FUV spectrum are weak and require a high signal-to-noise ratio to be useful for measuring stellar metallicity. Gravitationally lensed systems often provide an adequate signal-to-noise ratio for individual galaxies. However, in our case, where individual galaxies lack sufficient signal-to-noise ratio, stacking techniques can be used to boost the signal strength. To construct a composite spectrum, the individual spectra are first corrected for Galactic extinction using the Schlafly & Finkbeiner (2011) dust map, followed by masking pixels with strong skyline contamination. The individual spectra are shifted to the rest frame using their spectroscopic redshifts and then normalized by their median fluxes within 1425-1500 A to prevent the dominance of high-SFR galaxies in the composite spectrum. The normalized spectra are resampled onto a grid of wavelengths corresponding to \(\Delta v=100\) km/s. This results in a wavelength spacing of \(\sim 0.4-0.6\) A in a rest-frame FUV spectrum covering \(\sim 1200-1800\) A. The composite spectrum is determined by the median of the normalized spectra, and the \(1\sigma\) error is derived by bootstrapping. Figure 3 illustrates the high signal-to-noise composite spectrum of all 3491 galaxies with \(10^{9}\)M\({}_{\odot}\leq\) M\({}_{*}\leq 10^{11.5}\)M\({}_{\odot}\) included in the present work (S/N=227 per 100 km/s at \(\lambda_{\rm rest}\sim\) 1450 A).

Figure 2: SED-derived SFR as a function of stellar mass for the entire LATIS sample. For comparison, the main-sequence relation from Speagle et al. (2014) is also shown. The green line represents the median trend for the star-forming galaxies at \(z=2-3\) in the COSMOS2020 catalog.
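The stacking step just described admits a compact sketch (assuming the input spectra are already corrected for extinction, shifted to the rest frame, resampled onto the common 100 km/s grid, and normalized; array and function names are ours):

```python
import numpy as np

def composite_spectrum(flux, n_boot=500, seed=0):
    """Median-stack normalized rest-frame spectra and bootstrap the 1-sigma error.

    flux : (n_gal, n_pix) array on the common 100 km/s grid, normalized in
           1425-1500 A; NaN marks sky-contaminated (masked) pixels.
    """
    stack = np.nanmedian(flux, axis=0)
    rng = np.random.default_rng(seed)
    n_gal = flux.shape[0]
    boot = np.empty((n_boot, flux.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n_gal, size=n_gal)   # resample galaxies with replacement
        boot[b] = np.nanmedian(flux[idx], axis=0)
    return stack, np.nanstd(boot, axis=0)
```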
#### 2.3.1 Stellar mass bins

To study the stellar MZR, we construct composite spectra within bins of stellar mass. The median uncertainty in stellar mass measurements is \(\sim 0.15\) dex, which provides a lower limit on the size of the stellar mass bins. Furthermore, we require that a composite spectrum has a median signal-to-noise ratio per 100 km/s of at least 30 over a wavelength range of 1221-1800 A to ensure a reliable metallicity measurement. Over a similar wavelength range, we require that at least 25 galaxies contribute to every pixel of a composite spectrum in order to obtain reliable 1\(\sigma\) error estimates. These criteria leave us with nine stellar mass bins, which will be utilized to construct the stellar MZR in Section 5.2.

## 3 Stellar Population Synthesis Models

To determine the stellar metallicity of galaxies, we compare the observed composite spectra with synthetic stellar population models. We use the models from the Binary Population And Spectral Synthesis code (BPASS v2.2.1; Eldridge et al., 2017; Stanway and Eldridge, 2018), adopting the Chabrier (2003) IMF with a high-mass cutoff of 100 \(M_{\odot}\). The set of simple binary stellar population spectra (i.e., instantaneous starbursts) with an initially formed stellar mass of \(10^{6}\)\(M_{\odot}\) at ages \(\log({\rm a^{\prime}/years})=6.0-11\) in 0.1 dex increments is provided at a pixel resolution of 1 A for 13 different stellar metallicities (\(10^{-5}\leq Z_{*}\leq 0.040\)). We add nebular continuum to the stellar population models (BPASS+Nebular) using the photoionization code Cloudy v17.02 (Ferland et al., 2017).

We model Hii regions with a spherical shell geometry of a fixed radius, adopting the BPASS spectrum as the incident spectrum. The metallicity of ionized gas is assumed to be \(Z_{\rm neb}=0.5Z_{\odot}\), which is the average gas-phase metallicity of galaxies at the redshifts of interest for our study (e.g., Steidel et al., 2016; Strom et al., 2018; Sanders et al., 2021). Additionally, we use an ionization parameter of \(\log U=-2.8\) and electron density of \(n_{e}=250\) cm\({}^{-3}\) that are consistent with recent estimates for \(z\sim 2-3\) star-forming galaxies (e.g., Sanders et al., 2016; Strom et al., 2017; Topping et al., 2020). Due to the minimal contribution of the nebular continuum to the total flux (\(\lesssim 10\%\)), our results are not sensitive to the assumptions of the Cloudy model.

We construct models assuming a constant star formation history by integrating burst models over their ages. The choice of constant star formation history is motivated by the fact that the FUV spectrum is mainly reliant on the past 100 Myr history of star formation, and the star formation history is expected to be relatively constant over this timescale for an ensemble of galaxies at \(z\sim 2-3\) (e.g., Reddy et al., 2012; Cullen et al., 2019). Therefore, model spectra, F\((\lambda,a,Z_{*})\), are constructed using the following equation:

\[{\rm F}=\sum_{i=1}^{\rm N}f^{\rm BPASS}(\lambda,a^{\prime}_{i},Z_{*})T(\lambda,a^{\prime}_{i})\Delta a^{\prime}_{i} \tag{1}\]

where BPASS+Nebular models (\(f^{\rm BPASS}\)) are summed up over the age bins (\(a^{\prime}\)) such that N is the number of BPASS age bins within the age of a galaxy (\(a\)), \(\Delta a^{\prime}_{i}\) are the widths of the age bins, and \(T(\lambda,a^{\prime}_{i})\) represents the transmission function due to dust attenuation. A population of stars younger than the lifetime of the stellar birth clouds suffers from greater attenuation than those outside the region (Charlot and Fall, 2000). The typical lifetime of the birth clouds is 10 Myr (e.g., Blitz and Shu, 1980). We therefore model the transmission function as follows (see also Carnall et al., 2022):

\[T(\lambda,a^{\prime}_{i})=T^{\rm ISM}(\lambda)\times\begin{cases}T^{\rm BC}(\lambda)&\text{if $a^{\prime}_{i}\leq 10$ Myr}\\ 1&\text{otherwise}\end{cases} \tag{2}\]

where \(T^{\rm ISM}(\lambda)\) and \(T^{\rm BC}(\lambda)\) are the transmission functions of the ambient ISM and birth clouds, respectively. In this paper, we do not intend to infer ISM dust attenuation parameters; they are merely nuisance parameters in our model and are coupled with possible flux calibration errors. To properly model any imperfections in spectroscopic calibration, a fifth-order polynomial function is used for the \(T^{\rm ISM}(\lambda)\) component; however, the \(T^{\rm BC}(\lambda)\) component is modeled using the Calzetti et al. (2000) dust attenuation, wherein the visual band attenuation \(A_{\rm V}^{\rm BC}\) is included as a free parameter. Figure 4 shows examples of the model spectra as the metallicity (\(Z_{*}\)) and A\({}_{\rm V}^{\rm BC}\) are varied while the other parameters are fixed. It is evident that absorption features are much stronger as the stellar metallicity increases and that this has the largest effect on the spectrum. Moreover, the figure shows that although the FUV stellar absorption lines are relatively insensitive to the birth-cloud attenuation A\({}_{\rm V}^{\rm BC}\), the small variation in equivalent width of several features, including the 1718 A and 1501 A regions, can be used to capture any covariance between A\({}_{\rm V}^{\rm BC}\) and \(Z_{*}\), given a high signal-to-noise spectrum.

Figure 4: The BPASS v2.2.1 stellar population models at the average effective resolution of the LATIS composite spectra (\(\sigma_{\rm v}=250\) km/s) assuming a constant SFR over 100 Myr. The top panel shows the variation of the model with metallicity at a fixed A\({}_{\rm V}^{\rm BC}=0\). Various metallicities, ranging from \(10^{-4}\) to 0.01, are shown with different colors, as indicated in the legend. The bottom panel shows the sensitivity of the model to A\({}_{\rm V}^{\rm BC}\) at \(Z_{*}=0.005\). For illustration purposes, we normalized all model spectra using the pseudo-continuum derived from the spline fit to the wavelength windows (black squares) suggested by Rix et al. (2004).

Figure 3: Composite rest-frame FUV spectrum of the entire LATIS sample. The top panel shows the number of galaxies contributing to each pixel in the composite spectrum, ranging from 536 to 3235. The composite spectrum is shown in the bottom panel; some prominent emission and absorption lines are labeled. Stellar and interstellar absorption features are color-coded with red and green, respectively. Several nebular emission lines (blue), and fine structure emission lines (purple) are also included. The red shaded regions indicate wavelength ranges for 1370 and 1425 indices as defined in Leitherer et al. (2001); these indices cover several photospheric lines such as Fe v \(\lambda\lambda 1360-1380\), O v \(\lambda 1371\), Si iii \(\lambda 1417\), C iii \(\lambda 1427\), and Fe v \(\lambda 1430\), which are not labeled for legibility. The gray line shows the \(1\sigma\) error spectrum.
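A schematic implementation of Equations (1) and (2) might look as follows (a sketch only: the BPASS burst grid and the two transmission curves are assumed to be supplied, and all names are ours, not part of the BPASS distribution):

```python
import numpy as np

def csf_model(bursts, ages, widths, age_gal, T_ism, T_bc, t_bc=1e7):
    """Constant-SFH model spectrum per Eqs. (1)-(2).

    bursts : (n_age, n_pix) BPASS+Nebular burst spectra at one metallicity
    ages, widths : age-bin centers a'_i and widths Delta a'_i in years,
                   sorted in ascending age order
    age_gal : galaxy age a in years (only bins with a'_i <= a are summed)
    T_ism, T_bc : (n_pix,) transmission of the ambient ISM and birth clouds
    """
    F = np.zeros(bursts.shape[1])
    for f_i, a_i, da_i in zip(bursts, ages, widths):
        if a_i > age_gal:
            break
        T = T_ism * (T_bc if a_i <= t_bc else 1.0)  # extra attenuation <= 10 Myr, Eq. (2)
        F += f_i * T * da_i                         # Eq. (1)
    return F
```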
## 4 Fitting Synthesis Model Spectra

In this section, we present our Bayesian full spectrum fitting method, along with the associated parameters and priors. In order to accurately compare our model spectra with the observed composite spectra, it is important to ensure that the spectral resolution is consistent between the two. We first interpolate the BPASS+Nebular models to bins of 100 km/s, aligning them with the wavelength grid of the observed spectra. We then introduce a parameter, \(\sigma_{\rm v}\), that accounts for the combined effects of galaxies' stellar velocity dispersion, the spectrograph resolution, and redshift uncertainties. We use estimates of these factors to place an informed prior on \(\sigma_{\rm v}\) while allowing for uncertainties in these estimates (in Appendix C we demonstrate the effect of adopting a fixed best estimate of \(\sigma_{\rm v}\) in our fitting). We convolve our model spectra with a Gaussian kernel in velocity space, resulting in model templates that we refer to as \(\tilde{\rm F}(\lambda,Z_{*},a,\sigma_{\rm v},A_{\rm V}^{\rm BC})\). We fit the BPASS+Nebular models (\(\tilde{\rm F}\)) to composite spectra within the framework of Bayesian inference.
In the case of our problem, the parameter vector of the models is \(\mathbf{\theta}=(Z_{*},a,\sigma_{\rm v},A_{\rm V}^{\rm BC})\), and assuming Gaussian and independent uncertainties in the composite spectra, the logarithm of the likelihood function can be constructed as:

\[\log\mathcal{L}=-\frac{1}{2}\sum_{\lambda}\left(\frac{[f(\lambda)-\tilde{\rm F}(\lambda,\mathbf{\theta})]^{2}}{\sigma(\lambda)^{2}}+\ln[2\pi\sigma(\lambda)^{2}]\right) \tag{3}\]

where \(f(\lambda)\) and \(\sigma(\lambda)\) are the flux and \(1\sigma\) uncertainty of the observed composite spectrum, respectively. Because our models are provided for discrete values of the parameters, we linearly interpolate the \(\tilde{\rm F}\) models across the different grids in order to define the likelihood at any given parameter value. Rather than treating \(T^{\rm ISM}\) as a free parameter in the model, we determine its value by fitting a fifth-order polynomial function to \(f(\lambda)/\tilde{\rm F}(\lambda,\mathbf{\theta})\). This step is performed at every evaluation of the likelihood function during the Bayesian sampling process. This approach ensures that the model's continuum shape matches that of the observed spectra while keeping the computational complexity manageable.

\(A_{\rm V}^{\rm BC}\) is given a uniform prior, while logarithmic priors are applied for age and stellar metallicity. A Gaussian prior is adopted for \(\sigma_{\rm v}\). In order to calculate this prior, we consider several factors: the spectral resolution of \(\sigma_{\rm inst}=125\pm 10\) km/s achieved for typical galaxies (representing the average and range over the rest-frame wavelength range of 1221-1800 A, Section 2), random errors in redshifts of \(\sigma_{z}=93\pm 27\) km/s (Newman et al. in prep), and a typical stellar velocity dispersion of \(\sigma_{*}=100\pm 50\) km/s (e.g., Barro et al., 2014). The BPASS models are distributed in wavelength bins of 1 A, coarser than our pixels. Binning can be regarded as convolution by a top-hat kernel, which we approximate as a Gaussian with equal FWHM, leading to a model resolution of \(\sigma_{\rm mod}=86\pm 10\) km/s. Since \(\sigma_{\rm v}\) represents the kernel required to match the BPASS models to the observations, it can be computed as \(\sigma_{\rm v}^{2}=\sigma_{\rm inst}^{2}+\sigma_{z}^{2}+\sigma_{*}^{2}-\sigma_{\rm mod}^{2}\). Combining our estimates of each term results in an effective resolution of \(\sigma_{\rm v}=165\pm 35\) km/s. Table 1 presents the list of the parameters and priors. In order to obtain the posterior distribution of the parameters, we use the MULTINEST nested sampling algorithm (Skilling, 2006; Feroz & Hobson, 2008; Feroz et al., 2009; Feroz & Skilling, 2013; Feroz et al., 2019), which can be accessed via the PYMULTINEST interface (Buchner et al., 2014).

As shown in Figure 3, not every part of the FUV spectrum originates from stellar emission. Most of the prominent lines in a rest-UV spectrum are predominantly formed in the ISM and/or have a nebular origin. These lines are not included in the model spectra (\(\tilde{\rm F}\)). We therefore exclude the wavelength regions contaminated by interstellar absorption or nebular emission lines. The He ii \(\lambda 1640\) emission line is not excluded since this line originates primarily from hot stars that are well incorporated in BPASS models. Table 2 summarizes the masked regions ignored during the fitting. These masked wavelengths are also shown in Figure 3 as shaded regions.
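As a quick numerical check of the adopted prior, and to restate Equation (3) in code form (a sketch, not the actual fitting pipeline):

```python
import numpy as np

# Effective resolution: sigma_v^2 = sigma_inst^2 + sigma_z^2 + sigma_*^2 - sigma_mod^2
sigma_v = np.sqrt(125.0**2 + 93.0**2 + 100.0**2 - 86.0**2)
print(f"sigma_v ~ {sigma_v:.0f} km/s")  # ~164 km/s, consistent with the adopted 165 +/- 35

def log_likelihood(f_obs, sigma_obs, f_model):
    """Gaussian log-likelihood of Eq. (3), summed over unmasked pixels."""
    return -0.5 * np.sum((f_obs - f_model) ** 2 / sigma_obs**2
                         + np.log(2.0 * np.pi * sigma_obs**2))
```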
We note that our full spectral fitting procedure includes the 1370 and 1425 indices (Leitherer et al., 2001), which are attributed to Fe v \(\lambda\lambda 1360-1380\) and O v \(\lambda 1371\), and to Si iii \(\lambda 1417\), C iii \(\lambda 1427\), and Fe v \(\lambda 1430\), respectively. The red lines in Figure 3 show some other stellar photospheric absorption lines within the wavelength range we considered for fitting (\(1221-1800\) A).

When conducting spectral fitting, we employ two iterations to address any potential discrepancies between the observed and model spectrum. During the first iteration, we exclusively mask out the regions where the ISM absorption lines are present, as specified in Table 2. Following the initial fit, we mask the pixels displaying \(3\sigma\) deviations from the best-fit model and repeat the analysis. We find only a marginal change in our metallicity measurement during the second iteration. However, we find that this iterative process leads to more robust estimates of \(\sigma_{\rm v}\) that are closer to our prior expectation.

\begin{table}
\begin{tabular}{c c c}
Free parameter & Prior & Limits \\ \hline
\(Z_{*}\) & Logarithmic & \((10^{-4},0.04)\) \\
\(a\) (year) & Logarithmic & \((10^{7},10^{9.6})\) \\
\(\sigma_{\rm v}\) (km/s) & Gaussian (\(\mu=165,\sigma=35\)) & (100,400) \\
\(\rm A_{\rm V}^{\rm BC}\) & Uniform & (0,4) \\
\end{tabular}
\end{table}

Table 1: Parameters and priors used for the model fitting.

As discussed in Section 2.3.1, we do not include \(\lambda>1800\) A in our fitting procedure since we require the contribution of a minimum of 25 galaxies to every pixel of the composite spectrum in each stellar mass bin. Therefore, we do not use the 1935-2020 A region known as the 1978 index (Rix et al., 2004), which is mainly sensitive to the iron abundance (Fe iii \(\lambda\lambda 1949-1966\)). We investigate the effect of excluding these regions based on the composite spectrum of the entire sample, where there are \(\sim 10\times\) more galaxies than in the stellar mass bins, allowing us to perform spectral fitting over a wider range of wavelength, \(\lambda=1221-2020\) A. We find that excluding the \(\lambda=1800-2020\) A region from the full spectral fitting has a minimal effect on the inferred parameters. Full spectral fitting in the range of \(\lambda=1221-2020\) A results in a slightly higher \(Z_{*}\) value, by \(\sim 0.02\) dex.

\begin{table}
\begin{tabular}{c c c}
\hline \hline
\(\lambda_{\rm min}\) (Å) & \(\lambda_{\rm max}\) (Å) & Interstellar spectral features \\ \hline
1248 & 1270 & S ii \(\lambda\lambda\)1250.58,1253.81, Si ii \(\lambda\)1260.42, Si ii\({}^{*}\)\(\lambda\)1265.00 \\
1291 & 1320 & O i \(\lambda\)1302.17, Si ii \(\lambda\)1304.37, Si ii\({}^{*}\)\(\lambda\)1309.28, Ni ii \(\lambda\)1317.22 \\
1326 & 1340 & C i \(\lambda\)1328.83, C ii \(\lambda\)1334.53, C ii\({}^{*}\)\(\lambda\)1335.71 \\
1386 & 1406 & Si iv \(\lambda\lambda\)1393.76,1402.77 \\
1450 & 1471 & Ni ii \(\lambda\lambda\lambda\)1454.84,1467.26,1467.76, Co ii \(\lambda\)1466.21 \\
1521 & 1529 & Si ii \(\lambda\)1526.71 \\
1531 & 1540 & Si ii\({}^{*}\)\(\lambda\)1533.43 \\
1543 & 1565 & C iv \(\lambda\lambda\)1548.19,1550.77, C i \(\lambda\)1560.31 \\
1605 & 1615 & Fe ii \(\lambda\)1608.45, Fe ii \(\lambda\)1611.20 \\
1654 & 1677 & C i \(\lambda\)1656.93, O iii] \(\lambda\lambda\)1660.81,1666.15, Al ii \(\lambda\)1670.79 \\
1706 & 1715 & Ni ii \(\lambda\)1709.60 \\
1737 & 1755 & Ni ii \(\lambda\lambda\)1741.55,1751.91 \\ \hline \hline
\end{tabular}
\end{table}

Table 2: Rest-wavelength ranges excluded from fitting.
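The continuum-matching and iterative-masking steps described above admit a compact sketch (illustrative helper functions under our own names; the real pipeline performs the polynomial step at every likelihood evaluation):

```python
import numpy as np

def fit_continuum(lam, f_obs, f_model, mask, deg=5):
    """T_ISM step: absorb continuum mismatch and flux-calibration errors by
    fitting a fifth-order polynomial to the ratio of data to model."""
    coeff = np.polyfit(lam[mask], (f_obs / f_model)[mask], deg)
    return np.polyval(coeff, lam)

def refine_mask(f_obs, sigma, f_best, mask):
    """Second iteration: additionally reject pixels deviating by more than
    3 sigma from the current best-fit model."""
    return mask & (np.abs(f_obs - f_best) < 3.0 * sigma)
```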
## 5 Measurements of stellar metallicity

We now use the framework built in Sections 3 and 4 to analyze composite LATIS spectra and infer their metallicities. We first consider the composite spectrum of the entire sample, and then turn to the composite spectra in stellar mass bins.

### Average metallicity of \(\sim 10^{10}\)M\({}_{\odot}\) star-forming galaxies at \(z\sim 2.5\)

The signal-to-noise ratio of a composite spectrum can be maximized by stacking spectra of the whole sample, resulting in lower measurement uncertainties for the sample average metallicity. We fit the BPASS models to the composite spectrum of the full sample, which has a median stellar mass of \(M_{*}\sim 10^{9.8}M_{\odot}\), and find that the average log(\(Z_{*}/Z_{\odot}\)) is \(-0.87^{+0.01}_{-0.01}\). Figure 5 shows a corner plot demonstrating that all parameters are well constrained. Figure 6 shows that the models generally provide a good fit to the composite spectrum, with a root mean square (RMS) of residuals of \(\sim 2\%\). The RMS of the fractional noise for the composite spectrum is \(\sim 1\%\).

Given the very high signal-to-noise ratio of the full sample's composite spectrum, the RMS of the residuals is notably increased by model inaccuracies. Therefore, the error estimates of parameters derived by fitting the full composite spectrum may be underestimated, since they may not fully account for the model's limitations. Underestimation of errors is less of a concern for composite spectra in the stellar mass bins, where observational noise outweighs model inaccuracies.

The stellar photospheric absorption features in the rest-frame FUV spectrum are predominantly caused by transitions of highly ionized iron (e.g., Brandt et al., 1998). Therefore \(Z_{*}\) can be translated to [Fe/H] by \(\rm[Fe/H]\approx\log(Z_{*}/Z_{\odot})=-0.87^{+0.01}_{-0.01}\). In this study, we measure the FUV-weighted stellar metallicity, which is expected to closely resemble the gas-phase metallicity measured in the ISM where recent star formation has occurred. In general, the FUV-weighted metallicity is expected to be marginally lower than the gas-phase metallicity by only \(\lesssim 0.1\) dex (Kashino et al., 2022). However, similar galaxies at this redshift have an average gas-phase oxygen abundance of [O/H] \(\sim-0.3^{+0.1}_{-0.1}\), as derived from rest-frame optical strong lines (e.g., Erb et al., 2006; Steidel et al., 2014; Strom et al., 2018; Sanders et al., 2021).

To assess the gas-phase oxygen abundance within our sample, we measure [O/H] for the composite rest-frame optical spectra of 17 LATIS galaxies that were observed using Keck/MOSFIRE as part of the MOSDEF survey2 (Kriek et al., 2015). We ensure that these galaxies have H\(\alpha\) line detection with S/N \(\geq 3\) and are not flagged as AGNs based on X-ray emission or IRAC colors. Additionally, we require that log([Nii]\(\lambda\)6584/H\(\alpha\)) \(<-0.3\) to exclude optical AGNs (e.g., Coil et al., 2015). The selected 17 galaxies have a median redshift of \(z\sim 2.4\) with a median stellar mass of \(10^{9.9}\)M\({}_{\odot}\), which is comparable to the median mass of the entire sample. Additionally, the median of log(SFR[M\({}_{\odot}\)/yr]) for these 17 galaxies is 1.31, which is slightly lower than the 1.54 observed for the full sample. According to Sanders et al. (2018), this difference in SFR corresponds to a deviation of \(\sim 0.03\) dex in metallicity, which is within our measurement error, thereby ensuring the 17 galaxies are representative for purposes of metallicity comparison.
We construct the composite spectrum and measure the flux of the emission lines following the method described in Chartab et al. (2021). The resulting composite spectrum is shown in Figure 7. We determine \(\langle[\rm{Nii}]\lambda 6584/\rm{H}\alpha\rangle=0.13\pm 0.02\), which corresponds to a gas-phase oxygen abundance of \(12+\rm{log}(O/H)=8.39\pm 0.03\) ([O/H] = \(-0.30\pm 0.03\)) using the calibrations of Bian et al. (2018). This calibration relies on direct-method metallicities obtained from stacked spectra of local analogs of \(z\sim 2\) galaxies; as a result, there is no evaluation of its inherent scatter. We assume that the scatter is the same as for the Pettini & Pagel (2004) calibration, which is 0.18 dex. Since the composite spectrum is composed of 17 galaxies, the intrinsic error in the oxygen abundance of the composite spectrum reduces to \(0.18/\sqrt{17}=0.04\) (e.g., Erb et al., 2006; Sanders et al., 2015). We add this in quadrature to the measurement uncertainty, yielding a final result of [O/H] = \(-0.30\pm 0.05\). We emphasize that this error is purely statistical; the dominant systematic errors are discussed below.

Figure 5: Corner plot showing the posterior distribution for fitted parameters on the composite spectrum of the entire LATIS sample. Contours correspond to \(1\sigma\), \(2\sigma\), \(3\sigma\) and \(4\sigma\) levels. The medians of the marginalized posteriors are indicated by red dashed lines. The black dashed lines on marginalized posteriors indicate the median values along with the \(16^{\rm th}\) and \(84^{\rm th}\) percentiles.

Oxygen is the most abundant among the \(\alpha\) elements. Therefore, a measurement of [O/Fe] can be taken as an approximation of [\(\alpha\)/Fe]. In this study, we find \(\rm{[Fe/H]}=-0.87\pm 0.01\) and [O/H] = \(-0.30\pm 0.05\), resulting in [\(\alpha\)/Fe] \(\sim[\rm{O/Fe}]=0.57\pm 0.05\) (statistical errors only). This result is consistent with the average \(\alpha\)-enhancement reported for star-forming galaxies at \(z\sim 2.5\) (e.g., Steidel et al., 2016; Strom et al., 2018; Cullen et al., 2019; Kashino et al., 2022), suggesting that young high-redshift galaxies have not yet undergone significant iron enrichment through SN Ia.

There are substantial systematic uncertainties associated with gas-phase metallicity measurements. For instance, there is a systematic offset of \(\sim 0.3\) dex between "direct" electron-temperature-based and photoionization-model-based gas-phase metallicities (e.g., Kewley & Ellison, 2008; Blanc et al., 2019). Furthermore, there is a 0.24 dex offset between gas-phase metallicities derived from collisionally excited lines and recombination lines (Steidel et al., 2016) in local H ii regions. These discrepancies are of particular importance when comparing gas-phase and stellar metallicity on an absolute scale. Consequently, the errors we have estimated for [O/Fe] represent measurement errors and do not encompass systematic errors. Taking into account the tendency of direct method gas-phase abundances to underestimate [O/H] (e.g., Cameron et al., 2023), it becomes plausible that the \(\alpha\)-enhancement could be larger than our estimate.
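The abundance bookkeeping in this subsection reduces to a few lines (a restatement of the quoted numbers, with errors combined in quadrature):

```python
import numpy as np

fe_h = -0.87                              # [Fe/H] from the FUV fit (stat. error 0.01)
o_h, o_h_meas = -0.30, 0.03               # [O/H] and its measurement error
o_h_intr = 0.18 / np.sqrt(17)             # calibration scatter reduced by the stack
o_h_err = np.hypot(o_h_meas, o_h_intr)    # -> ~0.05 dex
o_fe = o_h - fe_h                         # [O/Fe] = [O/H] - [Fe/H] = 0.57
print(f"[O/Fe] = {o_fe:.2f} +/- {np.hypot(o_h_err, 0.01):.2f} (statistical only)")
```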
### Stellar Mass - Metallicity Relation

This section presents the relationship between stellar mass and stellar metallicity for our sample at \(z\sim 2.5\). The results of our full spectral fitting analysis on composite spectra in bins of stellar mass, as described in Section 2.3.1, are presented in Appendix A and summarized in Table 3. As shown in the left panel of Figure 8, the stellar \(M_{*}-Z_{*}\) relation exists at \(z\sim 2.5\), such that \(Z_{*}\) increases with increasing stellar mass at low masses and flattens at \(M_{*}\gtrsim 10^{10.3}M_{\odot}\). This flattening has important implications for galactic winds in massive galaxies, which we will consider in Section 5.4.

Figure 6: Similar to Figure 3 but the red line represents the best-fit model, while the orange line shows the unsmoothed BPASS model with its original resolution. The shaded regions in light blue, affected by interstellar absorption or nebular emission lines, are excluded from the fitting process. The bottom panel shows the fit residuals.

Results from other studies at relatively similar redshifts are included in the left panel of Figure 8 for comparison. Our \(M_{*}-Z_{*}\) relation is in good agreement with that of the zCOSMOS-deep survey from Kashino et al. (2022) at \(1.6<z<3\). Their sample shares a comparable range of redshifts with ours and was selected in a similar manner. Additionally, they employed stellar population models similar to ours. Similar results are therefore to be expected. However, we find slightly lower metallicities compared to the VANDELS survey (Cullen et al., 2019). In our analysis, we utilize the BPASS v2.2.1 models, whereas, in Cullen et al. (2019), the WM-basic stellar population models were employed. They performed a comparative experiment using BPASS v2.1 models, which resulted in slightly lower metallicities by \(\sim 0.09\) dex. Therefore, it is recommended to adjust their measurements by \(\sim 0.1\) dex to ensure a meaningful comparison with our findings. After adjusting for this systematic offset, our result is in better agreement with Cullen et al. (2019). It should be highlighted that while our study focuses on \(z\sim 2-3\), the VANDELS survey explored a broader redshift range of \(z=2.5-5\). However, Cullen et al. (2019) did not observe significant redshift trends within this range. The lack of such trends suggests that the different redshift ranges between our studies may not be detrimental to our metallicity comparison. Moreover, our comparison supports their observation that the stellar MZR does not evolve strongly over \(z\sim 2-5\).

Despite the generally satisfactory agreement with existing literature, we note that prior studies did not take into account the effect of extra attenuation in birth clouds, which may have an impact on the derived metallicity. The measurement of metallicity can be biased toward less obscured, lower metallicity environments if the extra attenuation present in the birth clouds of young stars is not taken into account (Chartab et al., 2022). To assess this effect, we repeat the analyses considering \(A_{\rm V}^{\rm BC}=0\) (Appendix B). We then determine the offset between the metallicity measurements and those obtained with additional dust attenuation considered for birth clouds. Our finding indicates that neglecting additional dust attenuation within birth clouds results in stellar metallicities that are underestimated by \(\sim 0.1\) dex.
In the right panel of Figure 8, we compare our stellar MZR with the gas-phase \(\rm M_{*}-[O/H]\) relation from Sanders et al. (2021). We find that there is no clear trend of [O/Fe] with stellar mass, which is in agreement with previous studies (e.g., Strom et al., 2022; Kashino et al., 2022). Instead, the values remain relatively constant across all mass ranges, with an average value of \([\alpha/{\rm Fe}]\sim[{\rm O}/{\rm Fe}]\sim 0.6\pm 0.1\). This suggests that the galaxies in our sample are \(\alpha\)-enhanced across the entire mass range. It is also worth noting that the average \([\alpha/{\rm Fe}]\) value we find here is consistent with what we estimate for the composite spectrum of the entire sample in Section 5.1. Previous studies of the gas-phase MZR at \(z\sim 2\) have not shown such a strong flattening at the massive end (right panel of Figure 8), although Sanders et al. (2021) reported signs of flattening for the most massive stellar mass bin at \(M_{*}\sim 10^{10.5}M_{\odot}\). This difference could be partly explained by the smaller samples used in gas-phase studies, which are mostly restricted to \(M_{*}<10^{10.5}M_{\odot}\), with only a handful of galaxies beyond this stellar mass. In contrast, our sample includes 219 galaxies in the last stellar mass bin of \(\log(M_{*}/M_{\odot})=10.4-11.5\), and our overall larger sample also enables finer mass bins that better define the shape of the MZR. In stellar MZR studies, Cullen et al. (2019) analyzed a sample of \(M_{*}\lesssim 10^{10}M_{\odot}\) galaxies, the stellar mass range where we find no flattening, consistent with their findings. The results of Kashino et al. (2022), although not explicitly stated in their paper, suggest signs of flattening at the massive end before a sharp rise (with large uncertainty) in their last stellar mass bin at \(M_{*}\sim 10^{11}M_{\odot}\). Moreover, the right panel of Figure 8 includes the stellar MZR at \(z\sim 2\) from Strom et al. (2022), who used a different method to constrain [Fe/H]. They combined BPASS stellar population synthesis models with Cloudy photoionization models to infer [Fe/H] from the rest-optical nebular spectra of galaxies in the Keck Baryonic Structure Survey (KBSS; Rudie et al., 2012; Steidel et al., 2014). We find slightly lower [Fe/H] along with a more complex mass dependence than the linear form used in the Strom et al. (2022) model, but the overall level of agreement is still encouraging considering the very different data sets and methods underpinning these analyses.

Figure 7: Composite spectrum of 17 LATIS galaxies in the rest-frame optical, obtained from Keck/MOSFIRE observations of the MOSDEF survey. Shaded regions indicate the corresponding errors obtained from bootstrapping.

Figure 8: _Left:_ The stellar MZR for our sample of star-forming galaxies at \(z\sim 2.5\). The data points represent the posterior median metallicity derived from composite spectra in bins of stellar mass, as described in Section 4. The error bars represent the 1\(\sigma\) uncertainty. For comparison, the MZR from previous studies at similar redshifts by Cullen et al. (2019) and Kashino et al. (2022) are shown in green and blue, respectively. _Right:_ Comparison of our stellar [Fe/H] with the gas-phase [O/H] at \(z\sim 2\) measured by Sanders et al. (2021) and Strom et al. (2022). The blue dashed line indicates the stellar mass – stellar [Fe/H] relation inferred by Strom et al. (2022) based on photoionization modeling of rest-frame nebular spectra.

\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
\(\log(M_{*}/M_{\odot})\) & \(\langle\log\frac{\mathrm{M_{*}}}{\mathrm{M_{\odot}}}\rangle\) & N\({}_{\mathrm{gal}}\) & median S/N per pixel & N\({}_{\mathrm{min}}\) per pixel & log(Z\({}_{*}/\mathrm{Z_{\odot}}\)) \\ \hline \hline
\multicolumn{6}{c}{The entire LATIS sample} \\ \hline
9 – 11.5 & 9.78 & 3491 & 138 & 536 & \(-0.87^{+0.01}_{-0.01}\) \\ \hline \hline
\multicolumn{6}{c}{Stellar mass bins} \\ \hline
9.00 – 9.25 & 9.15 & 237 & 31 & 25 & \(-1.18^{+0.04}_{-0.03}\) \\
9.25 – 9.45 & 9.36 & 448 & 45 & 30 & \(-0.97^{+0.02}_{-0.03}\) \\
9.45 – 9.60 & 9.53 & 481 & 51 & 64 & \(-1.00^{+0.03}_{-0.03}\) \\
9.60 – 9.75 & 9.67 & 563 & 58 & 62 & \(-0.96^{+0.02}_{-0.02}\) \\
9.75 – 9.90 & 9.82 & 567 & 57 & 96 & \(-0.86^{+0.02}_{-0.02}\) \\
9.90 – 10.05 & 9.97 & 434 & 49 & 92 & \(-0.79^{+0.02}_{-0.02}\) \\
10.05 – 10.20 & 10.12 & 313 & 44 & 67 & \(-0.75^{+0.02}_{-0.02}\) \\
10.20 – 10.40 & 10.29 & 230 & 36 & 46 & \(-0.67^{+0.03}_{-0.03}\) \\
10.40 – 11.50 & 10.65 & 219 & 31 & 50 & \(-0.70^{+0.03}_{-0.03}\) \\ \hline \hline
\end{tabular}
\end{table}

Table 3: Properties of composite spectra and measurements of stellar metallicity.

### UV Slope Dependence of the MZR

The flattening of the stellar MZR at high stellar masses observed in our study may have several possible explanations. One possibility is that in the highest-mass bins, our sample is potentially biased against the most metal-rich galaxies. If such galaxies are also dusty and red, they may be underrepresented in our sample due to their faint FUV fluxes, which could lead to a flattening of the MZR at high masses. Another possibility is that the physical processes governing the stellar metallicity of galaxies change at high stellar masses. In this section, we explore the impact of dust on the MZR to assess the potential biases against red, dusty galaxies at high masses.

In order to determine the UV continuum slope, denoted as \(\beta\), we employ a power law (\(f_{\lambda}\propto\lambda^{\beta}\)) fit to the photometric data (outlined in Section 2.1) within the rest-frame range of \(\lambda=1300-2600\) A. We require our galaxies to have at least three photometric measurements in this wavelength range. We do not use spectroscopic data to measure \(\beta\) since not all galaxies are covered in the desired wavelength range. Therefore, we rely on photometrically-measured \(\beta\) values to ensure consistent measurement across our sample. The median uncertainty in \(\beta\) is \(\sim 0.14\) (Figure 9). To investigate the effect of dust content on the stellar MZR, we divide the sample into three subsamples corresponding to the tertiles of \(\beta\). Figure 10 shows the stellar MZR for the subsamples representing the upper and lower tertiles of \(\beta\). This comparison indicates no significant dependence of the MZR on \(\beta\). If this independence holds over the full range of \(\beta\) present in a mass-limited sample, then the flattening of the MZR we observe cannot be attributed to color-related selection effects. However, we cannot guarantee that the independence of \(\beta\) and metallicity extends to extremely dusty massive galaxies. By examining Figure 9, we note that our sample closely follows the relation of the mass-selected star-forming sample in McLure et al. (2018) (green line), except for the very highest mass bin, where our sample is biased toward bluer galaxies.
Nonetheless, at the turn-over stellar mass of the stellar MZR (\(M_{*}\approx 10^{10.3}M_{\odot}\)), our sample does match the trend reported by McLure et al. (2018). Therefore, we conclude that the observed flattening of the MZR is due to physical mechanisms rather than sample selection.

### Mass-Dependent Trends in Galactic Winds

Analytic models provide valuable insight into the physical mechanisms governing the metallicity enrichment of galaxies. These models consider the interplay between key factors such as the inflow and outflow rates, SFR, the return of stellar material to the ISM, and changes in the total gas mass to unveil the complex mechanisms that shape the metallicity evolution of galaxies. When modeling the FUV-based stellar metallicity, which serves as a proxy for the iron abundance, it is important to incorporate the delay time distribution (DTD) of SNe Ia in order to track the evolution of iron-peak elements separately from the \(\alpha\)-elements. This distinction is crucial as iron-peak elements are produced by both CCSNe and SNe Ia, while \(\alpha\)-elements are only produced by CCSNe. Incorporating the DTD for SNe Ia in the analysis allows for a more comprehensive understanding of the metallicity evolution in galaxies.

Figure 10: The stellar MZR for two subsamples of galaxies split based on the \(M_{*}-\beta\) relation as shown in the inset. The blue and red points represent the lower and upper tertiles of \(\beta\) subsamples, respectively. We find that there is no significant dependence of the MZR on \(\beta\), suggesting that the flattening observed in the stellar MZR for high-mass galaxies is not heavily affected by selection effects related to color.

Figure 9: UV spectral slope (\(\beta\)) plotted against stellar mass for the LATIS sample (blue points), with red diamonds indicating the median \(\beta\) in each stellar mass bin. The green line shows the \(M_{*}-\beta\) relation from McLure et al. (2018) at \(2<z<3\) for comparison. The median uncertainty in the \(\beta\) measurements is displayed in the lower right corner.

In this work, we adopt the one-zone model of Weinberg et al. (2017), which accounts for the DTD of SNe Ia. We can then constrain the mass loading factor (\(\eta=\frac{\dot{M}_{\rm out}}{\dot{M}_{*}}\)) of \(z\sim 2.5\) galaxies based on the observed stellar MZR. The Weinberg et al. (2017) model assumes that the star formation efficiency (SFE), which represents the ratio of the star formation to the gas mass available, remains constant over time ("linear Schmidt law"). The DTD for enrichment from Type Ia SNe is modeled as an exponentially declining function, with a minimum delay time of 0.15 Gyr and an e-folding timescale of 1.5 Gyr. In this work, we adopt a simple constant star formation history and the fiducial values from Weinberg et al. (2017) for all free parameters of the model, except the mass loading parameter, SFE and CCSNe oxygen yield. We refer the reader to their paper for further details. The adopted values for the iron and oxygen yields in Weinberg et al. (2017) result in a maximum [O/Fe] \(\sim 0.4\) for metal enrichment entirely from CCSNe. This value is smaller than that measured in \(z\sim 2.5\) galaxies (Section 5.2; Steidel et al., 2016), and it is also exceeded by low-metallicity stars in the Milky Way.
For instance, Figure 11 shows that the abundances of APOGEE (DR17; Abdurro'uf et al., 2022) stars reach up to [O/Fe] \(\sim 0.6\) (see also Steidel et al., 2016), which is closer to the theoretical yield calculations of Nomoto et al. (2006). Therefore, we have adjusted the IMF-integrated oxygen yield from 0.015 to 0.021 in order to achieve a plateau at [O/Fe] \(\approx 0.6\). It is important to highlight that this adjustment does not affect our main result, trends in the mass loading factor, which is based on iron abundances. To estimate the SFE timescale, also referred to as the depletion timescale (M\({}_{\rm{gas}}\)/SFR), we utilize the scaling relation introduced by Tacconi et al. (2020). They found that the depletion timescale depends mainly on the redshift and the offset from the main sequence. We compare our sample with a sample of star-forming galaxies (sSFR\(>10^{-10.1}\) yr\({}^{-1}\); Pacifici et al., 2016) at \(z=2-3\) in the COSMOS2020 catalog and find an average offset of \(\sim 0.3\) dex from the star-forming main sequence for our sample. By employing Equation 4 of Tacconi et al. (2020), we derive an average depletion timescale of \(\sim 0.3\) Gyr for our sample at \(z\sim 2.5\).

To explore the sensitivity of the analytic model to the mass-loading parameter \(\eta\), we plot tracks of [O/Fe], as a representative of alpha-enhancement, against [Fe/H] for different values of \(\eta\) in Figure 11. Additionally, we include the average [Fe/H] and [O/Fe] values of the LATIS sample, represented by a red square. The different-sized circles on the [O/Fe] - [Fe/H] tracks represent the age of a galaxy. Figure 11 indicates that our galaxies should be dominated by recently formed stars (\(\lesssim 0.5\) Gyr) to retain such high [\(\alpha\)/Fe] values. This is consistent with the median age of our sample inferred from SED fitting in Section 2.2.

Figure 11: Chemical evolution tracks in the [O/Fe] – [Fe/H] plane for different values of \(\eta\). The different sizes of circles represent the ages of the galaxies along each track. The red square denotes our measurements for the entire LATIS sample at \(z\sim 2.5\). The gray density map illustrates the distribution of APOGEE (DR17) stars without a STAR_BAD flag and with reliable abundance ratio values (O_FE_FLAG == 0 and FE_H_FLAG == 0).

In the following, we aim to infer the mass loading parameter \(\eta\) and its dependence on stellar mass \(M_{*}\) using the observed stellar MZR. We incorporate the \(M_{*}\)-dependent ages of galaxies into the chemical evolution modeling, estimated as the median of the characteristic star formation timescale, defined as \(\tau_{\rm{SF}}^{*}=M_{*}\)/SFR, at a given stellar mass (Figure 12). Along with the other parameters described above, we can now use the Weinberg et al. (2017) model to compute [Fe/H] given \(\eta\) and \(M_{*}\)3. We utilize a cubic functional form for the mass loading parameter \(\eta(M_{*})\) and determine the polynomial coefficients by fitting the model to the observed [Fe/H]-\(M_{*}\) relation. The derived mass loading parameter, presented in Figure 13, indicates that it declines as the stellar mass increases, but flattens and possibly even increases for \(M_{*}\gtrsim 10^{10.5}M_{\odot}\).

Footnote 3: We compute the model ISM metallicity at the end of the star-formation history and neglect its difference with the measured FUV-weighted stellar metallicity, which is expected to be only \(\sim\)0.02 dex at \(z\sim 2.2\) (Kashino et al., 2022).

We note that the absolute value of \(\eta\) is indeed sensitive to the various assumptions in the model, yet the overall trend remains consistent. Blue dashed and dotted lines in Figure 13 illustrate the trend for alternative mass-independent values of age, 200 Myr and 500 Myr, respectively. In all cases, a decline in \(\eta\) is observed as the stellar mass increases for low-mass galaxies; however, this trend plateaus/reverses at \(M_{*}\sim 10^{10.5}M_{\odot}\). Sanders et al. (2021) derived the mass-dependence of \(\eta\) by applying a different chemical evolution modeling framework to observations of the gas-phase MZR at \(z=2.3\) and \(3.3\). They found that the metal mass-loading factor \(\zeta(M_{*})\), which differs from \(\eta\) by the ratio of the outflow and ISM metallicities, follows \(\zeta(M_{*})\propto M_{*}^{-0.35\pm 0.02}\) at low masses, and they showed that their data were compatible both with models in which \(\zeta\) flattens at high masses \(M_{*}\gtrsim 10^{10.5}M_{\odot}\) and models in which \(\zeta\) continues to decline. We find that \(\eta\propto M_{*}^{-0.35\pm 0.01}\) at low masses \(M_{*}\lesssim 10^{10.25}M_{\odot}\), in excellent agreement with Sanders et al. (2021), but our data clearly prefer a flattening at high masses. Our estimates of \(\eta\) are systematically higher than those of Sanders et al. (2021) by \(\sim 0.3\) dex, but we expect that the absolute value of \(\eta\) is significantly more sensitive to modeling assumptions than the slope of \(\eta(M_{*})\).

To better understand the behavior of the mass loading parameter as a function of stellar mass, we compare our results with those obtained from the TNG50 cosmological simulation (orange dashed line in Figure 13). TNG50 is a cosmological hydrodynamic simulation within a 50 Mpc box sampled by \(2160^{3}\) gas cells, resulting in a baryon mass resolution of \(8\times 10^{4}M_{\odot}\) and an average spatial resolution of \(\sim 100-200\) pc in ISM gas. Nelson et al. (2019) measured the mass loading parameter at \(z\sim 2\) in TNG50 and found a similar nonmonotonic behavior as a function of galaxy stellar mass. Specifically, they found that the mass loading parameter turns over and rises rapidly above \(10^{10.5}M_{\odot}\), consistent with our finding. In the simulation, this behavior is traced to winds driven by supermassive black holes, which dominate over stellar-driven winds in massive galaxies.

## 6 Discussion

Our results show that the stellar MZR exists at \(z\sim 2.5\), such that stellar metallicity increases with increasing stellar mass at low masses. However, we also find evidence of flattening at the massive end of the MZR (\(M_{*}\gtrsim 10^{10.3}M_{\odot}\)). In Figure 14, we compare stellar MZR measurements at \(z\sim 0\) with our result at \(z\sim 2.5\). Using full spectral fitting to the rest-frame optical spectra, Zahid et al. (2017) measured the stellar MZR for \(\sim 200000\) star-forming galaxies in the Sloan Digital Sky Survey (SDSS). Their result is in agreement with the relation derived by Kudritzki et al. (2016) based on analysis of individual supergiant stars. The results of Zahid et al. (2017) differ from those of Gallazzi et al. (2005), possibly due to the fact that the latter used Lick indices to measure metallicity and included data from both star-forming and quiescent galaxies in their analysis.
The slope of our stellar MZR at \(z\sim 2.5\) below the turnover mass (\(M_{*}\sim 10^{10.3}M_{\odot}\)) is 0.40 \(\pm\) 0.04, which agrees with the slope measured at \(z\sim 0\) by Zahid et al. (2017). However, our \(z\sim 2.5\) MZR shows a shift toward lower metallicities by \(\sim 0.7\) dex compared to the local stellar MZR. We note that the actual offset is greater, as \(z\sim 0\) metallicities in Zahid et al. (2017) are weighted by optical luminosity, and when converted to FUV-weighted for comparison with our measurements, they need to be shifted upwards by \(\sim 0.1\) dex to account for the differences in the stellar populations being traced (Kashino et al., 2022). Therefore, the average stellar metallicity of galaxies at fixed stellar mass increases by a factor of \(\sim 5\) from \(z\sim 2.5\) to 0 (see also Cullen et al., 2019). However, Sanders et al. (2021) reported that the gas-phase metallicity [O/H] only increased by a factor of 2 in the same time interval, leading to a discrepancy between the evolution of the FUV-weighted stellar metallicities and gas-phase oxygen abundances. While it is expected that these two metallicities evolve somewhat differently due to the different timescales that they trace, they should still closely follow each other with a difference of \(\sim 0.1-0.2\) dex (Kashino et al., 2022). Thus, the large observed difference in their evolution is likely due to delayed iron enrichment by SN Ia.

We note that our sample is primarily positioned on or above the star formation main sequence, with a median offset of \(\sim 0.3\) dex. At low redshifts, SFR is anticorrelated with gas-phase metallicity at fixed stellar mass (e.g., Mannucci et al., 2010; Curti et al., 2020). Although the existence and form of the "fundamental metallicity relation" at higher redshifts is debated, Sanders et al. (2018) found that, at a fixed stellar mass, the relationship \(\Delta\log(\rm O/H)\sim-0.15\times\Delta\log(\rm SFR)\) holds for star-forming galaxies at \(z\sim 2.3\). If a similar relation applies to the iron abundance of young stars, then our estimates of the stellar metallicity would be \(\sim\)0.05 dex lower than that of a mass-limited sample. This is a minor potential bias and is not likely to affect our main results on the shape of the stellar MZR.

Figure 12: The characteristic star formation timescale (\(\tau_{\rm SF}^{*}=M_{*}/{\rm SFR}\)) is shown as a function of stellar mass for our sample. The red diamonds represent the median values of \(\tau_{\rm SF}^{*}\) for each stellar mass bin, while the black line illustrates the best fit to the median relationship.

Our MZR observations imply a transition in the characteristics of galactic winds around masses of \(M_{*}\approx 10^{10.3}M_{\odot}\), but they cannot uniquely identify the physical origin of this transition. For further insights we turn to simulations and other observations. We find that our \(M_{*}-\eta\) relation, derived using an analytic chemical evolution model, matches closely the one measured in the TNG50 simulation at \(z=2\). In the simulation, the mass loading attributed to star formation declines monotonically with stellar mass, and the upturn in \(\eta\) at high masses \(M_{*}\gtrsim 10^{10.5}M_{\odot}\) is caused by winds driven by supermassive black holes (SMBHs; Nelson et al., 2019). In the TNG model, low-luminosity AGN in this population drive high-velocity winds that expel cool, dense gas from the nuclear ISM. These outflows eventually lead to the "inside-out" quenching of star formation.
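As a quick check, the two scalings quoted above (our restatement of the arithmetic, not an additional analysis) follow directly from dex conversion:

\[10^{0.7}\approx 5,\qquad\Delta\log(\rm O/H)\approx-0.15\times\Delta\log(\rm SFR)=-0.15\times 0.3\approx-0.05\ \rm dex.\]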
Furthermore, independent observations support the widespread occurrence of AGN-driven outflows in a similar population of \(z\sim 1\)-2 galaxies. Genzel et al. (2014) and Forster Schreiber et al. (2014) find a rapid increase at masses \(M_{*}\gtrsim 10^{10.9}M_{\odot}\) of the incidence of broad, nuclear emission lines with line ratios indicative of shocks or AGN photoionization. This mass scale is higher than the mass at which we find a turnover of the MZR. However, it is possible that AGN are driving winds in slightly lower-mass galaxies that are sufficient to affect the chemical evolution but are not as readily detectable via optical emission line diagnostics. This possibility might be tested by observations using the higher angular resolution afforded by extremely large telescopes. Taken together, these additional lines of evidence suggest that SMBH-driven winds in massive galaxies are the most likely origin of the flattening of the stellar MZR that we observe.

Figure 13: The mass loading factor as a function of stellar mass at \(z\sim 2.5\), derived from the best-fit analytic model to our observed MZR. The relationship between the mass loading factor and the stellar mass is modeled as a cubic function. The shaded region in the main figure represents the \(1\sigma\) error around the best fit. The blue dashed and dotted lines indicate the trends for constant, mass-independent age values of 200 and 500 Myr, respectively. The orange dashed line shows the trend from the TNG50 simulation (Nelson et al., 2019) for outflows with a radial velocity greater than zero. The inset figure shows the best model MZR (blue line) along with the observed MZR (red diamonds).

## 7 Summary

In this paper, we present the stellar mass - stellar metallicity relation for 3491 star-forming galaxies at \(2\lesssim z\lesssim 3\) using rest-frame FUV spectra from the LATIS survey. We utilize the BPASS (v2.2.1) synthesis models to fit high signal-to-noise composite spectra of galaxies. Our findings can be summarized as follows:

* We find that the stellar metallicity increases monotonically with increasing stellar mass at lower masses but flattens at \(M_{*}\gtrsim 10^{10.3}M_{\odot}\). The slope of our \(z\sim 2.5\) MZR at the low-mass end is similar to that of the local stellar MZR. However, there is a significant offset of \(\sim 0.7\) dex toward lower metallicities compared to the local stellar MZR.

* Combining our stellar [Fe/H] measurements with gas-phase oxygen abundances, we examine the [O/Fe] - \(M_{*}\) relation. [O/Fe] remains relatively constant across the full mass range, with an average value of [O/Fe] \(\sim 0.6\), suggesting that young galaxies at \(z\sim 2.5\) have yet to experience substantial iron enrichment resulting from SN Ia.

* Using an analytical model of chemical evolution, we constrain the mass loading parameter as a function of stellar mass. We find that the mass loading parameter decreases with stellar mass at low masses but plateaus or reverses at \(M_{*}\sim 10^{10.5}M_{\odot}\). In combination with the TNG simulations and other observations, our results suggest that SMBH-driven outflows in massive galaxies at \(z\sim 2.5\) are responsible for the upturn in the \(M_{*}-\eta\) relation, which in turn explains the flattening of the MZR at the massive end. Massive galaxies undergo strong SMBH-driven outflows, which remove the metal-rich gas from the ISM.

We thank the anonymous referee for providing insightful comments and suggestions that improved the quality of this work. This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile.
We thank the staff at Las Campanas Observatory for their dedication and support. N.C. and A.B.N. acknowledge support from the National Science Foundation under Grant No. 2108014. G.A.B. acknowledges the support from the ANID Basal project FB210003. Software: NumPy (Harris et al., 2020), Matplotlib (Hunter, 2007), Astropy (Astropy Collaboration et al., 2013, 2018, 2022), PyMultiNest (Buchner et al., 2014).
2302.12011
A Generalized Weighted Loss for SVC and MLP
Usually standard algorithms employ a loss where each error is the mere absolute difference between the true value and the prediction, in the case of a regression task. In the present work, we introduce several error weighting schemes that are a generalization of the consolidated routine. We study both a binary classification model for Support Vector Classification and a regression net for Multi-layer Perceptron. Results prove that the error is never worse than with the standard procedure, and several times it is better.
Filippo Portera
2023-02-22T17:56:54Z
http://arxiv.org/abs/2302.12011v1
# A Generalized Weighted Loss for SVC and MLP ###### Abstract Usually standard algorithms employ a loss where each error is the mere absolute difference between the true value and the prediction, in the case of a regression task. In the present work, we introduce several error weighting schemes that are a generalization of the consolidated routine. We study both a binary classification model for Support Vector Classification and a regression net for Multi-layer Perceptron. Results prove that the error is never worse than with the standard procedure, and several times it is better. Machine Learning, Binary Classification, SVC, Regression, MLP. ## I Introduction We would like to show that a standard loss generalization for binary classification (in our case we have chosen SVC and MLP) can produce results no worse than those of the consolidated loss. In fact, the possibility that a given data-set presents non-IID samples can be exploited by these generalized losses. The loss studied to generalize SVC and the full optimization problem are: \[P=\frac{1}{2}||\vec{v}||^{2}+C\sum_{i=1}^{l}\xi_{i}w_{i} \tag{1}\] subject to: \[y_{i}(\vec{v}^{\prime}\vec{x_{i}}+b)\geq 1-\xi_{i}\hskip 28.452756pt i\in 1,..,l \tag{2}\] and: \[\xi_{i}\geq 0\hskip 28.452756pt i\in 1,..,l \tag{3}\] where \(\vec{v}\) represents the linear weights of the extrapolator function, \(l\) is the number of training examples, \(C\) is a trade-off hyper-parameter, \(\xi_{i}\) is the error on sample \(i\), and \(w_{i}\) are some scalar weights that are a function of a distribution \(s_{i}\) of the samples: \[s_{i}=\sum_{j=1}^{l}e^{-\gamma_{S}||\vec{x_{i}}-\vec{x_{j}}||^{2}} \tag{4}\] Other distributions can be adopted (e.g., 1 + the RBF norm instead of the RBF dot product). And let: \[sy_{i}=\sum_{j=1}^{l}y_{i}y_{j}e^{-\gamma_{S}||\vec{x_{i}}-\vec{x_{j}}||^{2}} \tag{5}\] with \(\gamma_{S}\) an additional hyper-parameter. Here lies the complexity of the algorithm, since this calculation is \(O(l^{2})\). Perhaps it can be overcome with pattern sampling or, in the case of the MLP, with a form of weight learning. \[w_{i}=f(s_{i}) \tag{6}\] This implies a quadratic problem that is different from traditional SVC. The Lagrangian would be: \[L=\frac{1}{2}||\vec{v}||^{2}+C\sum_{i=1}^{l}\xi_{i}w_{i}-\sum_{i=1}^{l}\alpha _{i}(y_{i}(\vec{v}^{\prime}\vec{x_{i}}+b)-1+\xi_{i}) \tag{7}\] \[-\sum_{i=1}^{l}\eta_{i}\xi_{i} \tag{8}\] subject to: \[\alpha_{i}\geq 0\hskip 28.452756pt i\in 1,..,l \tag{9}\] \[\eta_{i}\geq 0\hskip 28.452756pt i\in 1,..,l \tag{10}\] Applying the KKT conditions for optimality: \[\frac{\partial L}{\partial\vec{v}}=\vec{v}-\sum_{i=1}^{l}\alpha_{i}y_{i}\vec {x_{i}}=0\Rightarrow\vec{v}=\sum_{i=1}^{l}\alpha_{i}y_{i}\vec{x_{i}} \tag{11}\] \[\frac{\partial L}{\partial\vec{\xi}}=\vec{\alpha}+\vec{\eta}-C\vec{w}=0 \Rightarrow\vec{\alpha}\leq C\vec{w} \tag{12}\] \[\frac{\partial L}{\partial b}=\sum_{i=1}^{l}\alpha_{i}y_{i}=0 \tag{13}\] Thus, the dual becomes: \[D=\sum_{i=1}^{l}\alpha_{i}-\frac{1}{2}\sum_{i=1}^{l}\sum_{j=1}^{l}\alpha_{i}y _{i}\alpha_{j}y_{j}K(\vec{x_{i}},\vec{x_{j}}) \tag{14}\] subject to: \[0\leq\alpha_{i}\leq Cw_{i}\hskip 28.452756pt i\in 1,..,l \tag{15}\] \[\sum_{i=1}^{l}\alpha_{i}y_{i}=0 \tag{16}\] This is very similar to the standard SVC dual [4], apart from the constraints on the Lagrange multipliers. We wrote an ad-hoc quadratic optimizer for this problem1, with a SMO-like method ([2]). 
Footnote 1: The code of this work is available at OSF GWL Project We iteratively select 2 distinct multipliers and we modify them with an attempt to improve the dual objective function: \[\alpha_{i}^{t+1}=\alpha_{i}^{t}+\nu y_{i} \tag{17}\] \[\alpha_{j}^{t+1}=\alpha_{j}^{t}-\nu y_{j} \tag{18}\] The motivation is the enforcement of the second dual constraint, \(\sum_{i=1}^{l}\alpha_{i}y_{i}=0\). The optimal step \(\nu\) is obtained by differentiating \(D\) with respect to \(\nu\), as done in section 5.1 of ([5]). This direction is: \[\nu=\frac{y_{j}-y_{i}-\sum_{p=1}^{l}\alpha_{p}y_{p}K(\vec{x_{j}},\vec{x_{p}})+ \sum_{p=1}^{l}\alpha_{p}y_{p}K(\vec{x_{i}},\vec{x_{p}})}{K(\vec{x_{i}},\vec{x_{ i}})-2K(\vec{x_{i}},\vec{x_{j}})+K(\vec{x_{j}},\vec{x_{j}})} \tag{19}\] Once the candidate \(\nu\) has been determined, it has to be clipped in order to satisfy the constraints on both multipliers. At each iteration we compute \(b\) with the support vectors that lie in the margin (for which \(0<\alpha_{i}<Cw_{i}\)), as reported in _How to calculate b_. The kernel used to compute \(K(\vec{x},\vec{y})\) is RBF with hyper-parameter \(\gamma_{K}\). The whole procedure is iterated \(50l^{2}\) times for each training problem. ## II Related works In ([1]) they learn the loss weights directly from the training and validation sets. They assert that there is a substantial improvement in the generalization error, and they also provide theoretical bounds. ## III Method We use the acronym GWL for Generalized Weighted Loss. We tried 4 distinct algorithms: the Python 3 package sklearn.svm.SVC, GWL SVC with \(w_{i}=1\), GWL (here we mean the generalized loss with \(w_{i}\)'s built as described), and GWL with random weights. We would like to know if, in the general case, the optimal solutions use \(w_{i}\) not equal to \(1\). We have selected at least 8 case studies to determine the weight \(w_{i}\) of a sample \(i\). The evaluated weighting functions are: 1. \[w_{i}=\sqrt{s_{i}}\] (20) 2. \[w_{i}=s_{i}\] (21) 3. \[w_{i}=s_{i}^{2}\] (22) 4. \[w_{i}=\frac{1}{\sqrt{s_{i}}}\] (23) 5. \[w_{i}=\frac{1}{s_{i}}\] (24) 6. \[w_{i}=\frac{1}{s_{i}^{2}}\] (25) 7. \[w_{i}=sy_{i}\] (26) 8. \[w_{i}=1+\text{rand}[0,1]\] (27) Case 8 is useful to show that a weighting scheme based on the training distribution is more convenient than a random weighting scheme. ## IV Results We explored a 2-dimensional hyper-parameter grid for sklearn.svm.SVC, involving \(\gamma_{K}\) and \(C\), while we used the additional hyper-parameter \(\gamma_{S}\) to generate the loss weights. That is the reason why the experiments with loss weights take more time to terminate. Obviously, the second grid is an extension of (it covers) the first one. These are the results for the 5-fold cross-validation with data-sets extracted from the UCI website, suitably treated (duplicate or inconsistent samples removed, shuffling). We have also tried 2 MLP nets with PyTorch on a regression task with \(w_{i}=s_{i}\), and the results are interesting (but the random initialization of the net weights should be considered in this case): the Wine data-set (3961 samples, 11 features) with an MLP of 100, 50, 20, 1 nodes per layer, and the same Wine data-set with a different MLP of 100, 80, 40, 1 nodes per layer. The theory underneath deep neural architectures can be found in [3]. In this scenario it would be useful to determine the difference between each pair of values, to understand which strategy, in most cases, performs best on the test set. 
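To make the method concrete, here is a minimal NumPy sketch of the two ingredients described above: the sample density \(s_{i}\) of Eq. (4) with the weighting schemes of Eqs. (20)-(27), and one clipped SMO-like pair update under the weighted box constraints of Eq. (15). The function names are ours and this is only an illustrative sketch of the described procedure, not the released C implementation:

```python
import numpy as np

def rbf_density(X, gamma_s):
    """s_i of Eq. (4): summed RBF similarities; the O(l^2) step noted in the text."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma_s * d2).sum(axis=1)

# Weighting schemes of Eqs. (20)-(27); scheme 7 would use sy_i from Eq. (5).
WEIGHTS = {
    1: lambda s: np.sqrt(s),
    2: lambda s: s,
    3: lambda s: s ** 2,
    4: lambda s: 1.0 / np.sqrt(s),
    5: lambda s: 1.0 / s,
    6: lambda s: 1.0 / s ** 2,
    8: lambda s: 1.0 + np.random.rand(len(s)),
}

def smo_pair_step(alpha, i, j, y, K, C, w):
    """One SMO-like update of (alpha_i, alpha_j), Eqs. (17)-(19), clipped to the
    weighted box constraints 0 <= alpha_p <= C * w_p of Eq. (15)."""
    g = K @ (alpha * y)                          # g_p = sum_q alpha_q y_q K(x_p, x_q)
    denom = K[i, i] - 2.0 * K[i, j] + K[j, j]
    if denom < 1e-12:
        return alpha
    nu = (y[j] - y[i] - g[j] + g[i]) / denom     # optimal direction, Eq. (19)
    lo, hi = -np.inf, np.inf
    # Require 0 <= alpha_i + nu*y_i <= C*w_i and 0 <= alpha_j - nu*y_j <= C*w_j.
    for a, yy, sgn, cap in ((alpha[i], y[i], +1.0, C * w[i]),
                            (alpha[j], y[j], -1.0, C * w[j])):
        b1 = (0.0 - a) / (sgn * yy)
        b2 = (cap - a) / (sgn * yy)
        lo, hi = max(lo, min(b1, b2)), min(hi, max(b1, b2))
    nu = float(np.clip(nu, lo, hi))
    alpha[i] += nu * y[i]                        # Eq. (17)
    alpha[j] -= nu * y[j]                        # Eq. (18)
    return alpha

# Example: weights for scheme 5 (1/s_i), which favors isolated patterns.
X = np.random.randn(50, 4)
s = rbf_density(X, gamma_s=0.5)
w = WEIGHTS[5](s)
```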
An idea is to learn the weights: run \(n\) nets in parallel with different random weight vectors, select at each parallel step the best vector in terms of MAE, perturb it, and re-run the procedure for a given number of iterations. GWL has been written in C. The regression code for the wine data-set has been written in Python 3.10 and torch. Hardware employed: a notebook with an 8-core Intel(R) i5-10210U CPU @ 1.60GHz and 16GB of RAM, and a PC with a 16-core 11th Gen Intel(R) Core(TM) i7-11700 @ 2.50GHz and 32 GB of RAM. Baseline SVC algorithms have been measured on the notebook, while GWL times have been determined with the PC. ## V Conclusion Results confirm the theory: they are never worse than the particular case. In particular, it appears that the preferred generalization scheme is the one that gives more importance to isolated patterns, on 3 data-sets out of 4 for the SVC case. Nevertheless, the unique geometry of each data-set should be taken into account, so each generalization scheme should be tested. The next step would be to leverage this method in order to learn the weights. Perhaps this generalization could be employed in other contexts such as SVR, multi-class classification, and other MLPs.
2303.14991
Empowering Dual-Encoder with Query Generator for Cross-Lingual Dense Retrieval
In monolingual dense retrieval, lots of works focus on how to distill knowledge from a cross-encoder re-ranker to a dual-encoder retriever, and these methods achieve better performance due to the effectiveness of the cross-encoder re-ranker. However, we find that the performance of the cross-encoder re-ranker is heavily influenced by the number of training samples and the quality of negative samples, which are hard to obtain in the cross-lingual setting. In this paper, we propose to use a query generator as the teacher in the cross-lingual setting, which is less dependent on sufficient training samples and high-quality negative samples. In addition to traditional knowledge distillation, we further propose a novel enhancement method, which uses the query generator to help the dual-encoder align queries from different languages, but does not need any additional parallel sentences. The experimental results show that our method outperforms the state-of-the-art methods on two benchmark datasets.
Houxing Ren, Linjun Shou, Ning Wu, Ming Gong, Daxin Jiang
2023-03-27T08:34:42Z
http://arxiv.org/abs/2303.14991v1
# Empowering Dual-Encoder with Query Generator for Cross-Lingual Dense Retrieval ###### Abstract In monolingual dense retrieval, lots of works focus on how to distill knowledge from a cross-encoder re-ranker to a dual-encoder retriever, and these methods achieve better performance due to the effectiveness of the cross-encoder re-ranker. However, we find that the performance of the cross-encoder re-ranker is heavily influenced by the number of training samples and the quality of negative samples, which are hard to obtain in the cross-lingual setting. In this paper, we propose to use a query generator as the teacher in the cross-lingual setting, which is less dependent on sufficient training samples and high-quality negative samples. In addition to traditional knowledge distillation, we further propose a novel enhancement method, which uses the query generator to help the dual-encoder align queries from different languages, but does not need any additional parallel sentences. The experimental results show that our method outperforms the state-of-the-art methods on two benchmark datasets. ## 1 Introduction Information Retrieval (IR) aims to retrieve pieces of evidence for a given query. Traditional methods mainly use sparse retrieval systems such as BM25 Robertson and Zaragoza (2009), which depend on keyword matching between queries and passages. With the development of large-scale pre-trained language models (PLMs) Vaswani et al. (2017); Devlin et al. (2019) such as BERT, dense retrieval methods Lee et al. (2019); Karpukhin et al. (2020) show quite effective performance. These methods usually employ a dual-encoder architecture to encode both queries and passages into dense embeddings and then perform approximate nearest neighbor searching Johnson et al. (2021). Recently, leveraging a cross-encoder re-ranker as the teacher model to distill knowledge to a dual-encoder has proven quite effective at boosting the dual-encoder performance. Specifically, these methods first train a warm-up dual-encoder and a warm-up cross-encoder. Then, they perform knowledge distillation from the cross-encoder to the dual-encoder by KL-Divergence or specially designed methods. For example, RocketQAv2 Qu et al. (2021) proposed dynamic distillation, and AR2 Zhang et al. (2021) proposed adversarial training. However, there are two major problems when scaling the method to the cross-lingual dense retrieval setting. Firstly, the cross-encoder typically requires large amounts of training data and high-quality negative samples due to the gap between pre-training (token-level task) and fine-tuning (sentence-level task), which are usually not satisfied in the cross-lingual setting Asai et al. (2021). Due to expensive labeling and lack of annotators in global languages, especially low-resource languages, the training data in the cross-lingual setting are quite limited. Then, with the limited training data, the dual-encoder is not good enough to provide high-quality negative samples to facilitate the cross-encoder. Figure 1: The performance of cross-encoder and query generator when varying the number of training samples and retrievers. We use BM25 and DPR as retrievers, respectively. For the cross-encoder (BERT-Large), we use retrieved top-100 passages which do not contain the answer as negatives and contrastive loss for training. For the query generator (T5-Base), we firstly train it with the query generation task and then fine-tune the model with the same setting as BERT-Large. The reported performance is the top-5 score of re-ranked top 500 passages on the NQ test set. 
Secondly, the cross-lingual gaps between different languages have a detrimental effect on the performance of cross-lingual models. Although some cross-lingual pre-training methods such as InfoXLM Chi et al. (2021) and LaBSE Feng et al. (2022) have put lots of effort into this aspect by leveraging parallel corpus for better alignment between different languages, these parallel data are usually expensive to obtain, and the language alignment could be damaged in the fine-tuning stage if no constraint is applied. To solve these problems, we propose to employ a query generator in the cross-lingual setting, which uses the likelihood of a query against a passage to measure the relevance. On the one hand, the query generator can utilize pre-training knowledge with small training data in the fine-tuning stage, because both its pre-training and fine-tuning share a consistent generative objective. On the other hand, the query generation task is defined over all tokens from the query rather than just the _[CLS] token_ in the cross-encoder, which has been demonstrated to be a more efficient training paradigm Clark et al. (2020). As shown in Figure 1, with the number of training samples dropping, the performance of BERT-Large drops more sharply than that of T5-Base. Besides, the query generator is less sensitive to high-quality negative samples. As we can see, using BM25 as the retriever to mine negative samples for re-ranker training, the gap between the cross-encoder and the query generator is smaller than the gap using DPR as the retriever. Finally, the query generator can provide more training data by generation, which is precious in the cross-lingual setting. To sum up, the query generator is more effective than the cross-encoder in the cross-lingual setting. Based on these findings, we propose a novel method, namely QuiCK, which stands for Query generator improved dual-encoder by Cross-lingual Knowledge distillation. Firstly, at the passage level, we employ a query generator as the teacher to distill the relevance score between a query and a passage into the dual-encoder. Secondly, at the language level, we use the query generator to generate synonymous queries in other languages for each training sample and align their retrieved results by KL-Divergence. Considering the noise in the generated queries, we further propose a scheduled sampling method to achieve better performance. The contributions of this paper are as follows: * We propose a cross-lingual query generator as a teacher model to empower the cross-lingual dense retrieval model and a novel iterative training approach is leveraged for the joint optimizations of these two models. * On top of the cross-lingual query generator, a novel cost-effective alignment method is further designed to boost the dense retrieval performance in low-resource languages, which does not require any additional expensive parallel corpus. * Extensive experiments on two public cross-lingual retrieval datasets demonstrate the effectiveness of the proposed method. ## 2 Related Work **Retrieval.** Retrieval aims to search for relevant passages from a large corpus for a given query. Traditionally, researchers use bag-of-words (BOW) based methods such as TF-IDF and BM25 Robertson and Zaragoza (2009). These methods use a sparse vector to represent the text, so we call them sparse retrievers. Recently, some studies use neural networks to improve the sparse retriever, such as DocTQuery Nogueira et al. (2019) and DeepCT Dai and Callan (2020). 
In contrast to sparse retrievers, dense retrievers usually employ a dual-encoder to encode both queries and passages into dense vectors whose dimensions are much smaller than those of sparse vectors. These methods mainly focus on two aspects: pre-training Lee et al. (2019); Guu et al. (2020); Lu et al. (2021); Gao and Callan (2021, 2021); Zhou et al. (2022) and fine-tuning methods, including negative sampling Karpukhin et al. (2020); Luan et al. (2021); Xiong et al. (2021); Zhan et al. (2021) and multi-view representations Khattab and Zaharia (2020); Humeau et al. (2020); Tang et al. (2021); Zhang et al. (2022). Another fine-tuning method is jointly training the dual-encoder with a cross-encoder. For example, RDR Yang and Seo (2020) and FID-KD Izacard and Grave (2021) distill knowledge from a reader to the dual-encoder; RocketQA Qu et al. (2021), PAIR Ren et al. (2021), RocketQAv2 Ren et al. (2021), and AR2 Zhang et al. (2021) jointly train the dual-encoder with a cross-encoder to achieve better performance. Recently, with the development of cross-lingual pre-trained models Conneau et al. (2020), researchers pay more attention to cross-lingual dense retrieval Asai et al. (2021); Longpre et al. (2020). For example, CORA Asai et al. (2021) leverages a generator to help mine retrieval training data, Sentri Sorokin et al. (2022) proposes a single encoder and self-training, and DR.DECR Li et al. (2021) uses parallel queries and sentences to perform cross-lingual knowledge distillation. **Re-ranking.** Re-ranking aims to reorder the retrieved passages according to the relevance scores. Due to the small number of retrieved passages, re-ranking usually employs high-latency methods to obtain better performance, _e.g.,_ a cross-encoder. Traditionally, the re-ranking task is heavily driven by manual feature engineering Guo et al. (2016); Hui et al. (2018). With the development of pre-trained language models (_e.g.,_ BERT), researchers use the pre-trained models to perform re-ranking tasks Nogueira and Cho (2019); Li et al. (2020). In addition to cross-encoders, researchers also try to apply generators to re-ranking. For example, monoT5 Nogueira et al. (2020) proposes a prompt-based method to re-rank passages with T5 Raffel et al. (2020), and other studies dos Santos et al. (2020); Zhuang et al. (2021); Lesota et al. (2021) propose to use the log-likelihood of the query against the passage as the relevance to perform the re-ranking task. Recently, with the size of pre-trained models scaling up, generative models show competitive zero-shot and few-shot ability. Researchers have started to apply large generative models to zero-shot and few-shot re-ranking. For example, SGPT Muennighoff (2022) and UPR Sachan et al. (2022) propose to use generative models to perform zero-shot re-ranking. P\({}^{3}\) Ranker Hu et al. (2022) demonstrates that generative models achieve better performance in the few-shot setting. Note that all of these works are concurrent with our work. Instead of using a query generator as a re-ranker only, we propose to leverage the query generator as a teacher model to enhance the performance of the cross-lingual dual-encoder. In addition to the traditional knowledge distillation, we further propose a novel cost-effective alignment method to boost the dense retrieval performance in low-resource languages. ## 3 Preliminaries In this section, we give a brief review of dense retrieval and re-ranking. The overviews of all methods are presented in Figure 2. 
**Dual-Encoder.** Given a query \(q\) and a large corpus \(C\), the retrieval task aims to find the relevant passages for the query from the corpus. Usually, a dense retrieval model employs two dense encoders (_e.g.,_ BERT) \(E_{Q}(\cdot)\) and \(E_{P}(\cdot)\). They encode queries and passages into dense embeddings, respectively. Then, the model uses a similarity function, often dot-product, to perform retrieval: \[f_{de}(q,p)=E_{Q}(q)\cdot E_{P}(p), \tag{1}\] where \(q\) and \(p\) denote the query and the passage, respectively. During the inference stage, we apply the passage encoder \(E_{P}(\cdot)\) to all the passages and index them using FAISS Johnson et al. (2021), which is an extremely efficient, open-source library for similarity search. Then, given a query \(q\), we derive its embedding by \(\mathbf{v}_{q}=E_{Q}(q)\) and retrieve the top \(k\) passages with embeddings closest to \(\mathbf{v}_{q}\). **Cross-Encoder Re-ranker.** Given a query \(q\) and top \(k\) retrieved passages \(C\), the re-ranking task aims to reorder the passages according to the relevance scores. Due to the limited size of the corpus, the re-ranking task usually employs a cross-encoder to perform interaction between words across queries and passages at the same time. These methods also introduce a special token _[SEP]_ to separate q and p, and then the hidden state of the _[CLS] token_ from the cross-encoder is fed into a fully-connected layer to output the relevant score: \[f_{ce}(q,p)=\mathbf{W}\times E_{C}(q||p)+b, \tag{2}\] where "\(||\)" denotes concatenation with the _[SEP] token_. During the inference stage, we apply the cross-encoder \(E_{C}(\cdot)\) to all <q, p> pairs and reorder the passages by the scores. Figure 2: Overview of different model architectures designed for retrieval or re-ranking. **Query Generator Re-ranker.** Similar to the cross-encoder re-ranker, the query generator re-ranker also aims to reorder the passages according to the relevance scores. For the query generator re-ranker, we use the log-likelihood of the query against the passage to measure the relevance: \[f_{qg}(q,p)=\log P(q|p)=\sum_{t}\log P(q_{t}|q_{<t},p), \tag{3}\] where \(q_{<t}\) denotes the previous tokens before \(q_{t}\). The rest of the settings are the same as for the cross-encoder re-ranker and are omitted here. **Training.** The goal of retrieval and re-ranking is to enlarge the relevant score between the query and the relevant passages (_i.e.,_ positive passages) and lessen the relevant score between the query and the irrelevant passages (_i.e.,_ negative passages). Let \(\{q_{i},p_{i}^{+},p_{i,0}^{-},p_{i,1}^{-},\ldots,p_{i,n}^{-}\}\) be the \(i\)-th training sample. It consists of a query, a positive passage, and \(n\) negative passages. Then we can employ the contrastive loss function, called InfoNCE (van den Oord et al., 2018), to optimize the model: \[\mathcal{L}_{R}=-\log\frac{e^{f(q_{i},p_{i}^{+})}}{e^{f(q_{i},p_{i}^{+})}+ \sum_{j=0}^{n}e^{f(q_{i},p_{i,j}^{-})}}, \tag{4}\] where \(f\) denotes the similarity function, _e.g.,_\(f_{de}\) in Eq. (1), \(f_{ce}\) in Eq. (2), or \(f_{qg}\) in Eq. (3). **Cross-lingual Retrieval.** In the cross-lingual information retrieval task, passages and queries are in different languages. In this paper, we consider the case where the passages are in English and the queries are in non-English languages. A sample consists of three components: a query in a non-English language, a positive passage in English, and a span answer in English. 
Given a non-English query, the task aims to retrieve relevant passages in English to answer the query. If a retrieved passage contains the given span answer, it is regarded as a positive passage; otherwise, it is a negative passage. ## 4 Methodology In this section, we present the proposed QuiCK. The overview of the proposed method is presented in Figure 3. Figure 3: Overview of the proposed QuiCK. We start with the training of the query generator, then present how to perform distillation and alignment training for the dual-encoder, and we finally discuss the entire training process. ### Training of Query Generator In our method, we employ mT5 (Xue et al., 2021) as the query generator. The query generator has two roles: teacher and generator. As a teacher, it aims to better re-rank the candidate passages with relevance and distill the knowledge to the dual-encoder. As a generator, it aims to generate synonymous queries in different languages. **Input Format.** As we employ mT5, we design a prompt template for input sentences. Considering that most passages are long, we propose introducing the span answer as input to encourage the generator to focus on the same segment and generate parallel queries in different languages. As a result, we use _"generate [language] query: answer: [span answer] content: [content]"_ as the template. For a specific sample, we fill the three placeholders with the language of the target query, the span answer, and the passage content, respectively. **Training.** Considering the two roles of the query generator, the entire training process for the query generator contains two stages: query generation training and re-ranking training. Firstly, we train the generator with the generation task, which takes the positive passage as input and aims to generate the query. The task can be formulated as maximizing the conditional probability: \[\hat{q} =\operatorname*{arg\,max}_{q}P(q|p,a) \tag{5}\] \[=\operatorname*{arg\,max}_{q}\prod_{t=0}P(q_{t}|p,a,q_{<t}),\] where \(q_{t}\) is the \(t\)-th token of the generated query, \(a\) denotes the span answer, and \(q_{<t}\) represents the previous decoded tokens. Then we can employ cross-entropy loss to optimize the model: \[\mathcal{L}_{QG}=\frac{1}{T}\sum_{t=0}^{T}-\log P(q_{t}|p,a,q_{<t}), \tag{6}\] where \(T\) denotes the number of the query tokens. Secondly, we train the generator with the re-ranking task, which takes a query and a passage as input and outputs the relevant score of the two sentences. The detailed training process is introduced in Section 3 and is omitted here. ### Distillation for Dual-Encoder We then present how to distill knowledge from the query generator to the dual-encoder. Similar to previous methods Ren et al. (2021); Zhang et al. (2021), we employ KL-Divergence to perform distillation. Formally, given a query \(q\) and a candidate passage set \(C_{q}=\{p_{i}\}_{1\leq i\leq n}\) which is retrieved by the dual-encoder, we compute relevant scores with the query generator and the dual-encoder, respectively. After that, we normalize the scores by softmax and compute the KL-Divergence as the loss: \[s_{qg}(q,p) =\frac{\exp(f_{qg}(q,p))}{\sum_{p^{\prime}\in C_{q}}\exp(f_{qg}(q,p^{\prime}))}, \tag{7}\] \[s_{de}(q,p) =\frac{\exp(f_{de}(q,p))}{\sum_{p^{\prime}\in C_{q}}\exp(f_{de}(q,p^{\prime}))},\] \[\mathcal{L}_{D} =\sum_{p\in C_{q}}s_{qg}(q,p)\log\frac{s_{qg}(q,p)}{s_{de}(q,p)},\] where \(f_{qg}\) and \(f_{de}\) denote the relevant scores given by the query generator and the dual-encoder, which are presented in Section 3. 
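To make Eq. (7) concrete, here is a minimal PyTorch sketch of the distillation step. The tensor shapes, the toy random inputs, and the function name are illustrative assumptions of ours, not the authors' released code:

```python
import torch
import torch.nn.functional as F

def distillation_loss(f_qg, f_de):
    """KL distillation of Eq. (7): f_qg and f_de are relevance scores over the
    same candidate set C_q, shaped [batch, n_candidates]. The query-generator
    scores act as the (fixed) teacher, so they are detached from the graph."""
    s_qg = F.softmax(f_qg.detach(), dim=-1)      # teacher distribution
    log_s_de = F.log_softmax(f_de, dim=-1)       # student log-distribution
    # KL(s_qg || s_de) = sum_p s_qg * (log s_qg - log s_de)
    return F.kl_div(log_s_de, s_qg, reduction="batchmean")

# Toy usage with the scores of Section 3: f_de from dot products of
# query/passage embeddings (Eq. 1); f_qg stands in for the summed token
# log-likelihoods of the query against each passage (Eq. 3).
q_emb = torch.randn(2, 768)                      # E_Q(q), batch of 2 queries
p_emb = torch.randn(2, 100, 768)                 # E_P(p) for 100 candidates each
f_de = torch.einsum("bd,bnd->bn", q_emb, p_emb)  # dot-product scores
f_qg = torch.randn(2, 100)                       # placeholder teacher scores
loss = distillation_loss(f_qg, f_de)
```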
### Alignment for Dual-Encoder Alignment is a common topic in the cross-lingual setting, which can help the model better handle sentences in different languages. Previous works Zheng et al. (2021); Yang et al. (2022) usually use parallel data or translated data to perform alignment training among different languages. Here, we propose a novel method to align queries in different languages for cross-lingual retrieval, which does not need any parallel data. The core idea of our method is to leverage the query generator to generate synonymous queries in other languages to form parallel cases. **Generation.** For each case in the training set, we generate a query in each target language (_i.e.,_ if there are seven target languages, we generate seven queries for the case). Then we use the confidence of the generator to filter the generated queries. Specifically, we set filter thresholds to accept 50% of the generated queries. **Scheduled Sampling.** In this work, we select a generated query to form a pair-wise case with the source query. Considering the semantics of generated queries, we carefully design a scheduled sampling method to replace the random sampling. For a generated query \(q^{\prime}\), we first use the dual-encoder to retrieve passages for the source query \(q\) and generated query \(q^{\prime}\), respectively, namely \(C_{q}\) and \(C^{\prime}_{q}\). Then we calculate a coefficient for the generated query \(q^{\prime}\) as \[c^{\prime} =\frac{|C_{q}\cap C^{\prime}_{q}|}{\max(|C_{q}|,|C^{\prime}_{q}|)}, \tag{8}\] \[c^{\prime} =\begin{cases}c^{\prime}&\text{ if }c^{\prime}\geq T,\\ 0&\text{ if }c^{\prime}<T,\end{cases}\] where threshold \(T\) is a hyper-parameter and \(|\cdot|\) denotes the size of the set. The basic idea is that the larger the overlap of the retrieved passage sets, the more likely the queries are to be synonymous. When sampling the generated query, we first calculate coefficients \(\{c^{\prime}_{1},\dots,c^{\prime}_{m}\}\) for all generated queries \(\{q^{\prime}_{1},\dots,q^{\prime}_{m}\}\), then normalize them as the final sampling probability \(p\): \[p_{i}=\frac{c^{\prime}_{i}}{\sum_{j=1}^{m}c^{\prime}_{j}}, \tag{9}\] where \(m\) denotes the number of generated queries. During the training stage, for each training case, we sample a generated query to form the pair-case with the source query \(q\) based on the probabilities. **Alignment Training.** After sampling a generated query, we present how to align the source query and the generated query. Different from previous works Zheng et al. (2021), we employ asymmetric KL-Divergence rather than symmetric KL-Divergence due to the different quality of the source query and the generated query: \[\mathcal{L}_{A}=\sum_{p\in C_{q}\cup C^{\prime}_{q}}c^{\prime}\,s_{de}(q,p)\log\frac{s_{de}(q,p)}{s_{de}(q^{\prime},p)}, \tag{10}\] where \(q\) denotes the query, \(C_{q}\) denotes the set of retrieved passages, superscript "\(\prime\)" denotes the generated case, and \(c^{\prime}\) is the coefficient of the generated query. Note that \(s_{de}\) in Eq. (10) are normalized across \(C_{q}\cup C^{\prime}_{q}\) instead of \(C_{q}\) or \(C^{\prime}_{q}\) in Eq. (7). 
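A minimal sketch of the coefficient and sampling logic of Eqs. (8)-(9), assuming retrieved passages are represented by passage-id lists; the function names and toy ids below are ours, chosen for illustration:

```python
import torch

def overlap_coefficient(ids_q, ids_qprime, threshold=0.3):
    """c' of Eq. (8): overlap of the two retrieved passage-id sets, normalized
    by the larger set, and zeroed below the threshold T (0.3 in the paper)."""
    inter = len(set(ids_q) & set(ids_qprime))
    c = inter / max(len(ids_q), len(ids_qprime))
    return c if c >= threshold else 0.0

def sample_generated_query(coeffs):
    """Scheduled sampling of Eq. (9): draw one generated query with probability
    proportional to its coefficient; returns None if all coefficients are zero."""
    c = torch.tensor(coeffs, dtype=torch.float)
    if c.sum() == 0:
        return None
    return torch.multinomial(c / c.sum(), num_samples=1).item()

# Toy usage: three generated queries, each with a retrieved top-5 id list,
# compared against the source query's retrieved ids.
src_ids = [1, 2, 3, 4, 5]
gen_ids = [[1, 2, 3, 9, 10], [6, 7, 8, 9, 10], [1, 2, 3, 4, 8]]
coeffs = [overlap_coefficient(src_ids, g) for g in gen_ids]   # [0.6, 0.0, 0.8]
choice = sample_generated_query(coeffs)   # index of the sampled generated query
```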
### Training of Dual-Encoder As shown in Figure 3, we combine the distillation loss and the alignment loss as the final loss: \[\mathcal{L}=\mathcal{L}_{D}+\mathcal{L}^{\prime}_{D}+\alpha\times\mathcal{L} _{A}, \tag{11}\] where \(\mathcal{L}_{D}\) denotes the distillation loss for the source queries, \(\mathcal{L}^{\prime}_{D}\) denotes the distillation loss for the generated queries, \(\mathcal{L}_{A}\) denotes the alignment loss, and \(\alpha\) is a hyper-parameter to balance the loss. Based on the training method of the dual-encoder and the query generator, we conduct an iterative procedure to improve the performance. We present the entire training procedure in Algorithm 1. ``` Input: Dual-Encoder \(R\), Query Generator \(G\), Corpus \(C\), and Training Set \(D\). 1 Initialize \(R\) and \(G\) with pre-trained model; 2 Train the warm-up \(R\) with Eq. (4) on \(D\); 3 Train the warm-up \(G\) with Eq. (6) on \(D\); 4 Generate queries for each sample in \(D\); 5 Build ANN index for \(R\); 6 Retrieve relevant passages on corpus \(C\); 7 Fine-tune the \(G\) with Eq. (4) on \(D\) and retrieved negative passages. 8 while models have not converged do 9 Fine-tune the \(R\) with Eq. (11) on \(D\) and retrieved passages; 10 Refresh ANN index for \(R\); 11 Retrieve relevant passages on corpus \(C\); 12 Fine-tune the \(G\) with Eq. (4) on \(D\) and retrieved negative passages. 13 end while ``` **Algorithm 1** The training algorithm. ## 5 Experiments In this section, we construct experiments to demonstrate the effectiveness of our method. ### Experimental Setup **Datasets.** We evaluate the proposed method on two public cross-lingual retrieval datasets: XOR-Retrieve Asai et al. (2021) and MKQA Longpre et al. (2020). The detailed descriptions of the two datasets are presented in Appendix A. **Evaluation Metrics.** Following previous works Asai et al. (2021); Sorokin et al. (2022), we use R@2kt and R@5kt as evaluation metrics for the XOR-Retrieve dataset and R@2kt as the evaluation metric for the MKQA dataset. The metrics measure the proportion of queries for which the top-k retrieved tokens contain the span answer, which is fairer across different passage sizes. **Implementation Details.** For the warm-up training stage, we follow XOR-Retrieve to first train the model on NQ Kwiatkowski et al. (2019) data and then fine-tune the model with XOR-Retrieve data. For the iterative training stage, we generate seven queries for each case (because the XOR-Retrieve data contains seven languages). We set the number of retrieved passages as 100, the number of iterations as \(5\), threshold \(T\) in Eq. (8) as 0.3, and coefficient \(\alpha\) in Eq. (11) as 0.5. The detailed hyper-parameters are shown in Appendix C. And we conduct more experiments to analyze the parameter sensitivity in Appendix D. All the experiments run on 8 NVIDIA Tesla A100 GPUs. The implementation code is based on HuggingFace Transformers Wolf et al. (2020). For the dual-encoder, we use XLM-R Base Conneau et al. (2020) as the pre-trained model and use the average hidden states of all tokens to represent the sentence. For the query generator, we leverage mT5 Base Xue et al. (2021) as the pre-trained model, which has almost the same number of parameters as a large cross-encoder. ### Results **Baselines.** We compare the proposed QuiCK with previous state-of-the-art methods, including mDPR, DPR+MT Asai et al. (2021), Sentri Sorokin et al. (2022), DR.DECR Li et al. (2021). 
Note that Sentri introduces a shared encoder of large size and DR.DECR introduces parallel queries and a parallel corpus, while our method only utilizes an encoder of base size with XOR-Retrieve and NQ training data. For a fairer comparison, we also report their ablation results. Here, "Bi-Encoder" denotes two unshared encoders of base size. "KD\({}_{XOR}\)" denotes a distillation method which introduces synonymous English queries. "KD\({}_{PC}\)" denotes a distillation method which introduces a parallel corpus. In addition, we also employ LaBSE base (Feng et al., 2022) to evaluate the proposed QuiCK with parallel corpus, which is a state-of-the-art model pre-trained with parallel corpus. **XOR-Retrieve.** Table 1 shows the results on the XOR-Retrieve dev set. The proposed QuiCK outperforms mDPR, DPR+MT, and Sentri with a clear edge in almost all languages. Although QuiCK does not introduce any parallel corpus, it also outperforms DR.DECR w/o KD\({}_{XOR}\). Finally, QuiCK based on LaBSE outperforms all baselines, especially DR.DECR w/o KD\({}_{XOR}\), and even outperforms DR.DECR which utilizes both parallel queries and parallel corpus. Note that knowledge distillation with parallel corpus in DR.DECR is designed for cross-lingual dense retrieval, but LaBSE is a general pre-trained model for all cross-lingual tasks. These results show the effectiveness of the proposed QuiCK. Our method combines two methods in dense retrieval and cross-lingual tasks, namely distillation and alignment. We further analyze the contribution of each component in Section 5.3. In addition, we show the results on the XOR-Retrieve test set in Table 2, which is copied from the leaderboard1 on June 15, 2022. As we can see, our method achieves the top position on the leaderboard of XOR-Retrieve. Footnote 1: https://nlp.cs.washington.edu/xorqa **MKQA.** Furthermore, we evaluate the zero-shot performance of our method on the MKQA test set. Following previous works (Sorokin et al., 2022), we directly evaluate the dual-encoder trained on XOR-Retrieve data and report the performance of unseen languages on MKQA. As shown in Table 3, our method outperforms all baselines and even performs better than Sentri. Note that Sentri uses a shared encoder of large size. The comparison between Sentri and Sentri w/ Bi-Encoder shows that the large encoder has better transfer ability. Finally, the proposed QuiCK w/ LaBSE outperforms all baselines with a clear edge. It shows the better transfer ability of our method. ### Methods Analysis **Ablation Study.** Here, we check how each component contributes to the final performance. We construct the ablation experiments on XOR-Retrieve data. 
We prepare four variants of our method: \begin{table} \begin{tabular}{l|c c c c c c c|c|c c c c c c c|c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{8}{c|}{R@2kt} & \multicolumn{8}{c}{R@5kt} \\ \cline{2-13} & Ar & Bn & Fi & Ja & Ko & Ru & Te & Avg & Ar & Bn & Fi & Ja & Ko & Ru & Te & Avg \\ \hline mDPR\({}^{*}\) & 38.8 & 48.4 & 52.5 & 26.6 & 44.2 & 33.3 & 39.9 & 40.5 & 48.9 & 60.2 & 59.2 & 34.9 & 49.8 & 43.0 & 55.5 & 50.2 \\ DPR + MT\({}^{*}\) & 43.4 & 53.9 & 55.1 & 40.2 & 50.5 & 30.8 & 20.2 & 42.0 & 52.4 & 62.8 & 61.8 & 48.1 & 58.6 & 37.8 & 32.4 & 50.6 \\ \hline Sentri\({}^{*}\) & 47.6 & 48.1 & 53.1 & 46.6 & 49.6 & 44.3 & 67.9 & 51.0 & 56.8 & 62.2 & 65.5 & 53.2 & 55.5 & 52.3 & 80.3 & 60.8 \\ w/ Bi-Encoder\({}^{*}\) & 47.8 & 39.1 & 48.9 & 51.2 & 40.2 & 41.2 & 49.4 & 45.4 & 55.1 & 43.3 & 59.5 & 59.4 & 51.2 & 52.0 & 56.9 & 53.9 \\ \hline DR.DECR\({}^{*}\) & - & - & - & - & - & - & - & 66.0 & 70.2 & **85.9** & 69.4 & 65.1 & 68.8 & 68.8 & 83.2 & 73.1 \\ w/o KD\({}_{XOR}^{*}\) & - & - & - & - & - & - & - & 60.6 & - & - & - & - & - & - & - & 68.6 \\ w/o KD\({}_{PC}^{*}\) & - & - & - & - & - & - & - & 56.6 & - & - & - & - & - & - & - & 63.6 \\ \hline QuiCK & 52.8 & 70.1 & 62.2 & 54.8 & 62.8 & 57.8 & 70.6 & 61.3 & 63.8 & 78.0 & 65.3 & 63.5 & 69.8 & 67.1 & 74.8 & 68.9 \\ QuiCK w/ LaBSE & **67.3** & **78.9** & **65.9** & **59.8** & **66.3** & **63.7** & **80.7** & **68.9** & **72.2** & 83.2 & **69.7** & **68.0** & **70.9** & **71.7** & **84.9** & **74.4** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison results on XOR-Retrieve dev set. The best results are in bold. “\(*\)” denotes the results are copied from the source paper. Results unavailable are left blank. \begin{table} \begin{tabular}{l|c} \hline \hline Methods & R@2kt \\ \hline CORA\({}^{*}\) & 41.1 \\ BM25 + MT\({}^{*}\) & 42.0 \\ Sentri\({}^{*}\) & 53.3 \\ w/ Bi-Encoder\({}^{*}\) & 45.3 \\ \hline QuiCK & 53.4 \\ QuiCK w/ LaBSE & **60.3** \\ \hline \hline \end{tabular} \end{table} Table 3: Average performance of 20 unseen languages in MKQA test set. “\(*\)” denotes the results are copied from the Sentri paper. (1) w/o Sampling denotes removing the scheduled sampling but keeping the threshold \(T\) for \(c^{\prime}\), _i.e.,_ if \(c^{\prime}\geq T\), then \(c^{\prime}=1\), otherwise \(c^{\prime}=0\); (2) w/o Alignment denotes removing \(\mathcal{L}_{A}\) in Eq. (11); (3) w/o Generation denotes removing \(\mathcal{L}^{\prime}_{D}\) and \(\mathcal{L}_{A}\) in Eq. (11); (4) w/o All denotes removing the enhanced training, _i.e.,_ using the warm-up dual-encoder. Table 4 presents all comparison results of the four variants. As we can see, the performance rank of R@5kt can be given as: w/o All < w/o Generation < w/o Alignment < w/o Sampling < QuiCK. These results indicate that all components are essential to improve performance. We find that the margin between w/o Alignment and w/o Sampling is small, which indicates that the generated queries are noisy and demonstrates the effectiveness of our scheduled sampling strategy. **Effect of Alignment.** As we mentioned in Section 1, the alignment established in the pre-training stage may be damaged without any constraint in the fine-tuning stage. Here, we construct experiments on both XLM-R and LaBSE to analyze the effectiveness of the proposed alignment training. As shown in Table 5, the proposed alignment training is effective based on the two models. It indicates that the alignment constraint in the fine-tuning stage is effective even for models pre-trained with a parallel corpus. 
We also find that the gains of alignment training based on XLM-R are larger than those based on LaBSE, which shows that the alignment constraint is more effective for models that are not pre-trained with a parallel corpus. **Cross-Encoder versus Query Generator.** Here, we analyze the re-ranking ability of the cross-encoder and the query generator. We use the warm-up dual-encoder to retrieve passages, vary the number of candidate passages, and then evaluate the re-ranked result. As shown in Figure 4, when we use the top-100 candidate passages, the performance of the cross-encoder and the generator is almost the same. But as the number of candidate passages increases, especially when it surpasses 500, the gap between the performance of the cross-encoder and the query generator gradually becomes larger. This shows the low generalization performance of the cross-encoder when there are not enough training samples. **Visualization of the Training Procedure.** We visualize the performance changes of R@2kt during the training of both the dual-encoder and the query generator re-ranker, which re-ranks the retrieved top-100 passages. We also incorporate a cross-encoder (initialized with XLM-R Large) to perform distillation and re-ranking for comparison. As shown in Figure 5, the R@2kt of all models gradually increases as the iteration increases. As the training advances closer to convergence, the improvement gradually slows down. In the end, the performance of the dual-encoder is improved by approximately 17%, and the performance of the query generator is improved by approximately 20%. Finally, comparing the performance of the cross-encoder and the query generator, we can find that there are approximately 6% gaps for both teachers and students. It shows the effectiveness of our method. \begin{table} \begin{tabular}{l|c c} \hline \hline Methods & R@2kt & R@5kt \\ \hline QuiCK & **61.3** & **68.9** \\ w/o Sampling & 59.5 & 67.5 \\ w/o Alignment & 59.9 & 67.1 \\ w/o Generation & 58.8 & 65.9 \\ w/o All & 41.5 & 53.4 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation results on XOR-Retrieve dev set. Figure 4: Re-ranking performance of cross-encoder and query generator on XOR-Retrieve dev set with different numbers of candidate passages. Figure 5: The changes of R@2kt during the iterative training on XOR-Retrieve dev set. Here, “QG” denotes Query Generator and “CE” denotes Cross Encoder. \begin{table} \begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{XLM-R} & \multicolumn{2}{c}{LaBSE} \\ \cline{2-5} & R@2kt & R@5kt & R@2kt & R@5kt \\ \hline QuiCK & **61.3** & **68.9** & **68.9** & **74.4** \\ w/o Alignment & 59.9 & 67.1 & 67.9 & 73.8 \\ \hline \hline \end{tabular} \end{table} Table 5: Effect of alignment based on different pre-trained language models. ## 6 Conclusion In this paper, we showed that the cross-encoder performs poorly when there are not sufficient training samples, which are hard to obtain in the cross-lingual setting. Then we proposed a novel method that utilizes the query generator to improve the dual-encoder. We firstly proposed to use a query generator as the teacher. After that, we proposed a novel alignment method for cross-lingual retrieval which does not need any parallel corpus. Extensive experimental results show that the proposed method outperforms the baselines and significantly improves the state-of-the-art performance. Currently, our method depends on training data in all target languages. 
As future work, we will investigate how to perform the proposed method for zero-shot cross-lingual dense retrieval. ## 7 Limitations The limitations are summarized as follows. * The method depends on training data in all target languages. Intuitively, the method can be directly applied to zero-shot cross-lingual dense retrieval if we only take the passage as input for the query generator, but the query generator performs poorly in the zero-shot setting. As future work, novel pre-training tasks for cross-lingual generation can be considered. * The method does not investigate how to effectively train the query generator for the re-ranking task; it just directly applies the training method for the cross-encoder re-ranker. We believe the potential of query generators for re-ranking is strong, and designing a special re-ranking training method for query generators, such as token-level supervision, may be interesting for future work. * The method requires large GPU resources. Training the final model costs approximately 12 hours on 8 NVIDIA Tesla A100 GPUs. Although researchers who do not have enough GPU resources can use the "gradient accumulation" technique to reduce GPU memory consumption, they will need to spend more time. * This work does not consider the inconsistency between different countries (_e.g.,_ law and religion), which leads to inconsistent positive passages for synonymous queries in different languages (_e.g.,_ the legal age of marriage varies from country to country). This is because we find that most of the queries in XOR-Retrieve specify the target country, such as _"Mikä on yleisin kissalaji Suomessa?" (translation: What is the most common cat breed in Finland?)_.
2305.10879
The warm inflation story
Warm inflation has normalized two ideas in cosmology, that in the early universe the initial primordial density perturbations generally could be of classical rather than quantum origin and that during inflation, particle production from interactions amongst quantum fields, and their backreaction effects, can occur concurrent with inflationary expansion. When we first introduced these ideas, both were met with resistance, but today they are widely accepted as possibilities with many models and applications based on them, which is an indication of the widespread influence of warm inflation. Open quantum field theory, which has been utilized in studies of warm inflation, is by now a relevant subject in cosmology, in part due to this early work. In this review I first discuss the basic warm inflation dynamics. I then outline how to compute warm inflation dynamics from first principles quantum field theory (QFT) and in particular how a dissipative term arises. Warm inflation models can have an inflaton mass bigger than the Hubble scale and the inflaton field excursion can remain sub-Planckian, thus overcoming the most prohibitive problems of inflation model building. I discuss the early period of my work in developing warm inflation that helped me arrive at these important features of its dynamics. Inflationary cosmology today is immersed in hypothetical models, which by now are acting as a diversion from reaching any endgame in this field. I discuss better ways to approach model selection and give necessary requirements for a well constrained and predictive inflation model. A few warm inflation models are pointed out that could be developed to this extent. I discuss how at this stage more progress would be made in this subject by taking a broader view on the possible early universe solutions that include not just inflation but the diverse range of options.
Arjun Berera
2023-05-18T11:18:11Z
http://arxiv.org/abs/2305.10879v2
# The warm inflation story ###### Abstract Warm inflation has normalized two ideas in cosmology, that in the early universe the initial primordial density perturbations generally could be of classical rather than quantum origin and that during inflation, particle production from interactions amongst quantum fields, and their backreaction effects, can occur concurrent with inflationary expansion. When we first introduced these ideas, both were met with resistance, but today they are widely accepted as possibilities with many models and applications based on them, which is an indication of the widespread influence of warm inflation. Open quantum field theory, which has been utilized in studies of warm inflation, is by now a relevant subject in cosmology, in part due to this early work. In this review I first discuss the basic warm inflation dynamics. I then outline how to compute warm inflation dynamics from first principles quantum field theory (QFT) and in particular how a dissipative term arises. Warm inflation models can have an inflaton mass bigger than the Hubble scale and the inflaton field excursion can remain sub-Planckian, thus overcoming the most prohibitive problems of inflation model building. I discuss the early period of my work in developing warm inflation that helped me arrive at these important features of its dynamics. Inflationary cosmology today is immersed in hypothetical models, which by now are acting as a diversion from reaching any endgame in this field. I discuss better ways to approach model selection and give necessary requirements for a well constrained and predictive inflation model. A few warm inflation models are pointed out that could be developed to this extent. I discuss how at this stage more progress would be made in this subject by taking a broader view on the possible early universe solutions that include not just inflation but the diverse range of options. keywords: early universe cosmology, warm inflation, quantum field theory, model building ## I Introduction Warm inflation was introduced 28 years ago. At that time the standard inflation scenario, hereafter called cold inflation, was overwhelmingly accepted as the valid description of the early phases of the universe, with much anticipation of its confirmation from the planned cosmic microwave background (CMB) experiments within the coming decades. In that time warm inflation has gone from being considered by many in cosmology as a distraction to one of the most promising solutions. The idea stems from an elementary observation. The central theme of inflationary dynamics has been the evolution of a scalar field, which during inflation carries most of the energy of the universe and which interacts with other fields. On the one hand, in the standard inflation picture the tacit assumption made is that these interactions have no effect apart from modifying the scalar field effective potential through quantum corrections. On the other hand, in the warm inflation picture interactions not only do that but also lead to fluctuation and dissipation effects. In condensed matter systems, interactions certainly lead in general to all three of these effects (some examples in [1]). Moreover, from a statistical mechanics perspective, the scalar field would want to dissipate its energy to other fields, and the system as a whole would try to equally distribute the available energy. Ultimately a thorough dynamical calculation is needed to address the question. 
In cosmology, there is one important way this scalar field dynamics differs from condensed matter systems, which is that all processes for the former occur in an expanding universe. Expansion acts to constantly alter the state of the cosmological system. For example, due to expansion, radiation energy in the universe is continually being diluted. Similarly, the configuration of any cosmological scale process is being altered over time. Thus, if the quantum mechanical processes that lead to dissipation operate at a time scale much slower than the expansion rate of the universe, then these processes would be totally shut down due to expansion, even if in a nonexpanding system, like a condensed matter system, the same processes operate efficiently. This is the important question that must be understood. In the early years of inflation, there was a viewpoint that inflation had to be in a supercooled phase, since expansion would be too fast for any such microphysical processes to occur that lead to dissipation. However, our work in warm inflation changed this point of view. Today this possibility is accepted without much question, thus one indicator of the wide influence and success of warm inflation. The other major influence warm inflation has had is in normalizing the possibility of the initial primordial fluctuations being classical, not quantum. Again, due to our timescale analysis, we showed there is considerable dynamical range in the early universe for multiparticle processes, such as those leading to thermalization or other statistical states. When Li-Zhi Fang and I first were working on this idea [2], neither of us had a full scenario in mind. We
2306.09878
A new derivation of the Hubble constant from $γ$-ray attenuation using improved optical depths for the Fermi and CTA era
We present $\gamma$-ray optical-depth calculations from a recently published extragalactic background light (EBL) model built from multiwavelength galaxy data from the Hubble Space Telescope Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey (HST/CANDELS). CANDELS gathers one of the deepest and most complete observations of stellar and dust emissions in galaxies. This model resulted in a robust derivation of the evolving EBL spectral energy distribution up to $z\sim 6$, including the far-infrared peak. Therefore, the optical depths derived from this model will be useful for determining the attenuation of $\gamma$-ray photons coming from high-redshift sources, such as those detected by the Large Area Telescope on board the Fermi Gamma-ray Space Telescope, and for multi-TeV photons that will be detected from nearby sources by the future Cherenkov Telescope Array. From these newly calculated optical depths, we derive the cosmic $\gamma$-ray horizon and also measure the expansion rate and matter content of the Universe including an assessment of the impact of the EBL uncertainties. We find $H_{0}=61.9$ $^{+2.9}_{-2.4}$ km s$^{-1}$ Mpc$^{-1}$ when fixing $\Omega_{m}=0.32$, and $H_{0}=65.6$ $^{+5.6}_{-5.0}$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_{m}=0.19\pm 0.07$, when exploring these two parameters simultaneously.
A. Domínguez, P. Østergaard Kirkeberg, R. Wojtak, A. Saldana-Lopez, A. Desai, J. R. Primack, J. Finke, M. Ajello, P. G. Pérez-González, V. S. Paliya, D. Hartmann
2023-06-16T14:43:07Z
http://arxiv.org/abs/2306.09878v2
A new derivation of the Hubble constant from \(\gamma\)-ray attenuation using improved optical depths for the _Fermi_ and CTA era ###### Abstract We present \(\gamma\)-ray optical-depth calculations from a recently published extragalactic background light (EBL) model built from multiwavelength galaxy data from the _Hubble Space Telescope Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey_ (HST/CANDELS). CANDELS gathers one of the deepest and most complete observations of stellar and dust emissions in galaxies. This model resulted in a robust derivation of the evolving EBL spectral energy distribution up to \(z\sim 6\), including the far-infrared peak. Therefore, the optical depths derived from this model will be useful for determining the attenuation of \(\gamma\)-ray photons coming from high-redshift sources, such as those detected by the Large Area Telescope on board the _Fermi Gamma-ray Space Telescope_, and for multi-TeV photons that will be detected from nearby sources by the future Cherenkov Telescope Array. From these newly calculated optical depths, we derive the cosmic \(\gamma\)-ray horizon and also measure the expansion rate and matter content of the Universe including an assessment of the impact of the EBL uncertainties. We find \(H_{0}=61.9^{+2.9}_{-2.4}\) km s\({}^{-1}\) Mpc\({}^{-1}\) when fixing \(\Omega_{m}=0.32\), and \(H_{0}=65.6^{+5.6}_{-5.0}\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\Omega_{m}=0.19\pm 0.07\), when exploring these two parameters simultaneously. keywords: galaxies: evolution - galaxies: formation - diffuse background ## 1 Introduction Star-formation processes produce radiation that is redshifted and accumulated in intergalactic space over cosmic history. These photons at ultraviolet (UV), optical, and infrared (IR) wavelengths are known as the extragalactic background light (EBL), and its understanding is essential for building a complete picture of galaxy formation (e.g. Hauser et al., 1998; Driver et al., 2008; Dwek and Krennrich, 2013; Driver, 2021). Gamma rays with energies above approximately 30 GeV traveling over cosmological lengths are attenuated through pair production by EBL photons (e.g. Nikishov, 1962; Gould and Schreder, 1966). This interaction is dependent on the distance to the source and the energy of the \(\gamma\)-ray photon, leaving a signature in the \(\gamma\)-ray spectrum of extragalactic sources. This effect has been systematically measured with different instruments and techniques in blazars (e.g. Ackermann et al., 2012; Abramowski et al., 2013; Dominguez et al., 2013; Biteau and Williams, 2015; Abdalla et al., 2017; Abdollahi et al., 2018; Acciari et al., 2019; Abeysekara et al., 2019; Desai et al., 2019) and also \(\gamma\)-ray bursts (Desai et al., 2017). An accurate and precise characterization of the EBL is necessary for the correct estimate of the intrinsic properties of blazars (e.g. Dominguez and Ajello, 2015; van den Berg et al., 2019; Nievas Rosillo et al., 2022), the understanding of \(\gamma\)-ray propagation physics over space (e.g. 
Aharonian et al., 1994; Coppi and Aharonian, 1997; de Angelis et al., 2007; Sanchez-Conde et al., 2009; Dominguez et al., 2011; Broderick et al., 2012; Buehler et al., 2020; Franceschini, 2021; Biteau and Meyer, 2022), the derivation of cosmological parameters (Dominguez and Prada, 2013; Biteau and Williams, 2015; Dominguez et al., 2019) and also detectability predictions at TeV energies extrapolated from GeV observations (e.g. Hassan et al., 2017; Paiano et al., 2021). However, most current EBL models are constructed from rather limited galaxy data (e.g. Finke et al., 2010; Dominguez et al., 2011; Gilmore et al., 2012; Helgason and Kashlinsky, 2012; Stecker et al., 2016; Franceschini and Rodighiero, 2017; Andrews et al., 2017), resulting in large uncertainties, especially at high redshift and in the mid-to-far IR. In order to reduce such uncertainties, we used a sample of more than 150,000 galaxies with multiwavelength observations from \(z=0\) to \(z=6\) to produce a purely observationally-based EBL model (Saldana-Lopez et al., 2021, hereafter S21). This model uses the unprecedented high-redshift and IR data from the _Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey_ (CANDELS) collaboration, thus reducing uncertainties and relying on fewer extrapolations than previous empirical models. The CANDELS survey gathers the deepest datasets in the UV (from GALEX and HST/WFC3/UVIS), optical (from HST/ACS and ground-based telescopes such as GTC, Subaru, or VLT), near-infrared (from HST/WFC3 and ground-based telescopes such as Subaru or UKIRT), mid-infrared (from Spitzer/IRAC), and far-infrared (from Spitzer/MIPS and Herschel PACS and SPIRE). Combining measurements of the \(\gamma\)-ray attenuation with an observationally determined EBL model enables estimating cosmological distances as a function of redshift. This method provides an absolute calibration of cosmological distances and thus is a self-contained technique to determine the Hubble constant (Dominguez et al., 2019). Measuring the Hubble constant with diverse and independent methods of distance calibration will play an important role in pinning down the cause of the so-called Hubble constant tension (Riess et al., 2022). This problem is a discrepancy between the Hubble constant derived from the Planck observations of the cosmic microwave background (CMB) radiation (Aghanim et al., 2018) and its counterpart obtained in the Supernovae and \(H_{0}\) for the dark energy equation of state (SH0ES) program using observations of Cepheids and type Ia supernovae with direct local geometric distance anchors (Riess et al., 2022). The Hubble constant tension may signify a cosmological anomaly, which would require a revision of the standard cosmological model, most likely in the early universe before recombination (Schoneberg et al., 2021). The leading cosmological solution to the Hubble constant tension involves early dark energy (Poulin et al., 2023) and has testable implications for structure formation at high redshifts (Klypin et al., 2021). On the other hand, the possibility of hidden systematic effects is not completely ruled out. For example, type Ia supernovae have recently been shown to be the most likely source of possible unaccounted systematic errors in the local determination of the Hubble constant (Wojtak and Hjorth, 2022). In this article, we discuss the \(\gamma\)-ray optical depths, cosmic \(\gamma\)-ray horizon (CGRH), and the cosmological parameters from \(\gamma\)-ray attenuation extracted from S21.
This paper is organized as follows. Section 2 summarizes the S21 model. Section 3 describes the theoretical aspects related to deriving the optical depths from an EBL model, including the cosmological dependence. Then, Section 4 describes and discusses the results, followed by Section 5, which concludes this work.

## 2 Extragalactic background light model

The S21 empirical EBL model relies on observationally derived galaxy spectral energy distributions (SEDs), which were used to constrain the evolution of the (galaxy) cosmic emissivity at 0.1-1000 \(\mu m\) from \(z=0\) to 6, and thus the evolution of the EBL itself. This section summarizes the analysis and main results presented in S21.

### Galaxy data and spectral energy distributions

In S21, we used multiwavelength data from CANDELS (Grogin et al., 2011; Koekemoer et al., 2011). CANDELS is a _Hubble Space Telescope_ (HST) large program which made use of the Advanced Camera for Surveys (ACS) and the Wide Field Camera 3 (WFC3) to provide broad-band near-ultraviolet (NUV) to near-infrared (NIR) observations (0.4-1.6 \(\mu m\)). CANDELS observed five of the so-called cosmological fields: GOODS-S (Guo et al., 2013), UDS (Galametz et al., 2013), COSMOS (Nayyeri et al., 2017), EGS (Stefanon et al., 2017) and GOODS-N (Barro et al., 2019). Thanks to the use of five different fields, potential cosmic variance effects are mitigated in the S21 EBL determination. The effective sky area per field is \(\sim\)200 arcmin\({}^{2}\), with around 35,000 galaxies detected in each field. CANDELS includes two observational modes with different 5\(\sigma\) \(H\)-band limiting magnitudes (m\({}_{H}^{lim}\)): CANDELS/Wide (m\({}_{H}^{lim}\)\(\sim\) 27 AB) and CANDELS/Deep (m\({}_{H}^{lim}\)\(\sim\) 27.7 AB). Data from CANDELS have also been used for the determination of cosmological quantities such as the evolution of the star formation rate and stellar mass density of the Universe (via luminosity and mass functions) or the relative importance of dust-enshrouded star formation as a function of redshift. The HST observations were augmented by a multitude of follow-up observations at other ground- and space-based facilities worldwide. In particular, the _Spitzer Space Telescope_ and the _Herschel_ observatory provided valuable mid- and far-IR photometry (3.6-500 \(\mu m\)) for the five CANDELS footprints. Additionally, GALEX FUV and NUV observations are available for the GOODS-S and GOODS-N fields (Morrissey et al., 2007), as well as UVIS data. Such a huge community effort results in more than 25 broad- and intermediate-band observations for each object in the CANDELS survey, one of the most comprehensive imaging data sets ever assembled. To access the CANDELS data and retrieve the photometry for every galaxy, we used the Rainbow Cosmological Surveys Database1 (Barro et al., 2011a,b). Our primary sample selection required a detection in the F160W band, and a signal-to-noise threshold \(\geq 5\sigma\) in the same band. After removing potential stars in the field (cf. Guo et al., 2013), the working sample is composed of 154,447 out of a total of 186,435 sources. The redshift of every source is given by the photometric redshift reported by the CANDELS collaboration.

Footnote 1: [http://arcoiraix.cab.inta-csic.es/Rainbow_navigator_public](http://arcoiraix.cab.inta-csic.es/Rainbow_navigator_public)

The UV-to-NIR SED of every galaxy was modelled with stellar population templates in a homogeneous fashion for the whole working sample, following Barro et al. (2019).
The stellar population templates were extracted from a semi-empirical library of 1,876 synthetic SEDs, built from GOODS-S/N IRAC-selected sources (Perez-Gonzalez et al., 2005, 2008). Assuming a Chabrier initial mass function (Chabrier, 2003) and a Calzetti extinction law (Calzetti, 2001), the stellar emission of the reference templates was characterized using the Pegase2.0 models (Fioc and Rocca-Volmerange, 1997), including the flux contribution of the nebular continuum and emission lines. The models were constructed assuming a single stellar population, driven by an exponentially declining star-formation history. Given the low number of detections in the MIR-to-FIR bands, we followed a novel approach to infer the IR contribution for the galaxies in our sample. In short, we split the catalogs into three categories. In case (1), galaxies with a detection by both _Spitzer_ MIPS/24\(\mu m\) and at least one Herschel band, we take the best-fit Chary & Elbaz (2001) template to reproduce the IR-SED. In case (2), sources whose reddest detection is MIPS/24\(\mu m\), the global dust emission has to be extrapolated from a single point. We use the expression from Wuyts et al. (2011) that derives the total IR luminosity, L(TIR), at 8-1000\(\mu m\) given the observed MIPS/24\(\mu m\) flux density. Then, we associate a Chary & Elbaz (2001) scaled dust emission template to every inferred L(TIR). Finally, in case (3), L(TIR) is computed from the SFR excess at 2800Å, i.e., the difference between the SED-modelled SFR in the NUV and the dust-corrected SFR at the same wavelength, which gives an indication of the total energy absorbed by the dust (Schaerer et al., 2013). The corrected SFR at 2800Å was computed following the methods in Barro et al. (2019), based on measured UV \(\beta\)-slopes and assuming typical literature IRX-\(\beta\) relationships (Meurer et al., 1999). Once again, a scaled Chary & Elbaz (2001) model, recalibrated as a function of mass, was linked to every inferred L(TIR). For more details about the CANDELS survey, sample selection and the derived galaxy SEDs, see the dedicated sections in S21 and references therein.

### Empirical formalism of the model

From the SED of every galaxy, in S21 we recovered the EBL SED by first calculating the evolution with redshift of the so-called cosmic luminosity density, \(j(\lambda,z)\), at 0.1-1000 \(\mu m\). The CANDELS sample was first divided into 15 redshift bins (\(\Delta z_{i}\)) from \(z=0\) to \(z=6\), and the 'observed' luminosity density, \(j_{\rm obs}(\lambda,\Delta z_{i})\), obtained by stacking together the rest-frame galaxy SEDs contained within the same redshift interval and dividing by the corresponding comoving volume. Then, a completeness correction was applied to the 'observed' luminosity density by adding the contribution of galaxies below the mass completeness limit at every \(z\)-bin. This contribution is computed using the Ilbert et al. (2013) (\(0\leq z<4\)) and Grazian et al. (2015) (\(4\leq z<6\)) stellar mass functions, which give us the number of 'missing' galaxies as a function of redshift (i.e., below the mass limit). The characteristic emission of such populations of low-mass (or faint) galaxies is given by the average SED of the galaxies immediately above the mass-completeness limit, weighted by the fraction of star-forming versus quiescent galaxies (UVJ selection, see Williams et al., 2009; Whitaker et al., 2011).
In S21, we noted that the effect of our completeness corrections is only noticeable at the higher redshifts of the model (\(z\geq 3\)). As a result, S21 obtained values for the monochromatic luminosity density (at specific band-passes) which are fully compatible with other values reported in the literature, in particular at the FUV (1500Å), NUV (2800Å), \(B\)- (4400Å) and \(K\)- (2.2 \(\mu m\)) bands, up to the highest redshifts available in the previous publications. Moreover, the derived total-IR luminosity density (integrated over 8-1000\(\mu m\)) and the Cosmic Star-Formation History (CSFH) from the S21 model are in agreement with literature inferences up to \(z=2\). At higher \(z\), literature values usually fall below our estimations due to our accounting for IR non-detected galaxies (see previous section). However, our derived CSFH is in agreement within 1\(\sigma\) with the \(\gamma\)-ray attenuation results by Abdollahi et al. (2018), whose method is, in principle, sensitive to all the EBL photons in the cosmic budget, giving us confidence regarding the validity of our model. Finally, the EBL SED was computed by integrating the evolution of the comoving luminosity density, \(j(\lambda,z)\), as follows (see e.g., Mo et al., 2010),

\[\lambda I_{\lambda}(\lambda,z_{i})=\frac{c^{2}}{4\pi\lambda}\int_{z_{i}}^{z_{max}}j\left(\lambda(1+z_{i})/(1+z^{\prime}),z^{\prime}\right)\left|\frac{dt}{dz^{\prime}}\right|dz^{\prime}, \tag{1}\]

where \(\lambda I_{\lambda}(\lambda,z_{i})\) is given in nW m\({}^{-2}\) sr\({}^{-1}\), \(c\) is the speed of light in vacuum, and the \(\left|dt/dz^{\prime}\right|\) factor comes from the adopted cosmology and is given by

\[\frac{dt}{dz^{\prime}}=\frac{1}{c(1+z^{\prime})H(z^{\prime})}. \tag{2}\]

This cosmology is initially described by the fiducial flat \(\Lambda\)CDM framework with a matter density parameter \(\Omega_{\rm m}=0.3\) and \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), although later we will let the cosmological parameters vary. For this cosmology, the Hubble parameter \(H(z)\) is given by

\[H^{2}(z)=H_{0}^{2}[\Omega_{\rm m}(1+z)^{3}+1-\Omega_{\rm m}] \tag{3}\]

and the comoving volume \(V_{\rm com}(z)=(4/3)\pi D_{\rm com}^{3}(z)\), where \(D_{\rm com}(z)\) is the comoving distance. Uncertainties on the EBL model came from the proper propagation of single galaxy-SED errors into the luminosity density SED, and subsequently into the EBL SED using the previous equation. At \(z=0\), S21 report an integrated intensity of \(23.1\pm 4.0\) nW m\({}^{-2}\) sr\({}^{-1}\) for the optical peak, which is consistent within 1\(\sigma\) with previous estimations from empirical models (e.g., Finke et al., 2010; Dominguez et al., 2011b; Gilmore et al., 2012; Helgason & Kashlinsky, 2012), galaxy number counts such as those by Driver et al. (2016), and the \(\gamma\)-ray derivations by Abdollahi et al. (2018) and Desai et al. (2019). For the IR peak, S21 give \(32.0\pm 7.6\) nW m\({}^{-2}\) sr\({}^{-1}\), compatible with other galaxy-counts studies such as Madau & Pozzetti (2000); Driver et al. (2016); Bethermin et al. (2012). At higher redshifts, the evolution of the optical peak generally agrees with previous models up to \(z=1\) and, after that, our EBL model generally follows the lower bound of the latest results from \(\gamma\)-ray attenuation in Abdollahi et al. (2018), up to \(z=3\).
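As a concrete illustration of Equations 1-3, the following is a minimal numerical sketch in Python of the lookback-time integration; `j(lam, z)` is a hypothetical callable standing in for the tabulated S21 luminosity densities (SI units assumed throughout, and the fiducial cosmology hard-coded):

```python
import numpy as np
from scipy.integrate import quad

H0 = 70.0 * 1e3 / 3.0857e22   # fiducial H0 = 70 km/s/Mpc, in 1/s
OM = 0.3                      # fiducial matter density parameter
C = 2.998e8                   # speed of light [m/s]

def hubble(z):
    """Equation 3: flat LCDM Hubble parameter [1/s]."""
    return H0 * np.sqrt(OM * (1.0 + z) ** 3 + 1.0 - OM)

def ebl_intensity(lam, z_i, j, z_max=6.0):
    """Equation 1: EBL intensity lambda*I_lambda at wavelength `lam` [m]
    and redshift z_i, given a luminosity density callable j(lam, z)."""
    def integrand(zp):
        lam_emitted = lam * (1.0 + z_i) / (1.0 + zp)   # rest-frame wavelength
        dt_dz = 1.0 / (C * (1.0 + zp) * hubble(zp))    # Equation 2
        return j(lam_emitted, zp) * dt_dz
    val, _ = quad(integrand, z_i, z_max)
    return C ** 2 / (4.0 * np.pi * lam) * val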
We stress that, for the current paper, the S21 EBL uncertainties have been recalculated. For constructing the upper and lower regions of the uncertainty band, S21 took the maximum and minimum values of the luminosity densities at each redshift bin. This procedure assumes that the uncertainties in each redshift bin are correlated, and it translates into an overestimate of the uncertainties. Now, the EBL uncertainties are recomputed by building 500 MCMC realizations of the EBL spectral intensities, and their evolution, that are permitted by the luminosity density measurements, drawing random values within the uncertainties. These are estimated in 15 non-overlapping redshift bins and thus assumed to be independent. In summary, the S21 EBL empirical model stands as the only one to date that provides the cosmic EBL intensity at \(z=0-6\) purely based on observational data, and it is capable of reproducing (1) a diversity of FUV, NUV, optical, NIR, and TIR luminosity densities in the literature, (2) the CSFH, which is in agreement with the usual hierarchical galaxy evolutionary scenarios, (3) the lower bound of the direct detection data, (4) galaxy counts, and (5) the latest results from \(\gamma\)-ray attenuation studies. These results will allow us to derive the evolution of the \(\gamma\)-ray opacity of the Universe with unprecedented accuracy.

## 3 Theoretical background on \(\gamma\)-ray attenuation

The optical depth \(\tau(E,z)\) that is produced by pair-production interactions between \(\gamma\) rays and EBL photons is analytically given by

\[\tau(E,z)=\int_{0}^{z}\left(\frac{dl}{dz^{\prime}}\right)dz^{\prime}\int_{0}^{2}d\mu\,\frac{\mu}{2}\int_{\epsilon_{th}}^{\infty}d\varepsilon^{\prime}\,\sigma_{\gamma\gamma}(\beta^{\prime})\,n(\varepsilon^{\prime},z^{\prime}), \tag{4}\]

where \(E\) and \(z\) are respectively the observed energy and redshift of the \(\gamma\)-ray source, \(\sigma_{\gamma\gamma}\) is the photon-photon pair-production cross section, and \(n(\varepsilon^{\prime},z^{\prime})\) is the proper number density of EBL photons with rest-frame energy \(\varepsilon^{\prime}\) at redshift \(z^{\prime}\), given by

\[\varepsilon^{\prime}\,n(\varepsilon^{\prime},z^{\prime})=\frac{4\pi}{c\,\varepsilon^{\prime}}\,\lambda I_{\lambda}(hc/\varepsilon^{\prime},z^{\prime}). \tag{5}\]

The cross section \(\sigma_{\gamma\gamma}\) depends on the relative rest-frame energies of the \(\gamma\)-ray photon (\(E^{\prime}\)), the EBL photon (\(\varepsilon^{\prime}\)) and the rest-mass energy of the electron, \(m_{e}c^{2}\), through

\[\beta^{\prime}=\sqrt{1-\frac{\epsilon_{th}}{\varepsilon^{\prime}}}, \tag{6}\]

where \(\epsilon_{th}\) is the energy threshold for photon-photon pair production, given by

\[\epsilon_{th}\equiv\frac{2m_{e}^{2}c^{4}}{E^{\prime}\mu} \tag{7}\]

and the variable \(\mu=(1-\cos\theta)\) relates the energy threshold to the angle of interaction \(\theta\). The optical depth \(\tau(E,z)\) given by Equation 4 depends on the cosmological model in two ways. First, the integral along the line of sight involves a relation between the proper distance \(l\) and redshift \(z\), given by

\[\frac{dl}{dz}=\frac{cdt}{dz}. \tag{8}\]

Second, the cosmological model also enters the calculation of the EBL photon density, via the comoving luminosity density \(j(\lambda,z)\) and the integral over lookback time given by Equation 1.
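To show how Equations 4-8 fit together numerically, here is a minimal sketch of the optical-depth integral. It assumes the standard Breit-Wheeler form of the pair-production cross section and uses a hypothetical `ebl_photon_density(eps, z)` callable as a stand-in for the proper EBL photon density \(n(\varepsilon,z)\) of the model:

```python
import numpy as np
from scipy import integrate

SIGMA_T = 6.6524587e-29   # Thomson cross section [m^2]
ME_C2 = 8.187105e-14      # electron rest energy m_e c^2 [J]
C = 2.998e8               # speed of light [m/s]
H0 = 70.0 * 1e3 / 3.0857e22
OMEGA_M = 0.3

def hubble(z):
    return H0 * np.sqrt(OMEGA_M * (1 + z) ** 3 + 1 - OMEGA_M)

def sigma_gg(beta):
    """Breit-Wheeler pair-production cross section [m^2], as a function
    of the electron velocity beta in the center-of-momentum frame."""
    b = np.clip(beta, 1e-9, 1 - 1e-12)
    return (3 * SIGMA_T / 16) * (1 - b**2) * (
        (3 - b**4) * np.log((1 + b) / (1 - b)) - 2 * b * (2 - b**2))

def tau(E_obs, z_src, ebl_photon_density):
    """Equation 4: optical depth for observed energy E_obs [J] and
    source redshift z_src."""
    def integrand(eps, mu, zp):                   # innermost variable first
        E_rest = E_obs * (1 + zp)                 # gamma-ray energy at z'
        eps_th = 2 * ME_C2**2 / (E_rest * mu)     # threshold (Equation 7)
        if eps <= eps_th:
            return 0.0
        beta = np.sqrt(1 - eps_th / eps)          # Equation 6
        dl_dz = C / ((1 + zp) * hubble(zp))       # Equation 8
        return dl_dz * (mu / 2) * sigma_gg(beta) * ebl_photon_density(eps, zp)
    val, _ = integrate.tplquad(
        integrand, 0, z_src,                      # z'
        0, 2,                                     # mu = 1 - cos(theta)
        lambda zp, mu: 2 * ME_C2**2 / (E_obs * (1 + zp) * mu),  # eps_th
        np.inf)
    return val
```

The cosmic \(\gamma\)-ray horizon discussed in Section 4.3 then follows by solving \(\tau(E,z)=1\) for \(E\) at each redshift.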
Rederiving the EBL model in different cosmological models requires the following steps. The comoving luminosity density resulting from stacking SED templates in redshift bins is rescaled according to the following equation:

\[j=j_{\rm fid}\,\frac{D_{\rm L}(z_{i})^{2}}{D_{\rm L,fid}(z_{i})^{2}}\,\frac{V_{\rm com,fid}(z_{i}+\Delta z_{i}/2)-V_{\rm com,fid}(z_{i}-\Delta z_{i}/2)}{V_{\rm com}(z_{i}+\Delta z_{i}/2)-V_{\rm com}(z_{i}-\Delta z_{i}/2)}, \tag{9}\]

where \(D_{\rm L}\) is the luminosity distance, \(V_{\rm com}\) is the comoving volume, and \(z_{i}\) and \(\Delta z_{i}\) are the middle point and width of the \(i\)-th redshift bin. Distances \(D_{\rm L}(z_{i})\) and volumes \(V_{\rm com}(z_{i})\) are computed in the new cosmological model, while their counterparts marked with the subscript "fid" are for the fiducial model assumed in S21. The cosmological model also changes the stellar mass completeness limit and the corresponding completeness correction described by S21. Here, the lower limit of stellar masses scales with \(D_{\rm L}(z_{i})^{2}\), while the normalisation of the stellar mass function used to extrapolate galaxy counts below the completeness limit scales with the inverse comoving volume, i.e., as \(1/(V_{\rm com}(z_{i}+\Delta z_{i}/2)-V_{\rm com}(z_{i}-\Delta z_{i}/2))\). Deriving the EBL density from observations of galaxies involves three consecutive scalings with the Hubble constant: \(\propto H_{0}^{-2}\) for converting observed galaxy fluxes to luminosities, \(\propto H_{0}^{3}\) for computing the comoving luminosity density, and \(\propto H_{0}^{-1}\) for integrating the luminosity density over redshift, as expressed in Equation 1. These three scalings cancel each other out, so that the EBL density becomes nearly independent of the Hubble constant, while the estimated optical depth for \(\gamma\)-ray attenuation becomes proportional to \(H_{0}^{-1}\) via the line-of-sight integral in Equation 4. Therefore, the Hubble constant can be directly constrained from independent measurements of the \(\gamma\)-ray attenuation in spectra of objects in the Hubble flow, by fitting the normalisation of the predictions based on the EBL model. Additional constraints from analogous measurements at high redshift are degenerate with the matter density parameter. This degeneracy results primarily from fitting a cosmological model to effectively a single distance measurement, and it can be broken to a large extent by measuring the \(\gamma\)-ray attenuation over as wide a range of redshifts as possible.
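A minimal sketch of the rescaling in Equation 9, assuming astropy's `FlatLambdaCDM` for the distance and volume computations (function and variable names are illustrative):

```python
from astropy.cosmology import FlatLambdaCDM

FIDUCIAL = FlatLambdaCDM(H0=70, Om0=0.3)   # fiducial cosmology of S21

def rescale_luminosity_density(j_fid, z_mid, dz, H0, Om0):
    """Equation 9: move j(lambda, z_i) from the fiducial cosmology to a
    trial flat LCDM model; j_fid may be an array over wavelength."""
    trial = FlatLambdaCDM(H0=H0, Om0=Om0)
    # luminosities scale with the luminosity distance squared ...
    dl_ratio = (trial.luminosity_distance(z_mid) /
                FIDUCIAL.luminosity_distance(z_mid)) ** 2
    # ... and densities with the inverse comoving volume of the z bin
    shell_fid = (FIDUCIAL.comoving_volume(z_mid + dz / 2) -
                 FIDUCIAL.comoving_volume(z_mid - dz / 2))
    shell_trial = (trial.comoving_volume(z_mid + dz / 2) -
                   trial.comoving_volume(z_mid - dz / 2))
    return j_fid * dl_ratio * (shell_fid / shell_trial)
```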
## 4 Results and Discussion

In this section, we compare the EBL estimates from S21 with \(\gamma\)-ray attenuation data, and we present and discuss the results from the optical-depth calculations, the cosmic \(\gamma\)-ray horizon, and the measurement of \(H_{0}\) and \(\Omega_{m}\).

### Extragalactic background light comparison with other \(\gamma\)-ray attenuation data

The EBL spectral energy distribution in the local Universe from S21 is shown in Figure 1 (note that the fiducial values for \(H_{0}\) and \(\Omega_{m}\) are used). This figure also shows the results derived from \(\gamma\)-ray attenuation data and galaxy counts. Note that the results determined from galaxy data tend to be on the lower bound of the calculations by Desai et al. (2019). We note the intriguing discrepancy, at about \(4\sigma\), between the absolute photometry measurement by the _New Horizons_ spacecraft (Lauer et al., 2022) and intensities derived from galaxies (e.g., Driver et al., 2016) at approximately \(0.6\mu\)m (see also Symons et al., 2023). We stress that in Figure 1 the \(\gamma\)-ray results by the MAGIC and VERITAS collaborations (Acciari et al., 2019; Abeysekara et al., 2019, respectively) are compatible with the Lauer et al. (2022) data. The results from MAGIC at these wavelengths become upper limits when including systematic uncertainties. The same happens to the measurements by Biteau and Williams (2015). Furthermore, we also note that the results from the H.E.S.S. collaboration (Abdalla et al., 2017), when including systematic uncertainties, are compatible with Biteau and Williams (2015). Therefore, in general, the \(\gamma\)-ray data from IACTs cannot rule out the high-intensity EBL obtained by Lauer et al. (2022). Figure 2 shows the evolution of the EBL from S21, the new model by Finke et al. (2022) based on fitting a large compilation of galaxy data, and \(\gamma\)-ray attenuation data from Abdollahi et al. (2018) at four different redshifts. We see some discrepancies between the results derived from galaxies and from \(\gamma\)-ray attenuation in the near-infrared wavelength range at high redshifts, \(z\gtrsim 0.6\). It is not entirely clear what the source of this tension is. One possibility is that there exists an extra amount of EBL which is not accounted for in the current EBL models. The additional EBL component would dominate in the near-infrared range and perhaps at longer wavelengths. In this direction, there are works claiming that there may be a significant amount of light from galaxy halos, known as intra-halo light, or from low-surface-brightness galaxies (e.g., Zemcov et al., 2014; Borlaff et al., 2019; Trujillo et al., 2021; Cheng et al., 2021). Another reason for the discrepancy may lie in the measurements of the \(\gamma\)-ray attenuation. In this case, an overestimation of the EBL density derived from the \(\gamma\)-ray attenuation data may be caused by a hidden systematic effect biasing up the optical depth measurements at high energies and redshifts. This tension could also be explained by extra effects in the \(\gamma\)-ray photon propagation, such as the development of particle cascades (e.g. Coppi and Aharonian, 1997), with or without the phenomenon of plasma instabilities (e.g. Broderick et al., 2012), although there is no other evidence of this effect so far (e.g., Finke et al., 2015; Ackermann et al., 2018). There could also be more exotic solutions, such as the existence of axion-like particles (e.g. de Angelis et al., 2007; Sanchez-Conde et al., 2009; Buehler et al., 2020) or dark matter decay (e.g. Bernal et al., 2022).

### Optical depths

By substituting \(n(\varepsilon,z)\) in Equation 4 with that given by S21, we can estimate optical depths2. Figure 3 shows these calculations in comparison with those from other EBL models and \(\gamma\)-ray attenuation data. We note that, relative to other models, the agreement is good at the lower redshifts, but our results tend to estimate lower optical depths at the higher redshifts, in particular for large energies of the order of 1 TeV. In comparison with the \(\gamma\)-ray attenuation data (Abdollahi et al., 2018; Desai et al., 2019), our results are in general in rather good agreement within uncertainties.

Footnote 2: These files are available at [https://www.ucm.es/blazars/ebl](https://www.ucm.es/blazars/ebl)

### Cosmic \(\gamma\)-ray horizon

An important cosmological observable is the energy, as a function of the distance (or redshift), at which the optical depth is one.
This property of the Universe is known as the CGRH, and it divides the Universe into transparent and opaque regions (e.g. Fazio and Stecker, 1970; Dominguez et al., 2013). As an example, looking at Figure 4, it is unlikely to observe 1 TeV photons from \(z>0.1\) since those photons will be strongly attenuated. The same is expected for 100 GeV photons coming from \(z>1\). However, the Universe is transparent for photons with energies \(E<30\) GeV. Note that the estimate from the S21 model is in agreement with the independent measurement from Abdollahi et al. (2018).

### The local Hubble constant and matter density of the Universe

We constrain the Hubble constant \(H_{0}\) and the matter density parameter \(\Omega_{\rm m}\) by fitting optical depths based on the EBL model from S21 to the \(\gamma\)-ray measurements (Abdollahi et al., 2018; Desai et al., 2019). A flat \(\Lambda\)CDM cosmological model is assumed (see Equation 3), and the dependence on cosmology both in the EBL model and in the line-of-sight integral for the optical depth is taken into account. We use a log-likelihood given by

\[\ln L\propto-\sum_{i,j}\frac{[\tau_{\rm model}(z_{i},E_{j})-\tau_{i,j}]^{2}}{2\sigma_{i,j}^{2}}, \tag{10}\]

where the indices \(i\) and \(j\) run over redshift and energy bins, \(\tau_{i,j}\) are the measured values of the optical depth, and \(\sigma_{i,j}\) takes the upper or lower error if, respectively, \(\tau_{\rm model}(z_{i},E_{j})>\tau_{i,j}\) or \(\tau_{\rm model}(z_{i},E_{j})<\tau_{i,j}\). For the upper and lower limits we use \(\sigma_{i,j}\gg 1\).

Figure 1: Spectral intensities of the extragalactic background light at \(z=0\) from the EBL model by S21 (orange band, \(1\sigma\) uncertainty), galaxy counts from Driver et al. (2016, green symbols), and results based on \(\gamma\)-ray data from Biteau and Williams (2015, brown symbols), the H.E.S.S. collaboration in Abdalla et al. (2017, red symbols), the MAGIC collaboration in Acciari et al. (2019, turquoise band; showing here their results using the D11 model), the VERITAS collaboration using their 68% containment result by Abeysekara et al. (2019, light red band), and Desai et al. (2019, purple band). We refer the reader to the main text for an explanation of how the systematic uncertainties affect these results. The direct detection data from Lauer et al. (2022) from _New Horizons_ are shown at \(\lambda=0.608~\mu\)m (orange triangle; the uncertainties for this point are barely visible). Note that the uncertainties that we are showing for the S21 model are smaller than the ones in that paper because we have recalculated them using a more appropriate methodology, described in Section 2.2.

Best fit cosmological parameters and the corresponding uncertainties are computed using the Markov Chain Monte Carlo (MCMC) method implemented in the Python-based _emcee_ code (Foreman-Mackey et al., 2013). The likelihood function given by Equation 10 is conditioned on a fixed EBL model and thus does not account for errors in the EBL model. Incorporating the uncertainties of the EBL model directly in the likelihood would require computations of rather non-trivial correlations between redshift and energy bins of the optical depth. Instead, we account for the EBL uncertainties in our cosmological analysis by marginalising over the EBL spectral intensities permitted by the measurement errors. The marginalisation is carried out by recomputing the MCMC over a large number (500) of Monte Carlo realisations of the EBL model and finding the best fit cosmological parameters using the sum of all chains.
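The fit itself amounts to sampling the asymmetric Gaussian likelihood of Equation 10. A minimal sketch with _emcee_, where `tau_model` is a hypothetical (here trivially toy) wrapper that would recompute the model optical depths of Equation 4 for a trial \((H_{0},\Omega_{\rm m})\), and the data arrays stand in for the binned measurements described in the next paragraph:

```python
import numpy as np
import emcee  # the sampler used in the text (Foreman-Mackey et al. 2013)

# Toy stand-ins: binned optical depths with asymmetric errors. In the
# real analysis these come from Abdollahi et al. (2018) and Desai et al.
# (2019).
z = np.array([0.2, 0.5, 1.0]); E = np.array([100.0, 300.0, 50.0])  # GeV
tau_obs = np.array([0.3, 1.2, 2.0])
sig_up, sig_lo = np.array([0.1, 0.3, 0.6]), np.array([0.08, 0.25, 0.5])

def tau_model(H0, Om, z, E):
    # placeholder keeping only the leading 1/H0 scaling from Section 3
    return tau_obs * (70.0 / H0)

def log_posterior(theta):
    H0, Om = theta
    if not (40 < H0 < 100 and 0.0 < Om < 1.0):  # flat priors (assumed)
        return -np.inf
    model = tau_model(H0, Om, z, E)
    sigma = np.where(model > tau_obs, sig_up, sig_lo)  # Equation 10
    return -0.5 * np.sum(((model - tau_obs) / sigma) ** 2)

nwalkers, ndim = 32, 2
p0 = np.array([70.0, 0.3]) + 1e-2 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 3000)
```

Marginalising over the EBL uncertainties then corresponds to re-running this sampler for each of the 500 EBL realisations and concatenating the chains.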
We calculate the EBL realisations by drawing samples from the luminosity density given its errors and computing the corresponding EBL densities. Uncertainties of the luminosity density take into account errors of the multi-band photometry and the extrapolations of the stellar mass function to its low-mass end. They are estimated in 15 non-overlapping redshift bins and thus assumed to be independent. The optical depth data are taken from Abdollahi et al. (2018) and Desai et al. (2019, see these references for details, in particular Figure 2 of the latter one). In Abdollahi et al. (2018), optical depths are estimated by measuring the \(\gamma\)-ray attenuation from a sample of 739 blazars plus one \(\gamma\)-ray burst, all detected by _Fermi_-LAT. These optical depths are given in twelve redshift bins reaching \(z\sim 3.10\). These redshift bins are chosen in such a way that the signal's strength is the same in each one of them. The optical depths are given in six logarithmically equally spaced energy bins from approximately 10 GeV up to 1000 GeV. The Abdollahi et al. (2018) results are especially relevant for constraining \(\Omega_{m}\) because the stronger dependence of the optical depth on \(\Omega_{m}\) occurs at the larger redshifts. In Desai et al. (2019), a sample of 38 blazars detected by Imaging Atmospheric Cherenkov Telescopes was used to measure the optical depths in two redshift bins up to \(z\sim 0.6\). These optical depths are measured in four equally spaced logarithmic energy bins from 0.1 TeV up to approximately 20 TeV. The results from Desai et al. (2019) are especially important for measuring \(H_{0}\) because the largest dependence of the optical depth on \(H_{0}\) occurs at the higher energies and lower redshifts, as shown by Dominguez et al. (2019). We stress that, as stated by Abdollahi et al. (2018) and Desai et al. (2019), apart from the statistical uncertainty, an additional \(\approx 10\%\) systematic uncertainty exists. This systematic uncertainty accounts for shape differences of the optical depth curves, intrinsic \(\gamma\)-ray spectral models, and energy biases. The impact of these uncertainties is taken into account, and the combined statistical plus systematic uncertainties on the EBL optical-depth estimates are reported by these works. Our methodology leads to \(H_{0}=61.9^{+2.9}_{-2.4}\) km s\({}^{-1}\) Mpc\({}^{-1}\) when we fix \(\Omega_{m}=0.32\), the value found by Aghanim et al. (2018); this constraint is shown in Figure 5. Figure 6 shows the main cosmological constraints inferred from the complete \(\gamma\)-ray data set, exploring simultaneously the \(H_{0}\) and \(\Omega_{m}\) parameter space. In this case, the best fit Hubble constant is \(H_{0}=65.6^{+5.6}_{-5.0}\) km s\({}^{-1}\) Mpc\({}^{-1}\), and it is consistent with its value derived from the Planck observations of the CMB (Aghanim et al., 2018) assuming the standard flat \(\Lambda\)CDM cosmological model.

Figure 2: Spectral intensities of the extragalactic background light at four different redshifts in the co-moving frame from the EBL model by S21 (blue band, 1\(\sigma\) uncertainty), Finke et al. (2022, brown line), and those from Abdollahi et al. (2018, yellow band).

The data favour lower values of the matter density parameter than \(\Omega_{\rm m}\approx 0.3\), measured consistently from a wide range of observations (e.g., Scolnic et al., 2018). However, this trend is not strong enough to be considered discrepant with other cosmological probes.
In particular, \(\Omega_{\rm m}\) from the Planck cosmological model lies well within the \(2\sigma\) credibility range of our constraints. Noticeably lower estimates of the matter density parameter were found in previous analyses of the same \(\gamma\)-ray attenuation data, but using earlier versions of EBL models: \(\Omega_{m}=0.14^{+0.06}_{-0.07}\) (Dominguez et al., 2019). Our study demonstrates that those underestimations most likely resulted from insufficient accuracy of the employed EBL models. Using the updated EBL model from S21 restores fair consistency with the matter content in the standard cosmological model; however, the previous trend towards low values of \(\Omega_{\rm m}\) seems to persist to some extent. It is natural to relate the low best-fit \(\Omega_{\rm m}\) to the apparent discrepancy between the EBL model and \(\gamma\)-ray observations discussed in Section 4.1. Cosmological models with low matter density are characterized by larger distances at high redshifts. This primarily affects the line-of-sight integral given by Equation 4 and thus increases the optical depths at high redshifts to the level required by the \(\gamma\)-ray data, which would otherwise necessitate scaling up the EBL model for \(\Omega_{\rm m}\approx 0.3\) (see Figure 2). The impact of high-redshift \(\gamma\)-ray data on cosmological constraints is clearly seen when we repeat the analysis in two separate redshift ranges.

\begin{table} \begin{tabular}{l c c} Data & \(H_{0}\,[{\rm km\,s^{-1}\,Mpc^{-1}}]\) & \(\Omega_{\rm m}\) \\ \hline \hline \(\tau\) & \(65.6^{+5.6}_{-5.0}\) & \(0.19\pm 0.07\) (\(<0.35\) at 95\%) \\ \(\tau\) (\(z<0.68\)) & \(65.5^{+5.7}_{-5.2}\) & \(0.3^{+0.3}_{-0.2}\) (\(<0.8\) at 95\%) \\ \(\tau\)+BAO(BBN) & \(66.3^{+2.2}_{-2.0}\) & \(0.28\pm 0.04\) \\ \(\tau\) (\(z<0.68\))+BAO(BBN) & \(69.5^{+2.8}_{-2.5}\) & \(0.33\pm 0.04\) \\ BAO(BBN) & \(73.7\pm 3.7\) & \(0.396\pm 0.054\) \\ \end{tabular} \end{table}

Table 1: Constraints on the Hubble constant \(H_{0}\) and the matter density parameter \(\Omega_{\rm m}\) from our joint analysis of independent \(\gamma\)-ray attenuation measurements. Besides our fiducial fit (top row), we include fits restricted to \(z<0.68\) \(\gamma\)-ray attenuation data (\(\tau\) (\(z<0.68\))), fits that add BAO observations with the BBN prior as an external data set, and BAO observations with the BBN prior alone. Best-fit parameters are given as posterior means with errors from the 16th and 84th percentiles of the marginalised probability distributions. For \(\Omega_{\rm m}\) inferred solely from the \(\gamma\)-ray data we also provide 95 per cent upper limits. The errors include uncertainties in both the \(\gamma\)-ray observations and the EBL model.

Figure 3: Optical depths as a function of energy in different redshift bins from EBL models (Finke et al., 2010; Dominguez et al., 2011; Gilmore et al., 2012; Saldana-Lopez et al., 2021; Finke et al., 2022) and extracted from _Fermi_-LAT data (_left panel_; Abdollahi et al., 2018) and data from Imaging Atmospheric Cherenkov Telescopes (_right panel_; Desai et al., 2019). The uncertainties from this work are small and barely visible. In both panels, \(1\sigma\) (dark cyan) and \(2\sigma\) (light cyan) uncertainties from Abdollahi et al. (2018) and Desai et al. (2019), respectively, are also shown.
Figure 6 demonstrates explicitly that solutions with lower matter density are primarily driven by observations at higher redshifts, while fitting the low-redshift data yields cosmological constraints that are fully consistent with the Planck model. Note that (1) the \(\gamma\)-ray data uncertainties at the higher redshifts are large, and (2) larger attenuation implies lower \(\Omega_{m}\) (see Figure 3 in Dominguez et al., 2019). The difference between the best fit \(\Omega_{\rm m}\) measured from high- and low-redshift bins appears to be most visible when splitting the data at \(z=0.68\). Analysis of the low-redshift data subset yields nearly the same estimate of the Hubble constant, with comparable errors, as the entire data set; see Table 1. This is also within \(2\sigma\) of the latest measurement of \(H_{0}\) from the SH0ES program using observations of type Ia supernovae and Cepheids in relatively nearby galaxies, giving \(H_{0}=73.04\pm 1.04\) km s\({}^{-1}\) Mpc\({}^{-1}\) (Riess et al., 2022). Cosmological measurements based on the \(\gamma\)-ray attenuation data can be further improved by combining them with independent cosmological probes giving complementary constraints on \(\Omega_{\rm m}\). Here, we use observations of the baryon acoustic oscillations (BAO) with the sound horizon normalisation set by the physical baryon density \(\Omega_{\rm b}h^{2}\) obtained from Big Bang nucleosynthesis theory. This is constrained by local measurements of the primordial light element abundances, in particular \(100\Omega_{\rm b}h^{2}=2.208\pm 0.052\) derived from the recent determination of the primordial deuterium abundance in the most metal-poor damped Ly-\(\alpha\) system (Cooke et al., 2016). We intentionally do not consider the Planck observations of the CMB or type Ia supernovae in order to keep our Hubble constant determination fully independent of those. For the BAO data we include anisotropic BAO measurements from the Sloan Digital Sky Survey's (SDSS) Baryon Oscillation Spectroscopic Survey (consensus constraints based on the post-reconstruction method; Alam et al., 2016), distance measurements from the 6dF survey (Beutler et al., 2011), and from the Main Galaxy Sample of the SDSS (Ross et al., 2015).

Figure 4: The CGRH from the EBL models by Finke et al. (2010, green-dotted line), Dominguez et al. (2011b, orange-dotted line), Helgason & Kashlinsky (2012, brown-dotted line), and Gilmore et al. (2012, cyan-dashed line), which have been extrapolated up to \(z=6\), and this work (red band) from observations up to \(z=6\), including uncertainties. We show the CGRH derived from \(\gamma\)-ray blazars (Abdollahi et al., 2018, blue stars) and the highest energy photons detected for 4LAC blazars (The Fermi-LAT collaboration, 2019, grey circles).

Figure 5: Constraints on the Hubble constant assuming \(\Omega_{m}=0.32\). We obtain \(H_{0}=61.9^{+2.9}_{-2.4}\) km s\({}^{-1}\) Mpc\({}^{-1}\).

Figure 6: Constraints on the Hubble constant \(H_{0}\) and the matter density parameter \(\Omega_{\rm m}\) inferred from the measurements of \(\gamma\)-ray attenuation and the EBL model from S21. Derived cosmological parameters are consistent with the Planck cosmological model when fits are limited to low-redshift (\(z<0.68\)) data, but only marginally consistent with the Planck model when we include high-redshift (\(z>0.68\)) data, for which low values of \(\Omega_{\rm m}\) are required to match the optical depth based on the EBL model against \(\gamma\)-ray measurements. The contours show \(1\sigma\) and \(2\sigma\) confidence regions containing 68 and 95 per cent of the marginalised probability distributions.
Figure 7: Constraints on the Hubble constant \(H_{0}\) and the matter density parameter \(\Omega_{\rm m}\) from a joint analysis of the \(\gamma\)-ray attenuation measurements and Baryon Acoustic Oscillation (BAO) observations with the Big Bang Nucleosynthesis (BBN) prior. The contours show \(1\sigma\) and \(2\sigma\) confidence regions containing 68 and 95 per cent of the marginalised probability distributions.

Cosmological constraints are obtained using the _CAMB_ code for cosmological calculations (Lewis et al., 2000) and _CosmoMC_ (Lewis and Bridle, 2002) implemented in the _cobaya_ package (Torrado and Lewis, 2021) for computing Markov chains. We assume the CMB temperature measured by COBE/FIRAS, i.e. \(T_{\rm CMB}=2.7255\) K (Fixsen, 2009). As shown in Figure 7, BAO observations and \(\gamma\)-ray attenuation data are complementary in terms of cosmological information. The apparent degeneracy between \(H_{0}\) and \(\Omega_{\rm m}\) from the BAO data reflects the range of the sound horizon scale allowed by the adopted BBN prior. A combined analysis of both data sets yields stronger constraints on the Hubble constant than the \(\gamma\)-ray data alone: \(H_{0}=66.3^{+2.2}_{-2.0}\,{\rm km\,s^{-1}\,Mpc^{-1}}\) for the complete \(\gamma\)-ray data set and \(H_{0}=69.5^{+2.8}_{-2.5}\,{\rm km\,s^{-1}\,Mpc^{-1}}\) for the low-redshift (\(z<0.68\)) subset, in both cases in agreement with the Planck value but in serious disagreement with local measurements, assuming that systematic uncertainties in the EBL model do not exceed the quoted statistical errors. The slightly larger difference between these two estimates of the Hubble constant than in the case of the \(\gamma\)-ray data alone reflects relative shifts of the posterior probability contours from the BAO and \(\gamma\)-ray data in the \(\Omega_{\rm m}-H_{0}\) plane (see Fig. 7). Table 1 summarizes the results of our cosmological analysis. The constraints on the Hubble constant are consistent across the four data sets, including the low-redshift (\(z<0.68\)) \(\gamma\)-ray observations and the combinations with the BAO measurements. In particular, virtually no difference (merely a \(\sim 0.5\sigma\) shift) between the full and low-redshift data in this respect demonstrates that the discrepancy between the EBL model in the near-infrared range and the \(\gamma\)-ray observations described in Section 4.1 has a negligible impact on the Hubble constant estimation. Although the errors in our \(H_{0}\) determination are about \(3-4\) times larger than the total error quantifying the Hubble constant tension, our results clearly favor a Planckian value of \(H_{0}\): the best-fit \(H_{0}\) values derived respectively from the full and low-redshift \(\gamma\)-ray data are merely \((0.5-1.0)\sigma\) lower than the Planck value (Aghanim et al., 2018), but \((1.6-2)\sigma\) lower than the local \(H_{0}\) from SH0ES (Riess et al., 2022). A similar preference for the Planck value of \(H_{0}\) was reported in previous studies based on different EBL models (Dominguez et al., 2019). We note that another group, the Carnegie Supernova Project, finds a different \(H_{0}\) value with type Ia supernovae (Freedman et al., 2019).
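For concreteness, a schematic of how such a combined \(\tau\)+BAO(BBN) fit can be assembled in _cobaya_. The \(\gamma\)-ray likelihood here is a toy placeholder, and the BAO likelihood names and option keys follow recent cobaya releases but should be treated as assumptions:

```python
from cobaya.run import run

def gamma_ray_loglike(H0):
    # toy placeholder: the real likelihood evaluates Equation 10 over the
    # binned optical depths recomputed for the trial cosmology
    return -0.5 * ((H0 - 65.6) / 5.3) ** 2

info = {
    "likelihood": {
        "gamma_ray_tau": {"external": gamma_ray_loglike},
        # BAO likelihoods shipped with cobaya (names may vary by version)
        "bao.sdss_dr12_consensus_bao": None,
        "bao.sixdf_2011_bao": None,
        "bao.sdss_dr7_mgs": None,
    },
    "params": {
        "H0": {"prior": {"min": 40, "max": 100}},
        "omch2": {"prior": {"min": 0.01, "max": 0.3}},
        # BBN prior on the baryon density, 100*ombh2 = 2.208 +/- 0.052
        "ombh2": {"prior": {"dist": "norm", "loc": 0.02208, "scale": 0.00052}},
    },
    "theory": {"camb": None},
    "sampler": {"mcmc": {"Rminus1_stop": 0.01}},
}
updated_info, mcmc_sampler = run(info)
```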
Uncertainties on the obtained cosmological constraints are primarily driven by the errors of the \(\gamma\)-ray attenuation measurements. This is demonstrated in Figure 8, which compares constraints on \(H_{0}\) and \(\Omega_{\rm m}\) from the complete \(\gamma\)-ray observations conditioned on the best fit EBL model or employing marginalisation over the allowed EBL models. It is apparent that including the EBL errors in the analysis enlarges the contours only marginally. The corresponding errors in \(H_{0}\) and \(\Omega_{\rm m}\) increase respectively by \(\sim 30\) and \(\sim 10\) per cent relative to the results of the analysis neglecting the EBL errors: \(H_{0}=64.5^{+4.6}_{-3.3}\) \({\rm km\,s^{-1}\,Mpc^{-1}}\) and \(\Omega_{\rm m}=0.19\pm 0.07\).

## 5 Summary and Conclusions

This work builds upon a recently published EBL model by S21 that focuses on improving the estimates of the spectral intensity evolution at higher redshifts and in the IR region. We derive optical depths that can be used for correcting EBL-attenuated spectra observed with \(\gamma\)-ray telescopes. For instance, given the improvements in the derivation of the mid-to-far IR peak of the EBL and its evolution over redshift from S21, our optical depths are suited for spectra at high redshifts and also up to the highest energies. This makes them optimal for observations with _Fermi_-LAT, the current generation of IACTs, and especially the future Cherenkov Telescope Array (CTA). We find that the optical depths derived from \(\gamma\)-ray attenuation (Abdollahi et al., 2018; Desai et al., 2019) agree within \(2\sigma\) with those derived from S21 using galaxy data. A comparison of these optical depths from data and model allows us to measure \(H_{0}\) and \(\Omega_{m}\), finding that \(H_{0}\) is compatible within \(1\sigma\) with the value obtained from cosmological probes such as the CMB and BAO, whereas \(\Omega_{m}\) is compatible within \(2\sigma\). This marginally low value of \(\Omega_{m}\) is driven by the attenuation obtained from higher-redshift blazars. A careful study of the propagation of the EBL uncertainties to the cosmological parameters is also presented. Finally, we note that a larger EBL intensity in the model would make the \(H_{0}\) and \(\Omega_{m}\) derived from \(\gamma\)-ray attenuation larger. Given the recent results by Lauer et al. (2022) of an optical intensity that is approximately 50% larger than the one derived from galaxy counts (e.g., Driver et al., 2016), the problem of how much light there is in the Universe remains puzzling.

## Acknowledgements

We thank Daniel Nieto for providing computational resources and Jonathan Biteau for helpful comments. A.D. is thankful for the support of the Ramon y Cajal program from the Spanish MINECO, Proyecto PID2021-1265360A-I00 funded by MCIN / AEI / 10.13039/501100011033, and Proyecto PR44/21-29915 funded by the Santander Bank and Universidad Complutense de Madrid. RW was supported by a grant from VILLUM FONDEN (project number 16599). ASL acknowledges support from the Swiss National Science Foundation. J.F. was supported by NASA through contract S-15633Y and through the Fermi GI program. PGP-G acknowledges support from the Spanish Ministerio de Ciencia e Innovacion MCIN/AEI/10.13039/501100011033 through grant PGC2018-093499-B-I00.

Figure 8: Impact of the uncertainties in the EBL model on cosmological constraints.
The contours show constraints on the Hubble constant \(H_{0}\) and the matter density parameter \(\Omega_{\rm m}\) obtained in analyses that include the EBL uncertainties (the approach adopted in this work; dashed purple line) or neglect them (solid black line). By neglecting the EBL uncertainties, the final errors in \(H_{0}\) and \(\Omega_{\rm m}\) decrease by \(\sim 30\) and \(\sim 10\) per cent, respectively.
2303.13026
A Cycle-level Unified DRAM Cache Controller Model for 3DXPoint Memory Systems in gem5
To accommodate the growing memory footprints of today's applications, CPU vendors have employed large DRAM caches, backed by large non-volatile memories like Intel Optane (e.g., Intel's Cascade Lake). The existing computer architecture simulators do not provide support to model and evaluate systems which use DRAM devices as a cache to the non-volatile main memory. In this work, we present a cycle-level DRAM cache model which is integrated with gem5. This model leverages the flexibility of gem5's memory devices models and full system support to enable exploration of many different DRAM cache designs. We demonstrate the usefulness of this new tool by exploring the design space of a DRAM cache controller through several case studies including the impact of scheduling policies, required buffering, combining different memory technologies (e.g., HBM, DDR3/4/5, 3DXPoint, High latency) as the cache and main memory, and the effect of wear-leveling when DRAM cache is backed by NVM main memory. We also perform experiments with real workloads in full-system simulations to validate the proposed model and show the sensitivity of these workloads to the DRAM cache sizes.
Maryam Babaie, Ayaz Akram, Jason Lowe-Power
2023-03-23T04:24:30Z
http://arxiv.org/abs/2303.13026v1
# A Cycle-level Unified DRAM Cache Controller Model for 3DXPoint Memory Systems in gem5

###### Abstract

To accommodate the growing memory footprints of today's applications, CPU vendors have employed large DRAM caches, backed by large non-volatile memories like Intel Optane (e.g., Intel's Cascade Lake). The existing computer architecture simulators do not provide support to model and evaluate systems which use DRAM devices as a cache to the non-volatile main memory. In this work, we present a cycle-level DRAM cache model which is integrated with gem5. This model leverages the flexibility of gem5's memory device models and full-system support to enable exploration of many different DRAM cache designs. We demonstrate the usefulness of this new tool by exploring the design space of a DRAM cache controller through several case studies including the impact of scheduling policies, required buffering, combining different memory technologies (e.g., HBM, DDR3/4/5, 3DXPoint, high latency) as the cache and main memory, and the effect of wear-leveling when the DRAM cache is backed by NVM main memory. We also perform experiments with real workloads in full-system simulations to validate the proposed model and show the sensitivity of these workloads to the DRAM cache sizes.

computer architecture, simulation, DRAM caches

## I Introduction

The last decade has seen significant academic research on DRAM caches, and today these ideas are becoming a reality with CPU vendors implementing DRAM cache-based computer systems, e.g., Intel's Cascade Lake and Sapphire Rapids. Hardware-managed DRAM caches are seen as one way to make heterogeneous memory systems (e.g., systems with DRAM and non-volatile memory) more easily programmable. DRAM caches are transparent to the programmer and easier to use than manual data movement. However, recent work has shown that these transparent hardware-based data movement designs are much less efficient than manual data movement [1]. While the work by Hildebrand et al. [1] and other recent work investigating Intel's Cascade Lake systems provide some insight into real implementations of DRAM caches [2, 3], there is a gap in the community's access to cycle-level simulation models for DRAM caches. This paper describes a new gem5-based model of a unified DRAM cache controller inspired by the Cascade Lake hardware to fill this gap. Previous work has explored many aspects of DRAM cache design in simulation, such as the replacement policy, caching granularity [4, 5], DRAM cache tag placement [6, 7, 8], associativity [9, 10, 4], and other metadata to improve performance [8, 5, 10]. These mostly high-level memory system design investigations can appropriately be evaluated with trace-based or non-cycle-level simulation. However, as shown in recent work, the micro-architecture of the unified DRAM and non-volatile main memory (NVRAM) controller can lead to unexpected performance pathologies not captured in these prior works (e.g., Hildebrand et al. showed that a dirty miss to the DRAM cache requires up to _five accesses_ to memory [1]). Thus, to better understand these realistic DRAM cache systems, it is imperative to build a detailed DRAM cache simulation model which can be used to perform a design space exploration around the DRAM cache idea. The previous research works on DRAM cache design improvements do not provide any (open-source) DRAM cache modeling platform for a detailed micro-architectural and timing analysis.
To the best of our knowledge, most research works do not consider systems where the hardware-managed DRAM cache and NVRAM share the same physical interface and are controlled by a unified memory controller (as is the case in real platforms like Intel Cascade Lake). In this work, we describe our unified DRAM cache and main memory controller (_UDCC_), a cycle-level DRAM cache model for gem5 [11]. The protocol takes inspiration from actual hardware providing a DRAM cache, such as Intel's Cascade Lake, in which an NVRAM accompanies a DRAM cache as the off-chip main memory sharing the same bus. To model such hardware, we leverage the cycle-level DRAM [12] and NVRAM [13] models in gem5. Our model implements the timing and micro-architectural details enforced by the memory interfaces, including the DRAM timing constraints, scheduling policies, buffer sizes, and internal queues. We propose a DRAM cache model that is direct-mapped, insert-on-miss, and write-back to model Intel's Cascade Lake design. Using this model, we present validation data and investigate five case studies. _What is the impact of memory scheduling policies in a unified DRAM cache and memory controller?_ We find that using FR-FCFS is highly impactful when the cache hit ratio is high, but less so when the hit ratio is low and the NVRAM's bandwidth limits performance. _What is the impact of DRAM technology on performance and memory controller architecture?_ We find that higher-performing memory technologies require more buffering to achieve peak performance. Moreover, we find that the composition of the memory access patterns and their hit/miss ratio in the DRAM cache can also affect the amount of buffering needed to achieve the peak bandwidth. _What is the impact of backing "main memory" performance?_ We find that while slower backing memory hurts performance, the performance of the backing memory does not have a significant effect on the micro-architecture of the cache controller. _What is the impact of the UDCC model for full-system applications?_ We find that our model shows similar performance characteristics on real applications as previously shown on real hardware, providing further evidence for the importance of cycle-level simulation. _What is the impact of NVRAM wear leveling on memory system performance with a DRAM cache?_ We find that while wear leveling has a very small direct impact, the impact when using NVRAM as backing memory with a DRAM cache can be much higher. Although only 1 in 14,000 requests experiences a wear-leveling event, the performance impact is up to an 8% slowdown. Our model is open-source and publicly available for the use of the research community [14] and will be integrated into mainline gem5. Using this new model, which implements the micro-architectural details of realistic DRAM caches in a simulator, can help identify potential improvements for the next generation of memory systems.

## II Background

### _Background on heterogeneous memory_

With the growing footprint of large-scale applications, commercial products have been emerging in the market to accommodate the memory requirements of today's demanding workloads. Intel has introduced Optane Data-Center Persistent-Memory-Modules (DC PMMs), a non-volatile byte-addressable main memory (NVRAM) that can be a replacement for conventional DRAM [1]. Even though NVRAM provides much larger capacity than DRAM, it has 3x longer latency and 60% lower bandwidth than DRAM [2].
Hardware caching through a DRAM cache for NVRAM has been employed in Intel's Cascade Lake product to hide the long latency of NVRAM. The memory subsystem in Cascade Lake works in two modes: 1LM, in which the NVRAM is persistent and directly accessible to the application, and 2LM, in which the DRAM DIMM is the hardware cache for NVRAM. Recently, Intel announced the release of the new generation of Xeon processors called Sapphire Rapids [15], which includes HBM working in three modes: HBM-Only, Flat Mode, and Cache Mode. HBM-Only Mode considers HBM as the main memory of the system without any DRAM memory. With DDR5 provided in the system, HBM and DDR5 both work as the main memory in the Flat Mode. Finally, HBM is used as a cache for DDR5 in the Cache Mode. To support the research lines in this domain, simulators must be able to correctly model the real behavior of these memory technologies. Wang et al. [3] have studied the behavior of NVRAM in Intel's Cascade Lake and captured the observed behavior in the VANS model. However, they do not provide any support for the DRAM cache cooperating with NVRAM in the 2LM mode. In this work we seek to model the DRAM cache protocol and implement it in the gem5 simulator.

### _Background on gem5's memory subsystem_

The gem5 simulator is based on an event-driven simulation engine. It supports models of many system components, including a memory controller, a detailed DRAM model, an NVM model, and models for different CPUs, caches, and other components. The memory controller module added to gem5 by Hansson et al. [12] focuses on modeling the state transitions of the memory bus and the memory banks. While it is not "cycle accurate", it is _cycle level_, and the memory controller is designed to enable fast and accurate memory system exploration. Hansson et al. [12] showed that their memory controller model in gem5 correlates well with the more detailed DRAMSim2 model [16], while leading to a 13% reduction in simulation time on average. The controller is responsible for address decoding (into a row, bank, column, and rank) and keeping track of the DRAM timings. The memory controller is also configurable with different front-end and back-end latencies, which can model different memory controller designs and the impact of physical interfaces. The memory controller in gem5 was refactored recently [13]. Now, it relies on two components to simulate a real memory controller. (1) The _memory controller_ receives commands from the CPU, enqueues them into appropriate queues, and manages their scheduling to the memory device. (2) The _memory interface_ deals with device/media-specific timings, manages the device-specific operations, and communicates with the memory controller. The unified DRAM cache memory controller (_UDCC_) presented in this work implements a new memory controller module in gem5 and requires only minor changes in the memory interface. Like gem5's current memory controller, _UDCC_'s goal is cycle-level simulation that enables micro-architectural exploration and flexibility, not cycle-by-cycle accuracy to a single design.
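For orientation, here is a minimal sketch of how this controller/interface split is expressed in a gem5 Python configuration. The interface classes shown ship with gem5, but the exact parameter names follow recent releases and the values are purely illustrative:

```python
from m5.objects import AddrRange, MemCtrl, DDR4_2400_16x4

# The controller handles queueing and scheduling; the interface handles
# device-specific timings, mirroring the refactored design of [13].
ctrl = MemCtrl()
ctrl.dram = DDR4_2400_16x4(range=AddrRange('1GB'))  # device interface
ctrl.dram.read_buffer_size = 64     # per-interface buffering knobs
ctrl.dram.write_buffer_size = 128
ctrl.mem_sched_policy = 'frfcfs'    # first-ready, first-come-first-served
```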
Fig. 1: Hardware abstraction of _UDCC_

Figure 1 provides a high-level overview of the modeled controller's layout, and Figure 2 shows the states that memory packets transition through while residing in _UDCC_. We designed _UDCC_ to be a flexible model to explore different memory controller design parameters. Thus, instead of modeling one specific micro-architecture with buffers for every device (e.g., separate DRAM and NVM read and write queues), we model the controller with a single large buffer called the _Outstanding Request Buffer (ORB)_. We also have a _Conflict Request Buffer (CRB)_ to store requests that are to the same cache line and must be serialized with current outstanding requests. These buffers are shown in Figure 1. All incoming memory requests reside in the _ORB_ unless a request conflicts with an already existing request in the _ORB_. Two requests are considered to be conflicting if they both map to the same DRAM cache block. The conflicting request goes to the _CRB_ until the request it was conflicting with has been serviced and is taken out of the _ORB_. Each entry in these buffers contains other metadata in addition to the address of the request, as shown in Figure 1. This metadata provides helpful information about the request, e.g., the memory request's current state and relative arrival time. In addition to these two buffers, we also model an _NVM WrBack_ queue for the DRAM cache dirty lines that are to be written back to NVM main memory. Figure 2 presents the state machine followed by the memory packets while they are in _UDCC_. Since gem5 is an event-driven simulator, the simulation model relies on scheduled events to transition between these states. To model the realistic buffers in the system, we can constrain the number of requests at one time in any state. The states these memory packets can be in at any point are as follows:

**Recv_Pkt:** Every memory request received by the memory controller starts in the _Recv_Pkt_ state.

**DRAM_Read:** Since every request must first check the tag in the cache, a memory packet moves to the _DRAM_Read_ state when the time arrives that the DRAM can service this request. This transition is achieved by scheduling an event for the ready time of this packet. The device-specific memory interface provides the ready time. At this point, the request is also scheduled to the DRAM itself for a read.

**DRAM_Read_Resp:** Once the response time of an already scheduled (to DRAM) packet arrives, the packet is moved to the _DRAM_Read_Resp_ state. At this point, the DRAM cache tags are checked to ascertain the hit/miss and clean/dirty status of the request, and different actions are performed accordingly.

**DRAM_Write:** A memory packet can go to the _DRAM_Write_ state in two scenarios: (1) if its tag read from the DRAM cache is done (and found to be a hit) when it was originally a write packet, or (2) if we get a response from the NVM that is going to fill the DRAM cache (initiated by a request that missed in the DRAM cache). Once in the _DRAM_Write_ state, packets are eventually written to the DRAM.

**NVM_Read_Wait_Issue:** Packets that miss in the DRAM cache (determined by the tag read in the DRAM cache) are moved to this state so that they can eventually be scheduled to NVM.

**NVM_Read:** Once the ready time of packets in _NVM_Read_Wait_Issue_ arrives (the NVM interface is ready to accept the packet), they are scheduled to NVM and moved to the _NVM_Read_ state.
**NVM_Read_Resp:** On getting a response back from the NVM device (when the NVM interface is done processing the packet), the packet moves to the _NVM_Read_Resp_ state, from which it will eventually be moved to the _DRAM_Write_ state so that it can be written to the DRAM. If the packet was originally a read request (and missed in the DRAM cache), at this point the response for this packet will also be sent to the requestor.

**NVM_Write:** The packets belonging to a dirty cache line in DRAM are moved to the _NVM_Write_ state so that they can be written to the main memory (NVM).

**Done:** If a (read) packet was found to be a hit in the DRAM cache, the response will be sent back to the requestor and the packet will move to the _Done_ state. Similarly, when a (write) packet is finally written to the DRAM cache, it moves to the _Done_ state. Packets in the _Done_ state are removed from the _ORB_.

Since we are modeling a memory-side cache that does not participate in the on-chip coherence protocol, we do not model the stored data in the cache. We use the protocol described above to model the timing of the system, and for data we use the backing store of the backing memory (NVM in our case). Similarly, while we model the timing of accesses to DRAM to check the tags, we functionally store the tags in a separate structure in our model.

## IV Validation

In this section, we present the validation of our DRAM cache model in gem5. Any simulation model needs to be functionally correct, as well as provide accurate performance statistics. To ensure functional validation, we successfully performed Linux kernel boot tests in gem5 full-system mode, and we ran some commonly used HPC-class benchmarks in gem5 full-system mode, which are presented in Section IX of this paper. Evaluating the accuracy of the performance model is generally not straightforward. Still, reasonable confidence can be established in the performance model by evaluating performance numbers in controlled experiments.

Fig. 2: State machine followed by packets in _UDCC_

We performed a comparison of the effective memory bandwidth observed with gem5's default memory controller (_DMC_) and the unified DRAM cache controller (_UDCC_). We rely on a synthetic read memory traffic pattern such that (nearly) _all requests will be hits in the DRAM cache_ in the case of a system where we use _UDCC_. We use both a linear and a random traffic pattern. Figure 3 provides both controllers' read bandwidth and compares these numbers to the theoretical peak bandwidth possible with the DDR4 device used in this experiment. As expected, we observe that the bandwidth attained by _DMC_ and _UDCC_ is very close. The scheduling policy implemented in both controllers (details in Section VI) can explain the slight difference. The effective bandwidth in both cases is less than the theoretical peak bandwidth. The access amplification (total number of memory accesses divided by the demand requests) can be significant in DRAM caches, especially in the case of writes. We compare the access amplification of our model with that of the actual hardware. We use the access amplification numbers presented by Hildebrand et al. [1] on an Intel Cascade Lake system. We calculate the access amplification values for _UDCC_ by dividing the sum of the average bandwidths of the DRAM and NVRAM devices by the effective bandwidth (seen by the LLC) for a particular run; a minimal sketch of this calculation follows. The comparison of the two access amplification values is shown in Figure 4.
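As an aside, the amplification calculation above is simple enough to sketch directly. The snippet below is illustrative only (the function name and example numbers are ours, not gem5's); in practice the per-device bandwidths would be read from gem5's statistics output.

```python
# Illustrative sketch: access amplification from per-device bandwidths.
# Amplification = total device traffic / demand traffic seen by the LLC.

def access_amplification(dram_bw_gbs: float, nvram_bw_gbs: float,
                         effective_bw_gbs: float) -> float:
    """Sum of DRAM and NVRAM bandwidths divided by the LLC-visible bandwidth."""
    return (dram_bw_gbs + nvram_bw_gbs) / effective_bw_gbs

# Example: a write-miss pattern where every request reads the tag from DRAM,
# fetches the line from NVRAM, writes the line to DRAM, and writes a dirty
# line back to NVRAM shows an amplification close to 4.
print(access_amplification(dram_bw_gbs=4.0, nvram_bw_gbs=4.0,
                           effective_bw_gbs=2.0))  # -> 4.0
```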
Our results match the actual hardware in all cases with only one exception. The reason for the smaller access amplification for write misses is that our implementation slightly optimizes the write requests. In actual hardware, on a write miss in the DRAM cache, the block is first allocated by reading it from NVRAM and then writing it into DRAM. The actual data is then written into the DRAM. We merge these two DRAM writes, and thus our implementation leads to one less access for a write miss compared to the actual hardware. Finally, we manually stepped through the operations of different kinds of accesses (reads/writes and hits/misses) to ensure they behaved as expected.

## V Methodology

For all case studies except real workloads under full-system simulation, we used gem5's traffic generators instead of execution-based workloads. Using traffic generators allows us to explore the behavior of the DRAM cache design more directly and more clearly understand the fundamental trade-offs in its design. We use gem5's traffic generator to generate two patterns of addresses: linear or random. If not specified, the tests are run with a linear traffic generator. To implement the DRAM cache we used the DRAM models ("interfaces") provided by gem5. We also used the NVM interface provided in gem5 with its default configuration unless otherwise specified (timing parameters are shown in the 'Base' column in Table I). We extended gem5 to model DDR5 (based on DDR5 data-sheets) and HBM (based on a performance comparison) for our studies. In the case studies presented below, we are not concerned with the detailed configuration of the _UDCC_ (e.g., the size of the DRAM cache). Instead, we are interested in studying the behavior of the _UDCC_ through specific scenarios, which enables us to evaluate the best-case, worst-case, or in-between performance of the system. For this purpose, we have used patterns which are either Read-Only (RO), Write-Only (WO), or a combination of reads and writes (70% reads, 30% writes). The other traffic pattern characteristic we varied is the hit (or miss) ratio of the generated pattern. This factor was enforced by two different parameters: (1) the size of the DRAM cache and (2) the range of addresses requested by the traffic generator. In all of the case studies in which the traffic generator was involved, we used a 16 MB DRAM cache backed by NVRAM as the main memory. Unless otherwise specified, we used DDR4 to implement the DRAM cache in _UDCC_ with a total buffer size of 256 entries. In order to get 0%, 25%, 50%, 75%, and 100% hit ratios, we set the range of addresses to be 6GB, 64MB, 32MB, 20MB, and 6MB, respectively. For instance, to study the behavior of a write-intensive application with a memory footprint larger than the DRAM cache capacity on our proposed model, we set the traffic generator to give WO accesses within a range of 6GB for a DRAM cache of 16MB capacity. In this way, we were able to test the _UDCC_ in a reasonable simulation time. We used a cache block size of 64B in all tests to match current systems. All the tests were simulated for 1 second.

Fig. 4: Access amplification observed with _UDCC_ in gem5 and on the real hardware (Hildebrand et al. [1]).

Fig. 3: Read bandwidth comparison between gem5's default memory controller (_DMC_), the unified DRAM cache controller (_UDCC_), and the theoretical peak bandwidth on DDR4. In the case of _UDCC_, all accesses hit in the DRAM cache. Two different traffic patterns (linear, random) are used.
The read buffer size in _DMC_ and the _ORB_ size in _UDCC_ is 256 entries.

## VI Case Study 1: Impact of Scheduling Policy on DRAM Cache Performance

### _Background and Methodology_

The policy used to choose which memory request to service has a significant impact on the performance of the DRAM memory system. In this work we consider two scheduling policies: (i) first-come, first-serve (_FCFS_), and (ii) first-ready, first-come, first-serve (_FRFCFS_). _FCFS_ is the naive policy which processes the memory requests in the order they are received by the memory controller. While it is simple and easy to implement, _FCFS_ incurs many row-switching delay penalties, leading to lower bus utilization. Rixner et al. [17] proposed _FRFCFS_ to maximize the row-buffer hit rate. _FRFCFS_ reorders the received requests so that those hitting the currently open row are serviced earlier than requests that map to rows that are currently closed. In heterogeneous memory systems, Wang et al. [3] reported that Intel's Cascade Lake's NVRAM interface deploys _FCFS_. The question that arises here is: in a memory system with two cooperative devices, one as DRAM cache and the other as main memory, with several internal request buffers to arbitrate from, how important is the choice of scheduling policy? We extended _UDCC_ with _FCFS_ and _FRFCFS_ scheduling policies to answer this question; a minimal sketch of the two policies follows this case study. _UDCC_ employs the same scheduling policy for both the DRAM cache and the main memory. We tested _UDCC_ with each scheduling policy to measure the improvement of the bandwidth observed by the LLC with _FRFCFS_ over _FCFS_. We have run tests with different hit ratios and different read and write request combinations to test the sensitivity of bus utilization and system performance to the scheduling policy. The results are based on request patterns of 0% hit ratio and 100% hit ratio for RO and WO. We also ran patterns containing 70% read and 30% write requests, with 100%, 75%, 50%, 25% and 0% hit ratios.

### _Results and Discussion_

We report bus utilization for the 70% read and 30% write case, for both the DRAM and NVRAM devices. In panel (a), the DRAM bus utilization of _FRFCFS_ is higher than that of _FCFS_ at all hit ratios. Moreover, as the hit ratio increases, the improvement of DRAM bus utilization by _FRFCFS_ compared to _FCFS_ also increases. In panel (b), the NVRAM bus utilization decreases for both _FRFCFS_ and _FCFS_ as the hit ratio increases, since there are fewer misses to be handled by NVRAM. Overall, DRAM interface bus utilization benefits more from _FRFCFS_ (compared to _FCFS_) than NVRAM does. Moreover, where the hit ratio is higher, the improvement of bus utilization by _FRFCFS_ over _FCFS_ is higher. Based on the proposed DRAM cache model, there are several internal buffers at the controller, per interface, for reads and writes. The fair arbitration of requests in each of these buffers can highly impact the utilization of the resources, affecting the performance of the DRAM cache. We showed that _FRFCFS_ can achieve up to a 2.85x bandwidth improvement over _FCFS_ (in WO with 100% hit ratio) and that the improved DRAM bus utilization is the main factor contributing to this improvement. Overall, even though the implementation of _FRFCFS_ would be more costly than _FCFS_ due to its more complex, associative circuitry, the performance gap between these two policies is significant.
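To make the arbitration difference concrete, below is a minimal sketch of the two policies, not the _UDCC_ implementation itself: requests are reduced to an arrival time and a target row, and a single open row stands in for the per-bank row-buffer state that a real controller tracks.

```python
# Minimal sketch of FCFS vs. FR-FCFS arbitration over a request queue.
from collections import namedtuple

Request = namedtuple("Request", ["arrival_time", "row"])

def pick_fcfs(queue):
    """FCFS: always service the oldest request."""
    return min(queue, key=lambda r: r.arrival_time)

def pick_frfcfs(queue, open_row):
    """FR-FCFS: prefer the oldest request hitting the open row (a row-buffer
    hit); fall back to plain FCFS if no request hits."""
    hits = [r for r in queue if r.row == open_row]
    return pick_fcfs(hits) if hits else pick_fcfs(queue)

queue = [Request(0, row=7), Request(1, row=3), Request(2, row=3)]
print(pick_fcfs(queue))                # Request(arrival_time=0, row=7)
print(pick_frfcfs(queue, open_row=3))  # Request(arrival_time=1, row=3)
```

The row-buffer hit avoids a precharge/activate cycle, which is exactly the row-switching penalty that makes FCFS lose bus utilization at high DRAM cache hit ratios.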
## VII Case Study 2: Performance Analysis of Different DRAM Technologies as DRAM Cache

### _Background and Methodology_

Different commercial products have adopted different DRAM technologies for their DRAM caches (e.g., HBM in Sapphire Rapids and DDR4 in Cascade Lake). These technologies have different characteristics (e.g., peak bandwidth) that give each different applicability. In this study we want to answer these questions: how many total buffers are required for each DRAM technology to fully utilize it as a DRAM cache? And what is the peak performance of each device when the hit ratio and the read-write mix of the access pattern change? To address these questions, we configured _UDCC_ to use the DDR3, DDR4, DDR5 and HBM models implemented for gem5 as the DRAM cache. The theoretical peak bandwidths of DDR3, DDR4, DDR5 and HBM are 12.8 GB/s, 19.2 GB/s, 33.6 GB/s, and 256 GB/s, respectively. The results are based on request patterns of RO 100% hit ratio, RO 100% miss ratio, WO 100% hit ratio, and WO 100% miss ratio. We ran these patterns across all DRAM technologies for buffer sizes from 2 to 1024 by powers of 2. For each case we looked for the buffer size at which the bandwidth observed by the LLC reached its maximum and no longer improved with larger buffers. To separate the total number of buffers from the specific micro-architectural decisions (e.g., how to size each buffer), we only constrained the "outstanding request buffer" and allowed all of the internal buffers to be any size.

### _Results and Discussion_

Figure 8a shows the maximum achieved bandwidth for the DRAM cache, using DDR3, DDR4 and DDR5, for the cases of RO 100% hit ratio, RO 100% miss ratio, WO 100% hit ratio, and WO 100% miss ratio. Moreover, Figure 8b shows the total number of buffers required to reach the maximum bandwidth shown in Figure 8a for each case. In the RO 100% hit ratio access pattern, each request requires only a single access to fetch the data along with the tag and metadata from DRAM. Thus, it is expected to achieve a bandwidth very close to the theoretical peak, and Figure 8a shows this is the case. Moreover, Figure 8b shows that DDR3, DDR4, and DDR5 reach this bandwidth at buffer sizes of 128, 256, and 256, respectively. Thus, increasing the buffer size would not help for the RO 100% hit traffic pattern. In the WO 100% hit ratio access pattern, each request requires two accesses: fetching the tag and metadata from DRAM and then writing the data to the DRAM. The peak bandwidth in this case for each device is lower than the theoretical peak by about a factor of two (5.58 GB/s, 8.2 GB/s, and 16.47 GB/s for DDR3, DDR4, and DDR5, respectively). Since these requests require two DRAM accesses, the latency of each request is higher and more buffering is required to reach the peak bandwidth. Specifically, Figure 8b shows that DDR3, DDR4, and DDR5 reach this bandwidth at buffer sizes of 256, 512, and 1024 (Footnote 1), respectively. Comparing this case with the RO 100% hit ratio case shows that it benefits from increasing the buffer size.

Footnote 1: DDR5 shows a bandwidth improvement of 8% going from 512 to 1024 entries.

In the RO 100% miss ratio access pattern, each request requires three accesses: fetching the tag and metadata from DRAM, then fetching the line from NVRAM, and finally writing the data to the DRAM. The peak bandwidth in this case is 3.47 GB/s, 4.45 GB/s, and 6.21 GB/s for DDR3, DDR4, and DDR5, respectively.
Figure 8b shows that DDR3, DDR4, and DDR5 reach this bandwidth at buffer sizes of 256, 256, and 1024, respectively. This suggests DDR5 gains some bandwidth from increasing the buffer size. However, the bandwidth improvement from 512 to 1024 buffer entries is 3.8% (and 2% for the 256 to 512 buffer size change).

Fig. 8: Buffer size needed to achieve the maximum bandwidth seen by the LLC for **DDR3**, **DDR4**, and **DDR5** as DRAM cache. The traffic patterns shown in the figure include read-only (RO) and write-only (WO), each having 100% and 0% hit ratio.

Finally, for the WO 100% miss ratio access pattern, each request requires four accesses: fetching the tag and metadata from DRAM, then fetching the line from NVRAM, writing the data to the DRAM, and a write to NVRAM for the dirty-line write-back. The peak bandwidth in this case is 1.91 GB/s, 1.90 GB/s, and 1.90 GB/s for DDR3, DDR4, and DDR5, respectively. Figure 8b shows that DDR3, DDR4, and DDR5 reach this bandwidth at buffer sizes of 16, 16, and 32, respectively. Comparing this case with the previous cases shows that the bandwidth saturates at smaller buffer sizes than in the other cases. The main takeaway from this case study is that the buffer size required to achieve peak bandwidth largely depends on the composition of memory traffic. Secondly, the DRAM cache controller might need a large number of buffers to reach the peak bandwidth, particularly if the device can provide a large bandwidth (e.g., over 1000 entries in the case of DDR5). More memory controller buffers are needed because of the increased latency of accesses, since each write request and each read-miss request requires multiple accesses to the memory devices.

### _HBM_

We separate HBM from the discussion of the other DDR memory technologies as its memory controller design may differ from the _UDCC_ design described in this paper and used in Intel's Cascade Lake. For instance, in Intel's Knights Landing, the DRAM cache was implemented with a four-state coherence protocol and the HBM interfaces were physically separated from the DRAM interfaces through the on-chip network [18]. We modeled an HBM interface in gem5 such that it can provide the theoretical peak bandwidth of 256 GB/s in a single memory channel (equivalent to the sum of all pseudo channels in an HBM2 device). Figure 9 shows the bandwidth achieved for different total buffer sizes, for two cases: 1) when all accesses hit in the DRAM cache, and 2) when all accesses miss in the DRAM cache. In the case of all hits, we see close to the maximum possible bandwidth with 2048 buffer entries. In the case of all misses, the achievable bandwidth is limited by the main memory used in the experiment (an NVRAM device) and does not improve beyond a buffer size of 1024 entries. This data implies that the buffer sizes required for HBM are not that much higher than for high-performance DDR DRAMs (e.g., DDR5). The main reason HBM does not require significantly more buffering is that even for a small miss rate (13%), we are limited by the NVRAM's performance and the bandwidth saturates at much less than the theoretical peak, as shown in Figure 9. However, if the HBM cache were backed by higher-performance memory (e.g., DDR4 or DDR5 DRAM), the total number of buffers needed to achieve maximum performance would likely be much higher.
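The buffer-size sweeps in this case study all follow the same recipe, sketched below under simplifying assumptions. `run_simulation` is a hypothetical stand-in for launching a gem5 run with a given ORB size and parsing the resulting bandwidth; the saturating curve at the bottom is fake data for illustration only.

```python
# Sketch of the sweep used in this case study: grow the ORB size by powers
# of two and stop tracking improvement once the measured bandwidth no longer
# rises meaningfully.

def find_saturating_buffer_size(run_simulation, max_size=1024, tol=0.01):
    best_bw, best_size = 0.0, None
    size = 2
    while size <= max_size:
        bw = run_simulation(orb_entries=size)  # bandwidth seen by the LLC
        if best_size is None or bw > best_bw * (1 + tol):
            best_bw, best_size = bw, size
        size *= 2
    return best_size, best_bw

# Example with a fake saturating bandwidth curve (GB/s):
fake = lambda orb_entries: min(19.2, 0.15 * orb_entries)
print(find_saturating_buffer_size(fake))  # (128, 19.2)
```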
## VIII Case Study 3: Performance Analysis of DRAM Caches Backed by Different Main Memory Models

### _Background and Methodology_

In this section, we focus on the question: how much does the performance (e.g., latency) of the main memory affect the performance of the memory system as observed by the LLC in DRAM cache-based systems? We extended the baseline NVRAM interface model of gem5 to implement a faster and a slower NVRAM relative to the baseline performance. The baseline, fast, and slow models of gem5 NVRAM provide 19.2 GB/s, 38.4 GB/s, and 9.6 GB/s of bandwidth, respectively. Table I shows the timing constraints of all three cases. The results are based on an access pattern with a high miss ratio and a large number of dirty-line evictions from the DRAM cache, to stress the main memory for both read and write accesses. For this purpose, we used a WO 100% miss ratio access pattern, which generates dirty lines along the way, requiring write-backs to the main memory. This access pattern heavily engages the NVRAM during miss handling, both to fetch the missed lines and to write back dirty lines, enabling us to evaluate the performance of the system in all three cases of slow, fast, and baseline NVRAMs. Moreover, we have tested a pattern consisting of RO 100% miss ratio, since this case also requires interaction with the NVRAM (for fetching the missed line). In both patterns, the results are based on a total buffer size of 512 entries.

### _Results and Discussion_

First we investigate the effect of the different NVRAMs from the UDCC-external point of view. Figure 10 compares the bandwidth seen by the LLC for the three different NVRAMs (slow, baseline, and fast) for the RO 100% miss ratio and WO 100% miss ratio request patterns. Note that the NVRAM has a dedicated write buffer in _UDCC_ whose size is enforced by the NVRAM interface (128 entries in gem5's NVRAM models).

\begin{table} \begin{tabular}{|l|l|l|l|} \hline **Parameter** & **Slow** & **Base** & **Fast** \\ \hline **tREAD** & 300ns & 150ns & 75ns \\ \hline **tWRITE** & 1000ns & 500ns & 250ns \\ \hline **tSEND** & 28.32ns & 14.16ns & 7.08ns \\ \hline **tBURST** & 6.664ns & 3.332ns & 1.666ns \\ \hline \end{tabular} \end{table} TABLE I: NVRAM Interface Timing Parameters

Fig. 9: Buffer size impact on read bandwidth for an HBM-based DRAM cache (theoretical peak bandwidth: 256 GB/s).

Our results showed that the average queuing latency for this buffer is 76.19 \(\mu\)s, 38.49 \(\mu\)s, and 19.52 \(\mu\)s for the slow, baseline, and fast NVRAMs, respectively. In other words, the write queuing latency at the NVRAM interface gets shorter as the speed of the NVRAM increases, as expected. In the WO 100% miss ratio pattern, the highest achieved bandwidth is 0.96 GB/s, 1.91 GB/s, and 3.8 GB/s for the slow, baseline and fast NVRAMs, respectively. This translates to a 49.7% bandwidth degradation when the slow NVRAM was used as the main memory, and a 98.9% bandwidth improvement when the fast NVRAM was used as the main memory, compared to the baseline NVRAM. Note that in this request pattern there are two NVRAM accesses (one read and one write). In the RO 100% miss ratio pattern, the highest achieved bandwidth is 3.39 GB/s, 4.55 GB/s, and 5.1 GB/s for the slow, baseline and fast NVRAMs, respectively. This translates to a 25.49% bandwidth degradation when the slow NVRAM was used as the main memory, and a 12.08% bandwidth improvement when the fast NVRAM was used as the main memory, compared to the baseline NVRAM.
Note that in this request pattern there is only one NVRAM (read) access. These results suggest that if an access pattern requires more interaction with the NVRAM (e.g., where the DRAM cache miss ratio is higher, or when there are more dirty-line evictions), the bandwidth improvement observed by the LLC with a faster NVRAM is higher for a system with a DRAM cache. Moreover, these results are based on a total buffer size of 512 entries, as our further investigations showed that none of the devices gains more bandwidth from a larger buffer size. Thus, even though slowing down the NVRAM device hurts performance, it does not affect the microarchitectural details of the controller.

## IX Case Study 4: Evaluating DRAM Caches for Real Applications

In this case study, we performed an evaluation of DRAM caches for real-world applications, i.e., NPB [19] and GAPBS [20]. The DRAM caches are expected to perform similarly to a DRAM main memory if the working set of the workload fits in the cache. Therefore, a case of particular interest for us is to evaluate the performance of the DRAM cache-based system when the workload does not fit in the DRAM cache. To accomplish this goal, we model a scaled-down system with a 64MB DRAM cache and run the NPB and GAPBS workloads for one second of simulation time. We run all workloads in three different configurations. The first two configurations (_NVRAM_, _DRAM_) model a system without a DRAM cache and with the main memory as NVRAM or DRAM. The third configuration (_DCache_64MB_) uses a 64MB DRAM cache and NVRAM as main memory. Figure 11a shows million instructions per second (MIPS) values for NPB in the three configurations. In most cases, _DCache_64MB_ performs the worst, with the most prominent performance degradation for _lu.C_ and _bt.C_. The only exception is _is.C_, where _DCache_64MB_ performs better than _NVRAM_. The performance of _DCache_64MB_ correlates with the DRAM cache misses per thousand instructions (MPKI) values shown in Figure 11b. For example, _is.C_ shows the smallest and _lu.C_ shows the largest MPKI values. Figure 12a shows million instructions per second (MIPS) values for GAPBS in the previously mentioned three configurations. Figure 12b shows the associated MPKI values. The graph workloads on a DRAM cache perform mostly similarly to NVRAM alone in our simulation runs. The only exception is _bfs_, which shows a significantly lower MIPS value than _NVRAM_. The DRAM cache MPKI value for _bfs_ alone does not explain the reason for this behavior. _bfs_ has the highest fraction of writes relative to reads among the requests that hit the DRAM cache (29% more writes than reads in the case of _bfs_, in contrast to 36% fewer writes than reads for the rest of the workloads). Since a write hit also leads to access amplification (a tag read before the write hit), the impact of this amplification is then seen in the performance degradation with a DRAM cache. We conclude from this observation that the extra DRAM accesses (for tag reads) can also impact a workload's performance on a DRAM cache.

Fig. 11: NAS Parallel Benchmarks on DRAM Cache

Fig. 12: GAPBS on DRAM Cache

Fig. 10: Observed bandwidth by the LLC for read-only (RO) 100% miss ratio and write-only (WO) 100% miss ratio.

## X Case Study 5: NVRAM Write Wear Leveling Effect

### _Background and Methodology_

Non-volatile storage and memory devices, such as Phase Change Memories (PCM) used as NVRAM and flash memory, are known to have a limited write endurance.
Write endurance is defined as the number of writes to a block in the device before it becomes unreliable. One of the techniques used for such devices to prolong their lifetime is wear leveling. NVRAM wear-leveling techniques usually try to evenly distribute wear by moving data from a heavily written location to a less worn-out location. Wang et al. [3] measured the frequency and latency overhead of data migration due to wear leveling in NVRAMs through probing and profiling. They reported a long tail latency of 60 \(\mu\)s for almost every 14,000 writes to each 256B region of NVRAM. Wear leveling can affect the performance of DRAM caches. This effect is more noticeable while running write-intensive workloads whose memory footprint is larger than the DRAM cache capacity. In such situations, many dirty-line evictions occur that must be written back to NVRAM; thus, an overall increase in NVRAM writes can be expected. As a result, frequent data migration for wear leveling (with a long tail latency) will be performed by the NVRAM interface, and a performance degradation can be expected in this case. In this section we investigate the effect of this long latency on DRAM cache performance. We extended gem5's NVRAM interface model so that for every 14,000 write accesses the interface adds 60 \(\mu\)s of extra latency, delaying the service time of the next request. The access pattern was set to all writes with a 100% miss ratio and a footprint 8x the DRAM cache size. This increases the number of dirty-line write-backs from the DRAM cache to the NVRAM, pressuring the NVRAM write buffer so that we can see the effect of the wear-leveling delay on overall system performance.

### _Results and Discussion_

Table II compares the overall bandwidth seen by the LLC in two cases: with wear leveling and without wear leveling. Without wear leveling the peak bandwidth is 1.92 GB/s, while it drops to 1.77 GB/s when wear leveling is activated. These results show a 7.8% performance degradation that comes directly from the wear-leveling overhead. Table II also shows the average queuing latency measured for the NVRAM write buffer. This latency is 42.84 \(\mu\)s for NVRAM with wear leveling and 39.71 \(\mu\)s for NVRAM without wear leveling. Thus, the 60 \(\mu\)s latency modeled for data migration during wear leveling causes a 7.8% latency overhead on the NVRAM write buffer as well. This 7.8% overhead is larger than expected given the rarity of wear-leveling events, showing that these rare events have an outsized impact when the system is configured with a DRAM cache.

## XI Related work

Most of the prior research work on DRAM cache organization does not provide the detailed methodologies required to model a DRAM cache in simulation. In terms of modeling a DRAM cache, Gulur et al. [21] presented an analytical performance model of DRAM caches for in-SRAM and in-DRAM tag storage organizations. Their model considers parameters such as the DRAM cache's and off-chip memory's timing values, cache block size, tag cache/predictor hit rate, and workload characteristics, to estimate the average miss penalty and the bandwidth seen by the last-level on-chip SRAM cache (LLSC). Their work is based on a prior model called ANATOMY [22], a trace-based analytical model that estimates key workload characteristics such as arrival rate, row-buffer hit rate, and request spread, which are used as inputs to a network-like queuing model to statistically estimate memory performance.
Even though this work accounts for the DRAM timing constraints in cache management, it is agnostic to the microarchitectural and timing constraints of the main memory technologies cooperating with the DRAM cache, and it still leaves a gap for full-system DRAM cache simulation for detailed architectural analysis. Wang et al. [3] presented VANS, a cycle-level NVRAM simulator which models the microarchitectural details of Optane DIMMs. However, their simulation model does not support using an NVRAM device as the backing memory of a DRAM cache. VANS can be integrated with other simulators like gem5. In our work, we rely on gem5's default NVM model, but we plan to use the detailed VANS NVRAM model in the future.

## XII Conclusion

In this work, we described our detailed cycle-level DRAM cache simulation model implemented and validated in gem5, which enables design-space exploration in DRAM cache research. This new tool can be used to explore new unified DRAM cache and memory controller designs as part of an agile, full-system simulation platform. The tool we presented in this work can enable much interesting research in the domain of heterogeneous memory systems. For instance, using _UDCC_ we can address questions such as what efficient data movement or data placement looks like in systems composed of fast and slow memories. Since our tool provides a full-system simulation platform, it can also support hardware-software co-design studies that explore the design space of heterogeneous memories. Moreover, gem5 is highly modular and allows composing a simulated system from a variety of components. _UDCC_ can enable experimenting with different and new memory device models whose features might make them a better fit as a cache for a backing memory.

\begin{table} \begin{tabular}{|l|l|l|} \hline **Parameter** & **With wear-leveling** & **Without wear-leveling** \\ \hline **Bandwidth** & 1.77 GB/s & 1.92 GB/s \\ \hline **NVM write latency** & 42.84 \(\mu\)s & 39.71 \(\mu\)s \\ \hline \end{tabular} \end{table} TABLE II: Maximum write bandwidth (and average NVM write latency) achieved with and without wear-leveling
2308.11543
Representational differences in how students compare measurements
Measurement uncertainty plays a critical role in the process of experimental physics. It is useful to be able to assess student proficiency around the topic to iteratively improve instruction and student learning. For the topic of measurement uncertainty, we developed an assessment tool called the Survey of Physics Reasoning on Uncertainty Concepts in Experiments (SPRUCE), which aims to assess students' knowledge, and use of, a variety of concepts related to measurement uncertainty. This assessment includes two isomorphic questions focused on comparing two measurements with uncertainty. One is presented numerically and the other pictorially. Despite the questions probing identical concepts, students answer them in different ways, indicating that they rely on distinct modes of representation to make sense of measurement uncertainty and comparisons. Specifically, students score much higher on the pictorially represented item, which suggests possible instructional changes to leverage students' use of representations while working with concepts of measurement uncertainty.
Gayle Geschwind, Michael Vignal, H. J. Lewandowski
2023-08-22T16:20:59Z
http://arxiv.org/abs/2308.11543v1
# Representational differences in how students compare measurements ###### Abstract Measurement uncertainty plays a critical role in the process of experimental physics. It is useful to be able to assess student proficiency around the topic to iteratively improve instruction and student learning. For the topic of measurement uncertainty, we developed an assessment tool called the Survey of Physics Reasoning on Uncertainty Concepts in Experiments (SPRUCE), which aims to assess students' knowledge, and use of, a variety of concepts related to measurement uncertainty. This assessment includes two isomorphic questions focused on comparing two measurements with uncertainty. One is presented numerically and the other pictorially. Despite the questions probing identical concepts, students answer them in different ways, indicating that they rely on distinct modes of representation to make sense of measurement uncertainty and comparisons. Specifically, students score much higher on the pictorially represented item, which suggests possible instructional changes to leverage students' use of representations while working with concepts of measurement uncertainty. Introduction & Background All measured quantities have associated uncertainties, making measurement uncertainty a crucial aspect of experimental physics. Using measurement uncertainty correctly is essential for interpreting measurements, presenting results, and drawing reliable conclusions based on those results. The Effective Practices for Physics Programs (EP3) Guide [1] also emphasizes the significance of learning measurement uncertainty techniques as taught in physics laboratories. Despite its critical role, students frequently struggle with concepts and practices surrounding measurement uncertainty, including propagation of error, comparison of measurements, calculating standard deviations and standard errors, and taking several measurements to get a distribution of results, even after taking a course emphasizing these areas [2, 3, 4, 5, 6, 7]. As part of efforts to improve student learning of measurement uncertainty, we have developed a new research-based assessment instrument (RBAI) called the Survey of Physics Reasoning on Uncertainty Concepts in Experiments (SPRUCE) [8, 9]. SPRUCE is an online assessment intended to be utilized in a pre-post format allowing instructors to measure the impact of a course on students' proficiency with concepts and practices of measurement uncertainty. We developed SPRUCE using the framework of Evidence-Centered Design (ECD) [10], a robust method of creating and validating an RBAI. Although validation is an ongoing project (with a future paper in progress), SPRUCE still offers a wide variety of insights into how students handle measurement uncertainty. Its design provides instructors with their students' progress along 14 dimensions referred to as Assessment Objectives (AOs) [11] after one term of a laboratory class. AOs are _"concise, specific articulations of measurable desired student performances regarding concepts and/or practices targeted by the assessment [11]."_ AOs are similar to course learning goals and are essentially the constructs the assessment aims to measure. We developed the SPRUCE AOs with input from introductory laboratory instructors to determine which aspects of measurement uncertainty they find important and want their students to learn in their courses [8]. These AOs then aided in writing the SPRUCE assessment items: each item on SPRUCE addresses at least one of these objectives. 
In this way, we focused the scope of SPRUCE to topics instructors frequently deem important to their introductory laboratory courses. Here, we examine one particular SPRUCE AO: _Determine if two measurements (with uncertainty) agree with each other_. SPRUCE has two isomorphic questions for this objective. First, the assessment presents students with numerical measurements and asks about agreement between these measurements. Then, later in the assessment, with several questions in between, a similar question appears with the data represented pictorially, as symbols with error bars. Students are not explicitly informed about the relationship between these two items. This allows us to probe how students are able to compare measurements when presented with the same data in two different representations. Existing literature has explored the use of multiple representations while students problem solve [12, 13, 14, 15, 16]. For example, Kohl et al. found that students frequently view a mathematical problem and a pictorial problem as 'opposites,' where students consider pictorial problems as more aligned with "concepts," which are frequently treated distinctly from numerical problems. Further, they found statistically significant differences in performance based on different representations of isomorphic problems on homework and quizzes. Students tended to perform worse on problems in a mathematical or numerical format than on problems in other formats (e.g., pictorial, verbal, or graphical) [12]. The work presented here aims to identify whether student performance in comparing measurements similarly depends on representation. To do this, we will answer the following research questions.

* Do students respond differently to questions about comparing measurements when presented with different representations?
* How do students reason about comparing measurements when presented with different representations?

## II Methodology

We use a mixed methods approach, as the data collection and the analysis involve qualitative and quantitative components. To study students' handling of measurement uncertainty, we administered SPRUCE in a pre-post online format during the Fall 2022 semester in 12 courses at eight institutions (see Table 1). We received 670 valid post-instruction responses after we removed responses from students who did not consent to have their data used for research, did not correctly answer the filter question, or did not answer both items of interest. We also conducted interviews during the Fall 2022 semester. These interviews aimed to determine whether students interpreted all of the items on SPRUCE as intended, as well as to probe student reasoning for each answer option on the assessment. Students were recruited from seven courses at four institutions (2-year, Master's, and PhD granting) already participating in the administration of SPRUCE during this semester. Each of the 27 interviews conducted lasted approximately one hour, and students were compensated for their time. Interviewers (two of the authors) observed as students completed SPRUCE and inquired about students' reasoning for each answer selected, as well as about why they did not select certain answer options. The interviews were audio/video recorded for future reference.
\begin{table} \begin{tabular}{c c c} \hline \hline Number of Institutions & Institution Type & Number of post responses \\ \hline 1 & 2 Year & 7 \\ 1 & 4 Year & 7 \\ 1 & Master’s & 39 \\ 5 & PhD & 617 \\ \hline \hline \end{tabular} \end{table} Table 1: Institutions and student responses in the dataset after removing student answers for incorrect filter/nonconsent to research

Analysis of these interviews consisted of taking notes during the interviews and transcribing student quotes as needed. For the analysis, we focused on the responses to two isomorphic multiple-response items, examining both the difficulty [17, 18] of these items and student reasoning for their answers to both. The first item, as shown in the upper half of Fig. 1, presents students with their 'own' numerical data (with uncertainty) for a measurement of a spring constant; they are then asked to select all answer choices of numerical data (means with uncertainties) that agree with their measurement. The second item presents these similar data in pictorial form, as shown in the lower half of Fig. 1. For brevity, we will refer to the numerically represented item as NRI and the pictorially represented item as PRI for the remainder of this paper. Students receive credit on these multiple-response items by answering with the combination 'ABCD' or 'ABCDF,' based on expert responses. The uncertainties in both items represent the standard error; therefore, overlap or near overlap of the error bars is required for agreement. No other answer combinations earn credit, and no partial credit is awarded for these items. Note that we changed the order of answer options for the PRI for this paper to make discussion of the items easier.

## III Results & Discussion

### Overall difficulty scores

While laboratory instruction commonly focuses on measurement comparison [8], low scores on both of these items at the end of the term indicate persistent student difficulties in handling comparison with uncertainties. Students score an average of (25 \(\pm\) 3)% on the NRI and an average of (40 \(\pm\) 4)% on the PRI on the post-test, with the error indicating the 95% confidence interval. These scores indicate that, while not many students answered these items correctly, students answered the PRI correctly more often. We conducted a Mann-Whitney U test (a nonparametric test for independent measures) [19] to determine if this represents a statistically significant difference, and found a p-value for these items of \(p=2.1\times 10^{-8}\), indicating a statistically significant difference in student performance on these items. Additionally, we calculated the effect size to compare these two items using Cohen's \(d\) [20, 21], finding \(d=0.31\pm 0.05\), a moderate effect size. We also calculated the Pearson coefficient to determine the correlation between the two items. The Pearson coefficient varies between \(r=-1\) and \(r=1\), where a more positive coefficient indicates a stronger positive correlation [17]. Anything above \(r\approx 0.30\) indicates a fairly significant positive correlation. For these items, we find \(r=0.45\pm 0.04\), which shows a fairly significant correlation in that if a student correctly answered one item, they are more likely to have correctly answered the other. However, the correlation is not a perfect \(r=1\): many students correctly answer only one of these items. The number of students who answered each question correctly is presented in Table 2. (A sketch of these statistical calculations is shown below.)
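For readers who want to reproduce this style of analysis, the following is a minimal sketch of the three statistics above using standard Python tooling. The score arrays here are randomly generated placeholders with roughly the reported item difficulties, not the actual SPRUCE data.

```python
# Sketch of the Mann-Whitney U test, Cohen's d, and Pearson r on per-student
# 0/1 scores for the two items. scipy provides the first and third; Cohen's d
# is computed from the pooled standard deviation.
import numpy as np
from scipy.stats import mannwhitneyu, pearsonr

rng = np.random.default_rng(0)
nri = rng.binomial(1, 0.25, size=670)  # ~25% correct on the numeric item
pri = rng.binomial(1, 0.40, size=670)  # ~40% correct on the pictorial item

u_stat, p_value = mannwhitneyu(nri, pri)  # test for a difference in scores

pooled_sd = np.sqrt((nri.var(ddof=1) + pri.var(ddof=1)) / 2)
cohens_d = (pri.mean() - nri.mean()) / pooled_sd  # effect size

r, _ = pearsonr(nri, pri)  # correlation between the two items
print(p_value, cohens_d, r)
```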
Only about half of the students who correctly answered the PRI also correctly answered the NRI, but about 75% of the students who correctly answered the NRI also correctly answered the PRI. This suggests that students who are able to reason through the numerically presented data seem better equipped to handle the pictorially presented data, but the reverse is not true on average.

Figure 1: Two Isomorphic Items on SPRUCE. These items probe student understanding of measurement comparisons with uncertainty by presenting the same data in two different representations - a numerically represented item (NRI) and a pictorially represented item (PRI). The students first encounter the NRI and then, after answering several unrelated questions, they encounter the PRI. Note that the answer options on the PRI are in a different order when presented to students (DAEBFCG) than shown here; we present them in the same order as the answer options for the NRI in this paper for ease of understanding.

We turn to the qualitative interview data to help us understand these results. During interviews, some students described mentally switching from a numeric to a pictorial representation easily and using this skill to solve the numeric item: _I just looked at the values and saw it - like I kind of picture if they have that little bar with their error bars to see if they overlap._ This student essentially converted the numeric data into pictorial data in their mind and then used that representation to reason about the comparisons. Using this skill of mentally changing representations, they were able to answer both items correctly. This finding is similar to ones from Kohl et al. [12] and Weliweriya et al. [22], in which students were often able to switch between different representations when forming mental models of data.

### Individual answer analysis

In addition to comparing how well students scored on each question, we want to look at which answer options students chose to gain more insight into student reasoning. We determined how many students selected each of the seven answer options (due to the multiple-response nature of the question, students could select multiple options; hence we do not expect these numbers to add up to 100%). Table 3 shows these data with 95% confidence intervals. For both the PRI and the NRI, students most commonly select B, in which the means of both measurements lie within each other's error bars. The second most common choices were A and D, in which the error bars of only one of the measurements overlap with the mean of the other measurement. This shows that, frequently, students require one of the means to be within the error bars of another measurement, as opposed to accepting error bar overlap as agreement between two measurements with uncertainty. Again from Table 3, many more students selected answer option E for the NRI than the PRI; this answer option is the only one where the two measurements definitely do not agree. Students identify this disagreement more frequently when presented with the data pictorially, where it is clear that the error bars are very far from one another, rather than when presented with this same data numerically. During interviews, one student selected all answer options (aside from "None of the above") on the NRI, and said: _Honestly I would just say all of them... that's still at the end of the day what they got...
We don't have enough data to say like 'no yours are all wrong because they don't exactly match ours' because there are a lot of factors that could have altered their numbers and their uncertainty. I know that's a very idealized way of thinking about science._ However, this student provided expert-like reasoning regarding overlap of the full range of each measurement when correctly answering the PRI, showing a clear difference in thinking about measurement comparison between the two representations. Knowing the most commonly selected answer options allows us to delve further into common incorrect answer _combinations_ and the reasoning for these choices. Figure 2 shows a heat map of the most common answer combinations to each of the two questions (representing 409 of the 670 total student answers). The diagonal represents students who chose the same answer options for both the NRI and the PRI; the off-diagonal elements are students who selected different answers for each of these items.

\begin{table} \begin{tabular}{l l l l} & Only NRI & Only PRI & Both Correct \\ \hline Number of Students & 38 & 134 & 131 \\ Percent of Students & 6 \(\pm\) 1 & 20 \(\pm\) 3 & 20 \(\pm\) 3 \\ \hline \end{tabular} \end{table} Table 2: Number of students who answered the NRI, PRI, or both correctly [N = 670]; error shown as 95% confidence interval

Figure 2: Heat map showing the most common answer combinations for the NRI and PRI [N = 409]. Answer combinations ABCD and ABCDF were marked as correct; no other combinations earned credit. Diagonal elements indicate students who answered identically to both the NRI and PRI, and off-diagonal elements indicate students who answered the items differently.

One of the more common incorrect combinations on both items is 'AB' [NRI: 54/670 = (\(8\pm\) 2)%, PRI: 79/670 = (\(12\pm\) 2)%]. This incorrect response aligns with students who consider their measurement more important in some sense, and therefore believe that the other groups' mean must be within their own error bars in order for the measurements to agree with one another, as compared to the other way around (requiring their mean to be within the other measurement's error bars), which would be indicated by the selection of 'BD' [NRI: 45/670 = (7\(\pm\) 2)%, PRI: 63/670 = (\(9\pm\) 2)%]. For example, one student who selected only 'AB' on the numeric item said: _For the other four groups, the uncertainties for their values did not put them in the same range as my values with its uncertainty so I don't believe they agree with my value._ In other words, when comparing numeric measurements with uncertainty, they placed more weight on their own measurement -- in order for agreement to occur, the uncertainty of the other measurement had to encompass their own mean. When solving this problem, they only added and subtracted their uncertainty from their own value and then selected the two answers whose means fell within that range; they ignored the uncertainties in the measurements in the answer options. However, we note that when answering the PRI, this same student selected a correct response of 'ABCD' and provided expert-like reasoning. Thus, their reasoning changed with representation. This theme of placing more importance on their own measurements frequently appeared in student interviews.
Another common incorrect answer for both items is ABD [NRI: 57/670 = (9 \(\pm\) 2)%; PRI: 60/670 = (10\(\pm\)2)%]. In this line of incorrect reasoning, students did not consider answer option C, in which just the error bars overlap, to be correct: they required at least one of the means to be within the error bars of the other measurement in order for agreement between measurements to occur. Student reasoning from interviews supports this interpretation. For example, one student interviewed selected 'ABD' on the PRI because: _Not only do a large portion of their error bars overlap, it also contains the measurement itself,_ when referring to answer options B and D. They then chose A because: _I would include [A] because now that measurement is included in mine, but [C] I am not sure about because... I don't necessarily know for sure they agree._ In this example, the student did not consider answer option C to show agreement despite the error bar overlap - instead, they placed additional emphasis on requiring the mean to be included within the uncertainty range of at least one of the other measurements. Figure 2 also shows that very few students chose 'ABCDF' for the PRI [4/670, or (0.60\(\pm\) 0.06)%], but many more students chose this for the NRI [the heat map shows 32 of the 35/670 = (5 \(\pm\) 2)% students who chose this option]. In the PRI, answer option F is one in which the error bars do not overlap but are very close to each other, showing that agreement might be possible; hence, selection of F was not considered when scoring this item - this option's correctness largely depends on which guidelines instructors teach students. Additionally, interview data showed mixed reasoning for students who selected this option.

## IV Conclusions & Takeaways

Overall, students performed better on the PRI than the NRI, showing a more expert-like understanding of measurement comparison when presented with a pictorial format. However, students did not perform as well as desired on either item, indicating room for improvement in teaching this important skill to students. Only about 40% of students correctly identified whether measurements with uncertainties agree with one another in a pictorial format, and this drops to only about 25% when the data are presented numerically instead. Since many scientific papers provide measurements as numbers with uncertainties, this is a valuable skill students will need in their future scientific careers to interpret experimental results. It is also vital for students to be able to work with many representations of data and convert between them. This study suggests that having students work with multiple representations, and convert between them, could be beneficial for developing expertise with measurement uncertainty and comparing measurements. In future work, we will examine pre-post gains across this objective by examining scores prior to, and after, instruction in introductory laboratory courses. Additionally, we will explore other research directions using SPRUCE data, such as students' ideas around accuracy and precision and their ability to propagate errors to obtain an uncertainty in a calculated quantity. Finally, we will examine the alignment of student performance on SPRUCE with a variety of variables, including race, gender, institution type, and instructional methods.

###### Acknowledgements.

This work is supported by NSF DUE 1914840, DUE 1913698, and PHY 1734006. The authors wish to thank Marcos D.
Caballero, Rachel Henderson, and Benjamin Pollard for their help in developing SPRUCE, as well as the students and instructors who participated in this study. \begin{table} \begin{tabular}{l l|l l l} \hline \hline Numeric Representation (NRI) & Percent of Students & \multicolumn{2}{l}{Pictorial Representation (PRI)} & Percent of Students \\ & N = 670 & & N = 670 \\ \hline **A.** & \(3.71\pm 0.06\) & 58 \(\pm\) 4 & **A.** & 72 \(\pm\) 3 \\ **B.** & \(3.71\pm 0.17\) & 66 \(\pm\) 4 & **B.** & 79 \(\pm\) 3 \\ **C.** & \(3.76\pm 0.06\) & 45 \(\pm\) 4 & **C.** & 46 \(\pm\) 4 \\ **D.** & \(3.76\pm 0.17\) & 55 \(\pm\) 4 & **D.** & 69 \(\pm\) 4 \\ **E.** & \(3.91\pm 0.06\) & 10 \(\pm\) 2 & **E.** & 1.3 \(\pm\) 0.9 \\ **F.** & \(3.91\pm 0.17\) & 15 \(\pm\) 3 & **F.** & 7 \(\pm\) 2 \\ **G.** & None of these agree with my data & 6 \(\pm\) 2 & **G.** None of these agree with my data & 1.5 \(\pm\) 0.9 \\ \hline \hline \end{tabular} \end{table} TABLE III: Percentage of students who selected each answer option with 95% confidence interval
2310.04462
Management strategies for hydropower plants a simple dynamic programming approach
We use a dynamic programming approach to construct management strategies for a hydropower plant with a dam and a continuously adjustable unit. Along the way, we estimate unknown variables via simple models using historical data and forecasts. Our suggested scheme achieves on average 97.1 % of the theoretical maximum using small computational effort. We also apply our scheme to a Run-of-River hydropower plant and compare the strategies and results to the much more involved PDE-based optimal switching method studied earlier by the authors in (Optimization and Engineering (2021): 1-25); this comparison shows that our simple approach may be preferable if the underlying data is sufficiently rich.
Marcus Olofsson, Niklas L. P. Lundström
2023-10-05T11:01:14Z
http://arxiv.org/abs/2310.04462v1
# Management strategies for hydropower plants ###### Abstract We use a dynamic programming approach to construct management strategies for a hydropower plant with a dam and a continuously adjustable unit. Along the way, we estimate unknown variables via simple models using historical data and forecasts. Our suggested scheme achieves on average 97.1 % of the theoretical maximum using small computational effort. We also apply our scheme to a Run-of-River hydropower plant and compare the strategies and results to the much more involved PDE-based optimal switching method studied in [8]; this comparison shows that our simple approach may be preferable if the underlying data is sufficiently rich. keywords: production planning; optimization; flow model; dynamic programming ## 1 Introduction There is currently a need for improving management of hydropower. As a green alternative for balancing volatile energy sources, small hydropower plants are increasingly important for a sustainable and stable electricity grid, and effective management strategies are therefore necessary to increase their economic appeal. As an example, [3] conclude that small hydropower is one of the most important impetuses for the development of China's power industry, and that the dispatching of their many small hydropower plants is lagging behind the development of the power grid and other power sources. Managing hydropower is however a nontrivial task, even in the case of only a single power plant with one reservoir. When running a unit (a turbine connected to a generator), the reservoir is naturally drained of water and, if the inflow is insufficient, the head is lowered, leading to less electricity generated per \(m^{3}/s\) of water used. Intuitively, one would therefore argue to keep the reservoir as close to full as possible whilst minimizing the risk that it overflows, losing water without producing any electricity. However, as the cost of moving between production states is often non-negligible, it might actually be optimal to allow some spillage of water to avoid paying this cost. Non-negligible costs for switching production state come from the fact that starting and stopping units induces wear and tear on the machines and may also require intervention from personnel. Each start and stop also involves a small risk, e.g., the major breakdown in the Akkats hydropower plant (Lule river, Sweden) in 2002 was caused by a unit being stopped too quickly, resulting in rushing water destroying the foundation of both the turbine and the generator [11, 12]. It is a classical technique in optimization to use dynamic programming and backwards induction to determine optimal decision sequences, and this holds in hydropower management as well, see, e.g., [1, 14, 15, 6, 5, 4] and the references therein. The key idea is to answer the question: "What is the best decision at this point, assuming that all my future actions will be optimal?" This method is extremely useful when all parameters are deterministic or to find the optimal decision _in hindsight_, when the outcome of any random event is determined. In reality, however, systems typically involve randomness and there is not enough information available to determine the best decision without knowledge of the future.
In this paper we overcome this lack of information by replacing the unknown random variables with estimates based on appropriate models, historical data, and forecasts, and apply a dynamic programming technique to produce management strategies for a hydropower plant with a dam and a continuously adjustable unit. We use our estimates to construct _approximate_ optimal strategies and base our decisions on these approximations. When paired with short-term forecasts, this turns out to be a very efficient way that achieves close to optimal strategies with small modelling and computational effort. This method avoids deep mathematical theory that might be out of reach to practitioners and completely circumvents the need for simulations and numerical solutions of differential equations as used by the authors in [8]. There is an extensive literature on how to improve and rationalize dynamic programming algorithms for managing hydropower, see, e.g., [15, 6] and the references therein, as well as other optimization techniques and models [2, 5, 8, 13, 7]. We contribute to this literature by showing how rudimentary optimization techniques together with simple mathematical models for river flow can yield very good results; the suggested production scheme performs remarkably well, averaging 97.1 % of the theoretical maximum over the years 2015-2022 when studying management of an example (fictitious) hydropower plant using real flow data from the northern parts of Sweden. The rest of the paper is organized as follows. Section 2 outlines our suggested optimization scheme, including the dynamic programming approach, river flow model, and the modeling of our power plant. The results from testing our scheme are presented in Section 3, and in Section 4 we apply our scheme to a Run-of-River hydropower plant to compare the method with the more involved PDE-based method of [8]. We end in Section 5 with a discussion of our findings and suggestions for further research. ## 2 Problem setup and method The objective in our problem is to manage a production facility to maximize its profit. More precisely, the manager must continuously choose between different modes of production, each with different profitability depending on some random dynamic factors \(X_{t}\). However, each change in production induces a cost and these costs are deducted from the total profit. This means that the optimal strategy is **not** to always switch to the state with the momentarily highest payoff. In the context of hydropower, this amounts to maximizing the profit from the electricity generated over a specific period \([t,T]\). More precisely, we want to maximize \[\int_{t}^{T}\phi_{\mu_{s}}(s,Q_{s},H_{s},P_{s})\,ds-\sum_{t\leq\tau_{i}\leq T}c_{\xi_{i-1},\xi_{i}}(\tau_{i},Q_{\tau_{i}},H_{\tau_{i}},P_{\tau_{i}}), \tag{2.1}\] by finding an optimal sequence of (random) time points \(\tau_{i}\) at which we move from production mode \(\xi_{i-1}\) to \(\xi_{i}\). It is convenient to associate to each such sequence a (random) function \(\mu_{s}\), indicating the current state of production at time \(s\), and we will move between these notations throughout the text without further notice. (In fact, we use both notations already in (2.1).) In the display above \[\phi_{\mu_{s}}(s,Q_{s},H_{s},P_{s})\] is the running payoff of the plant at time \(s\) when in state \(\mu_{s}\), with water flow \(Q_{s}\), reservoir head \(H_{s}\), and electricity spot price \(P_{s}\).
The variables \((q,h,p)\) indicate the current value of these stochastic processes, i.e., \(Q_{t}=q,H_{t}=h\), and \(P_{t}=p\). The cost of moving from production mode \(i\) to production mode \(j\) is denoted \(c_{ij}\). These costs occur due to, e.g., wear and tear of the components or the risk of failure when changing production state. ### Dynamic programming for hydropower production The strategies constructed in this paper are based on dynamic programming paired with historical estimation and forecasts of water flows. To put this method of optimization into the current setting, label the different modes of production \(i\in\{1,2,\ldots,m\}\). Let \(V_{i}(t,x)\) be the optimal profit at time \(t\) given that we are currently in production mode \(i\) and that \(X_{t}^{x,\alpha}:=(Q_{t}^{q},H_{t}^{h,\alpha},P_{t}^{p})=(q,h,p)=:x\). The superscript \(\alpha\) is used to stress that the process \(X_{t}\) is controlled in the sense that the reservoir level \(H\) depends on the amount of water used for production. We will typically drop this superscript in favour of an easier notation. When in state \(i\) at time \(t\), the total payoff from staying in state \(i\) until \(t+\Delta t\) is \[\phi_{i}(t,x)\cdot\Delta t+V_{i}(t+\Delta t,X_{t+\Delta t}^{x})\] whereas switching to state \(j\) gives total payoff \[\phi_{j}(t,x)\cdot\Delta t+V_{j}(t+\Delta t,X_{t+\Delta t}^{x})-c_{ij}(t,x). \tag{2.2}\] Therefore, the optimal decision is to choose whichever action maximizes this payoff, i.e., to maximize (2.2) over \(j\in\{1,2,\ldots,m\}\). With the value of acting optimally in the future \(V_{i}(t+\Delta t,x)\), \(i\in\{1,2,\ldots,m\}\), given, the optimal value \(V_{i}(t,x)\) must therefore satisfy \[V_{i}(t,x)=\max_{j\in\{1,2,\ldots,m\}}\phi_{j}(t,q,h,p)\cdot\,\Delta t+V_{j}(t+\Delta t,X_{t+\Delta t}^{x})-c_{ij}(t,x). \tag{2.3}\] If the terminal value \(V_{i}(T,x)\) is known, we can thus work recursively backwards to find \(V_{i}(t,x)\) for all \((t,x)\) and with \(\{V_{1},\ldots,V_{m}\}\) known, the optimal decision \(j^{*}\) in (2.2) is given by \[j^{*}=\operatorname*{arg\,max}_{j\in\{1,2,\ldots,m\}}\phi_{j}(t,x)\cdot\Delta t+V_{j}(t+\Delta t,X_{t+\Delta t}^{x})-c_{ij}(t,x), \tag{2.4}\] where we assume \(c_{ii}\equiv 0\). In applications, the assumption that \(X_{t}\) is deterministic typically fails and the value of \(X_{t+\Delta t}\) is not known at time \(t\). Indeed, in the application considered here, the flow of water \(Q\) and electricity price \(P\) are stochastic. Therefore, we do not have sufficient information to determine the optimal choice in (2.2) or the value in (2.3). To remedy this lack of information we create _approximately_ optimal strategies based on historical estimates using (2.4). To be more precise, let \(\bar{X}=(\bar{Q},\bar{H},\bar{P})\) denote a known deterministic estimate of the underlying processes, possibly respecting forecasts. At each time \(t\), we now proceed as outlined above, with the difference that we replace the unknown stochastic variable \(X_{t+\Delta t}^{x}\) with its _deterministic_ counterpart \(\bar{X}_{t+\Delta t}^{x}\). Given a terminal value \(\{\bar{V}_{1}(T,x),\ldots,\bar{V}_{m}(T,x)\}\) we can thus recursively construct an _approximate_ value function \(\{\bar{V}_{1}(t,x),\ldots,\bar{V}_{m}(t,x)\}\) by mimicking (2.3), i.e., \[\bar{V}_{i}(t,x)=\max_{j\in\{1,2,\ldots,m\}}\phi_{j}(t,x)\cdot\Delta t+\bar{V}_{j}(t+\Delta t,\bar{X}_{t+\Delta t}^{x})-c_{ij}(t,x),\qquad\bar{V}_{i}(T,x)=g_{i}(x).
\tag{2.5}\] The function \(\bar{V}(t,x)\) does not coincide with \(V(t,x)\) in our original problem as it is based merely on estimates of \(X\). However, one suspects that moving from state \(i\) to state \(j\) whenever \[\bar{V}_{i}(t,X_{t})=\bar{V}_{j}(t,X_{t})-c_{ij}(t,X_{t})\] should be close to optimal in the original optimization problem, regardless of the actual value of the function \(\bar{V}\). Indeed, we expect the stochastic process \(X_{t}\) to behave in a similar fashion to \(\bar{X}\), so the optimal strategy should not be too different either. In particular, if the process \(\bar{X}\) includes short-term forecasts, the decision in the short run should be close to optimal since the estimate of the nearby future is then very good, and the long term effects should be respected by the estimate \(\bar{X}\). ### River flow model We will denote the flow in the river at time \(s\) by \(Q(s)\) and we let \(q(s)\) be an estimate of the flow at that time. (We will take the latter as a moving average of the flow from several preceding years.) Note that \(Q(\cdot)\) is known only up to the present time \(t\) whereas the historical estimate \(q(\cdot)\) is known for all \(s\). In case no forecast is available, we assume that the actual flow \(Q(s)\), \(s>t\), reverts towards \(q\) so that the difference, \(\lambda(s)=Q(s)-q(s)\), satisfies \[\frac{d\lambda(s)}{ds}=-\kappa\lambda(s)\quad\text{i.e.}\quad\ \lambda(s)=\lambda(t)e^{-\kappa(s-t)}\quad\text{for}\quad s\geq t.\] Thus \[Q(s)=(Q(t)-q(t))e^{-\kappa(s-t)}+q(s)\quad\text{for}\quad s\geq t.\] This means that an initial difference from the historical flow \(q(t)\) vanishes exponentially. Writing this in terms of the half-life \(T_{1/2}\) we have \[Q(s)=(Q(t)-q(t))2^{-\frac{s-t}{T_{1/2}}}+q(s)\quad\text{for}\quad s\geq t. \tag{2.6}\] The flow data used in our numerical examination is from Sävarån in the northern parts of Sweden and is gathered from the Swedish Meteorological and Hydrological Institute.1 The average flow \(q(t)\) is a 7-day running average based on data from 1980-2014 while the data from 2015-2022 is used for testing our optimization method in Section 3. The mean flow \(q(\cdot)\) together with the flow of the benchmarking years are shown in Figure 1. We set the half-life, without further investigation, to \(T_{1/2}=10\) days and investigate the sensitivity of our results due to this choice later on, see Remark 1 in Section 3. A direction field of our simple flow model in (2.6) is shown in Figure 2. Footnote 1: Flow data was downloaded from [http://vattenwebb.smhi.se/station](http://vattenwebb.smhi.se/station) (station number 2236) on September 1, 2023 We extend the flow model with a forecast by replacing the first \(M\) days of \(Q(s)\) with the corresponding forecast. After these days, the flow \(Q\) is assumed to return to the mean flow \(q\) exponentially as outlined above. To avoid forecast modeling, which is not the topic of the current paper, we simply assume the forecasts are perfect and use the actual flow as prediction in our numerical investigation below. We briefly comment on the impact of the specific flow model constructed here in Remark 1. Figure 1: Mean flow \(q(t)\) (dashed) based on data from 1980-2014 together with actual flows from 2015-2022. Flow data is from Sävarån in the northern parts of Sweden.
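To make the reversion in (2.6) concrete, the following is a minimal sketch in Python; the constant mean flow in the example is illustrative only and stands in for the 7-day running average computed from the data:

```python
import numpy as np

def model_flow(s, t, Q_t, q, T_half=10.0):
    """Deterministic flow estimate for s >= t following Eq. (2.6): the initial
    deviation from the historical mean flow q(.) decays with half-life T_half
    (days). `q` is a callable returning the mean flow in m^3/s; with a forecast,
    the first M days would simply be replaced by the forecast values."""
    s = np.asarray(s, dtype=float)
    return (Q_t - q(t)) * 2.0 ** (-(s - t) / T_half) + q(s)

# Example: mean flow crudely approximated by a constant 10 m^3/s (illustrative only).
q = lambda s: 10.0 + 0.0 * np.asarray(s)
print(model_flow(np.array([0.0, 10.0, 30.0]), t=0.0, Q_t=14.0, q=q))
# -> the deviation of 4 m^3/s halves every 10 days: [14., 12., 10.5]
```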
### Power plant modeling When considering hydropower plants with a dam, it is natural to model the power output of each unit, i.e., turbine and generator pair, as a function of the head \(H_{t}\) of the reservoir and the flow of water \(F_{t}\) through the turbine. We thus assume that the payoff from the power plant depends on the controlled processes \(H(t)=H^{\alpha}(t)\) and \(F(t)=F^{\alpha}(t)\) in which \(\alpha\) is the control. The head is given by \[\frac{dH^{\alpha}}{dt}=g_{H}(F_{t}^{\alpha},Q_{t},H_{t}^{\alpha}),\qquad H_{0}=h \tag{2.7}\] where \(Q_{t}\) is the inflow to the reservoir (i.e. the river flow as above), \(\alpha\) indicates the current production mode, and \(g_{H}\) is a function given by the shape of the reservoir. Note in particular that the chosen strategy \(\alpha\) has a direct impact on the dynamics of the water head \(H_{t}^{\alpha}\). For the sake of our numerical example, we assume that the dam has the simple shape of a cone with maximum height \(H_{max}\) and that it can hold enough water to supply the power plant with water at its design speed \(F_{d}\) \(m^{3}/s\) for \(N\) days. Simple arithmetic then gives that \[H_{t}^{\alpha}=H_{max}\left(\frac{V_{t}^{\alpha}}{V_{max}}\right)^{1/3}\] where \(V_{t}^{\alpha}\) is the amount of water in the reservoir at time \(t\), and \(H_{max}\) and \(V_{max}\) the height and capacity of the reservoir, respectively. We assume that the plant consists of a single unit which can generate electricity for all flows between \(F_{min}\) and \(F_{max}\). We normalize all data so that the payoff when production is completely shut down (\(i=0\)) is \(0\). When in productive mode, we let the payoff be given by \[\phi(F_{t}^{\alpha},H_{t},P_{t})=-c_{run}+\begin{cases}-c_{low}&\text{if }H_{t}=0\\ \rho gH_{t}\,\eta(F_{t}^{\alpha})\,F_{t}^{\alpha}\,P_{t}&\text{if }H_{t}>0,\end{cases} \tag{2.8}\] where \(F_{t}^{\alpha}\) is the amount of water run through the generator, \(\rho=10^{3}kg/m^{3}\), \(g=9.82m/s^{2}\), \(c_{run}\) and \(c_{low}\) are constants, and \(\eta\) an efficiency curve \[\eta(F)=\alpha-\beta\left(\frac{F}{F_{d}}-1\right)^{2} \tag{2.9}\] specific for the unit under consideration, see Figure 3. Figure 2: The flow field of our model, showing how the model flow reverts towards the mean flow with \(T_{1/2}=10\) days. The condition \(H_{t}=0\) in (2.8) implies a penalty if the dam runs empty, thereby failing to meet the minimum requirements of the unit. Note that the running cost may exceed the possible profit from generating electricity if the water head is too low, so the dam may be effectively "empty" before \(H_{t}=0\) (but production is nevertheless possible without penalization as long as \(H_{t}>0\)). The switching costs are set as a fraction of the profit generated by the plant if it works at maximum capacity for a full year at unit electricity price and without interruptions. In this particular case, this maximum is given by \[D=\phi(F_{max},H_{max},1)\cdot 365=\phi_{11}(H_{max},1)\cdot 365. \tag{2.10}\] We assume the cost of starting/stopping the generator is 25 times that of adjusting an already running generator to a new state, i.e., \[c_{ij}=\begin{cases}0&\text{if }i=j\\ \gamma D&\text{if }i\neq j\text{ and }(i=0\text{ or }j=0)\\ \frac{\gamma D}{25}&\text{if }i\neq j\text{ and }i,j\neq 0,\end{cases} \tag{2.11}\] where \(\gamma\in[0,0.01]\) is a parameter governing, in general, how costly it is to make changes in the production.
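For reference, a minimal sketch of the running payoff (2.8) with the efficiency curve (2.9), using the parameter values of Table 1; scaling of the power term into monetary units is omitted for brevity, and the function names are ours:

```python
RHO, G = 1.0e3, 9.82                 # water density [kg/m^3], gravity [m/s^2]
ALPHA, BETA, F_D = 0.92, 0.45, 10.0  # efficiency curve (2.9) and design speed [m^3/s]
C_RUN, C_LOW = 100.0, 1000.0         # running cost and empty-dam penalty [m.u./h]

def eta(F: float) -> float:
    """Unit efficiency, Eq. (2.9)."""
    return ALPHA - BETA * (F / F_D - 1.0) ** 2

def payoff(F: float, H: float, P: float) -> float:
    """Running payoff of a productive mode, Eq. (2.8); shut-down (i = 0) pays 0.
    Conversion of the power term rho*g*H*eta(F)*F*P into m.u./h is omitted here."""
    if H <= 0.0:  # the dam has run empty: penalty instead of production
        return -C_RUN - C_LOW
    return -C_RUN + RHO * G * H * eta(F) * F * P

print(payoff(F=F_D, H=5.0, P=1.0))  # design speed with a full reservoir
```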
### The numerical procedure Our optimization is based on the discretization \[T=[0:\Delta t:T],\qquad Q=[0:\Delta q:\hat{Q}],\qquad H=[0:\Delta h:H_{max}] \tag{2.12}\] where \(\Delta t=1\) day, \(\Delta q=1/4\) \(m^{3}/s\), \(\hat{Q}\) exceeds the largest flow in the data, and \(\Delta h\) corresponds to 0.1 % of the total dam size. \begin{table} \begin{tabular}{|l|l|l|l|} \hline \(P_{0}\) & \(1\) [\(m.u./kWh\)] & \(c_{low}\) & \(1000\) [\(m.u./h\)] \\ \hline \(H_{max}\) & \(5\) [\(m\)] & \(c_{run}\) & \(100\) [\(m.u./h\)] \\ \hline T & \(365\) [days] & \(F_{min}\) & \(5\) [\(m^{3}/s\)] \\ \hline \(\alpha\) & \(0.92\) & \(F_{max}\) & \(13\) [\(m^{3}/s\)] \\ \hline \(\beta\) & \(0.45\) & \(F_{d}\) & \(10\) [\(m^{3}/s\)] \\ \hline \(T_{1/2}\) & \(10\) [days] & & \\ \hline \end{tabular} \end{table} Table 1: Parameter values used in our numerical investigation. Figure 3: The efficiency curve in (2.9), with \(\alpha=0.92,\beta=0.45\), together with the available production modes of the unit. For computational ease, all quantities are calculated on a grid point of (2.12) by rounding to the nearest point. A finer (or coarser) grid can therefore alter the payoffs and corresponding strategies slightly but not enough to change the qualitative results. To capture the natural seasonality of the problem we consider an optimization horizon of \(T=365\) days2 and allow the manager to change the state of production once per day. The electricity price is taken to be constant \(P_{t}\equiv P_{0}=1\), corresponding roughly to maximizing the output of electricity rather than the monetary profit3. For simplicity of presentation, we require the plant to end up in the same production state as it started in (i.e., \(i=0\), "off"). Footnote 2: Leap days are excluded for simplicity of presentation. Footnote 3: Time-dependent electricity price can be handled without any additional complications but requires slightly longer computational time and obstructs the interpretation of the results. For our calculations we discretize the running mode to 12 different production states (\(i\in\{0,1,2,\ldots,11\}\)), \(i=0\) meaning no production and the remaining modes having \(F^{\alpha}\) spanning from \(F_{min}\) to \(F_{max}\) in steps of 10 %. We use the notation \[\phi_{0}=0\qquad\mbox{and}\qquad\phi_{i}(H_{t},P_{t}):=\phi(F_{i},H_{t},P_{t}),\] where \[F_{i}=F_{min}+\frac{(i-1)}{10}(F_{max}-F_{min})\quad\mbox{for}\quad i\in\{1,2,\ldots,11\}.\] The efficiency curve \(\eta(F)\) used is depicted in Figure 3 with the corresponding allowed production flows \(F_{i}\) marked with dots. As water in the reservoir has value, we must take any change in the reservoir from the beginning to the end of the optimization period into account in the final result. This is done by establishing the value of water in the reservoir as the profit this water would generate if used to run the generator at design speed \(F_{d}\), disregarding running costs. Any change in the reservoir from the initial level \(H_{0}=H_{max}\) (which corresponds to the dam being full) is adjusted in the final profit. Note that the assumption of design speed and no running cost implies a larger penalty for missing water than what could be gained from using it for production, thereby forcing an optimal strategy to end with the dam full. Naturally, the numerical values to be used vary with the specific problem, river, and power plant under consideration.
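Putting the pieces together, the following is a minimal sketch of the backward recursion (2.5) and the greedy rule (2.4), simplified to the case where the running payoffs have already been evaluated along the deterministic estimate \(\bar{X}\) on the daily grid; with a controlled reservoir, the values must in addition be tabulated over the head grid \(H\) in (2.12). Function and variable names are ours:

```python
import numpy as np

def backward_induction(phi, cost, terminal):
    """phi[j, k]: payoff of running mode j during day k, evaluated along the
    estimated path X_bar; cost[i, j]: switching cost with cost[i, i] = 0;
    terminal[i]: terminal value g_i. Returns the value table V of Eq. (2.5)
    and the greedy decision rule of Eq. (2.4)."""
    n_modes, n_steps = phi.shape
    V = np.empty((n_modes, n_steps + 1))
    best = np.empty((n_modes, n_steps), dtype=int)
    V[:, -1] = terminal
    for k in range(n_steps - 1, -1, -1):  # backwards in time, one day per step
        for i in range(n_modes):
            # total payoff of moving to (or staying in) mode j at day k
            cand = phi[:, k] + V[:, k + 1] - cost[i, :]
            best[i, k] = int(np.argmax(cand))
            V[i, k] = cand[best[i, k]]
    return V, best
```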
The parameter values applied here are summarized and presented in Table 1; we refer to [8] for details and motivations. The exact values should have little impact on the qualitative nature of our results. For our model and the sake of this paper, the most significant parameters are the forecast length, dam size, and switching cost. If nothing else is specified we consider an \(M=10\) days forecast, a dam size of \(N=30\) days at design speed, and switching cost parameter \(\gamma=0.0025\). In the next section we vary these parameters one-by-one to highlight their impact on the optimal strategies and the end result. ## 3 Results The suggested production scheme performs remarkably well, averaging 97.1 % of the theoretical maximum over the years 2015-2022 for parameters as above (see also Figures 5 and 8). A detailed view of the optimal and suggested production scheme for 2022 is presented in Figure 4, in which we observe the following: the strategies typically avoid running the plant at full capacity (which is natural because of the efficiency of the unit, recall Figure 3) and keep the reservoir head above 80% (which corresponds to about 60% of the dam capacity) and at about 90% on average over the year. Concerning differences and similarities between the DPP strategy and the optimal strategy, we observe that the main differences occur during and after the spring flood, which is likely due to the fact that the flow fluctuates more during this period. ### Dam size The presence of a dam significantly increases the management options and therefore also the payoff of the power plant. Another benefit of a dam is that it reduces the importance of accurate forecasts, as can be seen from Figure 5. This is in line with what should be expected as the storage of water can be used to manually counteract sudden changes in the river flow to keep production efficient. Our strategy has no difficulties finding these adjustments. The need for a forecast vanishes as the dam grows, since the momentary flow then becomes insignificant in comparison with the long-term average. With a small dam one must use a wider range of the operating modes and turn the plant on/off more often, whereas with a large dam, a fairly decent result can be achieved using a single mode of operation, see Figure 6. Our method performs well in both cases. Figure 5: The benefits of a forecast become less explicit as the value of the dam grows. Data from year 2022 with \(\gamma=0.0025\). ### Switching cost Clearly, the total profit decreases as the cost of changing production states increases. As with the dam size, the cost of changing production also affects how many switches should be made in an optimal strategy; smaller costs lead to more active management and vice versa, see Figure 7. Our strategy is robust to these changes and adjusts the suggested strategy as necessary to find an efficient production plan in all cases, see Figure 8. ## 4 Managing a Run-of-River power plant - comparison to optimal switching The objective function (2.1) investigated here falls into the framework of optimal switching theory. This theory was used in [8] for production planning of a Run-of-River (RoR) power plant with two units which could be regulated and switched on and off depending on the natural flow of the river. In essence, this corresponds to having three different states of production and no dam to store water. For comparison of methods, we here mimic that setup and apply our much simpler method to the same data set.
Figure 6: Optimal strategy for dam size \(N=\{5,30,100\}\) days. The need for active management increases with a dam but vanishes as the dam grows. Data for year 2022 with \(\gamma=0.0025\). Figure 7: Optimal strategies for \(\gamma=\{0.00125,0.0025,0.005\}\). Lower switching cost naturally leads to more changes in the production. Data from year 2022 and dam size \(N=30\) days. We present and compare the results of the method outlined in Section 2 and that of [8] (named OSP in the following) for the years 2019-2022. The parameters are as in Section 3 and [8], as applicable. The plant can be run in three different modes: shut down (mode \(i=0\)), 1 unit running (\(i=1\)), or 2 units running (\(i=2\)). Both generators have the efficiency \(\eta\) as in Figure 3. When in mode \(i=2\), the now uncontrolled flow of water can be split between the two units at no cost to maximize the combined output of the pair. The data is normalized so that the payoff from mode 0 (shut down) is 0, i.e., \(\phi_{0}\equiv 0\) and in productive state the payoff is given by \[\phi_{1}(F_{t},P_{t}) =-c_{run}+\begin{cases}-c_{low}&\text{if}\quad F_{t}<F_{min},\\ c\,\eta(F_{t})\,F_{t}\,P_{t}&\text{if}\quad F_{min}\leq F_{t}<F_{max},\\ c\,\eta(F_{max})F_{max}\,P_{t}&\text{if}\quad F_{max}\leq F_{t},\end{cases} \tag{4.13}\] \[\phi_{2}(F_{t},P_{t}) =\max_{\delta\in[0,1]}\left\{\phi_{1}(\delta F_{t},P_{t})+\phi_{1}((1-\delta)F_{t},P_{t})\right\}, \tag{4.14}\] where \(c=\rho gH_{t}\). Note that the water head \(H_{t}\equiv H_{max}\) is fixed and \(F_{t}=Q_{t}\) as the plant cannot store or control the flow of water. As above, \(P_{t}\equiv 1\) and the cost of switching is defined via (2.10) as \[c_{ij}=\begin{cases}0&\text{if}\ |i-j|=0\\ \gamma D&\text{if}\ |i-j|=1\\ 1.5\cdot\gamma D&\text{if}\ |i-j|=2\end{cases}.\] As indicated already by the results in Figure 5, accurate forecasts are of great importance for RoR power plants when sudden changes cannot be counteracted by stored water. However, the efficiency of the OSP method is less sensitive to forecasts, as can be seen in Figures 9(a) and 9(b). The OSP method is stable but rarely finds the true optimal strategy, i.e., the performance ratio is typically \(<1\), see Figure 9(a), while the DPP method in many cases finds the true optimum. On the other hand, OSP avoids major pitfalls and performs close to optimal on most occasions, both with and without forecast, while the DPP method is more prone to make costly sub-optimal decisions. This is especially true when the information on future flow is limited, see Figure 9(b). For year 2019 and parameters \(\gamma=0.0075\) and \(M=5\) days, the performance ratio is 0.992 for the OSP strategy and 1.000 for the DPP strategy. The corresponding strategies are shown in Figure 10. To give an intuition of how the different schemes work, we consider two particular decisions of these strategies below. Figure 8: Our method is robust w.r.t. switching cost. Figure 9: The strategies developed in this paper often perform better than the more complicated approach of [8] when the forecast is sufficiently rich. When information is scarce, the stochastic model of [8] typically outperforms our method. At time \(t=110\), the OSP strategy chooses to open production at the intermediate level (\(i=1\)) although the optimal strategy is to wait and open at full capacity later on at \(t=112\). The situation at the time of decision is depicted in Figure 11.
The OSP model forecasts a flow below that of DPP and, in addition, anticipates deviations from this forecast, resulting in the safer choice of an intermediate step in the production. Note however that the OSP takes this action _before_ the optimal strategy opens to mode \(i=2\), so some of the loss due to the extra switch is regained. Figure 12 examines the point \(t=325\) where both strategies optimally refrain from turning production on despite going into the profitable region, \(F=Q>F_{min}\). The forecast is sufficiently long for the DPP model to detect the decrease in flow coming up and avoid turning production on. The reason for the OSP model to refrain from switching mode is different; it projects a flow _above_ the critical level \(Q_{min}\) for a sufficiently long time for a change in production (back and forth) to be profitable, but anticipates deviations from this forecast and therefore requires a larger margin before taking action. With a slightly shorter forecast of 4 days, the projected flow is sufficiently above the critical level for the DPP strategy to sub-optimally turn production on at \(t=325\) (and then off again at \(t=333\)) while the OSP strategy with its built-in stochastic features remains unchanged and optimal even with this shorter forecast. **Remark 1**: _Note that the flow model of [8], building on stochastic differential equations, is more complicated than the deterministic approach used in this paper. However, it is a simple task to adapt that model to meet our requirements: simply set \(\sigma=0\) in equation (3.1) of [8]. This alternative is deterministic and can be used as outlined above. When combining that flow model with the DPP-approach suggested in this paper some minor differences can be observed in the results, but the general observations made in Section 4 remain valid. This indicates that the simple flow model (2.6) is sufficiently rich to tackle the problem at hand. This stays true also when decreasing or increasing the half-life \(T_{1/2}\) in (2.6). In particular, performing the calculations of Section 3 for \(T_{1/2}=5\) and \(T_{1/2}=20\) gives an average of \(97.2\) % and \(97.5\) % of the theoretical maximum, respectively, for the years 2015-2022._ Figure 10: 2019 with a 5-day forecast and \(C/D=0.0075\). There are discrepancies between the strategies suggested by OSP and DPP; the OSP takes a ”safer route” whereas the DPP strategy is more offensive and finds the true optimum. Figure 11: The OSP forecasts a lower flow than the DPP, therefore sub-optimally starting production in the intermediate mode \(i=1\) at \(t=110\). Figure 12: Both methods refrain from opening production at \(t=325\) with a 5-day forecast, despite the flow being over the critical threshold \(F_{min}=5\,m^{3}/s\). With a shorter forecast of 4 days, the projected flow is slightly higher and the DPP sub-optimally starts production while the OSP method still makes the correct decision and avoids costly opening and closing of the plant. ## 5 Discussion The major upside with the scheme presented here is its simplicity, both from a mathematical and modeling perspective. The method can easily be adjusted and expanded to more complicated power plants with a large number of different production states. In comparison, expanding the optimal switching based model of [8] to include dams as in Section 2 would require treatment of interconnected PDEs with Neumann boundary values and computational proficiency to solve these explicitly.
Moreover, the addition of further underlying processes in that setting increases the dimensionality of the underlying PDE and quickly requires explicit solutions of high-dimensional PDEs. On the contrary, the computational resources needed here are relatively small and the method can cope with a larger number of underlying processes, at least as long as these are truly exogenous, e.g., wind, water flow, electricity price, etc. Moreover, the number of production states can be increased without slowing down the process notably, i.e., at computational cost \(\mathcal{O}(m)\), where \(m\) is the number of production modes. The number of processes that are affected by our actions must however be in the low single digits for the method to be tractable, as we must keep track of all possible choices for these in the backward recursion. Relying on a coarse discretization and interpolation could push this limit a bit, but at the risk of losing accuracy. The downside of the method is primarily its lack of stochastic features, meaning that all uncertainty must be considered in the respective models for the underlying processes, possibly aggravating the modeling at that stage. Moreover, a deterministic approach can lead to too many actions when the underlying process fluctuates around key values (e.g., \(F_{min}\) in the example of Section 4). This does not seem to be the case in our examples, but it must nevertheless be considered and observed closely in applications. When it comes to comparison of the results of the DPP and the optimal switching based method, the OSP is more stable and performs better with less accurate data, but rarely finds the true optimal strategy. This is due to a conceptual difference between the DPP method suggested here and that based on stochastic differential equations in [8]; the former takes the input flow model as a 'fait accompli' while the latter expects stochastic deviations and tries to maximize the _expected_ profit. The OSP schemes therefore "wait and see", thereby missing the true optimum slightly, with the benefit of minimizing the risk of costly switches back and forth. With reliable forecasts and/or low costs of switching it therefore seems reasonable to opt for deterministic methods such as that suggested here, while stochastic features are advantageous when information is scarce or unreliable. Hydropower is often presented as a clean and renewable energy source that is environmentally preferable to fossil fuels or nuclear power. However, it often transforms rivers by, e.g., reducing flow velocity and disrupting sediment dynamics, and by extension, it therefore also alters riverine biodiversity. Freshwater ecosystems are in fact among the world's most threatened ecosystems [9, 13]. Therefore, an important challenge for river management is to identify situations where measures involving relatively small production losses can have major ecological advantages. This calls for an extension of the present work towards a dual-objective optimization approach in which one imposes restrictions on, e.g., the reservoir level and the output flow from the power plant. A suggested strategy would in that case not consist of a single action but rather a Pareto front consisting of efficient strategies where the manager can make a choice depending on the desired degree of environmental friendliness. In such multi-objective optimization, the simplicity of the current scheme can be a great advantage as it eases the addition of further traits for consideration.
2301.12541
Supervised and Contrastive Self-Supervised In-Domain Representation Learning for Dense Prediction Problems in Remote Sensing
In recent years, convolutional neural networks (CNNs) have made significant progress in computer vision. These advancements have been applied to other areas, such as remote sensing, and have shown satisfactory results. However, the lack of large labeled datasets and the inherent complexity of remote sensing problems have made it difficult to train deep CNNs for dense prediction problems. To solve this issue, ImageNet pre-trained weights have been used as a starting point in various dense prediction tasks. Although this type of transfer learning has led to improvements, the domain difference between natural and remote sensing images has also limited the performance of deep CNNs. On the other hand, self-supervised learning methods for learning visual representations from large unlabeled images have grown substantially over the past two years. Accordingly, in this paper we have explored the effectiveness of in-domain representations in both supervised and self-supervised forms to address the domain difference between remote sensing and the ImageNet dataset. The obtained weights from remote sensing images are utilized as initial weights for solving semantic segmentation and object detection tasks, and state-of-the-art results are obtained. For self-supervised pre-training, we have utilized the SimSiam algorithm as it is simple and does not need huge computational resources. One of the most influential factors in acquiring general visual representations from remote sensing images is the pre-training dataset. To examine the effect of the pre-training dataset, equal-sized remote sensing datasets are used for pre-training. Our results have demonstrated that using datasets with a high spatial resolution for self-supervised representation learning leads to high performance in downstream tasks.
Ali Ghanbarzade, Hossein Soleimani
2023-01-29T20:56:51Z
http://arxiv.org/abs/2301.12541v1
Supervised and Contrastive Self-Supervised In-Domain Representation Learning for Dense Prediction Problems in Remote Sensing ###### Abstract In recent years, convolutional neural networks (CNNs) have made significant progress in computer vision. These advancements have been applied to other areas, such as remote sensing, and have shown satisfactory results. However, the lack of large labeled datasets and the inherent complexity of remote sensing problems have made it difficult to train deep CNNs for dense prediction problems. To solve this issue, ImageNet pre-trained weights have been used as a starting point in various dense prediction tasks. Although this type of transfer learning has led to improvements, the domain difference between natural and remote sensing images has also limited the performance of deep CNNs. On the other hand, self-supervised learning methods for learning visual representations from large unlabeled images have grown substantially over the past two years. Accordingly, in this paper we have explored the effectiveness of in-domain representations in both supervised and self-supervised forms to address the domain difference between remote sensing and the ImageNet dataset. The obtained weights from remote sensing images are utilized as initial weights for solving semantic segmentation and object detection tasks, and state-of-the-art results are obtained. For self-supervised pre-training, we have utilized the SimSiam algorithm as it is simple and does not need huge computational resources. One of the most influential factors in acquiring general visual representations from remote sensing images is the pre-training dataset. To examine the effect of the pre-training dataset, equal-sized remote sensing datasets are used for pre-training. Our results have demonstrated that using datasets with a high spatial resolution for self-supervised representation learning leads to high performance in downstream tasks. Contrastive Self-supervised Learning, Dense Prediction, Object Detection, Remote Sensing Imagery, Representation Learning, Semantic Segmentation, Transfer Learning ## I Introduction Remote sensing imagery is becoming one of the most prominent research fields, with various applications including change detection[1-3], land cover classification[4, 5], wildfire detection[6, 7], and climate change[8]. Combining this field with recent advancements in computer vision and deep learning leads to significant performance improvements for each of the mentioned applications. Dense prediction problems are well-known tasks in computer vision that produce outputs at the pixel level, such as semantic segmentation[9, 10], or box level, such as object detection[11]. Many applications of remote sensing images rely on the high performance of dense prediction tasks. Deep neural networks have demonstrated their capability and effectiveness for dense prediction tasks. However, they require a large amount of labeled training data to learn complex visual features[12, 13]. Labeling images at the box and pixel levels is extremely time-consuming[14]. In the case of satellite images, which are captured from a long distance using various sensors, many different concepts appear, and the object shapes are also different. These differences necessitate domain knowledge for labeling, which is scarce and costly in terms of both time and money[13, 15].
In computer vision, transfer learning is a technique in which a convolutional model pre-trained on a large dataset, such as ImageNet, is used as a starting point for solving other related tasks. This technique reduces the need for labeled data and computation time[15]. In most cases, ImageNet pre-trained models have been utilized as a starting point to solve remote sensing problems[17]. Although ImageNet pre-trained models have often performed well when compared to solving dense prediction problems from scratch, due to the domain gap between ImageNet and remote sensing images, the performance of this type of transfer learning is limited[13]. The first solution that comes to mind is to create a satellite imagery dataset similar to ImageNet, train different convolutional models on it, and then use the obtained in-domain weights to solve other remote sensing tasks. This solution may result in good performance, but it has some flaws. Firstly, creating a large-scale labeled dataset containing satellite images is extremely expensive and time-consuming. Secondly, pre-training multiple models on such a big dataset requires significant computing resources. A better solution is hierarchical pre-training, which has been proposed in previous studies[17, 18] for image classification problems in remote sensing. Hierarchical pre-training refers to models that have been pre-trained on datasets that progressively resemble the target task. It includes two pre-training stages: generalist pre-training and specialist pre-training. In generalist pre-training, the convolutional neural network is initially trained on a large dataset, such as ImageNet. In the second stage, the weights of the general model are utilized as initial weights for pre-training in other domains, such as remote sensing images. The obtained model can now be used for solving target tasks. The effectiveness of such pre-training is demonstrated by solving some land cover classification problems[18, 19]. In this paper, we aim to apply this method to semantic segmentation and object detection problems, which are significantly more complex than image classification. Recently, self-supervised learning methods have emerged as the most promising candidates for learning visual representations without the need for large amounts of labeled data. A group of these methods known as contrastive self-supervised learning [20-27] has outperformed other methods for learning general visual representations from natural images. Contrastive learning strategies learn the similarity function between different views of an image. In these methods, the cost function is defined so as to bring different views of the same image closer in feature space, while separating views originating from different images as much as possible. In computer vision, the pre-trained features from most contrastive self-supervised learning methods have a high degree of generalizability. In this paper, we use the idea of hierarchical pre-training to solve dense prediction problems in remote sensing images. We select the ResNet50 model pre-trained on ImageNet as the base model, and then train it on the Resisc45[28] and PatternNet[29] datasets, which are created for land cover classification. We have used both supervised learning and the SimSiam algorithm for pre-training in-domain features. We have selected SimSiam[21] because of its simplicity: it does not require negative sampling, a momentum encoder, a large batch size, or online clustering, and therefore needs fewer computational resources.
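For concreteness, the following is a minimal PyTorch sketch of the SimSiam objective as described in [21]: the symmetrized negative cosine similarity with a stop-gradient on the projector output, which is what removes the need for negative pairs. The function and variable names are ours:

```python
import torch
import torch.nn.functional as F

def simsiam_loss(p1, p2, z1, z2):
    """Symmetrized negative cosine similarity of SimSiam [21].
    z1, z2: projector outputs for two augmented views of the same images;
    p1, p2: predictor outputs for the corresponding views."""
    def d(p, z):
        # detach() applies the stop-gradient on the projector branch
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)
```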
In addition, we have used the Resisc45 and PatternNet datasets, which have an almost equal number of samples, to determine the characteristics of the ideal dataset for in-domain pre-training. The generalizability of the obtained supervised and self-supervised in-domain features is examined by solving the DeepGlobe Land Cover Classification[30] for semantic segmentation and the Oil Storage Tanks and CGI Airplane problems for object detection. Our main contributions are as follows: * We have extracted in-domain visual representations from remote sensing images using both supervised learning and the SimSiam algorithm, one of the most recent contrastive self-supervised learning techniques. * During the feature extraction phase, ImageNet pre-trained weights are utilized to eliminate the need for larger datasets. * We have used the different ResNet50 models obtained in previous steps as a pre-trained encoder in the DeepLabv3[31] and Faster-RCNN[32] models, and we have solved the semantic segmentation and object detection problems, respectively. * We have examined the effect of the pre-training dataset by utilizing equal-sized remote sensing datasets for pre-training. The remainder of this paper is organized as follows: In Section II, we review the related works. Section III presents the statistics of the selected datasets for each step. Section IV examines the methods used for pre-training in-domain features from remote sensing images. In Section V, we solve the downstream tasks, describe the selected models, and demonstrate the results. Finally, we conclude the paper. ## II Related works ### Self-Supervised Representation Learning Self-supervised learning is a sub-branch of unsupervised learning that attracts the attention of researchers all over the world. These methods provide supervisory signals based on the characteristics of the dataset and without the need for human supervision [19]. The self-supervised pre-trained representations are then transferred to other related supervised downstream tasks to assess their generalizability. The first approach to self-supervised learning includes the design of pretext tasks such as relative position [33], colorization [34], etc. Pretext tasks can be considered proxies for learning the intrinsic patterns and structures of the dataset. Although researchers have spent a great deal of time and effort designing pretext tasks, these techniques have not yielded significant success in computer vision. For an overview of various pretext tasks designed for different computer vision and natural language processing applications, we refer readers to [19]. In computer vision, pre-trained features from different contrastive self-supervised learning methods [20-27] have recently demonstrated the highest performance in solving various downstream tasks. Most of these methods use data augmentation techniques, such as flipping and cropping, to generate multiple image views [23]. If the inputs stem from the same image, they form positive pairs, and the cost function attempts to bring these views as close in feature space as possible. If the inputs originate from different images, they constitute negative pairs, and the cost function separates them as much as possible in the feature space [35]. Some contrastive learning algorithms require an enormous number of negative samples to capture highly generalizable features [23]. To provide a large number of negative pairs, the PIRL [36] algorithm maintains a memory bank containing the extracted features of every image in the dataset.
Therefore, its scalability for real-world applications is limited. To overcome the former issue, MoCo [24, 25] uses a momentum encoder, and SimCLR [23] uses larger batch sizes. The need for substantial computing resources in the early contrastive learning algorithms prompted researchers to develop methods that do not need negative samples. For example, SwAV [22] is an online clustering-based algorithm that predicts the code of one view based on the representation of another view of the same image. BYOL [26] is another contrastive learning algorithm that predicts the representation of one view based on the other view and vice versa. Unlike all these methods, SimSiam [21] has eliminated the need for several crucial elements, such as a large batch size, a momentum encoder, and online clustering. In this paper, we have used the SimSiam algorithm to capture in-domain visual representations from remote-sensing images. To review different contrastive self-supervised learning methods, we refer the readers to [35, 37, 38]. In the mentioned works, the pros and cons of each method have been analyzed from various perspectives, making them useful for comparing different approaches. ### Representation Learning for Remote Sensing Imagery The domain difference between remote sensing and ImageNet images can be addressed by in-domain pre-training. Reviewing the recent literature reveals two distinct approaches for obtaining general visual representations from remote sensing images: supervised and self-supervised learning. Therefore, using supervised or self-supervised learning strategies, one can pre-train the models on satellite images before transferring the pre-trained weights to other similar tasks. As an example of the supervised approach, in [17], after pre-training the ResNet50 model on satellite imagery datasets, the obtained weights are utilized to solve various land cover classification tasks. Similar to previous work, in [15], the ResNet50 model is pre-trained from scratch on satellite images, and the authors transferred the obtained weights to similar tasks. The results of these papers demonstrated that the in-domain pre-trained models have comparable or even better results than their ImageNet pre-trained counterparts on various remote sensing tasks and datasets. Recently, some researchers have examined the quality of pre-trained features obtained from different self-supervised learning methods on overhead imagery. For instance, in [41], various ways of self-supervised pre-training, such as image inpainting, content prediction, and instance discrimination, have been investigated. In [40], a new pretext task was defined which uses the information of invisible channels from multi-spectral remote sensing images to predict the contents of the RGB channels. For the first time, the authors of Tile2Vec [42] utilized the concept of contrastive self-supervised learning to capture in-domain features from remote-sensing images. This algorithm employs a form of the triplet loss function to bring the anchor tile closer to the neighbor tile while simultaneously moving away from the distant tile in the feature space. In [18], the authors introduced hierarchical pre-training, which significantly reduced the convergence time, the need for computing resources, and the number of pre-training samples. Hierarchical pre-training refers to models pre-trained on datasets that are progressively more similar to the target data.
The authors utilized the ImageNet pre-trained MoCo [24] algorithm as the base model. Then, using the MoCo algorithm, the obtained weights are optionally pre-trained on another dataset similar to the target dataset. Finally, the resulting model has been fine-tuned on the target datasets. Recently, [16] has reviewed various self-supervised learning methods used for remote sensing imagery analysis, including RGB, hyperspectral, etc. For additional information, please refer to [16]. In this paper, for the first time, we have investigated the effectiveness of hierarchical pre-training to address dense prediction problems in remote sensing images. Firstly, as the base model, we have used the ResNet50 model, which is pre-trained on the ImageNet dataset. Secondly, we fine-tuned the base model using supervised and self-supervised approaches on different overhead datasets. Finally, we have used the resulting model as a pre-trained backbone in semantic segmentation and object detection models to solve high-level dense prediction tasks. ### Semantic Segmentation of DeepGlobe Land Cover Classification Deep CNNs have shown superior performance in solving various semantic segmentation problems. For semantically segmenting objects, models such as U-Net, SegNet, the DeepLab series, etc., have been developed [43]. Since DeepGlobe Land Cover Classification is one of the tasks addressed in this article, we provide a brief overview of the prior research related to this dataset. Models such as FCN and other encoder-decoder-based structures have been widely used to accomplish the task [44, 45, 46, 47, 48, 49, 50, 51, 52]. For instance, in [45], remote sensing images were semantically segmented using the U-Net architecture. Similarly, in [47], researchers employed the DeepLabv3+ model to solve the DeepGlobe Land Cover Classification task. In [13], the image inpainting pretext task is applied to the flow dataset to learn in-domain visual representations. The pre-trained weights have then been transferred to different satellite image semantic segmentation problems, such as scene parsing, road extraction, and land cover estimation. ## III Datasets We have used two types of remote sensing datasets for our experiments. The first category consists of medium-sized land cover classification datasets, which have been utilized to pre-train in-domain visual representations. The second category consists of datasets specific to dense prediction problems in satellite images and is used to evaluate the pre-trained features. All of our experiments are conducted on RGB remote-sensing datasets. ### A. Pre-training Datasets Choosing an appropriate remote-sensing dataset for pre-training general features is one of the most vital factors in the performance of models on downstream tasks. To examine the impact of this factor, we have selected two datasets with comparable sample sizes and numbers of classes. In the following, we present the details of each dataset: **PatternNet[29]:** PatternNet consists of 38 classes with 800 images per class. Therefore, this dataset contains 30,400 samples. The image resolution of this dataset is 256 by 256 pixels. In addition, this dataset has a spatial resolution between 0.06m and 4.96m. **NWPU-RESISC45[28]:** This dataset consists of 31,500 images classified into 45 categories. This dataset contains images with a spatial resolution between 0.2m and 30m per pixel for many samples. The image resolution of this dataset is 256 by 256 pixels.
Table I summarizes the characteristics of both datasets. ### B. Dense Prediction Datasets Semantic segmentation and object detection are selected as downstream tasks to evaluate pre-trained features. The selected datasets are as follows: **DeepGlobe Land Cover Classification**[30]: This dataset consists of 803 RGB images with pixel-level labels. The size of each image is 2448x2448 pixels, and the spatial resolution is 0.5m. In addition, there are seven distinct classes, including background. Each category is color-coded with a unique color. We have demonstrated the complete information of the dataset in Table II. \begin{table} \begin{tabular}{c|c|c|c} \hline **Classes** & **Pixel count** & **Proportion** & **Color code** \\ \hline Urban\_land & 642.4M & 9.35\% & [0, 255, 255] \\ \hline \end{tabular} \end{table} TABLE II: Distributions of the different classes in the DeepGlobe land cover classification dataset (completely imbalanced) [30]. In the above table, the Pixel count and Proportion columns show the number and percentage of pixels of each class in the dataset, respectively. Accordingly, the dataset is class imbalanced, and identifying unknown, water, barren land, and urban land is extremely arduous. **Oil Storage Tanks (OST):** It contains 10,000 images, of which 1,829 have box-level labels in three classes. Each image has 512x512 pixels. It was recently proposed for one of the Kaggle competitions but has not appeared in recent publications. In our experiments, we have used 1500 images for fine-tuning the pre-trained features and the remaining 329 images for evaluation. In Table III, we have listed the number of labeled samples for each of the three classes in the training and evaluation sets. According to Table III, tank cluster samples are scarce, and identifying tank clusters in images is tedious work. **CGI planes in satellite imagery with bounding box:** It consists of 500 synthetic overhead images, each of 1000x700 pixels. In our experiments, we have used 400 samples to train the object detection model and the remaining 100 images to evaluate the models. ## IV Method Our objective is to extract meaningful features from satellite images using both supervised and self-supervised techniques, and then to use the learned weights as initial weights for semantic segmentation and object detection. Consequently, the employed method has two phases: 1. Pre-training the ResNet50 model, both in a supervised and a self-supervised manner, on the Resisc45 and PatternNet datasets. 2. Transferring the weights obtained in the previous step to the backbone of semantic segmentation and object detection models, and solving dense prediction problems. In the pre-training phase, the ImageNet pre-trained weights serve as the starting point and are adjusted on the remote sensing data. All code was written using the PyTorch and PyTorch Lightning[53] frameworks and executed on an Ubuntu system with a Quadro P6000 GPU. ### A. Pre-Training of In-Domain Visual Representations To obtain visual representations from remote sensing images, we have considered the ResNet50 model as a backbone and pre-trained it both in a supervised manner and by using the SimSiam algorithm, which is based on contrastive self-supervised learning, on the Resisc45 and PatternNet datasets. During pre-training, we start from the ImageNet pre-trained ResNet50. The pre-training datasets have almost the same number of samples, but they differ in terms of spatial resolution.
We selected these two datasets because our second goal is to examine the effect of the pre-training dataset on the final performance.

**Supervised pre-training of in-domain visual representations:** To obtain supervised in-domain visual representations, we trained the ResNet50 model with 90% of the samples from each of the Resisc45 and PatternNet datasets and evaluated the learned features with the remaining 10%. The objective of this article is not to solve classification problems; therefore, we used only 10% of the data as a test set to monitor pre-training. In our experiments, we pre-trained the model for 100 epochs with a batch size of 120. Additionally, we utilized OneCycleLR as the learning rate scheduler and Adam as the optimizer. In Table IV, we report the global accuracy. By conducting this experiment, we produced two pre-trained models, called Sup-Resisc45 and Sup-PatternNet, which were pre-trained on Resisc45 and PatternNet, respectively.

**Self-supervised pre-training of in-domain visual representations using the SimSiam algorithm:** We selected the SimSiam algorithm for pre-training general features in a self-supervised way because its pre-trained features for natural images are highly generalizable. Furthermore, since it requires neither negative samples, nor a large batch size, nor a memory bank, it is economical in terms of computation and resource requirements. We used all images of Resisc45 and PatternNet for pre-training in-domain features with SimSiam. The encoder of the SimSiam algorithm consists of a backbone, a projection head, and a prediction head [21]. In our experiments, ResNet50 serves as the network's backbone. We trained this model on the two datasets, PatternNet and Resisc45, for 400 epochs. During pre-training, we utilized the SGD optimizer with a batch size of 128 and a base learning rate of 0.05, along with the MultiStepLR scheduler. We set the weight decay to 0.00001 and the SGD momentum to 0.9. By conducting this experiment, we produced two additional pre-trained models, called Sim-Resisc45 and Sim-PatternNet, which were pre-trained on Resisc45 and PatternNet, respectively.

## V Dense Prediction Tasks

### A. Semantic Segmentation

\begin{table} \begin{tabular}{c|c} \hline **Dataset** & **Global Accuracy (\%)** \\ \hline PatternNet & 99.97 \\ \hline Resisc45 & 97.20 \\ \hline \end{tabular} \end{table} TABLE IV: Accuracy (%) of the ResNet50 model on the validation set (10% of each dataset).

\begin{table} \begin{tabular}{l|c|c} \hline **Class** & **Train samples (\%)** & **Test samples (\%)** \\ \hline Tank & 2446 (32.9\%) & 596 (36.5\%) \\ \hline Tank Cluster & 158 (2.2\%) & 31 (1.9\%) \\ \hline Floating Head Tank & 4815 (64.9\%) & 1005 (61.6\%) \\ \hline Total samples & 7419 & 1632 \\ \hline \end{tabular} \end{table} TABLE III: The number (percentage) of annotated samples for each class in the train and test sets. Tank Cluster is a rare class.

**Semantic segmentation using the DeepLabV3 algorithm:** We re-implemented DeepLabV3, a well-known semantic segmentation model, to make supervised and self-supervised transfer learning feasible. To reduce the semantic segmentation model's convergence time, we set the overall output stride to 32. Additionally, the last fully-connected layers of the decoder part of DeepLabV3 were modified, reducing the number of parameters by approximately 0.6M compared to the default model.
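The transfer step of the second phase can be sketched as follows; this is a minimal sketch, the checkpoint file name is a hypothetical placeholder, and the output-stride and decoder modifications described above are omitted for brevity.

```python
import torch
import torchvision

# DeepLabV3 with a ResNet50 backbone and a 7-class head
# (DeepGlobe has seven classes, including background).
model = torchvision.models.segmentation.deeplabv3_resnet50(
    weights=None, num_classes=7)

# Inject the in-domain pre-trained backbone weights; strict=False
# skips the discarded classification head of the pre-trained model.
state = torch.load("sim_patternnet_resnet50.pth", map_location="cpu")
model.backbone.load_state_dict(state, strict=False)

# Fine-tuning setup used in the experiments below: Adam with a
# weight decay of 0.0001 (the batch size of 4 is set in the data loader).
optimizer = torch.optim.Adam(model.parameters(), weight_decay=1e-4)
```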
We transferred the supervised and self-supervised pre-trained weights obtained in the previous section to the semantic segmentation task. As a base model, in one of our experiments, we replaced DeepLabV3's backbone with the ImageNet pre-trained ResNet50 and named it Base-Model. Under these circumstances, we obtained five different DeepLabV3 models: Base-Model, Sup-Resisc45, Sup-PatternNet, Sim-Resisc45, and Sim-PatternNet. Therefore, the ImageNet pre-trained, supervised Resisc45 pre-trained, supervised PatternNet pre-trained, self-supervised Resisc45 pre-trained, and self-supervised PatternNet pre-trained models serve as our backbones.

**Evaluation Metrics:**

**mIOU:** The mean Intersection over Union, also known as the Jaccard index, is used to evaluate semantic segmentation models. This metric is very explicit and fully characterizes segmentation model performance: an mIOU equal to one indicates that the model predicts perfectly [54].

**Pixel Accuracy (PA):** Pixel accuracy is another metric used to evaluate semantic segmentation models. It is the proportion of correctly classified image pixels. This metric is defined and calculated independently for each class, and the resulting values are then averaged [54].

**F1-score (f1):** Another metric to evaluate the semantic segmentation model is the f1-score. This metric is calculated based on precision and recall, which are defined as follows:

\[\text{Precision}=\frac{\text{TP}}{\text{TP+FP}}\quad,\qquad\text{Recall}=\frac{\text{TP}}{\text{TP+FN}} \tag{1}\]

In the above equations, TP, FP, and FN represent True Positive, False Positive, and False Negative, respectively. The harmonic combination of precision and recall is called the f1-score and is defined as follows:

\[\text{f1-score}=\frac{2\times\text{Precision}\times\text{Recall}}{\text{Precision}+\text{Recall}} \tag{2}\]

The closer the f1-score of an algorithm is to one, the better the performance of the segmentation model [19, 54].

**Semantic segmentation with supervised and self-supervised pre-trained models:** To evaluate the performance of the pre-trained models, we fine-tuned them on the DeepGlobe Land Cover Classification dataset. In all our experiments, we used 80% of the data (642 images) for training and 20% (161 images) to evaluate the segmentation model. During training, we utilized the Adam optimization algorithm with a batch size of 4 and a weight decay of 0.0001. As data augmentation, we applied random horizontal and vertical flipping and cropping from 2448\(\times\)2448 to 1024\(\times\)1024. We trained each of the models on the training set of the DeepGlobe Land Cover Classification dataset for only five epochs. The results obtained for each of the pre-training methods can be seen in Table V. According to that table, we can draw several significant conclusions regarding in-domain pre-training on remote sensing images and its fine-tuning on semantic segmentation tasks. 1) For the models pre-trained with the SimSiam algorithm, even though the class diversity and the number of pre-training samples of Resisc45 exceed those of PatternNet, the model pre-trained on PatternNet performs better. Pixel spatial resolution is the primary distinction between these two datasets. Table I indicates that Resisc45 and PatternNet are both multi-resolution datasets. Assuming that the spatial resolution distribution over all pixels is uniform, the average pixel resolution of Resisc45 and PatternNet is 15.1m and 2.51m, respectively. Therefore, on average, the PatternNet spatial resolution is significantly higher than that of Resisc45.
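A minimal numpy sketch of the three metrics, computed from a per-class confusion matrix, is given below; the averaging conventions shown are one reasonable choice and are not necessarily identical to those of [54].

```python
import numpy as np

def segmentation_metrics(conf):
    """conf[i, j] = number of pixels of true class i predicted as class j."""
    conf = conf.astype(float)
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    pa = tp.sum() / conf.sum()                    # overall pixel accuracy
    iou = tp / np.maximum(tp + fp + fn, 1e-12)    # per-class IoU
    precision = tp / np.maximum(tp + fp, 1e-12)   # Eq. 1
    recall = tp / np.maximum(tp + fn, 1e-12)      # Eq. 1
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)  # Eq. 2
    return pa, f1.mean(), iou.mean()              # PA, mean f1, mIOU
```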
This factor makes the pre-trained features from PatternNet more precise than those from Resisc45. 2) In the case of supervised in-domain pre-trained models, the PatternNet model outperforms the Resisc45 model, confirming the former conclusion. 3) Inspecting the performance of the Resisc45 pre-trained models reveals that the supervised model outperforms its self-supervised counterpart. In contrast, for the PatternNet pre-trained models, the SimSiam pre-trained model performs better. Generally, in self-supervised pre-training from remote sensing images, it is preferable to select datasets with higher spatial resolutions.

\begin{table} \begin{tabular}{l|c|c|c} \hline **Method** & **PA (\%)** & **f1 (\%)** & **mIOU (\%)** \\ \hline **Base-Model** & 95.32 & 85.12 & 77.56 \\ \hline **Sup\_Resisc45** & 95.68 & 85.73 & 77.75 \\ \hline **Sup\_PatternNet** & 95.90 & 86.34 & 78.50 \\ \hline **Sim\_Resisc45** & 95.78 & 85.45 & 76.37 \\ \hline **Sim\_PatternNet** & **96.20** & **87.38** & **79.66** \\ \hline \end{tabular} \end{table} TABLE V: Comparison of the different pre-training methods. The Sim-PatternNet model has improved the mIOU by 2.1% compared to the base-model.

Figure 1: Predictions of the model on some of the challenging test samples. The first column contains the original images, the second column contains the ground truth masks, and the third column contains the Sim-PatternNet predictions.

In Table VI, we compare the results obtained from our best Sim-PatternNet model to those of other relevant works. According to that table, hierarchical pre-training on ImageNet and then on relevant satellite datasets produces features with high generalization power. Compared to previous works, transferring the Sim-PatternNet pre-trained weights to DeepLabV3 for solving the DeepGlobe Land Cover Classification yielded a state-of-the-art result. Figure 1 demonstrates predictions made by the Sim-PatternNet model on challenging test samples. The first column in this figure contains the test image, the second column contains the corresponding ground truth, and the third column displays the prediction of the DeepLabV3 model with the Sim-PatternNet pre-trained backbone. According to Figure 1, the presented model can accurately classify pixels comprising a small percentage of the dataset, such as water, barren land, and urban land.

### B. Object Detection

**Object detection with the Faster-RCNN model:** We used Faster-RCNN with the FPN, as implemented in Detectron2, to solve the object detection problems. Again, we distinguish the different models as Base-Model, Sup-Resisc45, Sup-PatternNet, Sim-Resisc45, and Sim-PatternNet. Various evaluation metrics, including AP, AP50, AP75, APs, APm, and APl [55], are used to compare the performance of the different models.

**Oil Storage Tanks:** First, we fine-tuned each model for 5,000 iterations with a batch size of 2 on the training part of the dataset. The initial learning rate and the number of warm-up iterations are 0.01 and 500, respectively. Then, the obtained object detection models are evaluated on the test samples. Finally, we report different metrics to illustrate the advantages and disadvantages of each pre-training method. Table VII demonstrates the results obtained for each of the models. According to the results in that table, in-domain pre-training of features using supervised and self-supervised approaches surpasses the ImageNet pre-trained model.
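In Detectron2, the fine-tuning recipe above amounts to a handful of config overrides, sketched below; the registered dataset names and the backbone checkpoint path are hypothetical placeholders.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("ost_train",)    # hypothetical registered dataset names
cfg.DATASETS.TEST = ("ost_test",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3    # Tank, Tank Cluster, Floating Head Tank
cfg.MODEL.WEIGHTS = "sim_patternnet_backbone.pkl"  # hypothetical checkpoint
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.01
cfg.SOLVER.WARMUP_ITERS = 500
cfg.SOLVER.MAX_ITER = 5000

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```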
Considering AP as a representative metric for evaluating the performance of object detection models, the conclusions drawn for the semantic segmentation task remain valid here. Again, Sim-PatternNet outperforms all other models. Table III reveals that a large portion of the labeled samples in the dataset belongs to the Tank and Floating Head Tank classes, whereas the Tank Cluster class is less frequent. Taking this into account, we show the AP of the various models for each class in Table VIII.

\begin{table} \begin{tabular}{c|c|c} \hline **References** & **mIOU (\%)** & **PA (\%)** \\ \hline [45] & 42.80 & — \\ \hline [47] & 51.00 & — \\ \hline [50] & 65.2 & — \\ \hline [52] & — & 80.49 \\ \hline [51] & 66.1 & 84 \\ \hline [44] & 67.87 & 86.58 \\ \hline [49] & 75.6 & — \\ \hline **Ours** & **79.66** & **96.20** \\ \hline \end{tabular} \end{table} TABLE VI: Comparison of our best results with other methods. Self-supervised pre-training on PatternNet leads to state-of-the-art results.

\begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline **Method** & **AP** & **AP50** & **AP75** & **APs** & **APm** & **APl** \\ \hline **Base-Model** & 37.60 & 46.37 & 42.77 & 30.87 & 41.15 & 75.58 \\ \hline **Sup-Resisc45** & 38.02 & 46.95 & 42.74 & 29.24 & 41.11 & **78.04** \\ \hline **Sup-PatternNet** & 39.46 & 49.92 & 45.49 & 25.46 & **44.67** & 77.41 \\ \hline **Sim-Resisc45** & 37.84 & 48.35 & 43.23 & 29.9 & 42.10 & 74.38 \\ \hline **Sim-PatternNet** & **40.25** & **50.31** & **46.83** & **34.04** & 40.89 & 75.16 \\ \hline \end{tabular} \end{table} TABLE VII: Comparison of different pre-training methods on oil storage tank detection (a Kaggle competition). The Sim-PatternNet model has improved the AP by 2.65% compared to the base-model.

Tank Cluster accounts for only 2.1% of all labeled objects in the dataset; therefore, it is difficult to identify the bounding boxes of this class. As shown in Table VIII (second column), the self-supervised models pre-trained on either PatternNet or Resisc45 perform better than the other models when the number of labeled samples is limited. In Figure 2, the first column shows challenging images of the test set with their corresponding labels. The second and third columns illustrate the predictions of the Faster-RCNN model with Base-Model and Sim-PatternNet as the backbone, respectively. We can conclude that, by utilizing the pre-trained Sim-PatternNet as the backbone of the Faster-RCNN, the object detection model predicts more precise bounding boxes for each object in all images.

**CGI Planes Detection:** This dataset is selected because all its images are synthetic (computer generated), so comparing the performance of the different models on this dataset is worthwhile. We used 80% (400 images) of the dataset for training and 20% (100 images) for model evaluation. We fine-tuned each model on the training part for 2,000 iterations with a batch size of two. The initial learning rate and the number of warm-up iterations are 0.015 and 300, respectively. The results obtained for the different models can be seen in Table IX.
\begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline **Method** & **AP** & **AP50** & **AP75** & **APs** & **APm** & **APl** \\ \hline **Base-Model** & 78.66 & 97.00 & 92.51 & 50.19 & 78.34 & 86.80 \\ \hline **Sup-Resisc45** & 78.48 & 97.02 & 92.65 & **68.33** & 78.41 & 82.88 \\ \hline **Sup-PatternNet** & 78.96 & 97.01 & **93.61** & 65.92 & 78.39 & 86.35 \\ \hline **Sim-Resisc45** & 77.99 & 97.00 & 93.60 & 66.78 & 77.78 & 82.64 \\ \hline \end{tabular} \end{table} TABLE IX: Comparison of different pre-training methods on CGI airplane detection (a Kaggle competition). The Sim-PatternNet model has improved the AP by 1.47 compared to the base-model.

Figure 2: Predictions of the models on some of the challenging test samples. The first column contains the original images, the second column contains the Base-Model results, and the third column contains the Sim-PatternNet predictions.

According to the second column of Table IX, the self-supervised model pre-trained on the PatternNet dataset outperforms the other models when fine-tuning the pre-trained weights on the airplane detection dataset; hence, the previous conclusions are consistent with this experiment. In one of our experiments, we used a Faster-RCNN pre-trained on the COCO2017 dataset (COCO-Model) as the initial model, and then fine-tuned the COCO-Model weights on CGI airplane detection. In Table X, we compare the results of Base-Model, Sim-PatternNet, and COCO-Model. According to Table X, the model pre-trained on COCO2017 performs better than the other models in all reported evaluation metrics. However, the Sim-PatternNet model has its merits: we must consider that the SimSiam algorithm only pre-trains the backbone of the Faster-RCNN, whereas COCO-Model is an entire pre-trained Faster-RCNN. In addition, self-supervised pre-training does not need human supervision, while COCO-Model uses precise human labels at the box level. In Figure 3, we demonstrate the performance of the ImageNet pre-trained and Sim-PatternNet models on some challenging test samples. Figure 3 reveals that the Sim-PatternNet model outperforms the Base-Model on synthetic aerial images.

## VI Conclusions

ImageNet pre-trained models have long been dominant for transfer learning in solving various remote sensing tasks. However, the domain difference between ImageNet and remote sensing images limits the performance of this type of transfer learning, especially for dense prediction problems. One possible solution is to pre-train in-domain features on remote sensing datasets. In this paper, we investigated the generalizability of in-domain features pre-trained with supervised and contrastive self-supervised learning approaches. During pre-training, we pre-trained the ResNet50 model on the Resisc45 and PatternNet datasets. Although many previous works have used image classification problems as downstream tasks, we used the in-domain models pre-trained on remote sensing images to solve semantic segmentation and object detection problems. We solved the semantic segmentation task on the DeepGlobe Land Cover Classification dataset using the DeepLabV3 model, and we used the Faster-RCNN with FPN model to solve the oil storage tank and airplane detection tasks. We obtained state-of-the-art results in all dense prediction tasks using Sim-PatternNet.
Although the numbers of training samples in Resisc45 and PatternNet are almost equal, the supervised and self-supervised pre-trained features of PatternNet have greater generalizability for solving dense prediction problems. Both datasets are multi-resolution but, assuming a uniform distribution of the different spatial resolutions over pixels, the average spatial resolution of PatternNet is significantly higher than that of Resisc45. Therefore, in addition to high class diversity, a large sample size, and similarity to the downstream dataset, the dataset used for pre-training features from remote sensing images must also possess a high spatial resolution.
2306.06593
Dynamics of Polymer Ejection from a Nano-sphere
Polymer ejection from nano-confinement has been of interest due to its relation to various fundamental sciences and applications. However, the ejection dynamics of a polymer with different persistence lengths from confinement through a nanopore is still poorly understood. In this manuscript, a theory is developed for the ejection dynamics of a polymer with the total length $L_0$ and persistence length $l$ from a sphere of diameter $D$. These length-scales specify different regimes, which determine the polymer dynamics and its ejection rate. It is seen that the polymer passes through two or three confinement regimes, in some cases. The total ejection time $\tau$ depends on the polymer dynamics in the various relevant regimes that the polymer experiences. Dependence of the ejection time on the system parameters is discussed according to the theory. The theory predicts that $\alpha$ in $\tau \sim L_0^{\alpha}$ changes between 1 and 1.7, $\beta$ in $\tau \sim D^{\beta}$ changes between 3 and 5, and $\gamma$ in $\tau \sim l^{\gamma}$ is often smaller than 1, in the studied range of the parameters.
Farzaneh Moazemi, Samaneh Ghanbari-Kashan, Narges Nikoofard
2023-06-11T05:22:59Z
http://arxiv.org/abs/2306.06593v1
# Dynamics of Polymer Ejection from a Nano-sphere

###### Abstract

Polymer ejection from nano-confinement has been of interest due to its relation to various fundamental sciences and applications. However, the ejection dynamics of a polymer with different persistence lengths from confinement through a nanopore is still poorly understood. In this manuscript, a theory is developed for the ejection dynamics of a polymer with the total length \(L_{0}\) and persistence length \(l\) from a sphere of diameter \(D\). These length-scales specify different regimes, which determine the polymer dynamics and its ejection rate. It is seen that the polymer passes through two or three confinement regimes, in some cases. The total ejection time \(\tau\) depends on the polymer dynamics in the various relevant regimes that the polymer experiences. Dependence of the ejection time on the system parameters is discussed according to the theory. The theory predicts that \(\alpha\) in \(\tau\sim L_{0}^{\alpha}\) changes between 1 and 1.7, \(\beta\) in \(\tau\sim D^{\beta}\) changes between 3 and 5, and \(\gamma\) in \(\tau\sim l^{\gamma}\) is often smaller than 1, in the studied range of the parameters.

## I Introduction

Advances in nano-technology have enabled confinement of polymers in various nano-structures [1; 2; 3; 4]. This has motivated new efforts to improve theories for describing the behavior of confined polymers [5; 6; 7; 8]. On the other hand, polymer confinement is a ubiquitous phenomenon in nature. In a eukaryotic cell, biopolymers are often confined in different organelles, including the nucleus [14]. The genetic material is also confined in bacteria [15] and viruses [16]. Confinement of polymers also has notably important applications, such as long DNA sequencing [9], data storage [10] and polymer separation [11]. Previous studies show that polymers have different properties when they are confined in nano-scale geometries [17; 18; 19]. The effect of confinement is more complicated when the confined polymer has a persistence length comparable to the size of the confining geometry [22], due to the existence of several length scales in the system. Among different natural and synthetic polymers, double-stranded DNA has the largest persistence length. Compaction and ejection of a polymer from the closed geometry of a sphere are related to natural phenomena, such as viral genome packaging and ejection [20; 21]. Ejection of a confined polymer from a sphere has been elucidated theoretically, by computer simulations, and experimentally [23; 24; 25]. These studies consider how system parameters such as the size of the confining geometry, the polymer length, the packaging history or the triggering force affect the ejection process. The effects of persistence length and DNA cholesteric interactions have been considered using computer simulations [26; 27]. However, a theory that describes the dependence of the ejection process on the persistence length of the polymer is lacking, to the authors' knowledge. Initial theories for polymer ejection from confinement assumed that the ejection time is proportional to the inverse of the free energy of the polymer in confinement [28]. Sakaue and Yoshinaga presented another theory, in which the rate of change of the free energy of confinement is balanced with the rate of energy dissipation in the system [29]. Later, this theory was improved by revising the final stage of the polymer ejection, when there is no free energy of confinement [23].
In this manuscript, the theory is revisited to describe the ejection dynamics of a polymer which has an arbitrary persistence length. It is shown that considering the persistence length changes the theory of polymer ejection significantly. Predictions of the theory for the dependence of the ejection time on the total length of the polymer, the size of the confining geometry and the persistence length are discussed.

## II Free energy of a semi-flexible polymer in a nano-sphere

In this section, the theory of a semi-flexible polymer confined inside a sphere is reviewed briefly, as an integral part of the study. A semi-flexible polymer is effectively described by cylindrical Kuhn monomers. For a polymer with persistence length \(P\), the Kuhn monomers have length \(l=2P\) and width \(b\). The number of Kuhn monomers is equal to \(N=\frac{L}{l}\), where \(L\) is the contour length of the polymer. The radius of gyration of a free polymer is described by \(R_{g}\approx v^{\frac{1}{5}}l^{\frac{2}{5}}N^{\frac{3}{5}}\), in which \(v=l^{2}b\) is the second virial coefficient [30]. A polymer with persistence length \(P\) and contour length \(L\) confined in a closed cavity of size \(D\) (\(<R_{g}\)) is described below in five different regimes (Fig. 1) [22].

### Fluctuating Semi-dilute Regime

In this regime, the polymer is divided into confinement blobs. Inside each blob, the statistics of the monomers is not perturbed by the confinement (Fig. 2(a)). Thus, the dependence of the size of the blobs \(\xi\) on the contour length inside each blob \(L_{b}\) is

\[\xi\approx l^{\frac{1}{5}}b^{\frac{1}{5}}L_{b}^{\frac{3}{5}}. \tag{1}\]

The blobs are closely packed inside the sphere; \(\frac{L_{b}}{\xi^{3}}\approx\frac{L}{D^{3}}\). The two former equations give

\[L_{b}\approx\frac{D^{\frac{15}{4}}}{l^{\frac{3}{4}}b^{\frac{3}{4}}L^{\frac{5}{4}}}. \tag{2}\]

The free energy of confinement is equal to the thermal energy times the number of confinement blobs [22];

\[F_{c}\approx k_{B}T\frac{L}{L_{b}}\approx k_{B}T\frac{l^{\frac{3}{4}}b^{\frac{3}{4}}L^{\frac{9}{4}}}{D^{\frac{15}{4}}}. \tag{3}\]

The sphere size should be smaller than the radius of gyration of the polymer in the bulk, \(D<R_{g}\). Besides, the confinement blob should be larger than the thermal blob, \(\xi>\xi_{T}\). The thermal blob \(\xi_{T}\) is a length scale in the system; on smaller length scales, the excluded volume interaction is smaller than the thermal energy. Indeed, the excluded volume interactions between the confinement blobs are assumed in the derivation of Eq. 2. With the above conditions, substituting \(R_{g}\), \(\xi\) and \(\xi_{T}\approx\frac{l^{2}}{b}\) shows that regime I holds within the boundaries [22]

\[l\left(\frac{L}{b}\right)^{\frac{1}{3}}<D<l^{\frac{1}{5}}b^{\frac{1}{5}}L^{\frac{3}{5}}. \tag{4}\]

### Mean-field Semi-dilute Regime

As the density of the polymer in the sphere increases, the blob size becomes smaller than the size of the thermal blob, \(\xi<\xi_{T}\), and there is a crossover to regime II. In this regime, the polymer behavior is Gaussian at all length scales (Fig. 2(b)). Besides, the mean-field approximation is valid and the free energy is calculated from the second term of the virial expansion [22]

\[F_{c}\approx k_{B}Tvc^{2}D^{3}\approx k_{B}T\frac{L^{2}b}{D^{3}}. \tag{5}\]

Here, \(c=\frac{L}{lD^{3}}\) is the concentration of the Kuhn monomers inside the sphere.
The length scale at which the concentration fluctuations can be ignored is called the correlation length, and is obtained from the random phase approximation; \(\xi\approx\left(\frac{l^{2}}{12cv}\right)^{\frac{1}{2}}\approx\left(\frac{D^{3}l}{Lb}\right)^{\frac{1}{2}}\) [22]. The contour length related to the correlation length is obtained using \(\xi\approx(lL_{b})^{\frac{1}{2}}\);

\[L_{b}\approx\frac{D^{3}}{Lb}. \tag{6}\]

Each correlation length should contain several monomers and be smaller than the size of the sphere; \(l<\xi<D\). These conditions, together with the initial condition for this regime, \(\xi<\xi_{T}\), give the boundaries for regime II [22]

\[(Lbl)^{\frac{1}{3}}<D<\min\{l\left(\frac{L}{b}\right)^{\frac{1}{3}},\frac{Lb}{l}\}. \tag{7}\]

Figure 1: Left: Phase diagram showing the different confinement and attachment regimes for a polymer of length \(L_{0}\) and Kuhn length \(l\) confined in a nano-sphere of size \(D\). During ejection, the polymer length inside the sphere decreases and the system moves along the x-axis in the phase diagram. The arrows show some typical paths in which the polymer experiences several regimes during its ejection. Right: The polymer starts ejection from regime III with an ordered state. Then, it experiences regime II and afterwards regime IV, before complete ejection from the sphere. The polymer in the sphere is shown in regimes III, II and IV, from left to right, respectively.

### Liquid Crystalline Regime

As the density increases, the correlation length becomes comparable to the Kuhn length, \(\xi<l\). The system responds by a transition and the coexistence of isotropic and nematic phases (Fig. 2(c)). The free energy of confinement is due to the loss in the orientational entropy, and is estimated by the number of Kuhn segments [22];

\[F_{c}\approx k_{B}T\frac{L}{l}. \tag{8}\]

In this regime, the Kuhn segments have arranged inside the sphere, and the bending energy of the Kuhn segments is not relevant yet. Thus, the size of the sphere should be larger than the Kuhn length, \(D>l\). Obviously, the volume of the sphere should be larger than the total volume of the monomers, \(D^{3}>Lb^{2}\). Overall, the boundaries for regime III are [22]

\[\max\{\left(Lb^{2}\right)^{\frac{1}{3}},l\}<D<(Lbl)^{\frac{1}{3}}. \tag{9}\]

### Ideal Chain Regime

When the correlation length of regime II becomes larger than the size of the system, \(\xi\sim\left(\frac{D^{3}l}{Lb}\right)^{\frac{1}{2}}>D\), the polymer enters regime IV [22]. The correlation length gives a description of fluctuations in the polymer configuration (Fig. 2(d)). Also, in this regime, the polymer behavior is Gaussian at all length scales. The polymer is divided into blobs, inside which the confinement is not felt by the monomers; \(\xi\approx(lL_{b})^{\frac{1}{2}}\). These blobs completely overlap each other and their size is equal to the size of the sphere; \(\xi\approx D\). The contour length of the polymer inside the blobs is

\[L_{b}\approx\frac{D^{2}}{l}. \tag{10}\]

The free energy of confinement results from the loss of the degrees of freedom of each blob;

\[F_{c}\approx k_{B}T\frac{L}{L_{b}}\approx k_{B}T\frac{lL}{D^{2}}. \tag{11}\]

In this regime, the sphere size is smaller than the correlation length; however, the bending energy is not yet dominant. Thus, the sphere size should be larger than the Kuhn length, \(D>l\).
Besides, the size of the sphere should also be smaller than the radius of gyration of a free _Gaussian_ polymer, in order to have a confinement effect. These conditions result in [22]

\[\max\{\frac{Lb}{l},l\}<D<(lL)^{\frac{1}{2}}. \tag{12}\]

### Bending Regime

When the size of the sphere becomes smaller than the Kuhn length, \(D<l\), the behavior of the confined polymer is dominated by the bending energy (Fig. 2(e)) [22]. In this regime, the Odijk length \(\lambda\) is the contour length of the polymer between two deflections from the sphere surface [31];

\[\lambda^{3}\approx D^{2}l. \tag{13}\]

Figure 2: Configurations of a semi-flexible polymer inside a nano-sphere. (a) In regime I, the polymer is divided into confinement blobs. The polymer length inside each blob does not feel the confinement. (b) In regime II, the polymer density inside the sphere increases. This results in an ideal behavior for the polymer in this regime. (c) The density further increases in regime III. Thus, entropic forces between different parts of the polymer cause an ordered state in the polymer configuration. (d) In regime IV, the radius of the sphere is smaller. The system is no longer uniform, unlike a polymer in the bulk. (e) In regime V, the bending energy governs the polymer behavior and results in another ordered state.

The free energy is obtained from the bending energy of a polymer of length \(L\) and bending modulus \(\kappa=k_{B}Tl\) with a radius of curvature of order \(D\);

\[F_{c}\approx\kappa\frac{L}{D^{2}}\approx k_{B}T\frac{lL}{D^{2}}. \tag{14}\]

This approximation is valid at low volume fractions, \(\phi<0.2\). At higher volume fractions, the interaction between the Kuhn segments becomes dominant. In this regime, the volume of the sphere should be larger than the total volume of the monomers. The boundaries for regime V are [22]

\[\left(Lb^{2}\right)^{\frac{1}{3}}<D<l. \tag{15}\]

## III Ejection dynamics of a semi-flexible polymer from a nano-sphere

In this section, we develop a theory for the ejection dynamics of a semi-flexible polymer from a sphere. The contour length of the polymer remaining inside the sphere at time \(t\) is denoted by \(L(t)\). The rate of change of the free energy is balanced by the rate of energy dissipation [29],

\[\dot{F}(t)\approx-\eta l\left[\dot{L}(t)\right]^{2}. \tag{16}\]

The right hand side is the rate of energy dissipation, where \(\eta\) is the viscosity of the solvent. Similar to previous studies [22; 23], it is assumed that one Kuhn length \(l\) near the nano-pore contributes to the dissipation. This is because nearly a Kuhn length of the polymer beside the nano-pore is moving, while the rest of the polymer does not feel the ejection. At first, the ejection is driven by the confinement free energy. As the length of the polymer inside the sphere decreases, the free energy of confinement ceases. Then, the free energy of attachment of the polymer to the sphere becomes dominant. The free energy of attachment to a surface results from the reduction in the possible configurations of the chain beside the surface; \(k_{B}T(1-\lambda)\ln(\frac{L}{l})\). Here, \(\lambda\) is a constant of order unity and \(\frac{L}{l}\) is the number of Kuhn segments in the attached chain. For a polymer escaping through a nano-pore between two compartments, the free energy of attachment has two terms arising from the two sections of the chain on the two sides; \(F_{a}(t)\sim k_{B}T\left[(1-\lambda_{i})\ln(\frac{L(t)}{l})+(1-\lambda_{o})\ln(\frac{L_{0}-L(t)}{l})\right]\) [23].
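Before specializing to the individual regimes, it is instructive to integrate the rate balance of Eq. 16 numerically. The following minimal sketch sets \(k_{B}T=\eta=b=1\) (so that \(\tau_{0}=1\)) and drops all order-unity prefactors, so only the scaling behavior is meaningful; regime IV (Eq. 11) is used as an example, and the final attachment stage is ignored.

```python
import numpy as np

def eject(dFdL, L0, l, dt=0.01, max_steps=10**7):
    """Integrate eta*l*Ldot**2 = -Fdot, i.e. Ldot = -(dF/dL)/(eta*l),
    in units where k_B*T = eta = b = 1 (so tau_0 = 1)."""
    L, t, traj = L0, 0.0, [(0.0, L0)]
    for _ in range(max_steps):
        L -= dFdL(L) / l * dt
        t += dt
        if L <= 0:
            break
        traj.append((t, L))
    return np.array(traj)

# Regime IV (Eq. 11): F = l*L/D**2, so dF/dL = l/D**2, reproducing the
# linear decay L(t) = L0*(1 - t/tau_1IV) with tau_1IV = L0*D**2.
D, l, L0 = 5.0, 2.0, 200.0
traj = eject(lambda L: l / D**2, L0, l)
print(traj[-1, 0])  # close to L0*D**2 = 5000, up to the ignored attachment stage
```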
Using the rate balance (Eq. 16) gives \(l\dot{L}(t)\sim-\frac{b^{3}}{\tau_{0}}\left[\frac{1-\lambda_{i}}{L(t)}-\frac{1-\lambda_{o}}{L_{0}-L(t)}\right]\), where \(\tau_{0}=\frac{\eta b^{3}}{k_{B}T}\) and \(L_{0}\) is the total length of the polymer. The second term on the right side is negligible, because \(L_{0}\gg L(t)\). Solving the resulting differential equation, \(\dot{L}(t)\sim-\frac{b^{3}}{\tau_{0}lL(t)}\), gives

\[L(t)\sim L_{\tau}\left(1-\frac{t-\tau}{\tau_{a}}\right)^{\frac{1}{2}},\qquad\qquad\tau_{a}\sim\tau_{0}\frac{L_{\tau}^{2}l}{b^{3}}. \tag{17}\]

Here, \(\tau\) is the crossover time between the confinement and the attachment stages, and \(L_{\tau}\) is the length of the polymer inside the sphere at time \(\tau\) [23]. In the following, the polymer dynamics is described in the different regimes of confinement. The polymer behavior is simple in regimes I, IV and V. However, the polymer experiences a combination of different confinement regimes during its ejection in regimes II and III. Accordingly, we first discuss the simple regimes and then the more complicated ones.

### Regime I

At the beginning, the polymer dynamics is determined by the confinement free energy (Fig. 3(a)). Using the rate balance (Eq. 16) and the confinement free energy in regime I (Eq. 3) gives the polymer velocity; \(\dot{L}(t)\sim-\frac{1}{\tau_{0}}l^{-\frac{1}{4}}b^{\frac{15}{4}}D^{-\frac{15}{4}}L(t)^{\frac{5}{4}}\). By integrating this equation, the time evolution of the contour length inside the sphere becomes

\[L(t)\approx L_{0}\left(1+\frac{t}{\tau_{1,I}}\right)^{-4},\qquad\qquad\tau_{1,I}\sim\tau_{0}\left(\frac{lD^{15}}{b^{15}L_{0}}\right)^{\frac{1}{4}}. \tag{18}\]

This equation is valid until \(t<\tau_{2,I}\), after which the sphere has no confinement effect on the polymer. At this time, there remains one blob inside the sphere (Fig. 3(a)). Using Eq. 2 to find the contour length inside one blob at this time, one has \(L(\tau_{2,I})\approx L_{b}(\tau_{2,I})\approx D^{\frac{5}{3}}\left(lb\right)^{-\frac{1}{3}}\). Substituting in Eq. 18 gives \(\tau_{2,I}+\tau_{1,I}\sim\tau_{0}\left(\frac{D^{10}l}{b^{11}}\right)^{\frac{1}{3}}\). The polymer continues its ejection while it is attached to the sphere (Fig. 3(a)). The polymer dynamics at this stage is obtained by substituting \(L_{\tau}=L(\tau_{2,I})\) in Eq. 17; \(L(t)\approx D^{\frac{5}{3}}\left(lb\right)^{-\frac{1}{3}}\left(1-\frac{t-\tau_{2,I}}{\tau_{a,I}}\right)^{\frac{1}{2}}\), where \(\tau_{a,I}\sim\tau_{0}\left(\frac{D^{10}l}{b^{11}}\right)^{\frac{1}{3}}\). The polymer completely leaves the sphere, \(L(\tau_{I})=0\), at the time \(\tau_{I}\approx\tau_{a,I}+\tau_{2,I}\approx\tau_{0}\left[2\left(\frac{lD^{10}}{b^{11}}\right)^{\frac{1}{3}}-\left(\frac{lD^{15}}{b^{15}L_{0}}\right)^{\frac{1}{4}}\right]\). The obtained equations are in relative agreement with Ref. [23]. There are subtle differences, because the dissipation rate in that study depends on the total length of the polymer.

### Regime IV

At the beginning of the ejection, the confinement free energy determines the polymer dynamics (Fig. 3(c)). Balancing the energy (Eq. 11) and the dissipation terms gives the polymer velocity; \(\dot{L}(t)\approx-\frac{1}{\tau_{0}}\frac{b^{3}}{D^{2}}\). Accordingly, one obtains the instant length of the polymer

\[L(t)\approx L_{0}\left(1-\frac{t}{\tau_{1,IV}}\right),\qquad\qquad\tau_{1,IV}\approx\tau_{0}\frac{L_{0}D^{2}}{b^{3}}. \tag{19}\]
Again, this equation is valid until \(t<\tau_{2,IV}\), when the polymer length inside the sphere becomes so small that it does not feel the confinement (Fig. 3(c)). At this time, there remains one blob inside the sphere; \(L(\tau_{2,IV})\approx L_{b}\approx\frac{D^{2}}{l}\). Substituting this condition in Eq. 19 gives \(\tau_{1,IV}-\tau_{2,IV}\approx\tau_{0}\frac{D^{4}}{lb^{3}}\). The rest of the process continues under the attachment free energy (Fig. 3(c)). Substituting \(L(\tau_{2,IV})\approx\frac{D^{2}}{l}\) in Eq. 17 gives the polymer dynamics; \(L(t)\approx\frac{D^{2}}{l}\left(1-\frac{t-\tau_{2,IV}}{\tau_{a,IV}}\right)^{\frac{1}{2}}\), where \(\tau_{a,IV}\sim\tau_{0}\frac{D^{4}}{lb^{3}}\). The total ejection time of the polymer from the sphere is the time at which the polymer length becomes zero in the above equation; \(L(\tau_{IV})=0\). The result is \(\tau_{IV}\approx\tau_{a,IV}+\tau_{2,IV}\approx\tau_{0}\left(\frac{D^{2}L_{0}}{b^{3}}\right)\). It should be noted that the dynamics is different in the attachment stages of regimes I and IV. This is because the polymer lengths at the crossover times from the confinement to the attachment stage, \(L(\tau_{2,I})\) and \(L(\tau_{2,IV})\), are different.

### Regime V

Substituting the derivative of the confinement free energy (Eq. 14) in the rate balance (Eq. 16) gives the velocity of polymer ejection from the sphere; \(\dot{L}(t)\approx-\frac{1}{\tau_{0}}\frac{b^{3}}{D^{2}}\). Thus, the polymer dynamics is described by

\[L(t)=L_{0}\left(1-\frac{t}{\tau_{1,V}}\right),\qquad\qquad\tau_{1,V}\approx\tau_{0}\frac{L_{0}D^{2}}{b^{3}}. \tag{20}\]

Indeed, the ejection is dominated by the bending energy of the polymer at the sphere boundaries. Thus, the equation is valid until \(t<\tau_{2,V}\), when the polymer length inside the sphere becomes equal to the sphere diameter (Fig. 3(d)). Substituting this condition, \(L(\tau_{2,V})\approx D\), into Eq. 20 gives \(\tau_{1,V}-\tau_{2,V}\approx\tau_{0}\frac{D^{3}}{b^{3}}\). The above equations for regime V are only applicable at small volume fractions. The rest of the polymer ejection continues in the attachment regime (Fig. 3(d)). Using \(L(\tau_{2,V})\sim D\) in Eq. 17 gives the polymer dynamics; \(L(t)\approx D\left(1-\frac{t-\tau_{2,V}}{\tau_{a,V}}\right)^{\frac{1}{2}}\), where \(\tau_{a,V}\sim\tau_{0}\frac{D^{2}l}{b^{3}}\). The total ejection time from the sphere is obtained using \(L(\tau_{V})=0\) in the above equation; \(\tau_{V}\approx\tau_{2,V}+\tau_{a,V}\approx\tau_{0}\left(\frac{D^{2}l}{b^{3}}+\frac{D^{2}L_{0}}{b^{3}}-\frac{D^{3}}{b^{3}}\right)\).

### Regime II

When the ejection starts, the polymer dynamics is governed by the confinement free energy of regime II. Balancing the rate of change of the free energy (the time derivative of Eq. 5) with the dissipation rate gives \(\dot{L}(t)\sim-\frac{1}{\tau_{0}}\frac{b^{4}}{lD^{3}}L(t)\). Solving this differential equation results in the polymer dynamics;

\[L(t)\sim L_{0}\exp(-\frac{t}{\tau_{1,II}}),\qquad\qquad\tau_{1,II}\sim\tau_{0}\frac{lD^{3}}{b^{4}}. \tag{21}\]

This equation is valid until \(t<\tau_{2,II}\), when the density of the polymer in the sphere decreases so much that the polymer enters one of regimes I or IV; in other words, one of the upper limits for regime II (Eq. 7) is violated at this time.
Figure 3: The process of polymer ejection from the nano-sphere. \(\dot{F_{c}}\) and \(\dot{F_{d}}\) are the confinement and drag forces. They are parallel to the polymer contour and are applied on the monomer which is inside the nano-pore. (a) Top: the polymer starts ejection in regime I. Middle: there remains one blob inside the sphere. Bottom: the nano-sphere has no confinement effect on the polymer; attachment to the sphere governs the ejection. (b) Top: the polymer starts ejection from regime II. Middle: the polymer experiences regime I, another confinement regime. Bottom: attachment of the polymer to the sphere governs the ejection. (c) Top: the polymer starts ejection in regime IV. Middle: there remains one blob inside the sphere; the blob is different from the blob in regime I, since the sphere is smaller. Bottom: attachment to the sphere governs the ejection. (d) Top: the polymer starts ejection in regime V. Middle: the polymer remaining inside the sphere is comparable to the radius of the sphere. Bottom: there is no bending energy; attachment to the sphere governs the ejection.

**Case 1, \(\frac{Db}{l^{2}}>1\):** When \(L(\tau_{2,II})\sim b\left(\frac{D}{l}\right)^{3}\), the driven process in regime II ceases and the system enters regime I (Fig. 3(b)). Substituting this condition into Eq. 21, one has \(\tau_{2,II}\sim\tau_{0}\frac{lD^{3}}{b^{4}}\log\left(\frac{L_{0}l^{3}}{bD^{3}}\right)\). The polymer dynamics in regime I is obtained by substituting \(L(\tau_{2,II})\) for \(L_{0}\) in Eq. 18; \(L(t)\sim b(\frac{D}{l})^{3}(1+\frac{t-\tau_{2,II}}{\tau_{1,I}})^{-4}\), where \(\tau_{1,I}\sim\tau_{0}\frac{lD^{3}}{b^{4}}\). The relation is valid in the time interval \(\tau_{2,II}<t<\tau_{3,II}\), in which \(\tau_{3,II}\) is the time after which the polymer ejection is no longer driven by the confinement energy. The required condition is that there remains one blob inside the sphere, \(L(\tau_{3,II})\approx D^{\frac{5}{3}}\left(lb\right)^{-\frac{1}{3}}\). This relation, with the latter equation for the polymer dynamics in regime I, gives \(\tau_{1,I}+\tau_{3,II}-\tau_{2,II}\sim\tau_{0}(\frac{lD^{10}}{b^{11}})^{\frac{1}{3}}\). The ejection continues under the dominance of the attachment free energy, similar to regime I (Fig. 3(b)). Thus, the polymer dynamics is \(L(t)\sim\frac{D^{\frac{5}{3}}}{\left(lb\right)^{\frac{1}{3}}}\left(1-\frac{t-\tau_{3,II}}{\tau_{a,I}}\right)^{\frac{1}{2}}\), in which \(\tau_{a,I}\sim\tau_{0}\left(\frac{D^{10}l}{b^{11}}\right)^{\frac{1}{3}}\). Using \(L(\tau_{II})=0\) in this equation, the total ejection time from the sphere becomes \(\tau_{II}\sim\tau_{3,II}+\tau_{a,I}\sim 2\tau_{0}\left(\frac{lD^{10}}{b^{11}}\right)^{\frac{1}{3}}+\tau_{0}\frac{lD^{3}}{b^{4}}\log\left(\frac{L_{0}l^{3}}{bD^{3}}\right)-\tau_{0}\frac{lD^{3}}{b^{4}}\).

**Case 2, \(\frac{Db}{l^{2}}<1\):** When \(L(\tau_{2,II})\frac{b}{l}\sim D\), the driven process in regime II ceases and the system enters regime IV. Substituting this condition into Eq. 21 gives the crossover time \(\tau_{2,II}\sim\tau_{0}\frac{lD^{3}}{b^{4}}\log(\frac{L_{0}b}{Dl})\). Using \(L(\tau_{2,II})\) instead of \(L_{0}\) in Eq. 19 gives the polymer dynamics in regime IV; \(L(t)\sim\frac{Dl}{b}(1-\frac{t-\tau_{2,II}}{\tau_{1,IV}})\), with \(\tau_{1,IV}\sim\tau_{0}\frac{lD^{3}}{b^{4}}\). This relation is valid in the time interval \(\tau_{2,II}<t<\tau_{3,II}\). At time \(\tau_{3,II}\), the driven process ceases, because there remains one blob inside the sphere; \(L(\tau_{3,II})\approx\frac{D^{2}}{l}\).
Using this condition in the latter equation for the polymer dynamics gives \(\tau_{1,IV}-\tau_{3,II}+\tau_{2,II}\sim\tau_{0}\frac{D^{4}}{lb^{3}}\). The rest of the ejection is dominated by the attachment free energy, and the dynamics is similar to that of regime IV. After minor calculations, the total ejection time from the sphere becomes \(\tau_{II}\sim\tau_{3,II}+\tau_{a,IV}\sim\tau_{0}\frac{D^{3}l}{b^{4}}\left(1+\log\frac{L_{0}b}{Dl}\right)\).

### Regime III

The rate of change of the free energy is obtained by taking the derivative of Eq. 8. Balancing the energy and the dissipation rates gives the polymer velocity \(\dot{L}(t)\sim-\frac{1}{\tau_{0}}\frac{b^{3}}{l^{2}}\). From this relation, the contour length remaining inside the sphere at time \(t\) is

\[L(t)\approx L_{0}\left(1-\frac{t}{\tau_{1,III}}\right),\qquad\qquad\tau_{1,III}\approx\tau_{0}\frac{l^{2}L_{0}}{b^{3}}. \tag{22}\]

This relation is valid until \(t<\tau_{2,III}\), when the polymer density inside the sphere decreases so much that the system experiences regime II. Indeed, as the contour length of the polymer inside the sphere decreases, the polymer first enters regime II and then one of regimes I or IV (Fig. 1). Entering regime II occurs when the upper boundary for \(D\) (Eq. 9) is violated; \(L(\tau_{2,III})\sim\frac{D^{3}}{lb}\). This condition with Eq. 22 gives \(\tau_{1,III}-\tau_{2,III}\approx\tau_{0}\frac{D^{3}l}{b^{4}}\). Substituting \(L(\tau_{2,III})\) for \(L_{0}\) into Eq. 21 for regime II gives \(L(t)\sim\frac{D^{3}}{lb}\exp(-\frac{t-\tau_{2,III}}{\tau_{1,II}})\), in which \(\tau_{1,II}\sim\tau_{0}\frac{lD^{3}}{b^{4}}\). This relation is valid in the time interval \(\tau_{2,III}<t<\tau_{3,III}\), when the polymer enters one of regimes I or IV.

**Case 1, \(\frac{Db}{l^{2}}>1\):** The polymer enters regime I and then experiences the attachment stage, until it completely leaves the sphere. The polymer dynamics and the crossover times at these two final stages are similar to those of case 1 of regime II. The total ejection time becomes \(\tau_{III}\sim 2\tau_{0}\left(\frac{lD^{10}}{b^{11}}\right)^{\frac{1}{3}}+2\tau_{0}\frac{lD^{3}}{b^{4}}\left(-1+\log\frac{l}{b}\right)+\tau_{0}\frac{L_{0}l^{2}}{b^{3}}\).

**Case 2, \(\frac{Db}{l^{2}}<1\):** The polymer enters regime IV and then experiences the attachment stage, until its ejection ends. The polymer dynamics and the crossover times in regime IV and the attachment stage are similar to those of case 2 of regime II. The total ejection time becomes \(\tau_{III}\sim 2\tau_{0}\frac{lD^{3}}{b^{4}}\log\frac{D}{l}+\tau_{0}\frac{L_{0}l^{2}}{b^{3}}\).

Figure 4: Predictions of the theory for the ejection process versus the polymer length, for different persistence lengths. The volume fraction of the polymer inside the nano-sphere is fixed, \(\phi=0.1\). (a) The initial regime of the polymer. (b) Log-log plot of the ejection time versus the polymer length. A linear behavior is observed. (c) The exponent \(\alpha\) in the relation \(\tau\approx L_{0}^{\alpha}\), which describes the dependence of the ejection time on the polymer length. It is seen that the exponent changes between 1 and 1.7, for all parameters.

## IV Predictions of the theory

In the previous section, the theory was obtained by using scaling calculations. Besides, the ejection time for each regime contains several terms, so it requires many fitting parameters.
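As an illustration of how the exponents below are extracted, the following sketch evaluates the regime II, case 2 estimate derived above at fixed volume fraction and reads off the local exponent \(\alpha\) as the slope of the log-log curve. Prefactors are set to unity and \(b=\tau_{0}=1\), so absolute values are indicative only, and the parameter window is chosen so that the conditions for regime II, case 2 hold.

```python
import numpy as np

# Regime II, case 2 (Db/l**2 < 1), with tau_0 = b = 1:
# tau ~ l*D**3 * (1 + log(L0/(D*l)))
def tau_II_case2(L0, D, l):
    return l * D**3 * (1.0 + np.log(L0 / (D * l)))

phi, l = 0.1, 5.0                  # fixed volume fraction; Kuhn length in units of b
L0 = np.logspace(2.0, 3.0, 50)     # window where the initial regime is II, case 2
D = (L0 / phi)**(1.0 / 3.0)        # from phi = L0*b**2/D**3 with b = 1

tau = tau_II_case2(L0, D, l)
alpha = np.gradient(np.log(tau), np.log(L0))  # local slope of the log-log curve
```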
To the authors' knowledge, the effect of the persistence length on the ejection time of a polymer confined inside a sphere has been investigated in one previous study, by using computer simulations [27]. Unfortunately, the simulation data is not sufficient to find all the fitting parameters and validate the theory. However, predictions of the theory are given here, for comparison with future simulations and experiments. In the following, the volume fraction of the polymer inside the sphere, \(\phi=\frac{L_{0}b^{2}}{D^{3}}=0.1\), is kept fixed, according to the related simulations [27]. The predictions of the theory for the dependence of the ejection process on the system parameters are summarized in Figs. 4 and 5. The persistence length of double-stranded DNA is equal to 50nm, while its width is equal to 2.5nm. Thus, we can take \(b=2.5nm\) and \(l=20b\). Other synthetic and natural polymers have smaller persistence lengths [32]. As a result, we study persistence lengths below \(20b\) in the following. The largest length of the polymer in this study is \(L_{0}=1000b\). Larger polymers have also been tested, and the same behavior is observed. Figure 4(a) shows the initial regime versus the polymer length, for different persistence lengths. On the plots, the values for cases 1 and 2 of regime II (III) are shown by 2 (3) and 2.5 (3.5), respectively. The zero value for the regime describes the condition in which the sphere has no confining effect on the polymer. It should be noted that the diameter of the sphere changes with the total length of the polymer, to keep the volume fraction fixed. Thus, one cannot deduce the regimes that the polymer experiences during its ejection from Fig. 4(a). It is seen that both the persistence and polymer lengths determine the initial regime of the polymer. The initial regime determines different aspects of the statics and the dynamics of the semi-flexible polymer. Figure 4(b) represents the log-log plot of the ejection time versus the polymer length. The ejection time is an increasing function of the persistence and polymer lengths. Dependence of the ejection time on the system parameters is simple in, e.g., regime IV, by using the related equation. However, the dependence of the ejection time on the system parameters is not easily deduced from the equations of Sec. III in the other regimes. Instead, it is possible to use the local slope of the curves for the ejection time in Fig. 4(b). Figure 4(c) shows the exponent \(\alpha\) in the relation \(\tau\approx L_{0}^{\alpha}\). It is calculated by using the local slope of the curves in Fig. 4(b). The value of \(\alpha\) changes between 1 and 1.7 for all studied parameters. It is seen that \(\alpha\) is a decreasing function of the polymer length in regime I, regime II case 2 and regime III case 2. It is constant with respect to the polymer length in regime II case 1 and regime IV. \(\alpha\) increases with the polymer length in regime V. Figure 5(a) shows the exponent \(\beta\) that describes the dependence of the ejection time on the sphere diameter, \(\tau\approx D^{\beta}\). The exponent is calculated for various persistence lengths and sphere diameters, for which the polymer experiences different initial regimes (inset of Fig. 5(a)). It is observed that the exponent \(\beta\) takes values between 3 and 5. The range of values for \(\beta\) is different from that of \(\alpha\); thus, the dependence of the ejection time on the sphere diameter is stronger than its dependence on the polymer length. Besides, Fig.
5(a) shows that \(\beta\) is a decreasing function of the sphere diameter in regime I, regime II case 2 and regime III case 2. It is constant with respect to the sphere diameter in regime II case 1 and regime IV. \(\beta\) increases with the sphere diameter in regime V. It should be noted that the polymer length and the sphere diameter depend on each other, through the relation \(\phi=\frac{L_{0}b^{2}}{D^{3}}\); thus, a similar behavior is observed for \(\alpha\) and \(\beta\).

Figure 5: Dependence of the ejection time on the sphere diameter and persistence length, for volume fraction \(\phi=0.1\). (a) The exponent \(\beta\) that describes the dependence of the ejection time on the sphere diameter. It is seen that \(\beta\) takes values between 3 and 5, for different sphere diameters and persistence lengths. Inset: the relevant regime for the studied parameters. (b) The exponent \(\gamma\) that describes the dependence of the ejection time on the persistence length. It is seen that \(\gamma\) takes values between 0 and 1, for most of the sphere diameters and persistence lengths. Inset: the relevant regime for the studied parameters.

Figure 5(b) shows the exponent \(\gamma\) that describes the dependence on the persistence length, \(\tau\approx l^{\gamma}\). The relevant initial regimes are shown in the inset, for different sphere diameters. It is seen that the polymer falls in regime II case 1 only in a small interval of persistence lengths. Outside this interval, the exponent \(\gamma\) takes values smaller than 1. So, the dependence of the ejection time on the persistence length is generally weaker than its dependence on the polymer length and the sphere diameter. Figure 5(b) shows that the ejection time does not depend on the persistence length in regime IV. It is also seen that the exponent \(\gamma\) is constant in regime II case 1. It decreases with the persistence length in regime II case 2. An increasing dependence on the persistence length is observed in regime V.

## V Conclusions

Here, the ejection dynamics of a polymer of finite persistence length from a nano-sphere was studied theoretically. The dynamics and the ejection time were obtained for different parameters of the system. In this study, it was assumed that one Kuhn length beside the nano-pore contributes to the dissipation. The authors have also investigated the case in which one correlation length in each regime contributes to the dissipation. Assuming one correlation length for the dissipation, it is observed that the ejection time is not an increasing function of the persistence length, which is not in agreement with previous simulations [27]. The theory was obtained for the situation in which there are no hydrodynamic interactions between the monomers, in accordance with the simulations in Refs. [23; 27]. However, it is straightforward to include hydrodynamics in the present formalism. In this situation, the dissipation would occur over the range of a correlation length, not a Kuhn length. Thus, the rate of dissipation would be \(T\dot{S}(t)\approx\eta\xi(t)\left[\dot{L}(t)\right]^{2}\) [22]. The definition of the correlation length is straightforward in regimes I, II and IV, where it is equal to the blob size. However, the correlation length in regimes III and V needs more consideration. As an estimate, it is possible to define the correlation length equal to the persistence length and the Odijk length in regimes III and V, respectively.
Further simulations are needed to check the formalism more exactly. On the other hand, it would be interesting to extend the present formalism to study the packaging dynamics of a polymer with finite persistence length into a cavity. This problem has already been studied by using computer simulations [33].
2304.10588
Detecting Worker Attention Lapses in Human-Robot Interaction: An Eye Tracking and Multimodal Sensing Study
The advent of industrial robotics and autonomous systems enables human-robot collaboration at a massive scale. However, current industrial robots are restrained from co-working with humans in close proximity due to their inability to interpret human agents' attention. Human attention study is non-trivial since it involves multiple aspects of the mind: perception, memory, problem solving, and consciousness. Human attention lapses are particularly problematic and potentially catastrophic in industrial workplaces, from assembling electronics to operating machines. Attention is indeed complex and cannot be easily measured with single-modality sensors. Eye state, head pose, posture, and manifold environmental stimuli could all play a part in attention lapses. To this end, we propose a pipeline to annotate multimodal datasets for human attention tracking, including eye tracking, fixation detection, third-person surveillance camera, and sound. We produce a pilot dataset containing two fully annotated phone assembly sequences in a realistic manufacturing environment. We evaluate existing fatigue and drowsiness prediction methods for attention lapse detection. Experimental results show that human attention lapses in production scenarios are more subtle and imperceptible than well-studied fatigue and drowsiness.
Zhuangzhuang Dai, Jinha Park, Aleksandra Kaszowska, Chen Li
2023-04-20T18:20:26Z
http://arxiv.org/abs/2304.10588v1
# Detecting Worker Attention Lapses in Human-Robot Interaction: An Eye Tracking and Multimodal Sensing Study

###### Abstract

The advent of industrial robotics and autonomous systems enables human-robot collaboration at a massive scale. However, current industrial robots are restrained from co-working with humans in close proximity due to their inability to interpret human agents' attention. Human attention study is non-trivial since it involves multiple aspects of the mind: perception, memory, problem solving, and consciousness. Human attention lapses are particularly problematic and potentially catastrophic in industrial workplaces, from assembling electronics to operating machines. Attention is indeed complex and cannot be easily measured with single-modality sensors. Eye state, head pose, posture, and manifold environmental stimuli could all play a part in attention lapses. To this end, we propose a pipeline to annotate multimodal datasets for human attention tracking, including eye tracking, fixation detection, third-person surveillance camera, and sound. We produce a pilot dataset containing two fully annotated phone assembly sequences in a realistic manufacturing environment. We evaluate existing fatigue and drowsiness prediction methods for attention lapse detection. Experimental results show that human attention lapses in production scenarios are more subtle and imperceptible than well-studied fatigue and drowsiness.

Human attention monitoring, eye tracking, industrial robots, Human-Robot Interaction

## I Introduction

Human attention is a cognitive process that involves a combination of physiological, psychological, and environmental attributes. Human attention lapses, characterized by a loss of focus or distraction, can significantly compromise task performance and bring about hazards for workers. Costs can be unbearably high when attention lapses go undetected in circumstances such as assembling electronics and operating machines. In particular, the detrimental impact of attention lapses can result in errors and safety violations, and creates significant impediments for industry to deploy autonomous robotic systems in the context of Industry 5.0. Attention is a state in which one's cognitive resources are focused on certain aspects of the environment, and the operator is ready to respond to environmental stimuli. It can be inferred via physiological and behavioral indicators, but it is not a physical entity to be measured directly. Attention lapses manifest differently across individuals due to different cognitive abilities, emotional states, and environment variations. Altogether, these make attention lapses difficult to monitor and to measure accurately. In order to track attention, our intuition is that many sensor modalities should work together to provide useful information. Given the _multimodal_ nature of attention, it can manifest in different ways, such as eye blinking, looking away, moving the head or arm, speaking, or signs of drowsiness [26]. In light of thriving multi-modality sensing systems, the combination of data on gaze, head pose, and audio can provide rich information for the study of human attention and attention lapses. Measuring attention lapses multimodally and automatically underlies a key aspect of promoting the perception systems of social robots as well as improving worker safety. Social robots will benefit tremendously from better comprehending humans' attention and disengagement, so as to make more appropriate interactions.
A multimodal sensing approach to attention lapse detection could assist in understanding the mechanisms of attention and in developing effective interventions to improve human-robot collaboration. As can be seen from Fig. 1, a worker who looks in the correct direction with normal head pose and facial expression could be either engaged in working or distracted. Subtle differences in the worker's focus make recognizing such attention lapses a challenge. Without eye tracking to reveal the human's cognitive state, it is almost impossible to achieve effective attention lapse detection. Nonetheless, relying solely on the gaze modality compromises reliability when there is occlusion of the pupil or visual degradation.

Fig. 1: An example of distraction (looking at a mobile device) at the workplace. Since the worker retains similar gaze, head pose, and posture in (a) dedicated and (b) distracted states, it is difficult to perceive such a subtle difference without eye tracking. A vision-based detector, e.g., OpenFace [1] (mid row), fails to contrast attention lapses with normal states.

To this end, we first curated a dataset of human operators conducting an assembly task in a realistic production environment. A Franka robot played an assistive role in handing over components to the human agent [14]. A Pupil Core eye tracker and a Microsoft Azure Kinect RGB-D camera with microphone are used for data collection. This setup allows multimodal and unobtrusive tracking of the human operator's gaze and egocentric world view, as well as head pose, body posture, and environmental stimuli from a third-person perspective [5]. In order to streamline data annotation and pre-processing, we developed a pipeline to label distracted attention states frame-by-frame and visualize the timelines. We also set up Apriltags [13] in the environment for head pose tracking. We used open-source tools [3, 12, 22] to perform eye state classification (incl. fixation and blink) and fixated object detection. This pilot dataset has been annotated frame-by-frame to mark onsets of human attention lapses and in-operation states. We also investigated the effect of applying existing fatigue and drowsiness detection methods [2, 7, 19] to the pilot dataset. These methods have been widely used in the driving safety domain. Unfortunately, they not only fall short in robustness for relying on a single modality, but are hardly usable in real-world industrial settings where attention lapses take manifold forms. The results reveal a remarkable gap in accurate human attention lapse detection to allow safe and trustworthy HRI for industry. In this work, our contribution is threefold: (1) We collected two pilot data sequences of multimodal human attention data in a realistic industrial environment for benchmarking of human attention lapse detection; (2) We developed a pipeline for labelling multimodal sensory data, namely eye tracking, surveillance camera, and auditory input; (3) We experimented with existing fatigue and drowsiness prediction methods and identified a significant gap of knowledge in human attention lapse detection. The rest of this paper is organized as follows. Section II reviews existing literature on attention lapse detection datasets, methods, and applications. Our proposed multimodal annotation pipeline and benchmarking dataset are detailed in Section III. Evaluation outcomes with existing methods are reported in Section IV. Section V summarizes future work.
## II Background ### _Human Attention Lapse Study_ Driving fatigue and drowsiness detection have been actively studied for their application in the driver safety domain. PERCLOS [7] is one of the most commonly used metrics for driving fatigue evaluation. An eye state tracking approach to PERCLOS is proposed in [8] to estimate the eyelid closure ratio with respect to fatigue level. Although eye closure is agreed to be directly correlated with fatigue, human attention lapses can be quite polymorphic. A driver's eyes looking away from the normal direction could be a sign of distraction or fatigue. Head pose [1, 22], which is often associated with eye gaze, may also indicate attention shifting. Human attention lapses have far more complex causes than fatigue and drowsiness. Indeed, a person may be mind wandering whilst fixating on the right direction and objects. The fixation state of the eyes may not directly indicate _focused_ or _attention lapse_, but eye states are closely correlated with human attention in general [23]. According to human cognition and recognition studies [10], the following gaze patterns are strong indicators of attention lapse: long fixation on a single point, looking away, slow or irregular saccades, eye closure, and a high rate of blinking. Attention lapse is related to environmental distractors (auditory and visual), cumulative fatigue from doing repetitive work, and internal cognitive states such as motivation. A single-modality approach to human attention detection often falls short in accuracy and reliability [28]. Our hypothesis is that attention lapse is multimodal; that is, a human's internal cognitive state and environmental stimuli, together with eye state, facial expression, and posture, collectively enable trustworthy attention lapse detection. Existing driving fatigue detectors cannot be generalized to diverse applications [28, 15] in the sense that a driver sits in a confined position with a relatively simple but effective metric of "looking ahead". In this paper, we look into an open and more challenging scenario of assembly tasks in manufacturing. ### _Existing Datasets and Methods_ In the domain of driver fatigue research, detecting drowsiness and distraction using multimodal behavioural features [20] enables applicable solutions to enhance driver safety. Transferring this idea to HRI could mean a big step forward for workers' safety in automated production lines [4]. According to [15], the combination of eye features (gaze, blinking, and fixation states) and head pose achieves reliable estimations. Methods based on a single modality turn out to be susceptible to occlusions and hard to generalize. In [2], the authors proposed handcrafted thresholds of 30\({}^{\circ}\) nodding and 200 frames (about 6.7 s) of no change in gaze direction to detect drowsiness, which cannot produce satisfactory reliability. Yawning makes an effective indicator of fatigue but becomes less useful in distraction detection [28]. Static human attention tracking within a fixed plane (e.g., looking at a flat screen) has been well studied, such as the SNAG dataset [25] and the human attention dataset in image captioning [9]. Recently, more focus has been drawn to studying human attention in action, such as assembling a camping tent [23] and driver monitoring [8]. Modern eye trackers achieve fairly high accuracy, precision, and effective frequency [11] even when wearers perform substantial movements.
Nevertheless, eye tracking in unrestrained settings, with issues such as lost track of the pupil and long eye closures, will inevitably affect data quality in accuracy and precision [18]. Furthermore, robust eye state tracking entails high-fidelity visual monitoring from third-person views [6]. This deteriorates the credibility of predicting attention by eye tracking only. It has been widely acknowledged that environmental noise, such as that in industrial workplaces, has a profound impact on human cognition [24]. Auditory stimuli are also common external stimuli that draw human attention and arouse distraction [16]. We are thus motivated to collect a multimodal human attention tracking dataset tailored for cooperation between human agents and robot manipulators in industrial settings. As aforementioned, the human's eye states, head pose, body posture, and sound stimuli shall all be synchronized and recorded. The following sections expand on our data collection campaign, annotation pipeline, and evaluation results. ## III Method We collected a pilot dataset of a human worker doing a dummy phone assembly task with the aid of a pre-programmed robot manipulator. Multi-modality sensors, namely an RGB-D camera with microphone (Azure Kinect DK) and an eye tracker with an egocentric world camera (Pupil Core), are deployed, with multiple artificial distractors injected during the experiment. An automated annotation pipeline is proposed to streamline the labelling of multiple primitives for human attention lapse research. We also utilized pre-trained object detectors [21] for fixated object recognition and a pre-trained head pose detector [22] for head pose calibration. ### _Pilot Dataset_ The data collection was conducted in the Aalborg University 5G Smart Production Lab [17]. It provides a realistic manufacturing environment presenting real-world industrial scenarios. A Franka robot has been pre-programmed to retrieve five components of a dummy phone, shown in Fig. 2(D), and hand them over by releasing the components ahead of the human operator's right hand. The release takes place at slightly varying locations, which requires the human's attention to catch the components. Note these five dummy components (one front cover, one back cover, one printed circuit board, and two fuses) are made of plastic and would cause no harm if caught unsuccessfully. A software-based stop command and an emergency button are placed next to the human participant for safety. A human experimenter was asked to perform phone assembly tasks while multimodal data of gaze, vision, speech, and environmental stimuli from multiple angles were collected. During the task, the dummy phone parts placed on a flat worktop are picked up by the Franka robot, and the human participant was expected to put these parts together, as shown in Fig. 2(A). We experimented with a Pupil Core eye tracker, as shown in Fig. 2(B). Pupil Core offers binocular eye tracking at 200 Hz (the highest among mass-marketed wearable and mobile products) and a scene camera that shoots a world view of \(155^{\circ}/85^{\circ}\) horizontal/vertical field of view with configurable resolution and frame rate [12]. The eye cameras take infrared-illuminated images and detect the pupil area with a robust and efficient algorithm and multiple noise filters (an average gaze detection accuracy of 0.6\({}^{\circ}\) of visual angle and a processing latency of 45 ms).
Pupil Core stands out for its light-weight design, minimized visual obstruction, and ability to accommodate varying facial geometries in comparison to the SMI ETG2 60 or Tobii G-series [11]. More importantly, the open-source software of Pupil Core allows customization of the data collection, processing, and labelling processes. Data streaming, blink detection, and head pose tracking modules can be integrated as plugins with ease. Eye tracking enables detecting internal human attention in a direct and non-obstructive manner [13]. A Microsoft Azure Kinect DK with microphone array was used for third-person perspective recording. This kit contains an RGB-D camera with infrared sensing, a 7-microphone array, and integrated body pose tracking backends. We used a Lenovo laptop with an Intel i7 processor and the Ubuntu 18.04 operating system for data collection. All sensors are synchronized by the host laptop's local clock. QR codes are commonly used for head pose estimation in eye tracking practice. Following the method proposed in [27], we deployed multiple Apriltags [13] in the environment and utilized the post-hoc pose tracking algorithm of the Pupil Core software for head pose estimation. We deployed eight Apriltags from the _tag36h11_ family next to the assembly worktop, as shown in Fig. 2(C,D). Three of them are attached to the flat surface of the worktop to allow surface abstraction and gaze tracking in the Area of Interest (AOI).

Fig. 2: Visualization of the data collection environment and annotations. (A)(D) show the assembly worktop defined by QR codes and its 2D projection with gaze in post-hoc processing. Green dots are gaze projections; the yellow circle marks the fixation position. (B) shows the Pupil Core eye tracker. Head pose tracking results are shown in (C).

Fig. 3: Annotated in-operation and distracted states in phone assembly sequence 002 compared with classic estimation methods. Blue bars represent onsets of the worker's hands-on operation, which implies greater hazards upon robotic intervention. Red bars represent onsets when the worker is distracted. Magenta bars indicate eye fixations within the predefined surface (i.e., the _worktop_). Grey bars mark head movements with a high angular velocity. These events are annotated against temporal UTC timestamps of millisecond (msec) precision.

### _Annotation Pipeline_ To achieve multimodal attention detection, we developed a data annotation pipeline for efficient processing of the multimodal data. First, we recognized a list of significant primitives in human attention study, as detailed in Table I. Specifically, the eye state, gaze position, and fixation position can be extracted from the outputs of Pupil Core's software but require careful calibration, as errors are common in the default detectors. The fixated object is automatically annotated by a pre-trained object detector [21] with manual corrections. Head pose and surface tracking are derived with the aid of the eight Apriltags deployed next to the assembly platform. Of greatest interest to us, the distracted states and in-operation states are manually labelled frame-by-frame to identify subtle attention lapses. An example of the annotated sequence 002 is shown in Fig. 3. Note that ensuring ad-hoc occurrence of real-world distractors in an assembly task is critical for attention lapse study. We curated two pilot data sequences of a human participant assembling a dummy phone in a realistic production laboratory with manufacturing background noises, moving MiR robots, and workers around.
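To make the fixated-object annotation step concrete, the sketch below keeps only the detections whose bounding box lies near the current fixation point; the 15-pixel cut-off matches the rule described in the next section, while the function names and detection layout are our own illustrative choices rather than the authors' code.

```python
import numpy as np

def point_to_bbox_distance(point, bbox):
    """Distance (pixels) from a fixation point to the nearest edge of a
    bounding box; 0 if the point lies inside the box.
    point: (x, y); bbox: (x_min, y_min, x_max, y_max)."""
    x, y = point
    x_min, y_min, x_max, y_max = bbox
    dx = max(x_min - x, 0, x - x_max)
    dy = max(y_min - y, 0, y - y_max)
    return float(np.hypot(dx, dy))

def fixated_objects(fixation, detections, max_dist=15.0):
    """Keep only detections whose bounding box lies within `max_dist`
    pixels of the fixation point; farther detections are omitted."""
    return [d for d in detections
            if point_to_bbox_distance(fixation, d["bbox"]) <= max_dist]

# Example: one fixation against two detected objects.
dets = [{"label": "phone_cover", "bbox": (100, 100, 180, 160)},
        {"label": "fuse", "bbox": (400, 50, 430, 80)}]
print(fixated_objects((190, 130), dets))  # only 'phone_cover' survives
```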
According to K. Holmqvist and R. Andersson [10], the following distractors in industrial settings are identified: * Noise (background/sudden) * Co-workers (talk to "me"/talk to each other) * Malfunction (user mistake/equipment) * Fatigue (drowsiness/mind wandering) * Visual stimuli (environment/moving objects) * Multitasking * Motivation (task difficulty/repetitiveness/reward) During the pilot data collection, we introduced three of the above distractors due to constraints of the environment and safety concerns: clapping hands (sudden noise), walking co-workers at the scene (visual distraction), and looking at a personal smartphone (mind wandering). In the first clip (Sequence 002), we asked the experimenter to be open-minded to attention lapses and look toward distractors. In the second clip (Sequence 003), the experimenter was asked to stay as concentrated as possible, in which case the only distractor is _mind wandering_. The onsets of these events have been post-hoc annotated by checking multiple camera views and auditory recordings, as shown in Fig. 4. A FasterRCNN-based object detector [21] is leveraged to perform post-hoc fixated object recognition. We applied the object detector on every frame and used the fixation point as a filter to annotate only objects of attentive interest. If the fixation is more than 15 pixels away from an object's bounding box, the detected object is omitted, as the fixation lands far from the object. Errors are inevitable with such automated annotations. We further validated and corrected wrong labels frame-by-frame. ## IV Evaluation We experimented with existing single-modality methods on the pilot dataset. Classic fatigue and drowsiness detection modalities include the ratio of eye-closure state (PERCLOS) [7], blink frequency [19], gaze fixation state [10], and head pose tracking [2]; a minimal sketch of these baselines follows below. We examined their performance in determining attention lapses on phone assembly sequences 002 and 003. PERCLOS refers to the percentage of time spent in the closed-eye state within a given interval (30 seconds or 1 minute). For a fair comparison, we use the EM criterion, which counts the eye as closed at a 50% eyelid closure rate, over a 30 s sliding window to measure attention lapses on the phone assembly data sequences. According to [19], eye blink frequency drops below 10 per minute when sleepy. We adopted this threshold for blink-frequency-based drowsiness estimation in a 30 s sliding window. Workers are expected to pay attention to an AOI, such as a worktop, when focusing on specific tasks. This has been widely accepted as an important attribute in attention detection. We defined a worktop area using Pupil Core's software, as shown in Fig. 2(A). The entries to and exits from this AOI can be calculated, as shown in Fig. 3 "On-surface". I. Choi and Y. Kim [2] contended that nodding over 30\({}^{\circ}\) in a short time implies drowsiness. We extend this to head rotation around any axis for distraction detection, setting the short time window to 2 seconds. Given the densely deployed QR codes, post-hoc head poses can be reconstructed as seen in Fig. 2(C). Thus, we obtained distraction states as shown in Fig. 3 "Head Move". Overall, poor precision, recall, and F1 scores are seen on the pilot data sequences, as shown in Tables II and III. PERCLOS turns out to be the most reliable attention lapse detector when the subject is open-minded to distraction. Yet its precision and recall are far from satisfactory. Other single-modality methods demonstrate extremely low coherence with the true distraction states.
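For reference, here is a minimal sketch of these single-modality baselines, assuming per-frame eyelid-closure ratios, blink onset times, and head-orientation angles as inputs; the thresholds follow the values quoted above (EM/50% closure, 10 blinks per minute, 30° within 2 s), but the implementation details are ours, not the original authors'.

```python
import numpy as np

FPS = 30  # assumed analysis frame rate; the eye cameras themselves run at 200 Hz

def perclos(eyelid_closure, fps=FPS, window_s=30, closed_at=0.5):
    """PERCLOS over a sliding window: the fraction of frames in which the
    eyelid is at least `closed_at` (EM criterion: 50%) closed.
    `eyelid_closure`: per-frame closure ratio in [0, 1]."""
    closed = (np.asarray(eyelid_closure) >= closed_at).astype(float)
    win = int(window_s * fps)
    kernel = np.ones(win) / win
    return np.convolve(closed, kernel, mode="same")

def blink_rate_per_min(blink_onsets_s, t_s, window_s=30):
    """Blink frequency in a window centred at time t_s; below ~10 per
    minute suggests drowsiness [19]."""
    onsets = np.asarray(blink_onsets_s)
    n = np.sum((onsets >= t_s - window_s / 2) & (onsets < t_s + window_s / 2))
    return n * 60.0 / window_s

def head_move_flags(angles_deg, fps=FPS, window_s=2.0, thresh_deg=30.0):
    """Flag frames where head orientation changed by more than `thresh_deg`
    within the last `window_s` seconds (the extension of [2] to any axis)."""
    a = np.asarray(angles_deg, dtype=float)
    lag = int(window_s * fps)
    flags = np.zeros(len(a), dtype=bool)
    flags[lag:] = np.abs(a[lag:] - a[:-lag]) > thresh_deg
    return flags
```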
In Sequence 003, where the experimenter was mentally prepared, all fatigue and drowsiness methods fail. This backs up our judgement that attention lapse is multimodal and complex, suggesting that simple paradigms of monitoring eye blinks, fixation, or head pose cannot be trusted. Additionally, detection with a single sensor or few sensors may suffer in accuracy when occlusions or data discrepancies occur. For instance, we found some _long-lasting_ blinks, i.e., over 5 s, in Pupil Core's blink detection algorithm. This may be due to losing track of the pupils. We conclude from our pilot study that a multimodal and multi-sensory scheme for worker attention lapse detection is imperative in this research area. ## V Future Work In this work, we addressed the data primitives of human attention lapse study and created a pilot dataset to evaluate existing detection methods. The pilot dataset collected falls short in size to support data-driven algorithms for attention lapse detection, such as SVMs and deep neural networks. We plan to launch a large-scale data collection campaign with dozens of human participants accomplishing various assembly tasks.

Fig. 4: Visualization of audio distractors (1st row), the eye tracker's world view (2nd row), the surveillance RGB scene (3rd row), and the depth scene (4th row). The second frame (column) shows the worker being distracted by hand claps to her left. The fourth frame (column) illustrates distraction from talking to co-workers.

The current experimental setup supports one set of sensors on the eye tracker and the other set in front of the workspace. We will expand it with more sets of sensors from multiple angles in the complete dataset. We notice that the frame rates of the multimodal sensors differ: the eye cameras run at 200 Hz, whereas the world camera and Azure Kinect DK run at 30 Hz. Furthermore, there is an offset between the sound track and the RGB-D video. We will refine the data collection pipeline to achieve enhanced synchronisation. We also plan to integrate threads of automatic body posture tracking and hand gesture recognition into the annotation pipeline. ## Acknowledgment This work was funded by the 2022/23 Aston Pump Priming Scheme and the AAU Bridging Project "A Multimodal Attention Tracking In Human-robot Collaboration For Manufacturing Tasks." We thank the Aalborg 5G Smart Production Lab for supporting our data collection campaign.
2303.03235
On the Visualisation of Argumentation Graphs to Support Text Interpretation
The recent evolution of Natural Language Processing (NLP) methods, in particular in the field of argumentation mining, has the potential to transform the way we interact with text, supporting the interpretation and analysis of complex discourse and debates. Can a graphic visualisation of complex argumentation enable a more critical interpretation of the arguments? This study focuses on analysing the impact of argumentation graphs (AGs) compared with regular texts for supporting argument interpretation. We found that AGs outperformed the text-based baseline on the extrinsic metrics across most UEQ scales, as well as on the NASA-TLX workload in all dimensions except temporal and physical demand. The AG model was liked by a significantly larger number of participants, despite the fact that both the text-based and AG models yielded comparable outcomes in critical interpretation in terms of working memory and altering participants' decisions. The interpretation process involves reference to argumentation schemes (linked to critical questions (CQs)) in AGs. Interestingly, we found that the participants chose more CQs (using argument schemes in AGs) when they were less familiar with the argument topics, making AG schemes on some scales (relatively) supportive of the interpretation process. Therefore, AGs were considered to deliver a more critical approach to argument interpretation, especially with unfamiliar topics. Based on the study conducted with 25 participants, the AG demonstrated an overall positive effect on the argument interpretation process.
Hanadi Mardah, Oskar Wysocki, Markel Vigo, Andre Freitas
2023-03-06T15:51:30Z
http://arxiv.org/abs/2303.03235v1
# On the Visualisation of Argumentation Graphs to Support Text Interpretation ###### Abstract The recent evolution of Natural Language Processing (NLP) methods, in particular in the field of argumentation mining, has the potential to transform the way we interact with text, supporting the interpretation and analysis of complex discourse and debates. Can a graphic visualisation of complex argumentation enable a more critical interpretation of the arguments? This study focuses on analysing the impact of argumentation graphs (AGs) compared with regular texts for supporting argument interpretation. We found that AGs outperformed the text-based baseline on the extrinsic metrics across most UEQ scales, as well as on the NASA-TLX workload in all dimensions except temporal and physical demand. The AG model was liked by a significantly larger number of participants, despite the fact that both the text-based and AG models yielded comparable outcomes in critical interpretation in terms of working memory and altering participants' decisions. The interpretation process involves reference to argumentation schemes (linked to critical questions (CQs)) in AGs. Interestingly, we found that the participants chose more CQs (using argument schemes in AGs) when they were less familiar with the argument topics, making AG schemes on some scales (relatively) supportive of the interpretation process. Therefore, AGs were considered to deliver a more critical approach to argument interpretation, especially with unfamiliar topics. Based on the study conducted with 25 participants, the AG demonstrated an overall positive effect on the argument interpretation process. Argumentation structure; Walton's classification scheme; Argument mining; Argumentation graphs; Visualisation graphs; Graph interaction ## 1 Introduction Argumentative texts are a very prevalent type of discourse, consisting of the process of persuading others to accept a certain attitude or opinion. The term "argument" refers to the subset of sentences that includes two main types of discourse units (sentences): "premises" and "claims" (in some cases called "conclusions"). These sentences are linked to each other in feasible chains of reasons offered in support of or to attack a claim, or for deriving further claims [2]. An argument's final claim is usually called its conclusion: a proposition that can be either true or false, put forward by someone as true; see Supp. Fig. A.2. The argumentation structure is represented as a defeasible premises-conclusion structure and a set of CQs, which help in testing the strength and acceptability of the argumentation based on the weighting of the pro and con arguments. Automatically identifying an argument and its relevant components (claims and premises) in a text is called argument mining or argument extraction. This is a sub-area within natural language processing (NLP) that is rapidly evolving and has the support of multiple annotated corpora [3, 4]. With the support of these classifiers, arguments can be parsed to build a supporting graph representation, called an argument map: a visual representation of the logical argumentation structure (see Supp. example A.1). An argument map or argument graph enables a complex argument to be broken down into isolated argument units, which can potentially facilitate the interpretation and analysis of arguments [5]. This study investigates the impact of visual representations of argumentation graphs (AGs) on the interpretation of arguments.
In particular, it focuses on the impact of visualisation and, to a minor extent, of interaction strategies on AG structures. Furthermore, we aim to reduce the barriers that hinder users' understanding, analysis, and debating of various existing arguments in a critical manner by proposing an argumentation visualisation model, developed using a combination of the recent capabilities produced by argumentation mining. The study process framework (Fig. 1 (left)) is based on the framework of DAGGER2[1] and includes three levels: (i) annotating manually using argument mining; (ii) generating AGs; and (iii) visualising and interacting with AGs. The study provides a critical evaluation of: (1) whether structured AG representations facilitate the interpretation of arguments compared to their textual counterparts; and (2) the impact of the proposed AG on an argument interpretation task. Therefore, the study includes an empirical analysis, which compares a pure text-based model (**PTM**) as a baseline model with the proposed model, named the 2D argumentation graph model (**2D AGM**). See Fig. 1 (right) and more details in Fig. A.11.

Figure 1: (left) The study process, based on the DAGGER framework [1], and (right) the abstract empirical experiment workflow; more details in Fig. A.11.

Footnote 2: DAGGER is a tool for generating argumentation frameworks from Datalog+/- inconsistent knowledge bases.

Pragmatically, this study aims to answer the following research questions (RQs), which are divided into two areas: 1. **Argumentation Representation:** * RQ1: How do discourse-level graph representations support the interpretation of complex arguments and debates? * RQ2: Which categories and relationships positively affect the interpretation process? * RQ3: How do argumentation schemes and argumentation types support the construction of this representation? 2. **Visualisation and Interaction:** * RQ4: How does the proposed model perform on intrinsic and extrinsic evaluation metrics when compared to a pure text-based interpretation? * RQ5: What are the users' perceptions of their preference for, and the easiness, effectiveness, and usefulness of, the proposed solution? * RQ6: How could the familiarity, strength, and controversy of topics affect the critical argument interpretation process? In summary, the contributions of this study are the following: * We proposed a 2D AGM using a node-link diagram. This is different from and complements the existing tools in visualisation and spatial configuration [6, 7, 8, 9, 10, 11]. * As part of the methodology, we follow Walton's classification schemes to represent the AG, which comprises three primary node types (with different levels and colours) and four central relationships (with distinct colours). We then allow users to decide on the general argument topics based on the weights of the pros (+) and cons (-) of the central claims, using a visual scale tool. This is a contribution over other tools [10, 6] in terms of how the explicit function of final-decision mapping can support an intrinsic evaluation of argument components. Our hypothesis is that the 2D AGM will perform better in terms of the critical agreement evaluation of the argument (decision-making). * We compare and contrast the main features of the critical argument interpretation process, such as time and working memory, between the two models. The experiment embeds tasks involving recalling claims and selecting critical questions (CQs).
Using an A-BLEU score, we evaluate how similar the recalled claims are to the references in each model. Our hypothesis is that the 2D AGM will yield better outcomes in time and working memory compared to the PTM. To achieve the objective of this study, we include the following: * We conducted an empirical analysis with 25 users who interacted with two model representations: PTM and 2D AGM. Each model included an agreement evaluation, tasks, and two questionnaires (NASA-TLX and UEQ). The experiment included a survey about the topics' familiarity, controversy, and strength, and whether or not the user liked the model. It also included queries about the models and the experiment. * We performed an extrinsic evaluation in terms of workload and user perception via two questionnaires and several queries about the familiarity, controversy, and strength of the argument topics. The queries determine the impact of these characteristics on the interpretation process across the two models. This helps to support a rational evaluation and provides a more intuitive interpretation for users. Our hypothesis is that the 2D AGM will achieve superior workload and user perception compared to the PTM, because the fundamental reason for using AGs is to break down a complex argument and facilitate interpretation. AGs will have a greater influence on an argument's interpretation, depending on the familiarity of the topic, its controversy, and its strength, compared to the PTM. ## 2 Background and Literature Review Argumentation aims to justify conclusions and actions through rational and evidence-based beliefs, presenting disagreements, demonstrating truth, and understanding multiple perspectives [12]. Despite advances in automatic argument structuring and visualisation in Natural Language Processing (NLP) and Argumentation Mining (AM), there is a significant gap in understanding and quantifying to what extent this added structure, combined with visualisation methods, can support the human interpretation of complex arguments. For this reason, this study describes visualisation and interaction with argumentation graphs (VIAGs), which is a sub-theme of Computer-Supported Argumentation Visualisation (CSAV) [13]. This section reviews and summarises approaches reported in the literature related to the representation of arguments and argument mapping/visualisation models (Fig. 1 (left)). ### Argumentation Structure and Walton's Classification Schemes There is a challenge in categorising and structuring arguments [14, 2]. There is no universal definition of a 'structured argument', making it a fundamental challenge to find a classification system that identifies different argument patterns and produces an argument for various circumstances and purposes [15]. Several attempts have been made to classify schematic structures [16, 17, 18, 19, 20], but the most commonly used scheme is given by Walton [21]. According to a survey by Lawrence et al. [22], arguments can be evaluated based on critical questions corresponding to Walton's scheme. Can et al. [23] also found that Walton's model is useful in supporting argumentation and critical evaluation. Our study will use Walton's classification scheme [21, 2, 24] to represent argumentative text. ### Argument Mining and Annotation Methods Argument mining (AM) is a technique in text mining [25] that involves two main stages: identifying argument discourse and predicting argument relationships [14, 26].
AM can be done fully automated ([27, 28, 29, 30, 31, 20, 32, 33, 34], TARGET [35], MARGOT ARGs [36], or ConToVi [37]) or as a hybrid of manual and automated methods ([38, 39]). However, it lacks standardization [40], and full automation lacks accuracy due to its insufficient incorporation of semantics and domain knowledge. Consequently, experts currently rely on time-consuming manual annotations [41]. This study refines Walton's classification schemes [2] to analyse debate topics from "ProCon.org" [42] using the BRAT annotation tool [43] and the VINA framework, with a focus on enhancing textual interpretation in argument mining. ### Argument Mapping or Visualising Argument structure visualisation, also known as argument representation, mapping, or graphing, is a form of text visualisation [44]. Argument graphs can be displayed as node-link diagrams or trees, with node-link diagrams suitable for hierarchical structures [45, 21]. According to [12], boxes/nodes usually contain full, grammatical, declarative sentences: reasons/premises (pieces of evidence in support of some claims); claims (ideas claimed to be true); conclusions (final claims supported by reasons); or objections (pieces of evidence against conclusions). Relationships between nodes are shown with lines/arrows representing reasoning relationships. Therefore, this study constructs and represents Walton's argument structure as a tree (a node-link diagram) with three primary nodes: premises, claims, and conclusions, and four central relationships: attack, support, rebuttal, and undercutting [2], visualised as a 2D graph-based argument model rendered using the Unity framework3; a minimal sketch of this graph structure is given after the footnote below. Footnote 3: Unity [46] is a cross-platform game engine developed by Unity Technologies, first announced and released in June 2005 at the Apple Worldwide Developers Conference as a Mac OS X game engine. The engine has since been gradually extended to support a variety of desktop, mobile, console, and virtual reality platforms. Unity has many features that allow researchers to experiment easily and make visualisation more flexible: (1) Unity is a free, flexible, and accessible platform for anything from simple 2D to very complicated visualisation. (2) In Unity, you can peruse an enormous range of genres, subgenres, and styles. (3) Unity's realism capabilities are so powerful that many developers use it for tasks other than building games. (4) Unity supports two common programming languages, C# and JavaScript; anyone with a C# background can quickly jump into Unity and start scripting. (5) The UI can be used intuitively to add functionality to scripts, versus having to interact with a specific list of variables. (6) Unity has always prioritized building for any platform, and the selection (iOS, Android, PC, or consoles) is constantly expanding, with support for the Oculus Rift, HTC Vive, Microsoft HoloLens, and more. (7) Getting assets is easy: whatever Unity doesn't have built-in can be found in the Asset Store. (8) For future needs, Unity Services is a set of features that make it easier to build, share, and sell a project; Unity Cloud Build and Unity Collaborate are tools for backing up an entire project and building multiple versions without bogging down the system.
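As a rough illustration of this structure (not the authors' Unity implementation), the following sketch encodes the three node types and four relation types of the AG; all names are our own illustrative choices:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List, Optional

class NodeType(Enum):
    CONCLUSION = "conclusion"  # rendered purple in the 2D AGM
    CLAIM = "claim"            # rendered green
    PREMISE = "premise"        # rendered grey

class Relation(Enum):
    SUPPORT = "support"    # green link
    ATTACK = "attack"      # red link
    REBUT = "rebut"        # yellow link
    UNDERCUT = "undercut"  # pink link

@dataclass
class Node:
    node_id: str
    text: str
    node_type: NodeType
    scheme: Optional[str] = None  # Walton scheme, premises only, e.g. "expert_opinion"

@dataclass
class Edge:
    source: str        # node_id of the premise/claim doing the supporting/attacking
    target: str        # node_id of the claim/conclusion being targeted
    relation: Relation

@dataclass
class ArgumentGraph:
    nodes: Dict[str, Node] = field(default_factory=dict)
    edges: List[Edge] = field(default_factory=list)

    def add(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def link(self, source: str, target: str, relation: Relation) -> None:
        self.edges.append(Edge(source, target, relation))
```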
### Argument Analysis Methods Several AM systems include analytical methods to classify, compare, and measure argument structure, including **non-numerical analysis**. For argumentative users, for instance: Neva [47], Carneades [48], and the dataset of claims by [49]; for general users, [50], Kialo [51], and PEOPLES [52]; designed to help students and instructors, [8], AGORA-net [53], and F2F classroom [54]. Alternatively, systems are based on argument schemes [55], Carneades [56], and Parmenides [57]. **Argument structure numerical analysis** compares the argument structure based on the weights of the pros and cons, such as [6] and designVUE [10]. However, these evaluations are complex and lack assessment of argument components, with no supporting intrinsic or extrinsic evaluation. In contrast, this study assesses argument components and uses a simple numerical process (explicit functionality for mapping decisions) based on positive and negative weights. ### Evaluation of the Argument Structure Representation The study's central part evaluates two representations of arguments: a text-based argument and a 2D graph-based argument based on Walton's argument structure. Several previous studies have conducted a similar evaluation process in varying ways: the findings in [8] deepen understanding of how visualisations support logical reasoning; [9] provides a qualitative and quantitative evaluation of two visual representations, PCM and HTNM; a case study by [11] includes four systems implemented based on two cognitive thinking modes. Furthermore, [13] evaluates the presentation of reasoning in five different visualisation forms, and AVISE [7] allows an evaluation of real-world arguments. None of these studies evaluates based on Walton's argument structure, and they do not support intrinsic or extrinsic evaluation or explicit functionality for mapping decisions. In contrast, this study aims to show the significant differences and impacts of the 2D graph-based argument compared to the text-based argument on argument interpretation, agreement evaluation, performance, working memory, user perception, and user experience. The study assesses the arguments using a visual scale tool (as intrinsic evaluation) and assesses the argument models using two questionnaires, NASA-TLX and UEQ (as extrinsic evaluation). ## 3 Methods ### Development of a 2D Argumentation Graph (AG) Model - 2D AGM The target model is the 2D AGM, which is developed in the following three steps. **Review of the argumentation mining, corpus selection, and annotation.** Each argument text is considered a debate with two sides, pros and cons, with their respective claims and premises. The text is organised so that it begins with the main conclusions, and then each pro statement is followed by its con statement; this structure is followed for all the argument topics. Unstructured argument text was randomly selected and extracted from debate topics. The selected structure underlying the argumentation mining schema is based on Walton's classification scheme [2], which was revised and simplified by Milz [24]. We configured the Brat tool to define entities (claims, premises, and conclusions), relationships (support, attack, undercut, and rebuttal), and attributes (analogy, positiontoknow, expertopinion, popularopinion, causetoeffect, falsification, positiveconsequences, and negativeconsequences), which are based on argument schemes.
The argument schemes used are: argument from expert opinion; argument from position to know; argument from popular opinion; argument from analogy; argument from positive/negative consequences; argument from cause to effect; and argument from falsification. Indeed, each topic has two conclusions, one pro and one con. As a result, we annotated and linked all pro- and con-side claims, as well as their premises, to the conclusions on both sides. Each premise node is annotated, connected with its own attribute (scheme type) based on Walton's classification schemes, and linked to its claim or claims. The criteria for each scheme have been described by Walton [21]. This resulted in an annotated corpus, which was parsed to build the AG (see Fig. 1 (left)). **Building the AG structure.** These argument structures are mapped onto a tree-based structure [21] to represent the arguments [45, 12]. In particular, the tree-based structure uses a non-space-filling variation that can capture a hierarchical relationship and is represented as a node-link diagram. A node-link diagram is influenced by two factors, fan-out and depth, which makes it easier to render a tree that is not excessively deep. The study employed a node-link diagram to build the 2D AGM with some of the categories and relations of the AG structure. Therefore, each line in an annotated corpus provides either a node, a link, or an attribute associated with the type. A coloured rectangular shape represents a node: either a conclusion (purple), a claim (green), or a premise (grey) node. Each premise node is associated with an (attribute) argument scheme (e.g., argument from expert opinion, argument from analogy, etc.) [2]. Brightly coloured nodes mean that they have not been clicked yet, and a dark colour means that the participant has clicked on the node. Each coloured link represents the relationship between the nodes: either support (green), undercut (pink), rebut (yellow), or attack (red). In an AG, spatial and colour configurations vary with the supporting categories (see Figure 2). **Rendering the AG structure in the 2D AG model.** In Unity, we used a rectangular panel, a common rendering tool, to present an AG on the screen. On the left-hand side is the AG panel, which contains three or two main claims (tabs). Each tab contains two AG subtrees: the pro-AG subtree and its counter con-AG subtree. The AG sub-structure is rendered inside an asset called 'Scroll View' to allow a user to explore the graph by scrolling up and down, and/or left and right. For each **claim node** (green node), two Likert-scale sliders are used (for relevance and agreement), measuring the degree to which a claim is relevant to the conclusion and to what degree a user agrees with that claim. The score runs from 1 to 7, where 1 = disagree/not relevant and 7 = strongly agree/relevant. Each **premise node** (grey node) is associated with a scheme type and related CQs to test the strength and acceptability of the argument. On the right-hand side of the rectangular panel is the evaluation message for the whole argument, which is based on the aggregated score of all the claims (see Fig. A.6). Each time a user adds a score to a claim, the evaluation message is automatically updated. As a result, an arithmetic summation of the scores from all the claims gives the final decision on that argument (a minimal aggregation sketch is given below). Additionally, there is a hidden timer to compute the total time spent by a user on the model.
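The aggregation can be sketched as follows; note that the exact formula behind the evaluation message is not spelled out in the paper, so centring agreement at the neutral point and weighting by relevance are assumptions of this illustration:

```python
from typing import Dict, List, Tuple

def final_decision(claims: List[Dict]) -> Tuple[float, str]:
    """Aggregate per-claim Likert scores into the overall verdict shown in
    the evaluation message. Each claim dict carries 'side' ('pro' | 'con'),
    'agreement' (1-7) and 'relevance' (1-7). The paper states the decision
    is an arithmetic summation over all claims with pros (+) and cons (-);
    the centring and relevance weighting below are our assumptions."""
    total = 0.0
    for c in claims:
        signed = c["agreement"] - 4                 # map 1-7 to -3..+3 around neutral
        sign = 1.0 if c["side"] == "pro" else -1.0  # con claims count negatively
        total += sign * signed * (c["relevance"] / 7.0)
    if total > 0:
        verdict = "agrees with the pro side overall"
    elif total < 0:
        verdict = "agrees with the con side overall"
    else:
        verdict = "neutral"
    return total, verdict

# Example: two pro claims and one con claim the user disagrees with.
score, verdict = final_decision([
    {"side": "pro", "agreement": 6, "relevance": 7},
    {"side": "pro", "agreement": 5, "relevance": 4},
    {"side": "con", "agreement": 2, "relevance": 6},
])
print(round(score, 2), verdict)  # 4.29 agrees with the pro side overall
```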
### Development of the Pure Text-based Model - PTM The baseline model, pure text-based argumentation (PTM), is delivered by a simple rendering of the text (under the same set of debate topics) without the additional structure. In Unity, we used a rectangular panel to present the argument text on the screen. A component called 'TextMeshPro' inside another component called 'Scroll View' is used within the rectangular panel to allow users to scroll over the text. The text is rendered one page long on a dark background for easier reading. Additionally, there is a Likert-scale slider on the left with which the users can evaluate an argument on a scale from -3 to +3, where -3 indicates disagreement, +3 indicates agreement, and 0 means neutral. There is also a hidden timer to calculate the total time spent by a user (see Fig. 3).

Figure 2: The 2D Argumentation Graph Model (2D AGM) includes the AG panel (left), which contains three or two main claims (tabs). Each tab contains two AG subtrees: the pro-AG subtree and its counter con-AG subtree. A coloured rectangular shape represents a node: either a conclusion (purple), a claim (green), or a premise (grey) node. Each coloured link represents the relationship between the nodes: either support (green), undercut (pink), rebut (yellow), or attack (red). The right side includes the evaluation message for the overall weights of the relevance/agreement scores.

## 4 The Experimental Design The experiment was conducted with a within-subject design. Each participant is presented with two models: first the PTM, and then the 2D AGM; for each participant, two arguments are randomly selected from a set of 10 arguments (each from a different topic; see Supp. 8). In the first part of the experiment, the PTM is evaluated. Initially, the user is asked about the topic's perceived controversy and their familiarity with and initial position on the debate. Then, the user is presented with an argument essay in the text-based model, presented in the scrolled text view. They indicate their level of agreement with the argument using the Likert-scale controller (see Fig. 3). After that, the user is asked about the topic's strength and whether they liked the topic and liked the model. Then, the user is asked to write the memorised claims by filling in four text-field boxes with the claim statements (later referred to as Task 1: Recall Claims; see Fig. 4(b)). Finally, the user fills in two short close-ended structured questionnaires, NASA-TLX 8.1 and UEQ 8.2, to evaluate the workload, their experience, and their opinions (see Figs. A.9 and A.10). In the second part, the user interacts with the 2D AGM. As in the first part of the experiment, before the 2D AG is shown, the user is asked about their familiarity with the topic, its controversy level, and their initial position. The 2D AG is presented inside the scrolled view as a 2D coloured tree graph (Fig. 2). The user indicates their agreement with the argument and the relevance of each main claim, utilising the Likert-scale controller attached to each claim. The final score for agreement is produced as an aggregation of all the scores (see Sec. 3.1). In the next step, the user is asked to perform two tasks: Task 1: Recall Claims and Task 2: Select CQs. In Task 1, the same as for the PTM, the user writes the memorised claims in four text-field boxes with the claim statements. In Task 2, the user is presented with a set of CQs with check boxes (from Walton's book) and is asked to click those that were used in the argumentation in the 2D AGM (see Fig. 4(a)).
Finally, as for the PTM, the user completes two questionnaires: NASA-TLX and UEQ. For details regarding the experimental environment, refer to Supp. methods 8, NASA-TLX 8.1, and UEQ 8.2. For reproducibility purposes, the full experimental workflow is available in Supp. Fig. A.11. ### Participants, Metrics and Settings Data: 10 topics representing different arguments (see Supp. Sec. 8) were represented in the PTM and the 2D AGM. Participants: we recruited 25 postgraduate students from the University of [anonymised] School of Engineering and compensated participants with Amazon gift cards (\(\pounds 10\)/participant). Fifteen women and 10 men participated, with 23 participants being PhD students and 2 being master's students. The participants' ages ranged from 20 to 40. All of the participants had normal or corrected-to-normal vision, without a colour deficiency.

Figure 3: Pure text-based model (PTM, baseline model), which includes the argument text on the left-hand side. On the right, the user moves a Likert-scale slider to indicate their agreement with an argument (-3 to +3), where -3 indicates disagreement, +3 indicates agreement, and 0 means neutral.

Figure 4: Model tasks. Task 1: Recall Claims for (a) the 2D AGM and (b) the PTM includes four input fields to enter the recalled claims. Task 2: Select CQs for the 2D AGM.

Metrics: The experiment aims to evaluate the following aspects: 1. The user's initial position in the debate (before seeing the PTM or 2D AGM). 2. Level of agreement with the argument (as they read the PTM or 2D AGM); see Sections 3.2 and 3.1. 3. Time to complete the task (the total time spent on a model, in minutes and seconds). 4. Task 1: Working memory (short-term memory) (A-BLEU, calculated as a decimal number between 0 and 1; see Supp. Sec. 8.1). 5. Task 2: Selected CQs (the total number of selected CQs as an integer (0-11); see Supp. Sec. 8, 8.2 and 4). 6. Scales of participants' familiarity with the topic (prior knowledge about the topic), controversy (complex and wide-ranging debate among the public), strength (reasons - premise statements - to support the claims and conclusions), and preference (liking the topic), rated on a Likert scale of 1-7, where 1 is strongly negative and 7 is strongly positive. 7. Workload (NASA-TLX) in terms of temporal demand5, physical demand6, mental demand7, performance8, frustration9, and effort10. The result for each of them is an integer (0-500), and the total workload is a decimal (0-100); a workload-computation sketch follows this list. 8. User experience (UEQ) in terms of attractiveness11, perspicuity12, efficiency13, dependability14, and stimulation15. The rating result for each of them is a decimal number (-3 to 3). Footnote 5: Temporal Demand: How much time pressure did you feel due to the rate or pace at which the task elements occurred? Was the pace slow and leisurely or rapid and frantic? Footnote 6: Physical Demand: How much physical activity was required (e.g., pushing, pulling, turning, etc.)? Was the task easy or demanding? Slow or brisk? Slack or strenuous? Footnote 7: Mental Demand: How much mental and perceptual activity was required (e.g., thinking, searching, remembering, etc.)? Was the task easy or demanding? Simple or complex?
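For clarity, the sketch below shows how the standard NASA-TLX weighted procedure yields the ranges quoted in item 7: each subscale rating (0-100) is multiplied by its pairwise-comparison weight (0-5, with weights summing to 15), giving per-dimension scores in 0-500 and an overall workload in 0-100. The example values are illustrative, not the study's data.

```python
def nasa_tlx(ratings, weights):
    """NASA-TLX weighted workload. `ratings`: raw subscale ratings (0-100)
    for the six dimensions; `weights`: pairwise-comparison tallies (0-5)
    that sum to 15. Each weighted subscale falls in 0-500 and the overall
    score in 0-100, matching the ranges reported above."""
    assert sum(weights.values()) == 15, "weights come from 15 pairwise comparisons"
    weighted = {k: ratings[k] * weights[k] for k in ratings}  # 0-500 each
    overall = sum(weighted.values()) / 15.0                   # 0-100
    return weighted, overall

weighted, overall = nasa_tlx(
    ratings={"mental": 60, "physical": 10, "temporal": 40,
             "performance": 30, "effort": 50, "frustration": 20},
    weights={"mental": 5, "physical": 1, "temporal": 2,
             "performance": 3, "effort": 3, "frustration": 1})
print(weighted["mental"], round(overall, 2))  # 300 43.33
```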
Although some other studies, such as [6] and designVUE [10], compare the argument structure based on the weighting of the pros and cons, they carry out a complex and challenging numerical evaluation of the argumentative process, with no supporting argument agreement evaluation or argument model evaluation. In contrast, 2D AGM assesses the argument components and applies a simple numerical process (explicit functionality for mapping decisions) based on positive and negative weights.

\begin{table}
\begin{tabular}{l l l l l l}
 & & **PTM** & **2D AGM** & **p** & **Sig** \\ \hline
Time to interpret an argument & Median [Q1,Q3] & 11:13 min [5:57, 17:12] & 13:05 min [9:33, 23:13] & 0.015 & * \\ \hline
Task 1 ‘Recall Claims’ & The A-BLEU Scores, Median [Q1,Q3] & & & & \\ \hline
Opinion of participants (Likert Scale [1-7]) & Controversial Topics, n & 17 & 20 & & \\
 & Median [Q1,Q3] & 5 [3, 6] & 6 [4, 7] & 0.148 & ns \\
 & Familiarity Topics, n & 16 & 12 & & \\
 & Median [Q1,Q3] & 5 [3, 6] & 4 [2.5, 5] & 0.33 & ns \\
 & Strong Topics, n & 22 & 23 & & \\
 & Median [Q1,Q3] & 6 [5, 7] & 6 [5, 7] & 0.84 & ns \\
 & Liked Topic, n & 17 & 19 & & \\
 & Median [Q1,Q3] & 6 [4, 6.5] & 6 [4.5, 7] & 0.3 & ns \\
 & Liked Model, n & 9 & 21 & & \\
 & Median [Q1,Q3] & 3 [2, 5] & 6 [4, 7] & 0.005 & ** \\
 & Initial Position Change, n & 10 & 13 & - & - \\
 & Absolute magnitude of change & 3 [2.75, 4] & 2 [1.5, 3] & 0.29 & ns \\ \hline
UEQ (Rating [-3,3]) & Attractiveness, Median [Q1,Q3] & -0.5 [-1.25, 1.5] & 1 [0.63, 1.75] & 0.0026 & ** \\
 & Perspicuity, Median [Q1,Q3] & 1 [0, 1.67] & 1.33 [0.83, 1.67] & 0.059 & ns \\
 & Efficiency, Median [Q1,Q3] & 0.5 [-0.5, 2] & 1.5 [0.75, 2.25] & 0.036 & * \\
 & Dependability, Median [Q1,Q3] & 0.1 [-0.5, 1.5] & 1.5 [0, 1.75] & 0.113 & ns \\
 & Stimulation, Median [Q1,Q3] & 0.5 [-0.25, 0.88] & 0.75 [0.5, 1.13] & 0.035 & * \\ \hline
NASA TLX (Rating [0-500]) & Mental Demand, Median [Q1,Q3] & 300 [210, 387.5] & 120 [65, 205] & 0.0002 & *** \\
 & Physical Demand, Median [Q1,Q3] & 0 [0, 60] & 0 [0, 95] & 0.50 & ns \\
 & Temporal Demand, Median [Q1,Q3] & 100 [50, 180] & 100 [75, 210] & 0.32 & ns \\
 & Frustration, Median [Q1,Q3] & 100 [65, 232.5] & 130 [27.5, 232.5] & 0.91 & ns \\
 & Performance, Median [Q1,Q3] & 200 [90, 300] & 140 [72.5, 210] & 0.07 & ns \\
 & Effort, Median [Q1,Q3] & 120 [7.5, 197.5] & 65 [37.5, 167.5] & 0.51 & ns \\ \hline
NASA TLX (Rating [0-100]) & Overall Workload Scores, Median [Q1,Q3] & 64.33 [53.67, 69.5] & 48.33 [39.167, 62.67] & 0.01 & * \\ \hline
\end{tabular}
\end{table} Table 1: Comparisons between the PTM and 2D AGM. Sig - significance: *** for p<0.001, ** for p<0.01, * for p<0.05, ns (non-significant) for p>0.05; p values from the Wilcoxon signed-ranks test, and t-test for UEQ and NASA TLX.

Figure 5: The paired plot of the Liked Model scores for both models, based on the Likert-scale values (>3). Differences measured using the p-value from the Wilcoxon signed-ranks test; Sig - significance: *** for p < 0.001, ** for p < 0.01, * for p < 0.05, ns (non-significant) for p > 0.05.

Task 1 Recall Claims: Measuring the working memory of participants. 2D AGM and PTM have similar effects in relation to working memory. We compared the effect of the models on the working memory of participants via the A-BLEU score (see Table 1). Both models achieved low values of A-BLEU with no significant differences (p = 0.247): PTM median = 0.07, and 2D AGM median = 0.10.
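The paired, non-parametric comparisons reported in Table 1 can be illustrated with a short script. The sketch below is only a plausible reconstruction of this kind of analysis, not the study's code, and the score arrays are hypothetical placeholders standing in for per-participant ratings (one pair per participant, as in the 'Liked Model' row).

```python
# A minimal sketch of the paired comparison used throughout Table 1:
# a Wilcoxon signed-rank test on per-participant scores for the two models.
# The arrays below are hypothetical placeholders, not the study's data.
from scipy.stats import wilcoxon

ptm_scores = [3, 2, 5, 4, 3, 6, 2, 4]  # one rating per participant, PTM
agm_scores = [6, 4, 7, 5, 6, 7, 5, 7]  # the same participants' ratings, 2D AGM

stat, p = wilcoxon(ptm_scores, agm_scores)  # paired: index i = participant i
print(f"W = {stat}, p = {p:.3f}")           # p < 0.05 would be marked '*' in Table 1
```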
However, we argue that for 2D AGM the A-BLEU score can be improved with more interaction, practice, and familiarity with the design configuration. Therefore, the results support RQ1, RQ2, and RQ3 in terms of affecting working memory positively and supporting the interpretation process. This result has been reported in similar studies, such as [8], which found that organizing good pedagogical practices around collaborative argument visualization leads to meaningful improvements in students' analytical-reasoning skills; also, [11] found that narrative and graphical representation had no effect on the participants' performance in terms of constructing knowledge. For additional information, Table A.1 presents the candidates for both models that were awarded the maximum A-BLEU score values - one participant in PTM and one in 2D AGM. Table A.2 contains the references for the PTM model and Table A.3 contains the references for the 2D AGM model. The A-BLEU scores for all participants are shown in Fig. A.7.

### 2D AGM-Task 2: Number of selected CQs correlated with topic's familiarity

We found a negative correlation between the number of selected CQs and familiarity with the topic (measured using Spearman correlation, p = 0.038, r = -0.42, see Figure 7(a)). The more familiar the topic, the fewer CQs are selected, which relates to RQ1, RQ2, RQ3, and RQ6. This indicates that the scheme types (including CQs) help participants to be more critical in their interpretation, especially with familiar topics. However, there are no other significant correlations: between topic strength and the number of CQs (p = 0.99, Figure 7(c)), position preference (liking) and the number of CQs (p = 0.93, Figure 7(d)), or topic controversy and the number of CQs (p = 0.87, Figure 7(b)).

The User Experience Questionnaire UEQ (Measuring the users' perceptions): The 2D AGM significantly outperforms PTM for attractiveness, efficiency, and stimulation, also providing better perspicuity and dependability. The two models can be compared relatively easily through a statistical comparison of UEQ answers [59]. In this study, only 15 relevant items were selected from the 26 items to evaluate the quality of the interactive software models; for more details, see Fig. A.10 and 8.2. We found that the 2D AGM was significantly higher than PTM in terms of attractiveness (p = 0.0026), efficiency (p = 0.036), and stimulation (p = 0.035). However, we found that there are no significant differences between the two models in terms of perspicuity (p = 0.059) or dependability (p = 0.113), see Table A.5. This point relates to RQ4 and RQ5.

Figure 6: Agreement scores before and after interaction with the two models: (a) PTM, and (b) 2D AGM. Dashed lines connect scores for individual participants.

Figure 8: The linear regression of the topic's scales: (a) Familiarity, (b) Controversy, (c) Strength, and (d) Liking vs. the number of selected CQs. Each dot represents a single answer from a single participant. There is a negative correlation between familiarity and the number of CQs (p = 0.038, r = -0.42) but no other correlations.

Figure 7: The paired plot of A-BLEU scores in Task 1 ‘Recall Claims’ for both models. Dashed lines connect scores for individual participants. There is no significant difference (p = 0.247) between the scores achieved by participants in Task 1 in PTM and 2D AGM.

The User Experience Questionnaire UEQ (Comparing with the UEQ benchmark dataset): The 2D AGM provides an average level and PTM provides a below-average level compared to the UEQ benchmark.
Figure 9 shows the comparison of the results for the two evaluated models against the benchmark. This allows conclusions to be drawn about their relative quality, and they can be compared to other similar models. With regard to the benchmark, in terms of mean and median values (from -3 to 3), the UEQ analysis tool (using a t-test) showed that PTM has below-average results in perspicuity (0.79, 1) and efficiency (0.70, 0.5), and it has poor results in attractiveness (0.00, -0.5), dependability (0.56, 0.0), and stimulation (0.42, 0.5). On the other hand, 2D AGM has below-average results in attractiveness (1.18, 1), dependability (1.12, 1.5), and stimulation (0.83, 0.75), but it is above average in perspicuity (1.37, 1.33) and efficiency (1.48, 1.5); see Table A.5 and Table 1. Consequently, overall, 2D AGM provides an average level of user experience compared to the UEQ benchmark dataset, related to RQ4 and RQ5.

Completion Time Differences: PTM requires significantly less time to interpret an argument than 2D AGM. We found that the PTM requires significantly less time to interpret an argument than 2D AGM (median 11:13 min vs 13:05 min; Wilcoxon signed-rank test: p = 0.015; see Fig. 9(b) and Table 1). This relates to RQ4. However, 2D AGM introduces a newly proposed design and configuration (a new mechanism for interpreting an argument). Therefore, increasing interaction and addressing the familiarity gap with the model could reduce this performance gap. This result is consistent with [13], who found that model performance is influenced by familiarity and practical use.

### Time spent and data size/argument length: Time spent is proportional to argument length

We used linear regression between the time spent on PTM and 2D AGM and the length of the AG representation, measured as the number of words in PTM and the number of nodes in 2D AGM (see Fig. 10(a)). We observed that the time spent on PTM is proportional to the number of words in the argument (p=0.01, \(R^{2}=0.258\)), see Fig. 10(a). On the other hand, we observed no relation between the time spent on 2D AGM and the number of nodes in the graph representation (p=0.44, \(R^{2}=0.0263\)), see Fig. 10(b). This is related to RQ3 and RQ4.

Figure 10: The two paired plots: (a) a plot of NASA TLX overall workload for the two models. A significantly lower workload score (p = 0.01) was achieved by participants in the 2D AGM. (b) a plot of time spent on the two models. A significantly lower time spent (p = 0.015) was achieved by participants in the PTM. Dashed black lines connect scores for individual participants.

### Time spent and workload: No correlation between workload and time spent on both models

We used linear regression between the time spent on PTM and 2D AGM and the overall workload. We found that there is no significant correlation between the workload and the time spent on PTM (p=0.15, \(R^{2}=0.089\)) or 2D AGM (p=0.05, \(R^{2}=0.157\)), see Fig. 12. This is related to RQ3, RQ4, and RQ5.

### Feedback on the experiment: The participants found the experiment helpful\({}^{17}\) and valuable\({}^{18}\)

Footnote 17: It helps and supports the interpretation process and shows various arguments from two different perspectives.

Footnote 18: It has pieces of important, untried information and methods that are interesting to know.

Table 2 shows the overall feedback on the experiment from the participants.
The experiment was considered to be valuable (N=23, 92%), helpful (N=23, 92%), and liked (N=21, 84%) by the participants, assessed using a Likert-scale threshold (>4). These results relate to RQ1 - RQ6. See 6, which contains a sample of feedback comments.

## 6 Discussion

### Overall results discussion and synthesis

We set out to understand the impact of enriched argument representations on the process of critically interpreting argumentation and debates. As a representation paradigm, we focused on relations and categories that are prevalent in argument representation, whose extraction is being commoditised with the recent evolution of argument mining methods. As expected, the introduction of a new representation modality that competes with a current text-based representation implies an initial learning and additional interpretation overhead for the end-users. Differences in time spent by the participants on each model are the temporal measure of this overhead (p = 0.015, 5.8). In contrast, the structured modality (2D AGM) significantly outperformed the PTM in terms of attractiveness, stimulation, and efficiency, also providing better dependability and perspicuity (RQ5 - What are the users' perceptions of their preference for, and the easiness, effectiveness, and usefulness of the proposed solution?).

\begin{table}
\begin{tabular}{l|l}
\hline \hline
 & **Feedback,** Median [Q1,Q3] \\ \hline
I liked the Experiment & 6.0 [5, 7] \\
The experiment is helpful & 6.0 [5, 6] \\
The experiment investigates a valuable topic & 6.0 [5, 6] \\
\hline \hline
\end{tabular}
\end{table} Table 2: The feedback from 25 participants after completing the experiment. Answers on a Likert Scale [1-7]: 1 - strongly disagree, 7 - strongly agree.

Figure 12: The linear regression of the time spent and the workload on the two models. There is no significant correlation between the workload and the time spent on PTM (p=0.15) or 2D AGM (p=0.05).

Additionally, the 2D AGM significantly outperformed the PTM with regard to baseline user experience benchmarks, with a particular emphasis on perspicuity and efficiency. These results provide evidence that a visualisation model which elicits the structure of arguments can support the interpretation of complex arguments and debates (addressing RQ1 - Can discourse-level graph representations support the interpretation of complex arguments and debates? and RQ3 - Can argumentation schemes and argumentation types support the construction of this representation?). We also confirmed the inherent trade-off between the time required to interpret an argument and critical interpretation, hypothesising that the additional structure, while introducing a temporal overhead, forces users to be more critical and systematic about the quality of the argument (claim-premise structure, argumentation schemes and supporting CQs). Aiming to answer RQ4 (How does the proposed model perform for recall and workload metrics when compared to a pure text-based interpretation?), we found that the 2D AGM also required significantly less workload from the users than the PTM (p = 0.01, 5.7). Additionally, there was some level of evidence (although not statistically significant) that the graph-based representation may scale better for longer arguments when compared to PTM (p = 0.01, 5.9). Both models were found to have similar effects in relation to working memory, as measured by a recall task (p = 0.247, 5.3).
Being critical about an argument implies a direct dialogue with it: the ability to ask CQs about the parts of an argument. We also controlled for the relationship between the number of CQs instantiated in the context of an argument and potential biases and initial positions, such as the level of perceived familiarity with the topic and the level of controversy. The higher the perceived familiarity, the lower the number of CQs, although this does not explicitly affect the interpretation process (addressing RQ2 - Which categories and relationships positively affect the interpretation process?). Our findings suggest that the model configuration plays different roles in influencing participants' interpretation of the argument, especially with regard to change of position on the topic, and that it has more influence on unfamiliar argument topics. Thus, we begin by discussing the model's design implications and limitations, the results that were similar for both models, and the slight impact that the 2D AGM model had compared with PTM. Finally, we discuss the positive impacts of 2D AGM and the benefits of the experiment settings.

### 2D AGM design implications and limitations

In our findings, we saw that the model's performance (time spent) is driven in practice by the participants' understanding and by the complexity of the argument topic, yet it is hardly affected by the model's design. This indicates the potential need for some support for participants so that they can better understand the design and its related components (node, scheme, and relation types). A key message that needs to be conveyed to participants, then, is that reading time depends on the complexity of the topic. For instance, on the least complex topic, the participant who spent the shortest length of time on the task had not spent time reading all the premises, because a pre-agreement evaluation had been made. On the other hand, on the most complex topic, the slowest participant spent a long time reading all the premises, even though the pre-agreement evaluation had already been made; see Supp. Fig. A.8. Obviously, a significantly different amount of time was spent on two topics of different complexity. In other words, it is critical for participants to estimate their ability to read the argument and to understand the model design in order for the model to receive an honest evaluation. Realistically, there were other reasons for the long time spent on 2D AGM. These results are a normal and expected reaction to an initial interaction with an unfamiliar graph based on Walton's classification schemes. The graph has some particular specifications in its representation design and new configurations. The two levels of difficulty caused by the model's unfamiliarity and its configuration led to an increase in temporal and physical demand, see 5.8. Fortunately, neither the workload nor the data were significantly affected by time, see 5.9 and 5.10.

### Similar results to those of PTM

According to various metrics in our experiment, such as the A-BLEU score, we also discovered that the effects of the 2D AGM model are comparable to those of the PTM. Working memory is one of the key findings, and the A-BLEU score is used to measure it (see Result 6.3). It is difficult to see significant differences between the two models because the A-BLEU is based on the similarities between the candidates (recalled sentences) and the references, i.e., on the ability of the participants to recall the sentences, see A.1.
Although the 2D AGM model configurations help to break down the text into sentences, categorise them into different types, connect them as a relational tree, and attach premises with scheme types, the recalled sentences depend on the participant's ability to recall and are not influenced by the model design. For instance, the BLEU score on 2D AGM is 0.05 for the most complex topic, "Abortion", which is considerably lower than PTM (0.12). On another topic, low in complexity, "Tablets and textbooks", the 2D AGM model (0.2) is much higher than PTM (0.02). On the non-complex or simple topic, "Zoos", 2D AGM (0.09) is quite similar to PTM (0.095), see A.7.

### Slight impact of 2D AGM

In other findings, the participants were asked for their opinion before and after reading the topic in each model. This evaluation helps us to know whether the model has an effect on the participants' decisions or not. 2D AGM represents arguments in a different way than the PTM: the participants are able to see, for example, where the premises came from and how claims undercut or rebut other claims. The participants can increase their confidence in the premises by asking the attached CQs, or similar questions that test the acceptability of the premises' statements. Moreover, 2D AGM shows which side of the argument contains more supporting nodes. Furthermore, the agreement evaluation is based on the main individual claims on each side, and the total points then show the final decision, which is closer to the participant's decision than a one-time decision. The decision on 2D AGM is made through a cumulative process over the partial parts of an argument, rather than the one-time decision made on PTM, see 5.2. This makes the decision more reasonable, fair, and close to the nature of the debate process. Combined, these reasons mean that 2D AGM offers more alternative decisions than PTM. Thus, at this point, we can still see a slight impact of the 2D AGM design over the PTM.

### The performance of AGM compared with PTM

Our findings in terms of user experience, workload, and user perception (preference) showed that the 2D AGM outperformed PTM, with significant differences between the two models, see 5.1, 5.5, 5.6, and 5.7. These findings may contain bias that mainly influences the measurements but is unlikely to be a confounding variable for our main results regarding whether or not the model affects the user experience or workload. This is because all of our analyses were conducted with two standard questionnaires, which are based on several quantitative items and scales, see 9 and A.5. Additionally, our experimental tasks, agreement evaluation, and other queries had been designed in the same way, without any bias towards either model. Therefore, we believe that our results have a high level of validity and reliability, see A.9 and A.10.

### Only on 2D AGM: Number of selected CQs correlated to the topic's familiarity

The most essential point is the decision made by the reader and whether or not it is based on confidence and acceptability reasons. Thus, we observe that the argument or debate can be evaluated using other measurements that are natural assessment criteria for argument and debate, such as familiarity and the argument's representation. For instance, if you have two arguments with different levels of familiarity, then the reader will be more confident and assured about the decision made on the more familiar argument topic.
Similarly, the way an argument is read, and the supporting items or tools, can positively or negatively affect the decision on that argument. For example, they can help to make the reader more confident by showing the source of the premise statements or the majority of the reasons on each side. In this study, we can see the impact of the representation of the argument. It is one of the interesting results that the method of representation can make the reader more confident, even on unfamiliar topics, so the reader does not need to ask many CQs, see 5.4.

### Benefits of using the "Uni-Excel" plugin package

The project was shown to the participants on a desktop, and they shared access using Microsoft Teams; they had to have an anonymous account. The project embedded the "Uni-Excel" plugin package, which has excellent benefits and is useful for collecting, organising, and saving results during the experiment. Its main benefit is that if a technical issue occurs and interrupts the experiment, the participants do not need to start everything over again; they just pick up where they left off, because the results are collected after each frame, and this saves time. This happened to three or four participants, either due to internet connection issues or other unexpected disconnection issues.

## 7 Future Work

With the empirical evidence obtained in this study, we would like to conduct more confirmatory studies in the future to validate the results and examine their influence in different settings and representations, for different populations, and for different tasks. Indeed, we will investigate, with a more rigorous argument representation, how the types of argument schemes, which will be explicitly represented, affect the critical interpretation process. The study will base its findings on subtle, scientific, analytical, and spontaneous measurements of human interaction. Ultimately, we acknowledge that interpretation is a complex concept and a multidimensional construct. Though we attempted to measure interpretation using a set of metrics including NLP functions, rational tasks, and survey methods, future studies are needed to understand how to critically interpret a complex argument or debate.

## 8 Conclusions

As argumentation becomes more complex, forked, and expanded in everyday life, investigating how humans interpret an argument critically becomes increasingly important and requires effective ways of interpretation other than textual ones. This study investigated whether a structured representation of an argument (2D AGM) can improve interpretation in the context of argumentative discourse when contrasted with arguments in textual form. The study included an experiment that presented 10 argument topics in two models: PTM and a 2D AGM model. The evaluation process for these models was based on task-related constructs (time spent, working memory, effort, and workload) and user experience (twenty-five participants). The structured modality (2D AGM) significantly outperformed the text-based one (PTM) in terms of attractiveness, stimulation, and efficiency, also providing better perspicuity and dependability. These results provided evidence that a visualisation model which elicits the structure of arguments can support the interpretation of complex arguments and debates.
We also confirmed the inherent trade-off between time spent interpreting an argument (higher for 2D AGM when compared to a textual modality) and critical interpretation, hypothesising that the additional structure, while introducing a temporal overhead, forces users to be more critical and systematic when judging the quality of the argument. We also found that the 2D AGM implied significantly less workload for users when compared to the PTM. Additionally, there was some level of evidence that the structured model may scale better for longer arguments than the PTM. Both models were found to have similar effects in relation to working memory. Future work could involve gaining a better understanding of which categories within an argument scheme are most relevant for critical interpretation.

## Acknowledgements

We thank all the volunteers and participants, and all supporting staff, who provided helpful comments on this study and experiment. The authors gratefully acknowledge all people who worked with them.
2301.02752
Finite normal subgroups of strongly verbally closed groups
In the recent paper by A. A. Klyachko, V. Yu. Miroshnichenko, and A. Yu. Olshanskii, it is proven that the center of any finite strongly verbally closed group is its direct factor. One of the results of the current paper is the generalization of this nontrivial fact to the case of finite normal subgroups of any strongly verbally closed groups. It follows from this generalization that finitely generated nilpotent groups with nonabelian torsion subgroups are not strongly verbally closed.
Filipp D. Denissov
2023-01-07T00:15:49Z
http://arxiv.org/abs/2301.02752v2
# Finite normal subgroups of strongly verbally closed groups

###### Abstract

In the recent paper by A. A. Klyachko, V. Yu. Miroshnichenko, and A. Yu. Olshanskii, it is proven that the center of any finite strongly verbally closed group is its direct factor. One of the results of the current paper is the generalization of this nontrivial fact to the case of finite normal subgroups of any strongly verbally closed groups. It follows from this generalization that finitely generated nilpotent groups with nonabelian torsion subgroups are not strongly verbally closed.

## 1 Introduction

A subgroup \(H\) of a group \(G\) is called _verbally closed_ [14] if any equation of the form \[w(x_{1},x_{2},\dots,x_{n})=h,\text{ where }w\text{ is an element of the free group }F(x_{1},\dots,x_{n})\text{ and }h\in H,\] having solutions in \(G\) has a solution in \(H\). If each system of equations with coefficients from \(H\) \[\{w_{1}(x_{1},\dots)=1,\dots,w_{m}(x_{1},\dots)=1\},\text{ where }w_{i}\in H\,*\,F(x_{1},\dots,x_{n})\text{ (and }\,*\text{ means the free product)},\] having solutions in \(G\) has a solution in \(H\), then the subgroup \(H\) is called _algebraically closed_ in \(G\). Note that if the subgroup \(H\) is algebraically closed in the group \(G\), then it is verbally closed in \(G\). A group \(G\) is called _strongly verbally closed_ if it is algebraically closed in any group containing \(G\) as a verbally closed subgroup. Thus, the verbal closedness (as well as the algebraic closedness) is a property of a subgroup, while the strong verbal closedness is a property of an abstract group.

The class of strongly verbally closed groups is fairly wide. For example, it includes

* all abelian groups [11],
* all free groups [10],
* all virtually free groups containing no nontrivial finite normal subgroups [10],
* all groups decomposing nontrivially into a free product [11],
* fundamental groups of all connected surfaces except the Klein bottle [11],
* all finite groups with nonabelian monolith [12],
* the infinite dihedral group [13] and any finite dihedral group whose order is not divisible by 8 [14],
* all acylindrically hyperbolic groups with no nontrivial finite normal subgroups [2].

The class of non-strongly-verbally-closed groups is fairly wide too. Among such groups are the following:

* the already mentioned fundamental group of the Klein bottle [15],
* the discrete Heisenberg group [16],
* any finite group whose center is not its direct factor (in particular, any finite nonabelian nilpotent group) [16], [17], [18].

Proving the strong verbal closedness (as well as its absence) of a group is not easy. In [16], for example, a question is raised:

**Question 1**.: Does there exist a finitely generated nilpotent nonabelian strongly verbally closed group?

A negative answer to this question would yield a broad generalization of the last two examples of non-strongly-verbally-closed groups mentioned above. So far, we have managed to give a partial answer to this question. More precisely, we proved the absence of strong verbal closedness of finitely generated nilpotent groups with nonabelian torsion subgroups and of some finitely generated nilpotent nonabelian groups with abelian torsion subgroups.

A property that is stronger than the strong verbal closedness is the property of being a strong retract [10]. A group \(H\) is called a _strong retract_ if it is a retract of any group \(G\geqslant H\) from the variety generated by the group \(H\).
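As a quick check of how these notions interact (a standard observation, stated here for convenience rather than taken from the paper), every retract is algebraically closed: if \(\rho\colon G\to H\) is a retraction and \((g_{1},\dots,g_{n})\) is a solution in \(G\) of a system \(\{w_{i}=1\}\) with coefficients from \(H\), then \[w_{i}(\rho(g_{1}),\dots,\rho(g_{n}))=\rho\bigl(w_{i}(g_{1},\dots,g_{n})\bigr)=\rho(1)=1,\] since \(\rho\) fixes the coefficients from \(H\) occurring in each \(w_{i}\); hence \((\rho(g_{1}),\dots,\rho(g_{n}))\) is a solution in \(H\).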
Let us recall some terminology [20]:

* _the variety generated by a class of groups_ \(\mathcal{K}\) is the class of all groups satisfying all identities that hold in all groups from \(\mathcal{K}\),
* the variety generated by a group \(G\) is designated by \(\operatorname{\mathbf{var}}G\).

This gives rise to the following question from [10]:

**Question 2**.: What is an arbitrary finite strong retract?

In [10] some examples of strong retracts are provided. In the next section, we describe all the nilpotent strong retracts. Below we provide a brief list of **notation** we use. If \(x,y\) are elements of some group, then the symbol \([x,y]\) denotes their commutator \(x^{-1}y^{-1}xy\). The symbol \(\operatorname{ord}(x)\) denotes the order of an element \(x\) of a group \(G\). The center of a group \(G\) is denoted by \(Z(G)\), and its commutator subgroup is denoted by \(G^{\prime}\). The centralizer of a subset \(X\) of a group \(G\) is denoted by \(C(X)\). The symbol \(\langle\langle X\rangle\rangle\) stands for the normal closure of a subset \(X\) of a group \(G\) (that is, the intersection of all normal subgroups of \(G\) containing \(X\)). The free group with a basis \(X\) is denoted as \(F(X)\), or \(F_{n}\) in case \(X\) has \(n\in\mathbb{N}\) elements. The identical mapping from \(X\) to itself is denoted by \(id\). We use the symbol \(H\cong G\) to express the fact that the groups \(H\) and \(G\) are isomorphic. Finally, the symbol \(H\leqslant G\) denotes the fact that a group \(H\) is a subgroup of \(G\). The symbol \(H\unlhd G\) denotes the fact that \(H\) is a normal subgroup of \(G\).

The author is grateful to his supervisor Anton Alexandrovich Klyachko for the formulation of the problem and for valuable remarks during the work.

## 2 Nilpotent strong retracts

Note that in the case when \(G\) is an abelian group, \(H\leqslant G\) is its retract if and only if \(H\) is a direct summand of \(G\). It means that the property of being a strong retract for an abelian group \(G\) is equivalent to the property of \(G\) being a direct summand of any group \(H\in\operatorname{\mathbf{var}}G\) containing \(G\). For the further discussion, we need the description of all varieties of abelian groups (see [21], paragraph 18, exercise 7):

Varieties of abelian groups are precisely the following classes of groups: 1) the class of all abelian groups; 2) the classes of all abelian groups whose period divides \(n\in\mathbb{N}\).

Recall that the _period of a group_ \(G\) is the least number \(n\in\mathbb{N}\) such that \(x^{n}=1\) for any \(x\in G\). If such a number exists, then \(G\) is a group of _bounded period_. To begin with, consider the case when \(G\) is not a group of bounded period. Then, according to the description, \(\operatorname{\mathbf{var}}G\) is the class of all abelian groups. The following is true of divisible abelian groups (see, for example, [20]):

* If \(G\) is a divisible abelian group, and \(H\) is an abelian group such that \(G\leqslant H\), then \(G\) is a direct summand of \(H\).

Let us remind that a group \(G\) is called _divisible_ if for any \(g\in G\) and \(n\in\mathbb{N}\), the equation \(x^{n}=g\) has a solution in \(G\).

**Proposition 1**.: An abelian group \(G\) of unbounded period is a strong retract if and only if it is divisible.

**Proof.** Sufficiency follows from the fact provided above. Let \(G\) be an abelian group of unbounded period. Then, as it was noted earlier, \(\operatorname{\mathbf{var}}G\) is the class of all abelian groups.
In particular, \(\operatorname{\mathbf{var}}G\) contains a divisible group \(H\) containing \(G\)[20]. Though, if \(G\) is not divisible itself, it is not a direct summand of \(H\) (as direct summands of a divisible group are divisible themselves [20]), so \(G\) is not a strong retract. Let us move on to abelian groups of bounded period. The first Prufer theorem provides a complete description of these groups [20]: An abelian group \(G\) of bounded period \(d\) is a direct sum of primary cyclic groups, i.e. \(G\cong\bigoplus_{i\in I}{\mathbb{Z}}_{p_{i}^{k_{i}}}\), where \(p_{i}\) are prime numbers and \(k_{i}\) are natural numbers such that \(p_{i}^{k_{i}}|d\), \(i\in I\) (\(I\) is an index set). We need the following variation of the Zorn's lemma [10]: Let \(M\neq\varnothing\) be a partially ordered set. Suppose that every chain in \(M\) (a totally ordered subset of \(M\)) has an upper bound. Then \(M\) contains a maximal element. Now, we are ready to proceed with our description: **Proposition 2**.: An abelian group \(G\) of bounded period is a strong retract if and only if in its decomposition into the direct sum of primary cyclic groups, orders of any distinct direct summands are either equal or coprime: \[G\cong\bigoplus_{i=1}^{m}C_{p_{i}^{k_{i}}}(n_{i}),\text{ where }C_{p_{i}^{k_{i}}} (n_{i})\text{ is equal to the direct sum of }n_{i}\text{ copies of the group }{\mathbb{Z}}_{p_{i}^{k_{i}}},\] where all prime numbers \(p_{i}\) are distinct, \(m,k_{i}\in{\mathbb{N}}\), and \(n_{i}\) are some cardinal numbers. **Proof.** Suppose that \(G\) cannot be decomposed into such a direct sum. We may assume that \[G=\bigoplus_{i=1}^{m}\bigoplus_{j\in I_{i}}{\mathbb{Z}}_{p_{i}^{k_{j}}}, \tag{1}\] where \(m\in{\mathbb{N}}\), \(|I_{i}|=n_{i}\) and among \(k_{j}\), \(j\in I_{i}\) there are only finitely many different ones (because \(G\) is a group of bounded period) but there exists \(i\in\{1,\ldots,m\}\) such that for some \(j_{1},j_{2}\in I_{i}\), \(k_{j_{1}}\neq k_{j_{2}}\). Consider the group: \(H=\bigoplus_{i=1}^{m}C_{p_{i}^{s_{i}}}(n_{i})\), where \[s_{i}=\max\{k_{j}\ |\ {\mathbb{Z}}_{p_{i}^{k_{j}}}\text{ is a direct summand in the decomposition }(1)\},\ i=1,2,\ldots,m.\] Since both \(G\) and \(H\) are of the same period \(\prod_{i=1}^{m}s_{i}\), it follows from the description of abelian varieties that \(H\in\operatorname{\mathbf{var}}G\). Consider the injection \(f:G\to H\), which works on each direct summand from (1) as follows: let \(i\in\{1,\ldots,m\}\), \(j\in I_{i}\), \(f:{\mathbb{Z}}_{p_{i}^{k_{i}}}\hookrightarrow{\mathbb{Z}}_{p_{i}^{s_{i}}}\), where \({\mathbb{Z}}_{p_{i}^{s_{i}}}\) is the \(j\)th summand from the decomposition of \(C_{p_{i}^{s_{i}}}(n_{i})\) into the direct sum. Every direct summand from (1) is mapped into the corresponding direct summand of the decomposition of \(H\), so that the restriction of \(f\) to \({\mathbb{Z}}_{p_{i}^{k_{j}}}\) is a natural injection: if \(k_{j}=s_{i}\), then it is the identical map; otherwise it is a mapping to the subgroup of \({\mathbb{Z}}_{p_{i}^{s_{i}}}\) of the order \(p_{i}^{k_{j}}\). From the uniqueness of the decomposition of an abelian group of bounded period into the direct sum of primary cyclic groups [10], it follows that \(f(G)\) is not a direct summand of \(H\). Thus, \(G\) is not a strong retract. Now, suppose that \(G\) has the decomposition from the statement of the theorem. Let \(H\in\operatorname{\mathbf{var}}G\) and let \(f:G\hookrightarrow H\) be a monomorphism. 
As any monomorphism preserves the order of an element, the \(p_{i}\)th component of \(G\) is mapped into the \(p_{i}\)th component of \(H\) under \(f\), so it suffices to prove the theorem only for the case \(G=C_{p^{k}}(n)\), where \(p\) is prime, \(k\in{\mathbb{N}}\), and \(n\) is some cardinal number. Let us show that there exists such \(X\leqslant H\) that \(H=f(G)\oplus X\). In Zorn's lemma, take the set of all subgroups of \(H\) having trivial intersection with \(f(G)\) as \(M\): \[M=\{Y\leqslant H\ |\ Y\cap f(G)=\{0\}\}.\] Order on \(M\) is introduced as follows: for \(X,Y\in M\), \(X\leqslant Y\) if \(X\) is a subgroup of \(Y\). It can be verified directly that this is an order on \(M\). Set \(M\) is nonempty: \(\{0\}\in M\). Any chain \(\{Y_{\alpha}\}\subseteq M\) of subgroups having trivial intersection with \(f(G)\) is bounded by an element \(Y\in M\), where \(Y=\cup_{\alpha}Y_{\alpha}\). Consequently, Zorn's lemma is applicable, and \(M\) contains a maximal element \(X\): \(X\leqslant H\), \(X\cap f(G)=\{0\}\), and \(X\) is not a subgroup of any bigger (relatively to the order we introduced) subgroup satisfying this property. From \(X\cap f(G)=\{0\}\) it follows that \(f(G)+X=f(G)\oplus X\). It remains to prove that \(H=f(G)+X\). Let \(h\in H\). There exists such \(k\in\mathbb{N}\) that \(kh\in f(G)+X\). Indeed, otherwise \(\langle h\rangle\cap(f(G)+X)=\{0\}\), which means that \((\langle h\rangle+X)\cap f(G)=\{0\}\), leading to a contradiction with the maximality of \(X\). Let \(s\) be the least of such numbers \(k\). Without loss of generality, assume that \(s\) is prime or that \(s=1\) (otherwise, take a power of \(h\) instead of \(h\)). Two cases are possible: 1) \(s=p\). Then, \(ph=f(g)+x\) for some \(g\in G\), \(x\in X\). If \(g=pg_{1}\), \(g_{1}\in G\) (\(g_{1}\) may be equal to zero), then \(ph-f(pg_{1})=x\). However, from \(h-f(g_{1})\not\in X\) (as \(h\not\in f(G)+X\)) it can be obtained that \((X+\langle h-f(g_{1})\rangle)\cap f(G)=\{0\}\), which leads to a contradiction with the maximality of \(X\). Consequently, \(g\neq pg_{1}\) for any \(g_{1}\in G\). As \(g\neq 0\), \(\operatorname{ord}(g)=p^{k}\). Though, \(\operatorname{ord}(ph)=p^{r}<p^{k}\), so \(p^{r}(ph)=0=p^{r}(f(g))+p^{r}x\). As the sum \(f(G)+X\) is direct, \(p^{r}f(g)=p^{r}x=0\), which means that \(p^{r}g=0\), which is impossible. 2) \(s\neq p\). For abelian groups of period \(p\), the mapping \(g\mapsto sg\) is an automorphism, so, as \(sh=f(g)+x\) for some \(g\in G\), \(x\in X\), there exist such \(g_{1}\in G\), \(x_{1}\in X\) that \(g=sg_{1}\), \(x=sx_{1}\). Thus, \(s(h-f(g_{1})-x_{1})=0\). No nontrivial element of \(H\) has the order of \(s\), so \(h=f(g_{1})+x_{1}\). As a result, \(H=f(G)\oplus X\), and \(G\) is a strong retract. **Proposition 3**: **.** The center of a strong retract is its direct factor. **Proof.** Let \(G\) be a strong retract. The center of any group is a normal subgroup, so it suffices to prove that \(Z(G)\) is a retract of \(G\). Consider the central product of \(G\) with its copy \(\widetilde{G}\) with joined center: \[K=G\underset{Z(G)=Z(\widetilde{G})}{\times}\widetilde{G}=(G\times\widetilde{ G})/\{(g,g^{-1})|g\in Z(G)\}\in\operatorname{\mathbf{var}}G.\] The group \(\widetilde{G}\) is isomorphic to the group \(G\), so it is a strong retract too. Let \(\rho\) be a retraction of \(K\) to its subgroup \(\widetilde{G}\). From the fact that in the group \(K\), the group \(G\) commutes with the group \(\widetilde{G}\), we obtain \(\rho(G)\leqslant Z(G)\). 
By definition of the retraction, \(\rho(g)=g\) is true for any element \(g\in Z(G)\). Thus, the restriction of \(\rho\) to the subgroup \(G\) of the group \(K\) is the desired retraction to \(Z(G)\). The following simple proposition shows that consideration of nilpotent groups does not yield any new strong retracts: **Proposition 4**: **.** Nilpotent strong retract is an abelian group. **Proof.** Any nontrivial normal subgroup of a nilpotent group intersects the center of this group nontrivially (see [10]). From this fact and from the proposition 3, we obtain that any nilpotent strong retract is equal to its center. As a result, we proved the following theorem: **Nilpotent-strong-retract theorem.** Nilpotent strong retracts are precisely divisible abelian groups and abelian groups of bounded period in whose decomposition into the direct sum of primary cyclic groups, orders of any distinct direct summands are either equal or coprime. In the next paragraph we show that many nilpotent groups are not even strongly verbally closed. ## 3 Finite normal subgroups of strongly verbally closed groups We say that a group presentation \(\langle X\mid R\rangle\) is _finitely presented_ over a group presentation \(\langle Y\mid S\rangle\), if there exist such finite sets \(A\) and \(B\) that \(\langle X\mid R\rangle\cong\langle X^{\prime}\mid R^{\prime}\rangle\), where \(X^{\prime}=Y\cup A\), \(R^{\prime}=S\cup B\). The following lemma reveals that this definition is, in fact, a group property (which means it does not depend on the choice of a group presentation), so it makes sense to speak about the finite presentability of one group over the other group: **Lemma 1**: **.** Suppose that a group presentation \(\langle X\mid R\rangle\) is finitely presented over a group presentation \(\langle Y\mid S\rangle\) and \(\langle Y\mid S\rangle\cong\langle Y^{\prime}\mid S^{\prime}\rangle\). Then \(\langle X\mid R\rangle\) is finitely presented over \(\langle Y^{\prime}\mid S^{\prime}\rangle\). **Proof.** We may assume that \(X=Y\cup A\) and \(R=S\cup B\) for some finite sets \(A\) and \(B\). It is known (see, for example, [13]) that groups defined by group presentations \(\langle Y\mid S\rangle\) and \(\langle Y^{\prime}\mid S^{\prime}\rangle\) are isomorphic if and only if presentation \(\langle Y^{\prime}\mid S^{\prime}\rangle\) is obtained from presentation \(\langle Y\mid S\rangle\) by applying a finite number of _Tietze transformations_: * adding to the set \(S\) an arbitrary set \(T\subseteq\langle\langle S\rangle\rangle\unlhd F(Y)\) of its consequences, * adding to the set \(Y\) an arbitrary set \(\widetilde{Y}\) while adding to \(S\) a set \(\{\widetilde{y}=w_{\widetilde{y}}\mid\widetilde{y}\in\widetilde{Y},w_{ \widetilde{y}}\in F(Y)\}\), and their inverses. It is sufficient to prove the lemma only for the case, when \(\langle Y^{\prime}\mid S^{\prime}\rangle\) is obtained from \(\langle Y\mid S\rangle\) by applying one Tietze transformation. One can easily verify that in case of the first transformation, \(X^{\prime}=X\) and \(R^{\prime}=R\cup T\), while in case of the second transformation, \(X^{\prime}=X\cup\widetilde{Y}\) and \(R^{\prime}=R\cup\{\widetilde{y}=w_{\widetilde{y}}\mid\widetilde{y}\in \widetilde{Y},w_{\widetilde{y}}\in F(Y)\}\) provide the desired group presentation. 
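As a simple illustration of this definition (ours, not from the paper): for any group \(H=\langle Y\mid S\rangle\), the free product \(H\ast\mathbb{Z}\) is finitely presented over \(H\), since \[H\ast\mathbb{Z}=\langle Y\cup\{t\}\mid S\rangle,\] so one may take \(A=\{t\}\) and \(B=\varnothing\) in the definition; by Lemma 1, this conclusion does not depend on the chosen presentation of \(H\).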
By virtue of Lemma 1, the following definition may be introduced: A group \(G\) is _finitely presented_ over a group \(H\), if there exists such a presentation of \(G\) that it is finitely presented over any presentation of \(H\). **Lemma 2**.: Suppose that \(G\) contains a subgroup \(H\) and a finite normal subgroup \(N\) such that \(G/N\) is finitely presented over \(H/(H\cap N)\). Then \(G\) is finitely presented over \(H\). **Proof** (with minor changes) replicates the proof of the Hall theorem [11] about preservation of finite presentability of a group under extensions (see also [14]). Let \(G\) be a group, \(H=\langle X\mid R\rangle\leqslant G\), and \(N=\langle Y\mid S\rangle\unlhd G\) be its finite subgroup, where \(Y\) and \(S\) are finite sets. By condition of the lemma, the group \(G/N\) is finitely presented over \(H/(H\cap N)=\langle X\mid R\cup C\rangle\), where \(\langle\langle C\rangle\rangle=H\cap N\) and the set \(C\) is finite. Consequently \[G/N\cong\langle X\cup A\ |\ R\cup C\cup B\rangle,\] where sets \(A\) and \(B\) are finite. Let us construct a presentation of the group \(G\). As the set of generators, take \(\overline{X}\cup\overline{A}\cup\overline{Y}\), where sets \(\overline{X}\), \(\overline{A}\), \(\overline{Y}\) are in one-to-one correspondence with sets \(X\), \(A\), \(Y\) respectively. The sets \(R\), \(S\), \(C\), and \(B\) are in correspondence with the sets \(\overline{R}\), \(\overline{S}\), \(\overline{C}\), and \(\overline{B}\) respectively. As the set of defining relations, take the union of the following sets: \(\overline{R}\), \(\overline{S}\), \(\overline{C}_{1}=\{cw_{c}^{-1}\mid c\in\overline{C},w_{c}\in F(\overline{Y})\}\), \(\overline{B}_{1}=\{bw_{b}^{-1}\mid b\in\overline{B},w_{b}\in F(\overline{Y})\}\) (\(c\in\overline{C}\) and \(b\in\overline{B}\) are considered as words from \(F(\overline{X})\) and from \(F(\overline{X}\cup\overline{A})\) respectively), \(\overline{T}=\{a^{-1}yaw_{a,y}^{-1},\ aya^{-1}v_{a,y}^{-1}\mid\ a\in\overline{ A},y\in\overline{Y},\ w_{a,y},v_{a}\in F(\overline{Y})\}\): \[\widetilde{G}=\langle\overline{X}\cup\overline{A}\cup\overline{Y}\mid \overline{R}\cup\overline{S}\cup\overline{C}_{1}\cup\overline{B}_{1}\cup \overline{T}\rangle.\] Consider a surjective homomorphism \(\theta:\widetilde{G}\to G\), defined with the following bijections \(\overline{X}\to X\), \(\overline{A}\to A\), \(\overline{Y}\to Y\) on the generators (defining relations are mapped into true identities under such a map on generators, so such a homomorphism exists). The restriction \(\theta|_{K}:K\to N\) on the subgroup \(K=\langle\overline{Y}\rangle\leqslant\widetilde{G}\) is an isomorphism as all the relations in the alphabet \(\overline{Y}\) in \(\widetilde{G}\) are consequences of the defining relations \(\overline{S}\). Besides, \(K\unlhd\widetilde{G}\). Homomorphism \(\widetilde{\theta}:\widetilde{G}/K\to G/N\) generated by \(\theta\), is an isomorphism too. Now, let \(g\in\ker\theta\). Then \(gK\in\ker\widetilde{\theta}\), but \(\widetilde{\theta}\) is an isomorphism, so \(g\in K\). Finally, \(\theta|_{K}\) is an isomorphism, so \(g=1\). The following lemma provides a criterion for algebraic closedness of a subgroup \(H\) of a group \(G\) in case, when \(G\) is finitely presented over \(H\) (for similar propositions, refer to [13]): **Lemma 3**.: Suppose that \(H=\langle X\mid R\rangle\) is a subgroup of \(G\) and \(G\) is finitely presented over \(H\). 
The subgroup \(H\) is algebraically closed in \(G\) if and only if \(H\) is a retract of \(G\). **Proof.** Suppose \(H\) is algebraically closed in \(G\) and \(A=\{a_{1},\dots,a_{m}\}\), \(B=\{s_{1},\dots,s_{n}\}\) are the sets from the definition of finite presentability of \(G\) over \(H\). The relations \(s_{i}(a_{1},\dots,a_{m},X)=1\), \(i=1,\dots,n\) are corresponded to a system of equations with coefficients from \(H\): \[\begin{cases}s_{1}(t_{1},\dots,t_{m},X)=1\\ \dots\\ s_{n}(t_{1},\dots,t_{m},X)=1\end{cases}\] which, by condition, has a solution \(t_{1}=a_{1},\dots,t_{m}=a_{m}\). By virtue of algebraic closedness of \(H\) in \(G\), this system has a solution \(t_{1}=h_{1},\dots,t_{m}=h_{m}\) in \(H\). Mapping \(X\sqcup\{a_{1},\dots,a_{m}\}\to H,\,x\in X\mapsto x\) \(a_{i}\mapsto h_{i}\) extends to a surjective homomorphism \(\varphi:G\to H\), as defining relations of \(G\) are mapped into true identities under such a mapping of generators (note that \(R\) is the set of words in the alphabet \(X\)). This homomorphism is the desired retraction: let \(h\in H\), \(\ h=v(x_{1},\ldots,x_{r})\), \(x_{i}\in X\). Applying to this word the homomorphism \(\varphi\), we get: \(\varphi(h)=v(\varphi(x_{1}),\ldots,\varphi(x_{r}))=h\). Algebraic closedness of a subgroup \(H\) of a group \(G\) follows from retractness of \(H\) in \(G\) for every group \(G\)[13]. **Approximation lemma.** Let \(C\) be a finite elementary abelian \(p\)-group (where \(p\) is a prime number). For any \(k\in\mathbb{N}\), there exists \(t\geqslant k\) such that the direct product \(P=\times_{i=1}^{t}C_{i}\) of copies \(C_{i}\) of \(C\) contains a subgroup \(R\) invariant with respect to the diagonal action on \(P\) of the endomorphism algebra \(\operatorname{End}C\) with the following properties: 1. \(R\subseteq\bigcup\ker\rho_{j}\), where \(\rho_{j}:P\to C_{j}\), \(j=1,\ldots,t\) are the natural projections, 2. But \(R\cdot\times_{j\not\in J}C_{j}=P\) for any subset \(J\subseteq\{1,\ldots,t\}\) of cardinality \(|J|=k\), 3. Moreover, each such \(J\) is contained in a set \(J^{\prime}\supseteq J\) such that \(P=R\times(\times_{j\not\in J^{\prime}}C_{j})\); and there exist integers \(n_{ij}\in\mathbb{Z}\) such that the projection \(\pi:P\to\times_{j\not\in J^{\prime}}C_{j}\) with the kernel \(R\) acts as: \(C_{i}\ni c_{i}\mapsto\prod_{j\not\in J^{\prime}}c_{j}^{n_{ij}}\), where \(c_{j}\in C_{j}\) is the element corresponding to \(c_{i}\) under the isomorphism \(C_{i}\cong C\cong C_{j}\). The following theorem provides a generalization of the result from [12] about the center of a finite strongly verbally closed group. The proof is also analogical to the proof of that theorem, with the exception of some nuances. **Finite-normal-subgroup theorem.** Let \(H\) be a strongly verbally closed group. For any finite normal subgroup \(T\) of \(H\), for any abelian subgroup \(A\) of \(T\), normal in \(H\), it is true that \(Z(C_{T}(A))\) is a direct factor of \(C_{T}(A)\), and some complement is normal in \(H\). Here \(C_{T}(X)=C(X)\cap T\). **Proof.** Let \(H\) be such a group, and let \(L=C_{T}(A)\). It suffices, for each prime \(p\), to find a homomorphism \(\psi_{p}:L\to Z(L)\) commuting with the action \(H\curvearrowright L\) by conjugations (this action is well-defined as \(L\unlhd H\)) and injective on the \(p\)-component of the center \(Z_{p}(L)\) of \(L\). 
Then the homomorphism \(\psi:L\to Z(L)\), \(x\mapsto\prod_{p}\pi_{p}(\psi_{p}(x))\), where \(\pi_{p}:Z(L)\to Z_{p}(L)\) is the projection onto the \(p\)-component, is injective on \(Z(L)\), so its kernel is the desired complement \(D\) (normality of \(D\) in \(H\) follows from the fact that \(\psi\) commutes with \(H\curvearrowright L\)). Suppose that there are no such homomorphisms for some prime number \(p\), i.e. every homomorphism \(f:L\to Z(L)\) commuting with the action \(H\curvearrowright L\) is not injective on \(Z_{p}(L)\). Then it is not injective on the maximal elementary abelian \(p\)-subgroup \(C\leqslant Z_{p}(L)\) (it is finite as \(L\) is finite). Indeed, if \(x\in Z_{p}(L)\), \(x\neq 1\), is an element such that \(f(x)=1\), then, raising it to the appropriate power \(d\), we get \(f(x^{d})=1\) and \(x^{d}\in C\), \(x^{d}\neq 1\). Choose \(t\) by the approximation lemma applied to \(C\) (for some \(k\) to be specified later) and consider the fibered product of \(t\) copies of the group \(H\): \[Q=\{(h_{1},\ldots,h_{t})\ |\ h_{1}L=\cdots=h_{t}L\}\leqslant H^{t}.\] First of all, let us show that the subgroup \(R\leqslant C^{t}\leqslant Q\) from the approximation lemma is normal in \(Q\). The subgroup \(R\) is invariant under the diagonal action of the automorphisms \(\operatorname{Aut}C\leqslant\operatorname{End}C\). It remains to show that \(Q\) acts by conjugations on \(P=C^{t}\) diagonally. This follows from the next lemma:

**Lemma 4.** Let \(G\) be a group, and \(N\unlhd G\). If \(xC(N)=yC(N)\) for some \(x,y\in G\), then \(x\) and \(y\) act on \(N\) (by conjugations) identically.

**Proof.** From \(xC(N)=yC(N)\) it follows that for some \(c\in C(N)\), \(x=yc\). Then for \(n\in N\), we have: \[x^{-1}nx=c^{-1}(y^{-1}ny)c=y^{-1}ny.\] The last identity is true, as (due to normality) \(y^{-1}ny\in N\) and \(c\in C(N)\).

Let \(q=(q_{1},\ldots,q_{t})\in Q\), \(p=(p_{1},\ldots,p_{t})\in P\). Since \(q_{1}L=q_{2}L=\cdots=q_{t}L\), according to Lemma 4 we have \(q^{-1}pq=\widetilde{q}^{-1}p\widetilde{q}\), where \(\widetilde{q}=(q_{1},\ldots,q_{1})\). It means that the conjugation action of \(Q\) on \(P\) is diagonal. On the other hand, the diagonal action by conjugations induces an endomorphism of \(C^{t}\) (due to the normality of \(C\unlhd H\)), and \(R\) is invariant with respect to the diagonal action of such endomorphisms, leading to the normality of \(R\) in \(Q\).

Put \(G=Q/R\). First, let us show that \(H\) embeds into \(G\). The group \(H\) embeds into \(Q\) diagonally: \(h\mapsto(h,\ldots,h)\), \(h\in H\). This homomorphism serves as an embedding into \(G\) as well, as all projections of any nontrivial diagonal element of \(Q\) are nontrivial (and \(R\) is contained in the union of the kernels of these projections). Now, let us prove the verbal closedness of this diagonal subgroup (denote it as \(H\) too) in \(G\). Consider an equation \[w(x_{1},\ldots,x_{n})=(h,\ldots,h)\] having a solution in \(G\) and let \(\widetilde{x}_{1},\ldots,\widetilde{x}_{n}\) be a preimage (in \(Q\)) of a solution \(x_{1},\ldots,x_{n}\). Then (in \(Q\)): \[w(\widetilde{x}_{1},\ldots,\widetilde{x}_{n})=(hc_{1},\ldots,hc_{t}),\] where \((c_{1},\ldots,c_{t})\in R\). By the property 1) of the approximation lemma, \(c_{i}=1\) for some \(i\). It means that in \(H\) (the group itself), \(w(\widetilde{x}_{1}^{i},\ldots,\widetilde{x}_{n}^{i})=h\), where \(\widetilde{x}_{j}^{i}\) is the \(i\)th coordinate of the vector \(\widetilde{x}_{j}\), \(j=1,\ldots,n\).
Let us take \(y_{j}=(\widetilde{x}_{j}^{i},\ldots,\widetilde{x}_{j}^{i})\), \(j=1,\ldots,n\). Then in \(H\leqslant G\) the following is true: \[w(y_{1},\ldots,y_{n})=(h,\ldots,h),\] which proves the verbal closedness of \(H\) in \(G\). Let \(U\leqslant L\). We use the following notation: \[U_{i}:=\{(1,\ldots,1,u,1,\ldots,1)\ |\ u\in U\}\leqslant Q,\ i=1,\ldots,t\ (\text{the coordinate }u\text{ stands in the }i\text{th place}).\] It remains to prove that \(H\) is not algebraically closed in \(G\).

**Lemma 5**.: The group \(Q\) is finitely presented over its subgroup \(H\).

**Proof.** According to Lemma 2, it is sufficient to show that \(Q/(L_{1}\times\cdots\times L_{t})\) is finitely presented over \(H/\widetilde{L}\), where \(\widetilde{L}=\{(l,\ldots,l)\ |\ l\in L\}\). However, \(Q=H\cdot(L_{1}\times\cdots\times L_{t})\), so the statement we prove follows from this fact (see [13], theorem 4.2.4): Suppose that \(G\) is a group, \(F\) is its subgroup, and \(K\) is its normal subgroup. Then \((K\cdot F)/K\cong F/(F\cap K)\). Thus, the group \(Q/(L_{1}\times\cdots\times L_{t})\) is not just finitely presented over \(H/\widetilde{L}\) but is isomorphic to it.

From Lemma 3 and Lemma 5, it follows that it suffices to show that \(H\) is not a retract of \(G\). Let \(\rho:G\to H\) be a hypothetical retraction, and let \(\hat{\rho}:Q\to H\) be its composition with the natural epimorphism \(Q\to Q/R=G\). Henceforth, all subgroups and centralizers we refer to relate to \(Q\). Let us verify that \(\hat{\rho}(L_{i})\leqslant C_{T}(C_{T}(L))\leqslant L\) for every \(i\). First, we prove the left inclusion. Let \(h\in C_{T}(L)\). Then \(h\) commutes with every element from \(L\); consequently, \(h\), as an element of \(Q\), commutes with \(L_{i}\). Applying the retraction \(\hat{\rho}\) to this identity, we get that \(\hat{\rho}(h)\ (=h)\) commutes with the subgroup \(\hat{\rho}(L_{i})\), which (by the definition of the centralizer) proves the inclusion. The second inclusion follows from the fact that \(L=C_{T}(A)=C(A)\cap T\), which means that \[C_{T}(C_{T}(L))\leqslant C_{T}(A\cap T)=C_{T}(A)=L.\] The first inclusion here is true as \(C(L)\geqslant A\). The following equality is true as \(A\leqslant T\). On the other hand, for \(i\neq j\), the mutual commutator subgroup \([L_{i},L_{j}]\) is trivial (as in case \(i\) and \(j\) are different, \(L_{i}\) and \(L_{j}\) are contained in different components of the fibered product). It means that the image of this mutual commutator subgroup is trivial too: \([\hat{\rho}(L_{i}),\hat{\rho}(L_{j})]=\{1\}\). Consequently, \([L_{i},\prod_{j\neq i}L_{j}]=\{1\}\) and \([\hat{\rho}(L_{i}),\prod_{j\neq i}\hat{\rho}(L_{j})]=\{1\}\). If \(\hat{\rho}(L_{i})=\hat{\rho}(L_{l})\) for some \(i\neq l\), then (by virtue of well-known commutator identities) \([\hat{\rho}(L_{i}),\prod_{j}\hat{\rho}(L_{j})]=\{1\}\), which means that \(\hat{\rho}(L_{i})\leqslant C_{T}(L)\) (as \(L=\hat{\rho}(L)\leqslant\prod_{j}\hat{\rho}(L_{j})\)). Thereby, if for some different \(i\) and \(j\), \(\hat{\rho}(L_{i})=\hat{\rho}(L_{j})\), then \(\hat{\rho}(L_{i})\leqslant C_{T}(L)\). From here and from the inclusion we proved earlier, we get \(\hat{\rho}(L_{i})\leqslant L\cap C_{T}(L)=Z(L)\). Let us take \(k\) in the approximation lemma to be the number of all subgroups of \(T\), and let \(J\) be the set of all _exclusive_ numbers \(i\), namely such that for any \(l\neq i\), \(\hat{\rho}(L_{i})\neq\hat{\rho}(L_{l})\).
Since among \(\hat{\rho}(L_{i})\leqslant T\) there are no more than \(k\) different subgroups, \(|J|\leqslant k\). Thus, from property 3) of the approximation lemma, we have a decomposition: \[\times_{i=1}^{t}C_{i}=R\times(\times_{i\in I}C_{i}),\] where \(I\subseteq\{1,\ldots,t\}\setminus J\) is some set of non-exclusive elements. Again, according to property 3) of the approximation lemma, the projection \(\pi:\times_{i=1}^{t}C_{i}\to\times_{i\in I}C_{i}\) onto the second factor of this decomposition is defined by an integer matrix \((n_{ij})\), namely, for \(c_{i}\in C_{i}\), \(\pi:c_{i}\mapsto\prod_{j\in I}c_{j}^{n_{ij}}\), where \(c_{j}\) are the elements corresponding to \(c_{i}\) under the isomorphisms \(C_{i}\cong C\cong C_{j}\). This means that the restriction of \(\pi\) to \(C=\{(c,\ldots,c)\ |\ c\in C\leqslant H\}\) is defined by the formula: \[\hat{\pi}:(c,\ldots,c)\mapsto\prod_{j\in I}c_{j}^{m_{j}},\ m_{j}=\sum_{i}n_{ij}.\] Here \(c_{j}\) are the elements corresponding to \(c\) under the isomorphism \(C\cong C_{j}\). Then (as the \(i\in I\) are non-exclusive, we have \(\hat{\rho}(L_{i})\leqslant Z(L)\)) consider the composition: \[\Psi:C\leqslant Q\to Z(L),\ c\stackrel{\pi}{\mapsto}\prod_{j\in I}c_{j}^{m_{j}}\stackrel{\hat{\rho}}{\mapsto}\prod_{j\in I}\hat{\rho}(c_{j}^{m_{j}}).\] It extends to a homomorphism \(\Phi:L\to Z(L)\) defined by a similar formula: \[\Phi:g\mapsto\prod_{j\in I}\hat{\rho}(g_{j}^{m_{j}}),\] where \(g\in L\) and \(g_{j}\in L_{j}\) are the elements corresponding to \(g\in L\). Obviously, it is an extension of \(\Psi\) and a homomorphism, as for \(j\in I\), \(\hat{\rho}(L_{j})\leqslant Z(L)\) and the group \(Z(L)\) is abelian. This homomorphism commutes with the conjugation action of \(H\) on \(L\). Indeed, let \(g\in H\) and let \(\mathfrak{g}\) be the action of \(g\) on \(L\) by conjugation, namely, for \(x\in L\), \(\mathfrak{g}(x)=g^{-1}xg\). Let us show that \(\Phi\circ\mathfrak{g}=\mathfrak{g}\circ\Phi\). Let \(h\in L\). Then \(\Phi(\mathfrak{g}(h))=\prod_{j\in I}\hat{\rho}(g^{-1}h_{j}^{m_{j}}g)=\prod_{j\in I}g^{-1}\hat{\rho}(h_{j}^{m_{j}})g=\mathfrak{g}(\Phi(h))\). The penultimate identity holds, as \(\hat{\rho}\) is a retraction onto \(H\), so it acts identically on \(H\) itself. By the assumption made at the beginning, the kernel of this homomorphism has nontrivial intersection with \(C\): \(\ker\Phi\cap C\neq\{1\}\), so the restriction \(\Psi=\Phi|_{C}\) has a nontrivial kernel too. On the other hand, \(\Psi\) is the identical mapping, since \(\Psi=\hat{\rho}|_{C}\circ\pi|_{C}=\hat{\rho}|_{C}\circ\hat{\pi}=\hat{\rho}|_{C}\) (the last identity holds as \(\hat{\pi}\) is a projection "forgetting" the \(R\) coordinate, and \(\hat{\rho}(R)=\{1\}\), since \(\hat{\rho}\) is a composition of the natural homomorphism onto the quotient group \(Q/R\) and of the retraction to \(H\)) and \(\hat{\rho}|_{C}=id\), as \(\hat{\rho}\) is the retraction from \(Q\) to \(H\), so it acts identically on \(C\). The obtained contradiction completes the proof. Let us provide some corollaries of this theorem: **Corollary 1**.: Finitely generated nilpotent groups with nonabelian torsion subgroups are not strongly verbally closed. **Proof.** Let us take the torsion subgroup of such a group as \(T\) from the theorem, and the center of this torsion subgroup as \(A\neq\{1\}\). Since \(T\) is nilpotent and nonabelian, every nontrivial normal subgroup of \(T\) has a nontrivial intersection with \(A\) [13], so \(A\) is not a direct factor of \(T\) (a direct complement would be a nontrivial normal subgroup of \(T\) meeting \(A\) trivially). 
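For instance (an illustration of ours, not from the text), the group \(\mathbb{Z}\times Q_{8}\), where \(Q_{8}=\{\pm 1,\pm i,\pm j,\pm k\}\) is the quaternion group, is a finitely generated nilpotent group with nonabelian torsion subgroup \(Q_{8}\), so Corollary 1 applies; here \[A=Z(Q_{8})=\{\pm 1\},\qquad Q_{8}\neq A\times B\ \text{ for any }B\leqslant Q_{8},\] since every nontrivial subgroup of \(Q_{8}\) contains \(-1\), so a direct complement to \(A\) in \(Q_{8}\) cannot exist.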
**Corollary 2** [12].: A finite group whose center is not a direct factor is not strongly verbally closed. This theorem does not cover the case of finitely generated nilpotent nonabelian groups with abelian torsion subgroups, and it is still unknown whether there are strongly verbally closed groups among such groups. So far, we can provide only a partial answer to this question (see the first proposition of the following paragraph). ## 4 Nilpotent non-strongly-verbally-closed groups Recall that _the discrete Heisenberg group_ is the free nilpotent group of nilpotency class two with two free generators. It can easily be verified that this group admits a faithful representation in the group of upper triangular matrices of size \(3\times 3\). **Proposition 5**.: Let \(H\) be the discrete Heisenberg group with \(a\) and \(b\) being its free generators and \(N\) being its subgroup: \[N=\langle\langle a^{\alpha},[a,b]^{n}\rangle\rangle,\ \alpha,n\geqslant 0.\] The group \(G=H/N\) is strongly verbally closed if and only if \(\gcd(\alpha,n)=1\). **Proof.** Let \(T(G)\) be the torsion subgroup of \(G\). The center of the group \(H\) is equal to its commutator subgroup, and it is isomorphic to the infinite cyclic group. As was said earlier (refer to the proof of Proposition 4), any nontrivial normal subgroup of a nilpotent group intersects its center nontrivially. Thus, if \(T(G)=\{1\}\), then either \(G=H\) is the discrete Heisenberg group, or \(G\) is abelian. Non-strong-verbal-closedness of \(H\) was proved in [13], and the strong verbal closedness of abelian groups was proved in [14]. The case of \(T(G)=\{1\}\) corresponds to the cases \(\alpha=0\), \(n=0\) and \(\alpha=0\), \(n=1\). If \(\gcd(\alpha,n)=1\), then, once again, \(G\) is abelian, since \([a,b]^{\alpha}=[a^{\alpha},b]\in N\); consequently, it is strongly verbally closed. Consider the case when \(\gcd(\alpha,n)=d\neq 1\). Without loss of generality, we may assume that \(\alpha\) and \(n\) are the least non-negative numbers such that \(a^{\alpha}\in N\), \([a,b]^{n}\in N\). Consider the central product of \(G\) with its copy \(\widetilde{G}\) with joined commutator subgroup: \[K=G\underset{G^{\prime}=\widetilde{G^{\prime}}}{\times}\widetilde{G}=(G\times\widetilde{G})/\{(c,c^{-1})\ |\ c\in G^{\prime}\}.\] The group \(G\) is not algebraically closed in \(K\), since \(G\) is not a retract of \(K\). Indeed, let \(\rho\) be a hypothetical retraction. The group \(G\) commutes with \(\widetilde{G}\) in \(K\), so \(\rho(\widetilde{G})\leqslant Z(G)\) and \(\rho(\widetilde{G^{\prime}})=\{1\}\), which leads to a contradiction with the definition of retraction. However, \(G\) is verbally closed in \(K\). Let \(w\in F(t_{1},\ldots,t_{s})\) be some word and \[w((h_{1}N,h_{1}^{\prime}N),\ldots,(h_{s}N,h_{s}^{\prime}N))=(hN,N)\] for some \(hN,h_{i}N\in G\), \(h_{i}^{\prime}N\in\widetilde{G}\). Then, for some \(cN\in G^{\prime}\), the following holds: \[\begin{cases}w(h_{1}^{\prime},\ldots,h_{s}^{\prime})N=cN\\ w(h_{1},\ldots,h_{s})N=hc^{-1}N\end{cases}\] By an automorphism of the free group, the word \(w\) can be reduced to a _normal form_ [13]: \(w(t_{1},\ldots,t_{s})=t_{1}^{m}w^{\prime}(t_{1},\ldots,t_{s})\), where \(m\in\mathbb{N}\), \(w^{\prime}\in F_{s}^{\prime}\). From the first equation, we get \(cN\in G^{\prime}\cap\varphi(G^{s})\), where \(\varphi:G^{s}\to G\), \((g_{1},\ldots,g_{s})\mapsto w(g_{1},\ldots,g_{s})\), is a verbal mapping. 
This means that for some \(w_{1},w_{2}\in N\), in \(H\) it is true that: \[\begin{cases}w(h_{1}^{\prime},\ldots,h_{s}^{\prime})=cw_{1}\\ w(h_{1},\ldots,h_{s})=hc^{-1}w_{2}\end{cases}\] Let us show that in \(G\) the identity \((aN)^{x}=[aN,bN]^{z}\) does not hold for \(x\not\in\alpha\mathbb{Z}\). The converse would mean that in the discrete Heisenberg group the following holds: \[a^{x}[a,b]^{-z}=b^{-k}a^{-l}a^{\alpha t}[a,b]^{ns}a^{l}b^{k}\] for some \(k,l,t,s\in\mathbb{Z}\). After some reductions, we get: \(a^{x-\alpha t}=[a,b]^{ns+z+\alpha tk}\). In \(H\) this is possible only if \(x=\alpha t\in\alpha\mathbb{Z}\), a contradiction. Thus, \(h_{1}^{\prime}=[a,b]^{\gamma}\) for some \(\gamma\in\mathbb{Z}\), and, consequently, \(cw_{1}\in H^{\prime}\). Since for any verbal mapping \(\varphi\) in the discrete Heisenberg group (see [13]) \[\text{for any }g\in\varphi(H^{s}),\text{ it is true that }g(\varphi(H^{s})\cap H^{\prime})\subseteq\varphi(H^{s}),\] for some \(g_{1},\ldots,g_{s}\in H\), \(w(g_{1},\ldots,g_{s})=w(h_{1},\ldots,h_{s})cw_{1}\). This means that: \[w(g_{1},\ldots,g_{s})=hw_{3}\] for some \(w_{3}\in N\), and in \(G\): \[w(g_{1},\ldots,g_{s})N=hN,\] which proves verbal closedness of \(G\) in \(K\). Finally, let us prove that the higher dimensional Heisenberg groups over any field are not strongly verbally closed: _The Heisenberg group of dimension \(2n+1\) over a field \(K\)_, where \(n\in\mathbb{N}\), is the group of upper triangular matrices of the form \[H_{n}(K)=\Bigg{\{}T(\bar{a},\bar{b},c)=\begin{pmatrix}1&\bar{a}&c\\ 0&I_{n}&\bar{b}\\ 0&0&1\end{pmatrix}\Bigg{|}\bar{a},(\bar{b})^{\intercal}\in K^{n},c\in K\Bigg{\}},\] where \(I_{n}\) is the identity matrix of size \(n\). **Proposition 6**.: The group \(H_{n}(K)\) is not strongly verbally closed. **Proof.** Consider the central product of \(H_{n}(K)\) with its copy \(\widetilde{H}_{n}(K)\) with joined commutator subgroup: \[G=H_{n}(K)\underset{H_{n}(K)^{\prime}=\widetilde{H}_{n}(K)^{\prime}}{\times}\widetilde{H}_{n}(K).\] Denote by \(H\) the first factor of this central product. Let us show that \(H\) is not algebraically closed in \(G\). The group \(H\) is linear, and, consequently, it is _equationally noetherian_ [1], so it is algebraically closed in \(G\) if and only if it is a retract of every subgroup of \(G\) finitely generated over \(H\) [13]. In particular, of such a subgroup of \(G\): \[\bar{H}=\langle H,(1,h_{1}),\ldots,(1,h_{n}),(1,g_{1}),\ldots,(1,g_{n})\rangle,\text{ where }h_{i}=\begin{pmatrix}1&\bar{a}_{i}&0\\ 0&I_{n}&0\\ 0&0&1\end{pmatrix},g_{i}=\begin{pmatrix}1&0&0\\ 0&I_{n}&\bar{b}_{i}\\ 0&0&1\end{pmatrix},\] where \(\bar{a}_{i}=(0,\ldots,1,\ldots,0)=(\bar{b}_{i})^{\intercal}\) (the unit is in the \(i\)th place). Thus, \(N=\langle h_{1},\ldots,h_{n},g_{1},\ldots,g_{n}\rangle\) is a subgroup of \(\widetilde{H}_{n}(K)\), isomorphic to the discrete Heisenberg group of dimension \(2n+1\). Let \(\rho\) be a hypothetical retraction. Since in \(G\) the group \(H\) commutes with \(N\), we get that \(\rho(N^{\prime})=\{1\}\), which leads to a contradiction with the definition of retraction. Nevertheless, the subgroup \(H\) is verbally closed in \(G\): let \(w\in F_{s}\) be some word (without loss of generality, this word is in the normal form we established earlier), and let \(\varphi:H^{s}\to H\) be the verbal mapping associated with this word. 
Suppose that for some \(h_{i},h\in H\), \(h_{i}^{\prime}\in\widetilde{H}\), \(c\in H^{\prime}\): \[\begin{cases}w(h_{1}^{\prime},\ldots,h_{s}^{\prime})=c\\ w(h_{1},\ldots,h_{s})=hc^{-1}\end{cases}\] In general, on matrices \(g_{i}=T(\bar{a}_{i},\bar{b}_{i},c_{i})\), \(i=1,\ldots,s\), the mapping \(\varphi\) acts as follows: \[\varphi(g_{1},\ldots,g_{s})=\begin{pmatrix}1&m\bar{a}_{1}&mc_{1}+f(\bar{a}_{1},\ldots,\bar{a}_{s};\bar{b}_{1},\ldots,\bar{b}_{s})\\ 0&I_{n}&m\bar{b}_{1}\\ 0&0&1\end{pmatrix},\] where \(f:(K^{n})^{s}\times(K^{n})^{s}\to K\) is some function linear in every argument. The image of \(f\) is either trivial or is equal to \(K\), which leads to: \[\varphi(H_{n}(K)^{s})=\begin{cases}\{1\},&\text{ if }m=0\text{ and the image of }f\text{ is trivial}\\ (H_{n}(K))^{\prime},&\text{ if }m=0\text{ and the image of }f\text{ equals }K\\ H_{n}(K),&\text{ if }m\neq 0\end{cases}\] Then \(\varphi(H_{n}(K)^{s})\cap(H_{n}(K))^{\prime}\leqslant H_{n}(K)\) and for every element \(h\in\varphi(H_{n}(K)^{s})\) it is true that \[h(\varphi(H_{n}(K)^{s})\cap(H_{n}(K))^{\prime})\subseteq\varphi(H_{n}(K)^{s}),\] whence verbal closedness follows.
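As a sanity check of the displayed formula for \(\varphi\) (a computation of ours, not from the text), take the power word \(w=t_{1}^{m}\) with \(s=1\): induction on \(m\) gives \[T(\bar{a},\bar{b},c)^{m}=\begin{pmatrix}1&m\bar{a}&mc+\binom{m}{2}\,\bar{a}\bar{b}\\ 0&I_{n}&m\bar{b}\\ 0&0&1\end{pmatrix},\] so here \(f(\bar{a};\bar{b})=\binom{m}{2}\,\bar{a}\bar{b}\), which is indeed linear in each argument.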
2310.14671
Scrap Your Schedules with PopDescent
In contemporary machine learning workloads, numerous hyper-parameter search algorithms are frequently utilized to efficiently discover high-performing hyper-parameter values, such as learning and regularization rates. As a result, a range of parameter schedules have been designed to leverage the capability of adjusting hyper-parameters during training to enhance loss performance. These schedules, however, introduce new hyper-parameters to be searched and do not account for the current loss values of the models being trained. To address these issues, we propose Population Descent (PopDescent), a progress-aware hyper-parameter tuning technique that employs a memetic, population-based search. By merging evolutionary and local search processes, PopDescent proactively explores hyper-parameter options during training based on their performance. Our trials on standard machine learning vision tasks show that PopDescent converges faster than existing search methods, finding model parameters with test-loss values up to 18% lower, even when considering the use of schedules. Moreover, we highlight the robustness of PopDescent to its initial training parameters, a crucial characteristic for hyper-parameter search techniques.
Abhinav Pomalapally, Bassel El Mabsout, Renato Mancuso
2023-10-23T08:11:17Z
http://arxiv.org/abs/2310.14671v2
# Population Descent: A natural-selection based hyper-parameter tuning framework ###### Abstract First-order gradient descent has been the basis of the most successful optimization algorithms ever implemented. On supervised learning problems with very high dimensionality, such as neural network optimization, it is almost always the algorithm of choice, mainly due to its memory and computational efficiency. However, it is a classical result in optimization that gradient descent converges to local minima on non-convex functions. Even more importantly, in certain high-dimensional cases, escaping the plateaus of large saddle points becomes intractable. On the other hand, black-box optimization methods are not sensitive to the local structure of a loss function's landscape but suffer from the curse of dimensionality. Instead, memetic algorithms aim to combine the benefits of both. Inspired by this, we present Population Descent, a memetic algorithm focused on hyperparameter optimization. We show that an adaptive \(m\)-elitist selection approach combined with a normalized-fitness-based randomization scheme outperforms more complex state-of-the-art algorithms by up to 13% on common benchmark tasks. ## 1 Introduction Today's machine learning methods almost entirely rely on Gradient Descent as a core optimization technique. Many recent deep learning tasks, whether supervised or unsupervised, include Neural Network (NN) optimization with ever-growing parameter spaces. State-of-the-art Large Language Models Cheng et al. (2023) are currently parameterized by billions of parameters. This inherently limits the methods used for optimization to ones that can effectively run in linear time with respect to the dimensionality of the parameters. To ensure efficiency at scale, there is a large body of research within this space, from momentum-based methods Goh (2017), to the forward-forward algorithm Hinton (2022). However, all of these methods include hyperparameters that need to be tuned for the task at hand. Therefore, many hyperparameter tuning and meta-learning methods have been proposed Akiba et al. (2019); Rogachev & Melikhova (2020); Gad (2022), each with its own limitations. One important limitation shared across these methods is the lack of active hyperparameter optimization. Instead of modifying hyperparameters as training proceeds, these algorithms only control initial hyperparameter values. They then let the dynamical system evolve until completion Liu & Theodorou (2019), training until a time limit or a convergence condition is met. Thus, they are limited by the same problems of finding optimal controllers for open-loop dynamical systems. Instead, (1) performing closed-loop control, (2) observing the behavior as these local-optimization iterations occur, and (3) making informed decisions along the path results in an approach that can more efficiently solve complex non-convex optimization tasks. As evidence for the benefits of dynamic control, a large body of work on learning-rate schedules exists, showing very significant improvements over using static rates Liu & Theodorou (2019); Li et al. (2017); Darken et al. (1992); Li & Arora (2019). Schedules are limited in that they are only functions of the number of iterations taken, remaining "blind" to the performance of the algorithm and generating more hyperparameters to tune. 
Also, because the loss landscapes for many neural networks can be complex and non-smooth, local search methods may get "stuck" at high-dimensional plateaus/saddle points Choromanska et al. (2015); Brea et al. (2019). Instead, global optimization methods, usually population-based, employ non-differentiable optimization for non-convex loss functions. By adding random noise, individuals can search the loss space, eventually converging to a global minimum instead of a local one. Evolutionary/genetic algorithms Back (1996); Liashchynskyi & Liashchynskyi (2019) are among the most popular methods that utilize mutations every iteration, and differential evolution Storn & Price (1995) is a subset that uses differential mutations for faster convergence Karaboga & Cetinkaya (2005). Still, evolutionary algorithms can be prohibitively expensive, intractable by their stochastic nature, and struggle against fine-tuned local-search solutions. Thus, to take advantage of the efficiency of local search and the improved global search of evolutionary algorithms, memetic algorithms combine both Moscato (1999); Yuenyong & Nishihara (2014); D'Angelo & Palmieri (2021); Borna & Hashemi (2014). Recent work Cui et al. (2018); Xue et al. (2021) has proposed memetic algorithms for optimization on deep-learning workloads which track model fitness to make adjustments (in this case, mutations) during training. However, these methods themselves required hyper-parameter tuning for different benchmark machine learning tasks. Furthermore, their implementability is limited by the complexity of the employed genetic recombination schemes. To overcome the aforementioned limitations, we propose Population Descent (PopDescent), an \(m\)-elitist population-based memetic algorithm for hyperparameter optimization. The key idea in PopDescent is _actively_ choosing how much to explore the parameter and hyperparameter space or exploit a specific location on the loss curve. This decision is taken based on a normalized fitness score representing our model's progress during iterations of local updates. We show in our evaluations that calculating fitness on a cross-validation set and linking mutation strength to an individual's performance produce better results than regular hyperparameter tuning methods such as grid search and specialised memetic algorithms such as ESGD Cui et al. (2018). As opposed to ESGD, PopDescent is significantly less sensitive to changes in its own hyperparameters, as precisely quantified in our evaluation (see Section 3). Unlike ESGD, PopDescent keeps the hyperparameters themselves unchanged across all the benchmarks. To demonstrate PopDescent's ability to more effectively traverse the loss landscape on real deep-learning workloads, we apply the algorithm to the FMNIST Xiao et al. (2017) and CIFAR-10 Krizhevsky et al. (2009) classification benchmarks. We also conduct ablation studies justifying the effectiveness of the choices made in PopDescent. PopDescent achieves better test and training loss on every experiment we performed, while taking a lower number of total gradient steps. We claim the following contributions: * A biased selection process for choosing replacement individuals that are of higher fitness on a cross-validation set. 
This directs the parameter search towards models that are not overfit on the training set; * A variable mutation strength based on the fitness of an individual, balancing whether it should exploit the progress made by gradient-descent or perturb the model's location in the loss and hyperparameter space; * An example adaptive learning/regularization rate randomization technique ensuring that training is not static but rather based on progress, emphasizing how poorly-performing models are adjusted appropriately; * An open source reference implementation based on Tensorflow 2 which can be used directly in machine learning tasks. ## 2 Population Descent PopDescent is a memetic algorithm, meaning it combines both meta-optimization and gradient-based optimization into a single scheme. We define the pseudocode of PopDescent in Algorithm 1, which we hereafter describe in detail. ### Algorithm Definition The goal of PopDescent is to find an optimal set of parameters forming what we term an individual. An individual is constructed from sets of parameters \(\theta\) and hyperparameters \(\alpha\). We search for individuals which maximize a user-provided Fitness function. These individuals are maximized on batches from a **Test** distribution that remains unseen during the procedure. Namely \[\text{individual}^{*}=\langle\theta^{*},\alpha^{*}\rangle=\sup_{(\theta,\alpha)\in\text{Individuals}}\mathbb{E}_{\text{batch}\sim\textbf{Test}}\left[\text{ Fitness}(\langle\theta,\alpha\rangle,\text{batch})\right]\] However, since the _Test_ distribution must remain unseen, we are forced to make use of available proxy data in the form of a _Training_ distribution and a _Cross Validation_ distribution. This is standard in machine learning workloads. We do not make assumptions on the differentiability of the provided Fitness function. This allows one to use common metrics of performance, such as accuracy. Since the dimensionality of the parameter space can be exceedingly large (such as with Neural Networks), we allow the use of a Local Update function which can efficiently update the bulk of the parameters held in the \(\theta\) of every individual. We assume that invocations of Local Update maximize the individual's expected Fitness over the _Training_ set. An example of such a function is Stochastic Gradient Descent (SGD) as defined in Algorithm 2. SGD makes use of gradient-backpropagation to update \(\theta\) in linear time. Local Update minimizes a differentiable Loss as a proxy for maximizing Fitness with respect to \(\theta\). However, the Local Update function does not modify the \(\alpha\) hyperparameters. This can for example be the learning rate in SGD, and it can also be the regularization magnitude. In order to find the best hyperparameters, PopDescent takes an \(m\)-elitist approach by holding onto a candidate set of individuals called a **Population**. In each iteration, the \(m\) fittest individuals from the **Population** are kept untouched (\(m\)-elite), while the weakest (\(|\textbf{Population}|-m\)) individuals are always replaced. We then pick replacements from the **Population** but bias our choices towards fitter individuals. These replacements then go through a Mutate operation provided by the user. The mutation magnitude depends on the fitness of the individual. That is, we mutate individuals more when they perform poorly. In a sense, the normalized Fitness value allows the algorithm to be aware of progress made during optimization, and explore more space when that is more beneficial. 
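Before the line-by-line walkthrough, a minimal sketch of this loop may help (ours, not the reference implementation; `local_update`, `fitness_cv`, and `mutate` are user-supplied placeholders, a fixed iteration count stands in for the Converged predicate, and the `1 - probability` mutation magnitude is merely one simple choice):

```python
import numpy as np

def pop_descent(population, fitness_cv, local_update, mutate, m, iters, seed=0):
    """Sketch of the PopDescent loop: local step, CV scoring, m-elitist replacement."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        # Local step: every individual takes one gradient-based update.
        population = [local_update(ind) for ind in population]
        # Score individuals on a held-out cross-validation batch and
        # normalize the scores into selection probabilities.
        scores = np.array([fitness_cv(ind) for ind in population])
        probs = scores / scores.sum()        # assumes positive fitness values
        order = np.argsort(scores)           # ascending fitness
        elite = [population[i] for i in order[-m:]]  # the m fittest survive
        replaced = []
        for _ in range(len(population) - m):
            # Weighted-multinomial choice, biased towards fitter individuals.
            j = rng.choice(len(population), p=probs)
            # Lower-fitness replacements are mutated more strongly.
            replaced.append(mutate(population[j], magnitude=1.0 - probs[j]))
        population = elite + replaced
    return max(population, key=fitness_cv)
```

Note that the only synchronization point is the replacement step, which is what makes the loop amenable to parallelization.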
Throughout the algorithm, \(|\textbf{Population}|\) remains an invariant. PopDescent terminates when the user-defined Converged function outputs 1 (line 1). Then, at each iteration: 1. Lines 2-3: The individuals in the **Population** all take a Local Update step over a batch sampled from the _Training_ distribution. This produces a set of **Optimized** individuals; 2. Lines 4-5: A batch is sampled from the \(CV\) distribution, upon which we build Fitness\({}_{\boldsymbol{CV}}\), i.e., the fitness function for that batch; 3. Line 6: We use Fitness\({}_{\boldsymbol{CV}}\) to build a _WeightedMultinomial_ probability distribution whose samples are individuals from the **Optimized** set. The probability of each individual is defined by normalizing their fitness values, so that the probabilities sum to 1. This distribution is biased towards choosing fitter replacements; 4. Lines 7-14: We iterate (\(|\textbf{Population}|-m\)) times, replacing the (\(|\textbf{Population}|-m\)) lowest-fitness individuals by mutated replacements. We find replacement individuals via sampling from the _WeightedMultinomial_ distribution (Line 12). Then the replacement is mutated by an amount dependent on its fitness: the lower the fitness, the more it will be mutated; 5. Line 15: **Population** is now updated to include the \(m\) **Strong** individuals and the (\(|\textbf{Population}|-m\)) mutated replacements; 6. Line 17: Finally, we return the individual in the **Population** with the largest fitness. In the example function implementations in Algorithm 2, we also show a sample Mutate function where we randomize the \(\theta\) parameters via a _Gaussian_ distribution whose standard deviation is defined by the mutation magnitude. We opt to modify the learning rate geometrically via a _LogNormal_ distribution so that multiplying the learning rate by 0.1 and by 10 is equally likely with a standard deviation of 1. Note that when the magnitude is 0, none of the parameters would change (a minimal sketch of such a Mutate operation is given at the end of this section). ### Key points in PopDescent's design * We designed PopDescent to naturally select individuals which generalize well to a dataset not seen during local updates. We hypothesize that this would allow proper selection of regularization values rather than using ad-hoc techniques such as early stopping. This is evaluated in Section 3.3. * If we remove the selection and mutation procedure then PopDescent simply becomes the random hyperparameter search algorithm, since after initialization, the individuals will be undergoing only iterations of SGD. * PopDescent is also amenable to parallelization, and the only synchronization required occurs in the replacement step. * PopDescent has a few hyperparameters itself (depending on the implementation of Mutate), but we have left these values constant across our experiments to showcase the effectiveness of the method and its low sensitivity to specific values of these parameters. ### Limitations Due to the no free lunch theorem Wolpert & Macready (1997), there will always be a case where this algorithm will be worse than a purely random approach at maximizing our Fitness. For example, if the learning rate is initialized too high, too many randomization steps would be needed for making progress, due to the random-walk nature of the mutation method used. Another limitation is that the algorithm does not take into account the best individual ever observed, meaning there is no guarantee that the algorithm will always improve in performance with more iterations. This is due to the decision to always take a Local Update with respect to the **Population**.
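The sketch promised above: a minimal Mutate in the spirit of the description (ours, not Algorithm 2; the individual layout and the `lr` key are assumptions):

```python
import numpy as np

def mutate(individual, magnitude, rng=None):
    """Sketch of Mutate: Gaussian noise on weights, geometric noise on the learning rate."""
    rng = rng or np.random.default_rng()
    theta, alpha = individual
    # Gaussian noise on parameters; the std is the mutation magnitude,
    # so magnitude 0 leaves every parameter untouched.
    theta = [w + rng.normal(0.0, magnitude, size=w.shape) for w in theta]
    # LogNormal factor in base 10: with std 1, multiplying the learning
    # rate by 0.1 and by 10 are equally likely.
    alpha = dict(alpha, lr=alpha["lr"] * 10.0 ** rng.normal(0.0, magnitude))
    return theta, alpha
```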
## 3 Evaluations In this section, we demonstrate that **1)** PopDescent's active tuning framework achieves better performance than existing tuning and memetic algorithms on the FMNIST and CIFAR-10 benchmarks; **2)** While significantly simpler, PopDescent converges at rates similar to the state-of-the-art memetic algorithm in a fair comparison; **3)** PopDescent's specific randomization scheme contributes to its results; and **4)** PopDescent is remarkably insensitive to changes in its own hyperparameters, allowing it to tune the target parameters without having to tune the framework itself. ### Benchmarks We compare PopDescent against 1) grid search, due to its prevalence; 2) KerasTuner (KT): RandomSearch, due to its popularity (KT Rogachev & Melikhova (2020) and Optuna Akiba et al. (2019) are the most popular hyperparameter tuning frameworks, currently totalling around 2 million downloads per month); and 3) Evolutionary stochastic gradient descent (ESGD) Cui et al. (2018), as it is the state-of-the-art memetic algorithm for benchmark machine learning workloads (to our knowledge). For clarification, KT's RandomSearch algorithm does not just randomly sample a subset of the hyperparameters that would be explored during a grid search. Sampling is not limited to discrete rates (i.e. it can choose from continuous distributions). Also, RandomSearch chooses the "best" hyperparameters after testing parameter combinations on the first few (in our case, two) epochs of training, then resetting the model again, seeing which combination has the best validation loss early on. This allows it to test more parameter combinations in fewer gradient steps. _Some notes on benchmarks._ For the FMNIST and CIFAR-10 benchmarks, we opted to train larger models (4,575,242 and 186,250 parameters respectively) as they are more prone to overfitting Arpit et al. (2017), leading to higher sensitivity in hyperparameter choice, a problem well-suited to evaluate these tuning frameworks. The "With Regularization" models use the same model with l2 kernel regularization added. To compare fairly against the available implementation of ESGD, which does not implement regularization, we exclude it from the comparisons with regularization. All algorithms use cross-validation loss as the metric for evaluating model fitness during training. All algorithms use the Adam optimizer for local search, except for ESGD, which uses SGD. Grid search "Without Regularization" trains five models, each with a unique learning rate ([0.01, 0.001, 0.0001, 0.00001, 0.000001]). For "With Regularization," we let grid search enumerate the cartesian product of the five aforementioned learning rates and five different regularization rates, producing 25 trained models. We use KT RandomSearch with 25 trials (the number of combinations it tests). It samples learning rates from the continuous range of \([0.01-0.0001]\), and regularization rates from \([0.1-0.00001]\). RandomSearch and ESGD train over the whole dataset, and PopDescent and grid search sample portions of the data. A gradient step is defined by a single step taken by a local optimizer over one batch. We calculate the total number of gradient steps taken by every algorithm via total \(=\) iterations \(\times\) number of models \(\times\) batches. We choose how many gradient steps to run each algorithm by observing when they converge. 
_Our objective is minimizing the final test loss._ **FMNIST Benchmark.** We tested each algorithm on the Fashion MNIST image classification dataset in Table 1, containing a 60k image training-set and a 10k image test-set (we split the test-set into two halves for a validation-set for all methods except ESGD, which uses the full test-set for validation). Each image is of size 28x28, with 1 color channel. An identical Convolutional Neural Net was used for each test (4,575,242 parameters), with three convolutional layers and two fully connected. For all tests, we set the batch size to 64, and ESGD and PopDescent are initialized with default learning rates of 0.001. **CIFAR-10 Benchmark.** We tested each algorithm on the CIFAR-10 image classification dataset in Table 1, containing a 50k image training-set and a 10k image test-set, splitting test/validation loss exactly the same as for FMNIST. Each image is of size 32x32 with 3 color channels. An identical Convolutional Neural Net was used for each test (186,250 parameters), with four convolutional layers and two fully connected. We set the batch size to 64 in all cases, except for ESGD where we set it to 8, as is done in Masters & Luschi (2018). ESGD is initialized with a learning rate of 0.01, and PopDescent with 0.001 (ESGD performs much better on CIFAR-10 with 0.01 over 0.001 in our tests). **Benchmark Results.** PopDescent finds models with the lowest overall test loss across the board. It also always takes the fewest or near-fewest gradient steps. Both grid search and RandomSearch cannot adjust their parameters on-line; their convergence rates thus suffer. ESGD is our closest comparison to PopDescent as a memetic algorithm, but does not tune any hyperparameters. These results show ESGD's mutations perform well, but it relies on either a static learning rate or a schedule, both of which remain unchanged throughout gradient updates. On larger models that are prone to overfitting, PopDescent's ability to constantly monitor a model's performance on the cross-validation set and accelerate or decelerate its learning/regularization proves to be performant in these benchmarks. ### Convergence Memetic algorithms like ESGD often rely on mutation lengths, reproductive factors, mixing numbers, etc.; their genetic schemes are complex, and thus difficult to implement. On the other hand, PopDescent's mutation step only adds independent noise to the parameters, and uses a simple rank-based (\(m\)-elitist) recombination step. However, when comparing convergence of the highest-fitness model in the population, Figure 1 shows PopDescent converges to a lower validation loss faster than existing tuning methods and memetic algorithms. We train each algorithm on six random seeds, running them for more iterations than optimal to show convergence/divergence over time (Grid Search: 100 iterations, KT RandomSearch: 25, PopDescent: 115, and ESGD: 15). We plot the mean exponential moving average (bold line) of the cross-validation loss of the best model for each algorithm across all seeds, with the standard deviation (shading) for each algorithm's trials, as a function of gradient steps taken. 
\begin{table} \begin{tabular}{c|c|c|c} \hline _Algorithm_ & _Test Loss \(\pm\sigma\)_ & _Train Loss \(\pm\sigma\)_ & _Gradient Steps_ \\ \hline \multicolumn{4}{c}{**FMNIST Without Regularization**} \\ \hline **Basic Grid Search** & \(0.251\pm 0.010\) & \(0.037\pm 0.006\) & 64,000 \\ **KT RandomSearch** & \(0.277\pm 0.023\) & \(0.112\pm 0.034\) & 46,800 \\ **ESGD** & \(0.276\pm 0.009\) & \(0.114\pm 0.007\) & 46,800 \\ **Population Descent** & \(0.249\pm 0.020\) & \(0.124\pm 0.052\) & 32,000 \\ \hline \multicolumn{4}{c}{**FMNIST With Regularization**} \\ \hline **Basic Grid Search** & \(0.309\pm 0.009\) & \(0.251\pm 0.007\) & 160,000 \\ **KT RandomSearch** & \(0.400\pm 0.061\) & \(0.295\pm 0.077\) & 46,800 \\ **Population Descent** & \(0.262\pm 0.019\) & \(0.152\pm 0.033\) & 32,000 \\ \hline \hline \multicolumn{4}{c}{**CIFAR-10 Without Regularization**} \\ \hline **Basic Grid Search** & \(1.176\pm 0.182\) & \(1.052\pm 0.250\) & 19,200 \\ **KT RandomSearch** & \(1.512\pm 0.275\) & \(1.343\pm 0.296\) & 39,000 \\ **ESGD** & \(0.998\pm 0.025\) & \(0.966\pm 0.033\) & 93,750 \\ **Population Descent** & \(0.863\pm 0.014\) & \(0.577\pm 0.060\) & 25,600 \\ \hline \multicolumn{4}{c}{**CIFAR-10 With Regularization**} \\ \hline **Basic Grid Search** & \(0.970\pm 0.027\) & \(0.770\pm 0.043\) & 96,000 \\ **KT RandomSearch** & \(1.195\pm 0.209\) & \(1.030\pm 0.249\) & 39,000 \\ **Population Descent** & \(0.843\pm 0.030\) & \(0.555\pm 0.070\) & 25,600 \\ \hline \hline \end{tabular} \end{table} Table 1: Benchmark comparison Figure 1: FMNIST validation loss progress. In Figure 1, RandomSearch is flat until about 46K gradients steps because it takes gradient steps to test which parameters are best without actually training the model; it only trains the model after 46K steps (25 trails). Grid search and RandomSearch both struggle to reach a low loss due to non-dynamic tuning. PopDescent and ESGD are most succesful during training, though PopDescent achieves better final test loss with lower standard deviation, and requires fewer tunable hyperparameters to implement its global step. ### Ablation Study This section analyzes how 1) the _randomization scheme_ of NN weights/learning rate/regularization rate, and 2) the use of _cross-validation loss to evaluate the fitness_ of individuals affects PopDescent's performance. To emphasize the differences, we add l2 kernel regularization to every layer in the benchmark FMNIST model, and reduced the training set size to 10K. All tests are initialized with a default learning and regularization rate of 0.001. We choose \(|\textbf{Population}|=10\) and \(m=5\). The top half of Table 2 shows how PopDescent's randomization (NN weights, learning and regularization rates) lowers test loss by 25%. Adding noise and choosing models that perform well on cross-validation loss helps the models explore more space while selecting models that prevent overfitting, as see with a lower test loss. The bottom half shows how deciding between training or cross-validation loss as the fitness function acts as a substantial heuristic when minimizing test loss, genetically "forcing" a model without regularization to still achieve decent test loss. Even when regularization is turned off, and also when cross-validation selection is turned off (now, training loss becomes the heuristic for minimization instead of validation loss), we still observe similar performance improvements due to randomization being turned on versus being turned off. 
We present the most pronounced differences in Table 2 to best highlight PopDescent's features. \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline _Randomization_ & _CV Selection_ & _Regularization_ & _Test Loss \(\pm\sigma\)_ & _Train Loss \(\pm\sigma\)_ \\ \hline \multicolumn{5}{c}{**Ablation Study Over PopDescent Randomization**} \\ \hline ✓ & ✓ & ✓ & \(0.345\pm 0.006\) & \(0.139\pm 0.028\) \\ ✗ & ✓ & ✓ & \(0.412\pm 0.005\) & \(0.118\pm 0.077\) \\ \hline \multicolumn{5}{c}{**Ablation Study Over Cross-Validation Fitness**} \\ \hline ✓ & ✓ & ✗ & \(0.356\pm 0.009\) & \(0.163\pm 0.019\) \\ ✓ & ✗ & ✗ & \(1.140\pm 0.147\) & \(0.0003\pm 0.0002\) \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study. ### Hyperparameter Sensitivity In this section, we show that local search parameters affect other algorithms more than PopDescent on the CIFAR-10 dataset. We run each algorithm with a constant seed and constant hyperparameters except one (either the learning rate or the number of iterations). One iteration defines one local and one global update together, i.e., the number of gradient updates the algorithm takes before performing a mutation. PopDescent defaults to a batch size of 64, a learning rate of 0.001 with Adam, and 30 iterations for the FMNIST benchmark. ESGD defaults to a batch size of 8, a learning rate of 0.01 with SGD, and 3 iterations for the FMNIST benchmark (PopDescent trains over 128 batches per iteration, ESGD over the whole training set). Table 4 shows how changes in local training parameters affect ESGD's test loss results more than PopDescent's in Table 3 (almost 275% higher standard deviation of results). PopDescent also has a much lower test loss across trials (avg. 19.2% lower). Complex memetic algorithms such as ESGD have a high number of adjustable hyperparameters, and their performance depends significantly on their specific values. As long as the parameters chosen are not extreme values, the specificity of PopDescent's hyperparameters is not particularly important. Another important note is the lack of need to tune PopDescent over different problems, while still yielding the best model. All tests for PopDescent across this entire paper (except the ablation FMNIST tests) use the same population size (5) and randomization scheme (same Gaussian noise distributions) for the global step, and the same default learning rate (0.001), regularization rate (0.001), and batch size (64) for the local step (except when they are changed for this experiment). **Discussion on learning rate schedules.** Learning rate schedules (having the learning rate be set according to the number of gradient steps taken) are one of the most common ways to "actively" adjust hyperparameters during training. However, most schedules are only a function of the number of gradient steps taken, which amounts to a prediction about training rather than a real-time analysis of how a model is performing, as PopDescent does. Specifically, Table 4 shows how non-dynamic optimization algorithms (most existing methods) rely on problem-specific hyperparameters being pre-determined. Modifications to the idea of learning rate schedules do exist, in order to pay attention to a model's progress Wu et al. (2019), though they are very complex and have many hyperparameters, running into sensitivity issues like ESGD.
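To make the contrast concrete, here is a sketch of a typical open-loop schedule (illustrative only; the cosine form and constants are our own assumptions, not a schedule used in these experiments):

```python
import math

def cosine_lr(step, total_steps, lr_max=1e-3, lr_min=1e-5):
    """A standard cosine schedule: the learning rate depends only on the
    step count, never on the current training or validation loss."""
    t = min(step / total_steps, 1.0)
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))
```

PopDescent replaces such open-loop rules with a closed-loop one: the learning rate is rescaled by Mutate as a function of the individual's measured cross-validation fitness.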
## 4 Related works **Gradient-Based Optimizers.** Stochastic Gradient Descent (SGD) offers quick convergence on complex loss spaces Kleinberg et al. (2018). As an improvement to SGD, momentum-based optimizers like Adam Goh (2017); Zeiler (2012) better traverse loss functions by combining momentum with the learning rate to more quickly escape plateaus or to slow down learning so as not to skip a minimum. Adam's weight decay term also limits exploding gradients, and acts as a regularizer preventing overfitting. Other options, like the newer Shampoo optimizer, which uses preconditioning matrices, promise even faster convergence Gupta et al. (2018). PopDescent relies on efficient local optimizers, hence such works are orthogonal. **Grid Search/Tuning Frameworks.** Grid search is the most commonly used method for searching the space of hyperparameters due to its ease of implementation, yielding generally acceptable results for NN training. Essentially, it is an exhaustive search of the cartesian product of hyperparameters, whose cardinality scales exponentially with dimensionality. Popular hyperparameter tuning frameworks like KerasTuner (KT) and Optuna employ more efficient versions of grid search Rogachev & Melikhova (2020), like bayesian search (which uses Gaussian probabilities to check the "best" combination) Mockus (1989), random search (which randomly samples the search space) Bergstra et al. (2011), or hyperband tuning (a variation of random search that chooses better individuals after half of the iterations) Rogachev & Melikhova (2020). They can sample different batches, batch sizes, learning/regularization rates, and even NN layers/units in order to find the best architecture for the task at hand. They often find a good set of hyperparameters within a constant amount of time as opposed to grid search's brute-force method Liashchynskyi & Liashchynskyi (2019). \begin{table} \begin{tabular}{c|c|c|c} \hline _Learning Rate_ & _Iterations_ & _Test Loss \(\pm\sigma\) Across Trials_ & _\(\sigma\) as \% of Test Loss_ \\ \hline \multicolumn{4}{c}{**Everything Constant Except Training Learning Rate**} \\ \hline **[0.01, 0.05, 0.001]** & 3 & \(1.325\pm 0.582\) & 43.95\% \\ \hline \multicolumn{4}{c}{**Everything Constant Except Total Iterations**} \\ \hline 0.001 & **[1, 3, 5]** & \(1.159\pm 0.455\) & 39.22\% \\ \hline \end{tabular} \end{table} Table 4: ESGD training with variable local parameters \begin{table} \begin{tabular}{c|c|c|c} \hline _Learning Rate_ & _Iterations_ & _Test Loss \(\pm\sigma\) Across Trials_ & _\(\sigma\) as \% of Test Loss_ \\ \hline \multicolumn{4}{c}{**All Hyperparameters Constant Except Learning Rate**} \\ \hline **[0.01, 0.05, 0.001]** & 30 & \(1.049\pm 0.172\) & 16.35\% \\ \hline \multicolumn{4}{c}{**All Hyperparameters Constant Except Total Iterations**} \\ \hline 0.001 & **[10, 30, 50]** & \(0.958\pm 0.191\) & 19.94\% \\ \hline \end{tabular} \end{table} Table 3: Population Descent training with variable local parameters However,
Multiple works also explore cyclical, cosine-based, random-restart schedules adjusting the learning rate at every epoch Wu et al. (2019). However, they do not employ population-based selection, and thus explore less space. They introduce extra hyperparameters, causing many to instead use static schedules. **Memetic Algorithms** Memetic algorithms take advantage of both global and local learning, and are increasingly being used for supervised learning benchmarks D'Angelo & Palmieri (2021); Moscato (1999); Borna & Hashemi (2014); Xue et al. (2021); Yuenyong & Nishihara (2014). Evolutionary stochastic gradient descent (ESGD) Cui et al. (2018) utilizes Gaussian mutations for model parameters using an \(m\)-elitist average strategy to choose the best models after randomization and SGD optimization for local search. Performing well on CIFAR-10 classification tests, ESGD is a prime example of how adding stochastic noise benefits a strong local optimizer. Nonetheless, state-of-the-art memetic algorithms like ESGD suffer from having an extensive amount of training hyperparameters, both global (ie. mutation strength, population size, etc.) and local (ie. batch size, learning rate, etc.). Motivated by the difficulties in using a memetic framework for fine-tuning hyperparameters. PopDescent instead investigates how it is possible to tune models and prompt them to explore more space with simple stochastic noise efficiently. It strives to be a tuning framework that is not problem-specific, but can be easily applied elsewhere. ## 5 Conclusion In this paper, we propose PopDescent, a memetic algorithm that acts as a hyperparameter tuning framework using a simple population-based evolution methodology. Our tuning framework helps local search methods explore more space on the loss function. In turn, it more effectively traverses a non-convex search-space compared to methods relying only on static momentum terms or schedules. PopDescent performs better than existing tuning frameworks which do not adapt to a model's current progress. Four extensive experiments over common supervised learning benchmarks FMNIST and CIFAR-10 show the effectiveness of PopDescent. ## 6 Reproducibility Statement We take many efforts to make sure that our experiments can be reevaluated effectively: * We use the number of gradient steps as our metric of "time", so that these values remain independent of the computational hardware available * We always seed every experiment taken, and those seeds are available in our source-code. * We provide versioned source code with specific commits referenced for each test taken, and provide a README of instructions to follow to replicate our results * We provide our reference anonymized implementation of PopDescent and supplementary material at [https://github.com/anonymous112234/anonymousPopulationDescent.git](https://github.com/anonymous112234/anonymousPopulationDescent.git) * We provide a flake.nix file which exactly pins the versions of all the packages used in our tests
2303.14749
A variant of Baer's theorem
We provide a variant of Baer's theorem about isomorphism of endomorphism rings of vector spaces over division rings, where the full endomorphism rings are replaced by some subrings of finitary maps.
Pasha Zusmanovich
2023-03-26T15:02:23Z
http://arxiv.org/abs/2303.14749v1
# A variant of Baer's theorem ###### Abstract. We provide a variant of Baer's theorem about isomorphism of endomorphism rings of vector spaces over division rings, where the full endomorphism rings are replaced by some subrings of finitary maps. Key words and phrases: Baer's theorem; endomorphism ring; finitary linear map. 2020 _Mathematics Subject Classification_. 16S50. ## 1. Introduction The classical theorem of Baer states that any isomorphism between the full endomorphism rings of vector spaces over division rings is induced by a bijective semilinear map between the spaces themselves. Here we prove a variant of this statement in which the full endomorphism ring \(\operatorname{End}_{D}(V)\) of a right vector space \(V\) over a division ring \(D\) is replaced by a subring of the ring \(\operatorname{FEnd}_{D}(V)\) of finitary (i.e., finite-rank) endomorphisms. For \(v\in V\) and \(f\in V^{*}\), let \(t_{v,f}:x\mapsto vf(x)\) denote the corresponding _infinitesimal transvection_; for a subspace \(\Pi\leqslant V^{*}\), the variant refers to the subring \(\operatorname{FEnd}_{D}(V,\Pi)\) of \(\operatorname{FEnd}_{D}(V)\) generated by all infinitesimal transvections \(t_{v,f}\) with \(v\in V\) and \(f\in\Pi\). Note that all the just mentioned rings, \(\operatorname{End}_{D}(V)\), \(\operatorname{FEnd}_{D}(V)\), and \(\operatorname{FEnd}_{D}(V,\Pi)\), are also right vector spaces over \(D\), and hence have a structure of a right \(D\)-algebra. 
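As a small illustration (ours, not from the paper; over a field \(k\) for simplicity, where the left/right ordering subtleties of a division ring disappear): for \(V=k^{2}\), \(v=(v_{1},v_{2})\), and \(f=f_{1}e_{1}^{*}+f_{2}e_{2}^{*}\), the infinitesimal transvection is a rank-one matrix whose trace is \(f(v)\): \[t_{v,f}=\begin{pmatrix}v_{1}f_{1}&v_{1}f_{2}\\ v_{2}f_{1}&v_{2}f_{2}\end{pmatrix},\qquad\operatorname{Tr}(t_{v,f})=v_{1}f_{1}+v_{2}f_{2}=f(v),\qquad t_{v,f}\circ t_{u,g}=t_{v,\,f(u)g},\] the last identity because \(t_{v,f}(t_{u,g}(x))=v\,f(u)\,g(x)\). This trace computation is the fact exploited in the proof below.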
Theorem: Let \(V,W\) be right vector spaces over a division ring \(D\), \(\Pi\) a nonzero finite-dimensional subspace of \(V^{*}\), \(\Gamma\) a finite-dimensional subspace of \(W^{*}\), and \(\Phi:\operatorname{FEnd}_{D}(V,\Pi)\to\operatorname{FEnd}_{D}(W,\Gamma)\) an isomorphism of \(D\)-algebras. Then there is an isomorphism of \(D\)-vector spaces \(\alpha:V\to W\) such that \[\Phi(f)=\alpha\circ f\circ\alpha^{-1} \tag{2}\] for any \(f\in\operatorname{FEnd}_{D}(V,\Pi)\). Proof: Write the rings \(\operatorname{FEnd}_{D}(V,\Pi)\) and \(\operatorname{FEnd}_{D}(W,\Gamma)\) in the isomorphic form as the tensor products \(V\otimes_{D}\Pi\) and \(W\otimes_{D}\Gamma\) as above, and, by abuse of notation, denote by the same symbol \(\Phi\) the isomorphism of \(D\)-algebras \(\Phi:V\otimes_{D}\Pi\to W\otimes_{D}\Gamma\). Due to finite-dimensionality of \(\Pi\), we have an isomorphism of vector spaces over \(D\): \[\operatorname{Hom}_{D}(V\otimes_{D}\Pi,W\otimes_{D}\Gamma)\simeq\operatorname{Hom}_{D}(V,W)\otimes_{D}\operatorname{Hom}_{D}(\Pi,\Gamma),\] and hence we can write \(\Phi\) in the form \[\Phi(v\otimes f)=\sum_{i\in I}\alpha_{i}(v)\otimes\beta_{i}(f), \tag{3}\] where \(\alpha_{i}:V\to W\) and \(\beta_{i}:\Pi\to\Gamma\) are two linearly independent families of \(D\)-linear maps, indexed by a finite set \(I\). The condition that \(\Phi\) is a homomorphism, written for a pair of decomposable tensors \(v\otimes f\) and \(u\otimes g\), where \(v,u\in V\), \(f,g\in\Pi\), is equivalent to \[\sum_{i\in I}\Big{(}\alpha_{i}(v)f(u)-\sum_{j\in I}\alpha_{j}(v)\beta_{j}(f)(\alpha_{i}(u))\Big{)}\otimes\beta_{i}(g)=0.\] Since the family \(\{\beta_{i}\}_{i\in I}\) is linearly independent over \(D\), each first tensor factor in the outer sum vanishes, i.e., \[\alpha_{i}(v)f(u)-\sum_{j\in I}\alpha_{j}(v)\beta_{j}(f)(\alpha_{i}(u))=0\] for any \(i\in I\), \(v,u\in V\), and \(f\in\Pi\). This can be rewritten as \[\alpha_{i}(v)\Big{(}f(u)-\beta_{i}(f)(\alpha_{i}(u))\Big{)}-\sum_{\begin{subarray}{c}j\in I\\ j\neq i\end{subarray}}\alpha_{j}(v)\beta_{j}(f)(\alpha_{i}(u))=0.\] Since the family \(\{\alpha_{i}\}_{i\in I}\) is linearly independent over \(D\), each coefficient from \(D\) in the last sum vanishes; in particular, \[\beta_{i}(f)(\alpha_{i}(u))=f(u) \tag{4}\] for any \(u\in V\), \(f\in\Pi\), and \(i\in I\). This implies \[\operatorname{Tr}\Big{(}\Phi(u\otimes f)\Big{)}=\operatorname{Tr}\Big{(}\sum_{i\in I}\alpha_{i}(u)\otimes\beta_{i}(f)\Big{)}=\sum_{i\in I}\operatorname{Tr}\Big{(}\alpha_{i}(u)\otimes\beta_{i}(f)\Big{)}\\ =\sum_{i\in I}\beta_{i}(f)(\alpha_{i}(u))=\sum_{i\in I}f(u)=|I|f(u)=|I|\operatorname{Tr}(u\otimes f).\] As the ring \(V\otimes_{D}\Pi\) is linearly spanned by decomposable tensors, we have \[\operatorname{Tr}(\Phi(\xi))=|I|\operatorname{Tr}(\xi) \tag{5}\] for any \(\xi\in V\otimes_{D}\Pi\). Now consider the inverse isomorphism \(\Phi^{-1}:W\otimes_{D}\Gamma\to V\otimes_{D}\Pi\), with a decomposition similar to (3), with index set \(J\). By the same reasoning as in the case of \(\Phi\), we have \[\operatorname{Tr}(\Phi^{-1}(\eta))=|J|\operatorname{Tr}(\eta) \tag{6}\] for any \(\eta\in W\otimes_{D}\Gamma\). Combining (5) and (6), we have \[\operatorname{Tr}(\xi)=|I||J|\operatorname{Tr}(\xi) \tag{7}\] for any \(\xi\in V\otimes_{D}\Pi\). Since for any nonzero \(f\in V^{*}\) there is \(v\in V\) such that \(f(v)\neq 0\) (actually, we can choose \(f(v)\) to be equal to 1), \(\operatorname{FEnd}_{D}(V,\Pi)\) always contains elements of nonzero trace†. 
Footnote †: This is a trivial, albeit crucial, point in our reasoning. Compare with the condition of so-called totality in [J1, Chapter IX, §11, Theorem 7], which, in our notation, amounts to saying, in a sense, a dual thing: for any nonzero \(v\in V\) there is \(f\in\Pi\) such that \(f(v)\neq 0\). The latter condition is equivalent to the density of the rings under consideration, and allows one to use Jacobson's density theorem. Picking such an element \(\xi\) in (7), we have \(|I|=|J|=1\), i.e., \(\Phi\) preserves traces and can be represented as a decomposable linear map: \(\Phi=\alpha\otimes\beta\) for some \(\alpha:V\to W\) and \(\beta:\Pi\to\Gamma\). The system of equalities (4) reduces to the single equality \[\beta(f)\circ\alpha=f \tag{8}\] for any \(f\in\Pi\). As \(\Phi\) is invertible, \(\alpha\) and \(\beta\) are invertible with \(\Phi^{-1}=\alpha^{-1}\otimes\beta^{-1}\), so (8) can be rewritten as \(\beta(f)=f\circ\alpha^{-1}\), and hence \(\Phi(v\otimes f)=\alpha(v)\otimes(f\circ\alpha^{-1})\). Rewriting the last equality in terms of \(\operatorname{FEnd}_{D}(V,\Pi)\) for an infinitesimal transvection \(t_{v,f}\), and expanding by linearity, we get (2). A couple of final remarks concerning possible extensions of the theorem: 1. We require the base ring \(D\) to be a division ring in order to ensure that all \(D\)-modules are free. We could merely require that all the modules appearing in the formulation of the theorem, i.e., \(V\), \(W\), \(\Pi\), \(\Gamma\), as well as all the modules appearing in the course of the proof, are free, without imposing any conditions on \(D\), but this would just lead to a cumbersome formulation without changing the essence of things. 2. The theorem can be easily extended to the graded case (thus providing a "finitary" analog of results from [R] and [Bal]), with essentially the same proof, which will keep track of the maps on each graded component separately. This is left as an exercise to the reader. Thanks are due to the anonymous referee for indicating an erroneous remark in the previous version of the manuscript.
2309.01479
Parameter and Computation Efficient Transfer Learning for Vision-Language Pre-trained Models
With ever increasing parameters and computation, vision-language pre-trained (VLP) models exhibit prohibitive expenditure in downstream task adaption. Recent endeavors mainly focus on parameter efficient transfer learning (PETL) for VLP models by only updating a small number of parameters. However, excessive computational overhead still plagues the application of VLPs. In this paper, we aim at parameter and computation efficient transfer learning (PCETL) for VLP models. In particular, PCETL not only needs to limit the number of trainable parameters in VLP models, but also to reduce the computational redundancy during inference, thus enabling a more efficient transfer. To approach this target, we propose a novel dynamic architecture skipping (DAS) approach towards effective PCETL. Instead of directly optimizing the intrinsic architectures of VLP models, DAS first observes the significances of their modules to downstream tasks via a reinforcement learning (RL) based process, and then skips the redundant ones with lightweight networks, i.e., adapters, according to the obtained rewards. In this case, the VLP model can well maintain the scale of trainable parameters while speeding up its inference on downstream tasks. To validate DAS, we apply it to two representative VLP models, namely ViLT and METER, and conduct extensive experiments on a bunch of VL tasks. The experimental results not only show the great advantages of DAS in reducing computational complexity, e.g. -11.97% FLOPs of METER on VQA2.0, but also confirm its competitiveness against existing PETL methods in terms of parameter scale and performance. Our source code is given in our appendix.
Qiong Wu, Wei Yu, Yiyi Zhou, Shubin Huang, Xiaoshuai Sun, Rongrong Ji
2023-09-04T09:34:33Z
http://arxiv.org/abs/2309.01479v3
# Parameter and Computation Efficient Transfer Learning for Vision-Language Pre-trained Models

###### Abstract

With ever-increasing parameters and computation, vision-language pre-trained (VLP) models exhibit prohibitive expenditure in downstream task adaptation. Recent endeavors mainly focus on parameter efficient transfer learning (PETL) for VLP models by only updating a small number of parameters. However, excessive computational overhead still plagues the application of VLPs. In this paper, we aim at _parameter and computation efficient transfer learning_ (PCETL) for VLP models. In particular, PCETL not only needs to limit the number of trainable parameters in VLP models, but also to reduce the computational redundancy during inference, thus enabling a more efficient transfer. To approach this target, we propose a novel _dynamic architecture skipping_ (DAS) approach towards PCETL. Instead of directly optimizing the intrinsic architectures of VLP models, DAS first observes the significances of their modules to downstream tasks via a reinforcement learning (RL) based process, and then skips the redundant ones with lightweight networks, _i.e._, adapters, according to the obtained rewards. In this case, the VLP model can well maintain the scale of trainable parameters while speeding up its inference on downstream tasks. To validate DAS, we apply it to a set of representative VLP models, and conduct extensive experiments on a set of VL tasks. The experimental results not only show the great advantages of DAS in reducing computational complexity, _e.g._, \(-11.97\%\) FLOPs of METER on VQA2.0, but also confirm its competitiveness against existing PETL methods in terms of parameter scale and performance. Our source code is given in [https://github.com/DoubtedSteam/DAS](https://github.com/DoubtedSteam/DAS).

## 1 Introduction

Inspired by the great success in natural language processing (NLP) [8; 19; 32; 36], large-scale pre-training on massive image-text pairs has also become the _de-facto_ standard in vision-language research [4; 24; 35; 65]. To accommodate the large-scale pre-training corpora, vision-language pre-trained (VLP) models [4; 70; 10; 24; 28; 35; 56; 74] often adopt Transformer-based networks with vast numbers of parameters and computations. In this case, directly transferring these VLP models to downstream tasks is excessively expensive in terms of memory footprint and computation overhead. To reduce the costs of pre-training models, recent advances resort to _parameter efficient transfer learning_ (PETL) for affordable downstream task adaptation [16; 13; 26; 37; 59; 72; 74; 75]. In particular, the PETL methods aim to save the memory usage for downstream tasks by only updating or inserting a small number of trainable parameters rather than fully tuning the entire model. For instance, prompt-tuning methods [1, 5, 26, 36, 37, 50, 52, 74, 75] expand the input sequence with hand-crafted or learnable tokens to bridge the gap between pre-training and downstream tasks. Practitioners also insert lightweight neural networks called _Adapters_ [13, 20, 46, 47, 59, 72, 41] into the pre-trained models, thereby projecting the hidden features onto the semantic space of downstream tasks. More recently, these PETL methods have been successfully introduced to VLP models [13, 33] for either prompt-based image classification [26, 52, 74, 75] or conventional VL tasks like _visual question answering_ [58, 59, 43].
Despite the great successes, PETL methods still cannot reduce the computation complexity of VLP models, which is of greater significance for practical applications. In this paper, we study a novel problem called _parameter and computation efficient transfer learning_ (PCETL). To achieve more efficient downstream task adaptation, PCETL is not only expected to maintain the scale of trainable parameters similar to PETL, but more importantly, also needs to reduce the computation complexity of pre-training models, thereby speeding up their inference on downstream tasks. In existing works, the efficiency of the network itself is largely attributed to its manual [6, 22, 44, 54] or automatic [62, 66, 78, 45] structural design. Although the computation complexity can be further reduced by compression methods, such as _pruning_ [67, 49, 3, 31, 7, 12, 30, 9, 64, 69, 55], _quantization_ [11, 29, 73] or _distillation_ [2, 25], these approaches usually require retraining after optimizing the network architecture, which is not applicable to the VLP models that are well pre-trained on massive data. On one hand, the large-scale pre-training data still requires a certain model capacity to capture the prior knowledge, so it is hard to obtain a good trade-off between performance and computation overhead for the pre-training objectives. On the other hand, devising a small and effective model for each downstream task is still laborious and expensive, which also contradicts the target of PETL [20, 36], since full fine-tuning is often required. In this case, we argue that the key to PCETL is to explore the parameter and computation redundancy in existing VLP models. It is generally assumed that the model scale is proportional to the complexity of the task [77, 1, 18, 37, 76]. To robustly serve a variety of downstream tasks, VLP models are pre-trained with multiple pre-training objectives based on tens of millions of image-text pairs [14, 51, 52, 57]. In this case, the excessive parameters are suitable for pre-training, but prone to be redundant for a single downstream task. As shown in Fig. 1-(a), the performance of METER [10] on VQA is barely affected when skipping a certain number of its Transformer layers. This empirical result also suggests that exploring a short-cut pathway in existing VLP models is a feasible route to PCETL. To this end, we propose a novel _Dynamic Architecture Skipping_ (DAS) approach towards efficient transfer learning for VLP models. By observing the module redundancy of VLP models, DAS can realize the optimal subnetwork routing of VLP models for a downstream task, thereby reducing the computation during inference. In practice, DAS regards this process as a \(k\)-armed bandit problem, and evaluates the importance of each VL layer/block to the downstream task via numerous subnetwork samplings and quick validations. Thus, the obtained rewards can be used to reflect the redundancy of VL modules and determine which layers to skip. Meanwhile, to achieve parameter efficiency, we also adopt lightweight networks, _i.e._, Adapters [20; 59], to serve the hidden feature adaptations and the short-cut connections of DAS for VLP models.

Figure 1: (a) The performance of METER [10] is barely affected when skipping a certain number of its Transformer layers. (b) The comparison on VQA2.0 between the conventional PETL methods [17, 21, 26, 36, 59] and the proposed _Dynamic Architecture Skipping_ (DAS) for METER. The circle size represents the memory footprint. DAS is the only method faster than the original VLP model.
To validate DAS, we apply it to a set of VLP models, namely METER [10], ViLT [28] and LaVIN [42]2, on three VL benchmarks, namely VQA2.0 [14], NLVR2 [57] and Flickr30K [51]. The experimental results not only show the competitive performance of DAS against full fine-tuning and the PETL methods [17; 21; 59; 26], but also witness its great advantage in reducing the computation complexity of VLP models. For instance, DAS can help METER achieve \(96.60\%\) of the fully tuned performance on the VQA2.0 benchmark with only \(1.65\%\) trainable parameters, while decreasing FLOPs by \(11.97\%\). For the practical deployment of a specific VL task, DAS can remove up to \(93.75\%\) of the parameters of the VLP models 3. These results well confirm our assumption about the redundancy of VLP models on downstream tasks, and also validate the design of the proposed DAS. Footnote 2: Due to the page limit, the results of LaVIN are given in our Github project. Footnote 3: When the model is only deployed for a single task, the skipped layers can also be removed during deployment.

Overall, our contributions can be summarized as three-fold:

* We raise a new problem called _parameter and computation efficient transfer learning_ (PCETL) for VLP models, which requires not only keeping the scale of trainable parameters small but also reducing the computation complexity of VLP models on downstream tasks.
* We propose a novel _Dynamic Architecture Skipping_ (DAS) approach for PCETL, which can explore the optimal short-cut pathway in VLP models in combination with parallel adapters.
* On two VLP models and three benchmark datasets, the proposed DAS not only reduces the computation overhead to a large extent, _e.g._, \(-11.97\%\) FLOPs of METER on VQA2.0, but is also on par with existing PETL methods in terms of parameter scale and performance.

## 2 Related Work

### Vision-Language Pre-training

In recent years, the advancement in natural language processing (NLP) [32; 36] has also sparked the prevalence of large-scale pre-training in vision-language (VL) research [4; 10; 24; 28; 35; 56; 74]. In particular, VL pre-training also accomplishes self-supervised learning on massive image-text pairs based on generative prediction tasks, _e.g._, _masked language modeling_ (MLM) and _masked image modeling_ (MIM). Furthermore, _Image-Text Matching_ (ITM) [10; 28] is also applied to align the two modalities. In terms of network architecture, most VLP models are equipped with two modality encoders to extract the text and visual features, _e.g._, BERT [8] and Faster-RCNN [53], respectively, based on which a stack of Transformer-based layers is deployed for cross-modal fusion [4; 10; 23; 24; 28; 34; 38; 39; 56; 61; 63]. For instance, ViL-BERT [39] and LXMERT [61] contain two independent Transformer-based branches [63] for region and text feature extraction, and another Transformer block is used for cross-modal interaction. To simplify the framework, Visual-BERT [34], VL-BERT [56] and UNITER [4] abandon the additional branches and directly feed features into a single Transformer network. Then Pixel-BERT [24], CLIP-ViL [38], and METER [10] break the limitation of object-detection backbones by directly applying grid features for multi-modal pre-training. To further reduce model complexity, ViLT [28] directly feeds the word embeddings and image patches into the Transformer blocks. Additionally, CLIP [52] applies cross-modal contrastive learning for vision-language alignment with a shallow fusion layer for prediction.
Overall, these VLP models often require more network branches due to the increase of modalities, resulting in larger parameter and computation overheads. In this paper, we present the first attempt to evaluate their network redundancy on downstream tasks.

### Parameter Efficient Transfer Learning

Parameter Efficient Transfer Learning (PETL) [13; 17; 20; 21; 46; 47; 48; 58; 59; 60; 71; 72] aims to approach the fully tuned performance on downstream tasks by updating only a small number of parameters. One of the main methodologies in PETL is prompt-tuning [1; 5; 26; 36; 37; 50; 52; 74; 75], which was originally designed for large pre-trained language models such as GPT-3 [1]. Concretely, hand-crafted prompts [50; 52] expand the original input sequence with natural language and regard all problems as a generation task. To better fit downstream tasks, soft prompt-tuning methods [26; 36; 75] replace the hand-crafted prompts with a sequence of trainable embeddings. In addition to prompt-tuning, adapter-based methods [13; 20; 46; 47; 59; 72] insert lightweight feed-forward networks into VLP models, and transfer VLP models by projecting hidden features onto the downstream distributions [20; 59]. Furthermore, LoRA [21] is proposed to transfer VLP models without additional calculation overhead in the inference stage by updating low-rank components of the original parameters. Besides, Side-tuning [71] runs in parallel with the pre-trained models to adapt to downstream tasks while overcoming the constraints of the concrete structure. In addition, LST [58] stacks the outputs of the pre-trained modules in a parallel path. Without feedback to the VLP model, LST alleviates the memory requirement of the transfer while increasing the computation overhead. Compared to fine-tuning the entire model, PETL methods significantly improve the efficiency of transferring VLP models to downstream tasks. However, all of the above methods take the original VLP model as the upper bound of inference efficiency. In this paper, the proposed DAS method is the first to reduce the computation of VLP models while maintaining competitive performance. In terms of computation efficiency, network compression methods can also reduce the computation overhead, but they often require fully tuning the model on the downstream tasks, such as LayerDrop [12], EfficientVLM [64] and J2C [9]. This setting makes them conflict with the parameter-efficiency target of PCETL.

## 3 Preliminary

We first revisit the principle of PETL methods for VLP models. Given a vision-language pre-trained (VLP) model, denoted as \(G(\cdot)\), the target of PETL is to achieve parameter-efficient adaptation on the downstream task, which can be summarized as \[\operatorname*{argmin}_{\sigma}\mathcal{L}\big{(}G(I,T|[\mathbf{\theta},\mathbf{ \sigma}])\big{)}, \tag{1}\] where \(\mathbf{\theta}=\{\theta_{1},\theta_{2},..,\theta_{n}\}\) represents the parameters of the \(n\) layers in the VLP model, and \(\mathbf{\theta}\) is usually frozen in PETL. \((I,T)\) denotes the image-text pair, and \(\mathbf{\sigma}\) is a small number of updated parameters. Since all VLP layers are retained on downstream tasks, PETL methods can only reduce the parameter expenditure but not the computation of VLP models. Moreover, most PETL methods often incur non-negligible latency during inference [21; 40]. According to the observation in Fig. 1-(a), there exists obvious redundancy in the VLP models.
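To make Eq. (1) concrete, the following is a generic PyTorch-style sketch of PETL-style adaptation — freezing the backbone parameters \(\mathbf{\theta}\) and training only a small set of inserted parameters \(\mathbf{\sigma}\). The backbone, dimensions, and adapter design here are illustrative assumptions, not the exact pipeline of any cited method:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Lightweight bottleneck: down-project, ReLU, up-project (h << d)."""
    def __init__(self, d: int, h: int = 96):
        super().__init__()
        self.down, self.up = nn.Linear(d, h), nn.Linear(h, d)

    def forward(self, x):
        return self.up(torch.relu(self.down(x)))

# theta: a frozen pre-trained backbone (a stand-in for a VLP encoder)
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12), num_layers=6)
for p in backbone.parameters():
    p.requires_grad = False

# sigma: the only trainable parameters, one adapter per layer
adapters = nn.ModuleList([Adapter(768) for _ in range(6)])

trainable = sum(p.numel() for p in adapters.parameters())
frozen = sum(p.numel() for p in backbone.parameters())
print(f"trainable: {trainable / 1e6:.2f}M vs frozen: {frozen / 1e6:.2f}M")
```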
To this end, the objective of the proposed task of _parameter and computation efficient transfer learning_ (PCETL) can be defined by \[\operatorname*{argmin}_{\mathbf{\sigma},\mathbf{K}}\mathcal{L}\big{(}G(I,T|[\mathbf{ \theta}_{\mathbf{K}},\mathbf{\sigma}])\big{)}, \tag{2}\] where \(\mathbf{\theta_{K}}=\{\theta_{k_{1}},\theta_{k_{2}},...,\theta_{k_{m}}\}\in\mathbf{\theta}\) are the parameters of the VLP modules except the skipped ones. By skipping the redundant layers in VLP models and combining PETL methods, VLP models can accelerate their inference speed while maintaining the scale of updated parameters.

Figure 2: Illustration of _Dynamic Architecture Skipping_ (DAS). DAS regards the network skipping as a \(k\)-armed bandit problem, and evaluates the redundancy of each VL layer/block via numerous subnetwork samplings. The accumulated rewards are used to determine which layers can be skipped, and adapters are also used for feature adaptations and short-cut connections.

## 4 Dynamic Architecture Skipping

### Redundancy Estimation

In this paper, we propose a novel transfer learning approach called _Dynamic Architecture Skipping_ (DAS) towards the parameter and computation efficient adaptation of VLP models. DAS first observes the model redundancy with respect to downstream tasks before skipping the layers of VLP models. In practice, we regard this process as a \(k\)-armed bandit problem, as illustrated in Fig. 2. Firstly, we define the degree of redundancy as \(\mathbf{r}\in\mathbb{R}^{n}\), where \(n\) is the number of VLP modules. To correctly estimate the redundancy, we equally initialize \(\mathbf{r}_{i}=0\) and update it incrementally during training. In each training step, we skip \(m\) modules according to uniform distributions parameterized by \(\mathbf{r}\), and train the sampled architectures on the downstream data. For the \(t\)-th step, the action policy \(\pi_{i}^{(t)}\) for the \(i\)-th VLP module follows the distribution: \[\pi_{i}^{(t)}\sim U(0,\rho(\mathbf{r}_{i}^{(t)})), \tag{3}\] where \(U(a,b)\) is the uniform distribution between \(a\) and \(b\), and \(\rho\) denotes the sigmoid function. We randomly pick a probability from \(\pi_{i}^{(t)}\) of each module as the score \(s_{i}^{(t)}\). According to the score \(s_{i}^{(t)}\), the sampled subnetwork can be defined by \[\begin{split} G_{s}&=g_{0}\circ g_{1}\circ... \circ g_{n},\\ where& g_{i}=\left\{\begin{array}{l}\theta_{i},i \in\{j|s_{j}^{(t)}<s_{m}^{(t)}\},\\ \sigma_{i},i\in\{j|s_{j}^{(t)}\geq s_{m}^{(t)}\}.\end{array}\right.\end{split} \tag{4}\] Here, \(g_{i}\circ g_{i+1}\) represents the compositional function \(g_{i}(g_{i+1}(\cdot))\), \(\theta_{i}\) denotes the original VL module, and \(\sigma_{i}\) is a lightweight module like an adapter for the short-cut connection. \(s_{m}^{(t)}\) is the \(m\)-th largest value among the picked scores, so the \(m\) modules with the largest scores are skipped. Here, a module with a larger \(\mathbf{r}_{i}^{(t)}\) is more likely to be skipped during training. Meanwhile, Eq. 4 also helps \(\sigma_{i}\) learn pre-trained knowledge from \(\theta_{i}\) in a distillation-like manner [68]. Then, DAS observes the redundancy of VLP modules in a reinforcement learning manner, as shown in Fig. 2. DAS samples \(c\) candidate network structures and calculates their rewards according to their loss values during validation, _i.e._, reward \(v=e^{-loss}\). Based on the rewards, the degree of redundancy \(\mathbf{r}\) can be updated by \[\mathbf{r}_{i}^{(t+1)}=\mathbf{r}_{i}^{(t)}+(v_{h}-\frac{1}{c}\sum_{j=1}^{c} v_{j}).
\tag{5}\] Here, \(v_{h}\) denotes the reward of the sampled subnetwork in which the \(i\)-th module is skipped. When its validation loss is smaller than the mean value, i.e., its reward is above average, it suggests that this skipped module is more redundant. Eq. 5 is applied at short training intervals to make sure that most subnetworks are sufficiently validated via numerous samplings, so that the law of large numbers supports the reliability of the search results. The detailed search procedure of DAS is illustrated in Algorithm 1. Finally, according to the degree of redundancy \(\mathbf{r}\), we can select the top-\(m\) layers to be skipped, thereby reducing the computation complexity of VLP models.

### Model Adapting

To reduce the scale of updated parameters during adaptation, DAS also introduces lightweight adapters [20; 59] to serve the hidden feature transformations as well as the short-cut connections in VLP models. Typically, an adapter is constructed by two linear projection layers with an activation function in between: \[adapter(\mathbf{x})=ReLU(\mathbf{x}\mathbf{W}_{in})\mathbf{W}_{out}. \tag{6}\] Here, \(\mathbf{W}_{in}\in\mathbb{R}^{d\times h}\) and \(\mathbf{W}_{out}\in\mathbb{R}^{h\times d}\) are two trainable matrices, where \(h\ll d\). For the \(i\)-th VLP module, the adaptation can be defined by \[\mathbf{x}_{i}=\mathbf{x}_{i-1}+VLP(\mathbf{x}_{i-1})+adapter(\mathbf{x}_{i- 1}), \tag{7}\] where \(\mathbf{x}_{i}\) is the output of the \(i\)-th component. In this manner, DAS can freeze most parameters in the VLP models during adaptation, similar to existing PETL methods [59]. Notably, directly removing the redundant modules would make the subsequent layers receive hidden features with drastic changes. Meanwhile, we do not want to fully tune the whole model. In this case, we apply the adapter to serve as the short-cut connection of the skipped layers: \[\mathbf{x}_{i}=\mathbf{x}_{i-1}+adapter_{r}(\mathbf{x}_{i-1}). \tag{8}\] In this way, DAS can not only bridge the gap between feature transformations, but also retain parameter efficiency. Based on the estimated redundancy, DAS skips the redundant modules and finds the optimal pathway for the downstream task with the help of adapters, as shown in Fig. 2.

## 5 Experiment

### Datasets and Experimental Setup

**Visual Question Answering**. We conduct experiments on VQA2.0 [14]. Instead of answering the question in open-ended natural language, the task is converted into a classification problem with \(3,129\) classes. Following the previous setting [10; 28], the PETL methods and DAS are trained on the train and validation sets of VQA2.0, and we report the _test-dev_ results from the online evaluation 4. Footnote 4: [https://eval.ai/web/challenges/challenge-page/830/overview](https://eval.ai/web/challenges/challenge-page/830/overview)

**Natural Language for Visual Reasoning**. NLVR\({}^{2}\) [57] is a dataset for classifying triplets of two images and a question into two classes. Because its form, with two images in one VL example, differs from the setup of VLP models, we feed these triplet examples to the model following the default settings of ViLT [28] and METER [10]. Under this setting, each of the paired images is input to the network together with the question, and the classifier predicts the result from the concatenation of the two representations.

**Retrieval Task**. For cross-modal retrieval, we measure the performance on Flickr30K [51] re-split by Karpathy _et al._ [27]. We initialize the predictor for similarity measurement from the pre-trained ITM head. During training, we randomly sample \(15\) instances as negative samples.
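Before turning to the experiments, the search procedure of Section 4 can be summarized in a minimal sketch of Eqs. (3)-(5). The validation loss below is a stand-in for a quick validation pass of the sampled subnetwork; the module count, hyper-parameters, and the set of "truly redundant" layers are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n, m, c = 12, 4, 8            # number of modules, layers to skip, candidates
r = np.zeros(n)               # degrees of redundancy, initialized to 0

def sample_skip_set():
    # Eq. (3): a score s_i ~ U(0, sigmoid(r_i)) per module;
    # Eq. (4): the m modules with the largest scores are skipped.
    s = rng.uniform(0.0, sigmoid(r))
    return set(np.argsort(s)[-m:])

def validation_loss(skip_set):
    # Stand-in for a quick validation pass of the sampled subnetwork,
    # where skipped modules are replaced by adapter short-cuts (Eq. (8)).
    # Here modules 0, 5 and 9 are (by assumption) harmless to skip.
    return rng.uniform(0.4, 0.6) - 0.02 * len(skip_set & {0, 5, 9})

for step in range(300):       # redundancy observation, Eq. (5)
    skips = [sample_skip_set() for _ in range(c)]
    v = np.array([np.exp(-validation_loss(s)) for s in skips])
    for h, skip_set in enumerate(skips):
        for i in skip_set:    # r_i grows when skipping i beats the average
            r[i] += v[h] - v.mean()

print("top-m layers to skip:", sorted(np.argsort(r)[-m:]))
```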
### Implementation details

We validate DAS on two deep-fusion-based VLP models, namely ViLT [28] and METER [10]. In terms of ViLT, we update the parameters of the additional components, i.e., the classifier, class token and modal-type embeddings, while the rest are frozen. Following the most conventional setting [17; 59], the width of the hidden states in the adapters is set to \(96\), and the hidden dimension of the adapter used for the skip connection is set to \(192\) to retain a certain capacity. The VLP model is first warmed up for one epoch. In this epoch, the subnetwork is randomly sampled according to the skipped number \(m\). Then the search runs for \(2\) epochs, and the redundancy observation is executed every \(10\) steps. Finally, the optimal architecture is trained for another \(10\) epochs. Notably, the validation set is used during training for all methods. In terms of METER, we split each fusion layer into two modules, _i.e._, the vision and language ones, which are skipped separately. The hidden dimension of the adapter used as the skip connection is set to \(192\) for the encoders and \(288\) for the fusion layers. The rest of the settings are the same as for ViLT. We conduct all experiments with a single NVIDIA Tesla A100 GPU, and the settings not mentioned are the same as ViLT [28] and METER [10].

### Experimental results

#### 5.3.1 Quantitative analysis

**Comparison with PETL methods.** We first compare DAS with a set of PETL methods [17; 21; 26; 36; 59] on the VLP models, whose results are given in Tab. 1. Here, **the suffix of DAS** denotes the number of skipped layers, and **"_fusion_"** and **"_global_"** refer to the range of network skipping, _i.e._, only the fusion branch or the complete model. From Tab. 1, the first observation is that existing PETL methods can largely reduce the number of updated parameters for VLP models. For instance, the prompt-based methods only require about 1M parameters for the two VLP models, nearly 300 times fewer than full tuning. Meanwhile, their performance gap to full tuning is also marginal, _e.g._, Scaled PA [17]. However, we can also see that none of these PETL methods can reduce the computation of VLP models, and some of them incur obvious increases in FLOPs, _e.g._, \(+28.71\)G by Shallow Prompt [36] on METER. The most computation-efficient one is LoRA [21], which applies the re-parameterization technique to merge the additional modules into the VLP model, taking no extra computation. However, its performance obviously lags behind the other PETL methods and our DAS. In stark contrast, DAS is the only method that can reduce the computation overhead on the downstream VL tasks, _e.g._, \(-11.16\)G FLOPs by DAS\({}_{4}\)-Fusion on VQA. More importantly, its updated parameter scale is only slightly larger than those of Adapter and Scaled PA, while the overall performance is still competitive. These results well confirm the effectiveness of DAS towards PCETL.

\begin{table}
\begin{tabular}{l|c|cc|cc|cc|cc}
\hline\hline
\multirow{2}{*}{\textbf{Method}} & \textbf{Updated} & \multicolumn{2}{c|}{\textbf{VQA}} & \multicolumn{2}{c|}{\textbf{NLVR\({}^{2}\)}} & \multicolumn{2}{c|}{\textbf{Flickr30K}} & \multicolumn{2}{c}{\textbf{Avg.}} \\
 & \textbf{Param.} & test-dev & \(\Delta\)FLOPs & test-P & \(\Delta\)FLOPs & IR/TR R@1 & \(\Delta\)FLOPs & Per. & \(\Delta\)FLOPs \\
\hline
\multicolumn{10}{c}{\textbf{METER}} \\
\hline
Full Tuning & 323.31M & 77.43 & 0.00 & 83.05 & 0.00 & 82.22/94.30 & 0.00 & 84.25 & 0.00 \\
Classifier Only & -- & 69.93 & 0.00 & 73.23 & 0.00 & 78.80/89.00 & 0.00 & 77.74 & 0.00 \\
Shallow Prompt [36] & 0.30M & 68.51 & +28.71G & 65.69 & +26.84G & 74.20/88.60 & +28.71G & 74.25 & +28.71G \\
Deep Prompt [26] & 1.84M & 70.78 & +6.53G & 72.64 & +5.59G & 78.84/89.40 & +6.53G & 77.92 & +6.53G \\
LoRA [21] & 0.29M & 74.00 & 0.00 & 78.82 & 0.00 & 79.86/92.60 & 0.00 & 81.32 & 0.00 \\
Adapter [59] & 5.34M & 74.70 & +1.64G & 79.93 & +1.38G & 80.38/91.90 & +1.64G & 81.73 & +1.64G \\
Scaled PA [17] & 3.59M & \textbf{75.11} & +1.12G & 80.38 & +0.66G & 80.40/\textbf{93.20} & +1.12G & \textbf{82.27} & +1.12G \\
DAS\({}_{4}\)-Fusion & 6.23M & 74.80 & \textbf{-11.16G} & 80.11 & \textbf{-5.13G} & 80.12/91.80 & \textbf{-11.16G} & 81.71 & \textbf{-9.15G} \\
DAS\({}_{4}\)-Global & 6.23M & 75.09 & -4.51G & \textbf{80.69} & -3.67G & \textbf{80.42}/91.40 & -6.06G & 81.90 & -4.74G \\
\hline
\multicolumn{10}{c}{\textbf{ViLT}} \\
\hline
Full Tuning & 115.43M & 71.26 & 0.00 & 76.13 & 0.00 & 64.40/83.50 & 0.00 & 73.82 & 0.00 \\
Classifier Only & -- & 65.75 & 0.00 & 66.08 & 0.00 & 57.42/86.00 & 0.00 & 66.81 & 0.00 \\
Shallow Prompt [36] & 0.15M & 66.47 & +19.53G & 66.47 & +19.53G & 55.92/74.80 & +19.53G & 65.92 & +19.53G \\
Deep Prompt [26] & 1.84M & 69.30 & +5.14G & 73.34 & +5.14G & 58.64/79.50 & +5.14G & 70.20 & +5.14G \\
LoRA [21] & 0.15M & 68.44 & 0.00 & 72.77 & 0.00 & 57.44/77.70 & 0.00 & 69.09 & 0.00 \\
Scaled PA [17] & 1.80M & 70.40 & +0.44G & 75.13 & +0.44G & 61.88/79.00 & +0.44G & 71.60 & +0.44G \\
Adapter [59] & 3.56M & \textbf{70.85} & +0.86G & \textbf{75.51} & +0.86G & \textbf{62.68/81.40} & +0.86G & \textbf{72.61} & +0.86G \\
DAS\({}_{1}\) & 3.56M & 69.28 & \textbf{-1.03G} & -- & -- & -- & -- & -- & -- \\
\hline\hline
\end{tabular}
\end{table} Table 1: Comparison between DAS and the PETL methods on METER (top) and ViLT (bottom).

**Ablation of the number of skipped layers.** In Tab. 2, we report the results of skipping different numbers of layers by DAS. In terms of METER, we can first observe that skipping a few layers has limited impact on its performance, _e.g._, skipping up to 8 fusion layers causes only about a \(2.2\%\) performance drop, strongly suggesting the redundancy of this VLP model. However, DAS can only reduce about one layer for ViLT without notable performance degradation. To explain, METER is a deep VLP model with two independent encoders to extract the features of the image and text, while ViLT directly processes the multi-modal information from image pixels and word embeddings. In this case, ViLT is more compact than METER for downstream tasks, and this is also reflected in their parameter scales and performance. In terms of METER, we can also see the difference between optimizing the fusion branch and the entire model, _i.e._, DAS-Fusion and DAS-Global. With the same number of skipped layers, DAS-Fusion can reduce more FLOPs, since the multi-modal input sequence is often longer than the single-modal one. Meanwhile, when searching over the entire model, DAS often tends to skip layers in the language encoder 5, which also suggests that natural language understanding is often the less difficult part of these VL tasks. Overall, these results well confirm our motivation about the redundancy of VLP models in downstream VL tasks, especially for models with independent encoders like METER. Footnote 5: The detailed results are given in our Appendix.

**Reliability of DAS.** Considering that DAS is an RL-based search approach, we also examine its stability and reliability by comparing it to random skipping, whose results are given in Fig. 3. It can be seen that DAS is consistently better than random skipping regardless of the number of skipped layers, well confirming its effectiveness. In particular, when the number of skipped layers increases, the performance deviation of random skipping becomes much more obvious, _e.g._, \(\pm 1.33\) for skipping 8 layers. Instead, DAS remains more stable and superior to the random solution, which also suggests the importance of correct redundancy estimation. Overall, these results further confirm the effectiveness and reliability of the proposed DAS.

**Inference Efficiency.** We further compare the actual inference efficiency of DAS and the other methods. The computation overhead during both the training and testing stages is reported in Tab. 3. We can first observe that the PETL methods, _i.e._, LoRA, Adapter and Scaled PA, significantly reduce the computational burden during the training stage. However, during testing, these methods lose their efficiency advantage. For instance, Scaled PA is \(5.04\%\) slower than the fully tuned model. In contrast, the proposed DAS enhances efficiency in both phases. Specifically, DAS takes a computation overhead similar to the PETL methods in the training stage, while improving inference speed by \(19.23\%\). Overall, these results well confirm the effectiveness of the proposed DAS in the PCETL task.
\begin{table}
\begin{tabular}{l|c|cc|cc|cc}
\hline\hline
\multirow{2}{*}{\textbf{Method}} & \textbf{Skipped} & \multicolumn{2}{c|}{\textbf{VQA}} & \multicolumn{2}{c|}{\textbf{NLVR\({}^{2}\)}} & \multicolumn{2}{c}{\textbf{Avg.}} \\
 & \textbf{Number} & test-dev & \(\Delta\)FLOPs & test-P & \(\Delta\)FLOPs & Per. & \(\Delta\)FLOPs \\
\hline
\multicolumn{8}{c}{\textbf{METER}} \\
\hline
Baseline & 0 & 75.28 & +1.68G & 81.28 & +0.99G & 78.28 & +1.34G \\
DAS-Fusion & 2 & 74.92 & -9.06G & 80.07 & -2.66G & 77.30 & -5.86G \\
DAS-Fusion & 4 & 74.80 & -11.16G & 80.14 & -4.14G & 77.46 & -7.65G \\
DAS-Fusion & 6 & 74.67 & -17.58G & 78.17 & -9.97G & 76.42 & -13.78G \\
DAS-Fusion & 8 & 73.70 & -24.00G & 79.30 & -11.45G & 76.50 & -17.72G \\
DAS-Global & 2 & 75.24 & -3.96G & 81.37 & -2.19G & 78.31 & -3.08G \\
DAS-Global & 4 & 75.13 & -4.51G & 81.34 & -3.67G & 78.24 & -4.09G \\
DAS-Global & 6 & 75.02 & -5.06G & 80.04 & -4.22G & 77.53 & -4.64G \\
DAS-Global & 8 & 74.95 & -5.61G & 79.61 & -8.34G & 77.28 & -6.97G \\
\hline
\multicolumn{8}{c}{\textbf{ViLT}} \\
\hline
Baseline & 0 & 70.13 & +0.73G & 76.26 & +0.73G & 73.20 & +0.73G \\
DAS & 1 & 69.28 & -1.03G & 74.29 & -1.03G & 72.09 & -1.03G \\
DAS & 2 & 67.64 & -2.79G & 73.00 & -2.79G & 70.32 & -2.79G \\
\hline\hline
\end{tabular}
\end{table} Table 2: Ablation study of different numbers of skipped layers by DAS. "Fusion" denotes the skipping is only for the fusion branch, while "Global" refers to the entire METER.

Figure 3: The comparison between DAS and random skipping for METER on the VQA2.0.

\begin{table}
\begin{tabular}{l|c|c|cc|cc}
\hline\hline
\multirow{2}{*}{\textbf{Method}} & \textbf{VQA} & \textbf{Additional} & \multicolumn{2}{c|}{\textbf{Training}} & \multicolumn{2}{c}{\textbf{Testing}} \\
 & test-dev & \textbf{FLOPs} & Memory (G) & Time (h) & Memory (G) & Speed (samples/s) \\
\hline
Full Tuning & 77.43 & 0 & \(\geq\)40 & N/A & 6.8 & 4.16 \\
LoRA & 74.00 & 0 & 21.5 & 27 & 6.8 & 4.16 (+0.00\%) \\
Adapter & 74.70 & +1.64G & 22.9 & 28 & 7.2 & 4.09 (-1.68\%) \\
Scaled PA & 75.11 & +1.12G & 23.1 & 30 & 7.1 & 3.95 (-5.04\%) \\
DAS\({}_{4}\)-Global & 75.09 & -4.51G & 21.7 (search) / 20.6 (train) & 10 (search) + 20 (train) & 6.5 & 4.57 (+9.35\%) \\
DAS\({}_{4}\)-Fusion & 74.80 & -11.16G & 21.7 (search) / 18.1 (train) & 10 (search) + 18 (train) & 6.5 & 4.96 (+19.23\%) \\
\hline\hline
\end{tabular}
\end{table} Table 3: Computation overhead of different methods for METER on the VQA2.0.

#### 5.3.2 Qualitative analysis

To obtain deeper insight into DAS, we also visualize the changes of its redundancy estimates during the search, as depicted in Fig. 4. From these plots, we can observe several patterns of DAS across tasks. The first is that the most redundant layers emerge quickly during DAS, especially when the number of skipped layers is small, _e.g._, Fig. 4.a. Meanwhile, for some uncertain layers, their redundancies can be gradually determined after a short period of oscillation. We can also see that the numbers of skipped language and visual layers are similar on VQA and NLVR. However, the preference for which layers to skip still differs between the two tasks, see Fig. 4.b and Fig. 4.c. In Fig. 5, we visualize the subnetworks of METER searched by DAS, which are also the final results of Fig. 4. As discussed above, the preferences for the hidden layers of METER are still different for the two tasks.
In terms of VQA, DAS tends to discard the language modules at the middle level while skipping the visual ones at the top and bottom of the network. In contrast, the language layers skipped on NLVR are all top ones. These search results can somewhat reflect the properties of the two tasks. For instance, VQA is a reasoning task, so it focuses more on the high-level semantics of the input text, while NLVR needs a detailed comparison between two images and one sentence, so the last few language layers may be more redundant for this task. Overall, Fig. 4 and Fig. 5 not only confirm the feasibility of DAS towards PCETL, but also yield some interesting findings about the downstream adaptation of VLP models.

Figure 4: The change of the redundancies of METER's fusion layers. The horizontal axis shows the progress of training and the vertical axis represents the degree of redundancy based on Eq. 5. F_Lang and F_Visual denote the language and visual modules in the fusion branch.

Figure 5: The optimal subnetworks of METER searched by DAS. Here, the green modules are trainable adapters while the blue ones are frozen during adaptation.

## 6 Limitation

Currently, DAS has two main limitations. First, it still needs the number of layers to skip to be set manually. In our future work, we will introduce computation or hardware constraints to DAS for more automatic network skipping. Second, DAS only regards the complete Transformer layers in VLP models as skipping candidates, limiting its potential in pathway routing. In the future, we will extend the search space with more fine-grained components, such as MHA and FFN.

## 7 Conclusion

In this paper, we propose a new problem for vision-language pre-trained (VLP) models termed _parameter and computation efficient transfer learning_ (PCETL). Existing transfer learning solutions for VLP models, _e.g._, the PETL ones, can only save the parameter expenditure during downstream task adaptation, while the excessive computation remains an unsolved problem. To this end, we propose a novel approach called _Dynamic Architecture Skipping_ (DAS) towards effective PCETL. DAS can observe the redundancies of VLP modules with respect to downstream tasks via a reinforcement learning based process, and then skip the redundant ones to speed up inference. Meanwhile, DAS also adopts lightweight adapters to serve the hidden feature adaptations and the short-cut connections, thereby reducing the scale of trainable parameters. On two VLP models and three VL tasks, DAS not only shows a great superiority in reducing computation, but is also on par with the PETL methods in terms of parameter overhead and performance.

## 8 Acknowledgements

This work was supported by the National Key R&D Program of China (No. 2022ZD0118201), the National Science Fund for Distinguished Young Scholars (No. 62025603), the National Natural Science Foundation of China (No. U21B2037, No. U22B2051, No. 62176222, No. 6217622, No. 62176226, No. 62072386, No. 62072387, No. 62072389, No. 62002305 and No. 62272401), the Natural Science Foundation of Fujian Province of China (No. 2021J01002, No. 2022J06001) and the China Fundamental Research Funds for the Central Universities (Grant No. 20720220068).
2308.01036
Analysing QBER and secure key rate under various losses for satellite based free space QKD
Quantum Key Distribution is a key distribution method that uses qubits to safely distribute one-time-use encryption keys between two or more authorised participants in a way that ensures the identification of any eavesdropper. In this paper, we have compared the BB84 and B92 protocols, as well as the entanglement-based BBM92 and E91 protocols, for satellite-based uplink and downlink in low Earth orbit. The expressions for the quantum bit error rate and the keyrate are given for all four protocols. The results indicate that, when compared to the B92 protocol, the BB84 protocol guarantees the distribution of a higher secure keyrate for a specific distance. Similarly, it is observed that BBM92 ensures a higher keyrate in comparison with the E91 protocol.
Muskan, Ramniwas Meena, Subhashish Banerjee
2023-08-02T09:28:50Z
http://arxiv.org/abs/2308.01036v2
# Analysing QBER and secure key rate under various losses for satellite based free space QKD

###### Abstract

Quantum Key Distribution is a key distribution method that uses qubits to safely distribute one-time-use encryption keys between two or more authorised participants in a way that ensures the identification of any eavesdropper. In this paper, we have compared the BB84 and B92 protocols, as well as the entanglement-based BBM92 and E91 protocols, for satellite-based uplink and downlink in low Earth orbit. The expressions for the quantum bit error rate and the keyrate are given for all four protocols. The results indicate that, when compared to the B92 protocol, the BB84 protocol guarantees the distribution of a higher secure keyrate for a specific distance. Similarly, it is observed that BBM92 ensures a higher keyrate in comparison with the E91 protocol. _Keywords--_ Quantum cryptography, Quantum communication, Free space communication

## 1 Introduction

The most developed use of quantum communication is quantum cryptography [1, 2, 3, 4, 5], which offers a completely secure coding mechanism. In order to provide secure quantum optical communication, quantum key distribution (QKD) uses qubits to safely distribute one-time-use encryption keys between two or more authorised participants in a way that ensures the identification of any eavesdropper. Heisenberg's uncertainty principle and the quantum no-cloning principle provide assurance for QKD security [6, 7]. Essentially, a solution to the key distribution problem is devised by leveraging these physical characteristics of the information carrier to prevent eavesdropping. Any information obtained by an illegitimate third party about the exchanged key leads to a corresponding increase in the quantum bit error rate (QBER) of the transmitted data. The concept of QKD was initially proposed in [8]. In [9], the first implementation of free space QKD over a 30 cm optical link was demonstrated. Since then, significant research efforts have been dedicated to developing this technology for future optical communication systems that support the secure transmission of critical information. While early experimental setups were capable of sending quantum signals over distances of up to 100 km [10] using optical fiber links, the propagation limitations of optical fibers restrict QKD over fibers to only a few hundred kilometers [11]. However, using free space links offers the potential to extend these distances further [12, 13]. Free space links leverage low absorption in specific wavelength ranges and non-birefringent characteristics, ensuring the preservation of polarization. To fully harness the advantages of free space communication, satellites are ideal [14]. By utilizing satellites in Earth-satellite links, the path within the atmosphere can be reduced to approximately 30 km (depending on satellite elevation). Establishing a network of satellites would enable a physically secure global communication network, thereby significantly expanding the range of QKD capabilities. Satellite-based quantum communication is crucial and effective in the construction of a worldwide network [15, 16, 17, 18, 19, 20, 21, 22, 23]. Free space optical (FSO) communication is the focus of this satellite-based quantum communication [24]. Free space QKD under atmospheric turbulence must be taken into account for the successful implementation of satellite-based quantum communication [25, 26, 27, 28].
For earth-to-space quantum communication to be possible, the link attenuation must be less than 60 dB; quantum communication is not practical above this value. The various Earth-space scenarios have the following link distances (L): Ground-LEO and LEO-ground link distances are 500-1400 km, ground-GEO and GEO-ground link distances are above 36,000 km, and the LEO-LEO (intersatellite) link distance is 2000 km. The LEO-GEO link distance is 35,500 km, and the GEO-GEO link distance is 40,000 km. Ground-MEO and MEO-ground link distances are 10000-30000 km, the LEO-MEO link distance is 14000 km, the MEO-MEO (intersatellite) link distance is also around 14000 km, and the MEO-GEO link distance is around 15000 to 28000 km [29]. Despite the huge success of QKD's commercial applications, quantum communication still requires additional research to address problems with security, data rate, and communication distance [24, 30, 31, 32, 33]. The process of generating a secret key using QKD involves five essential steps: authentication [34, 35], single photon transmissions, sifting, error correction, and privacy amplification [36]. Initially, a randomly generated raw key is transmitted over the quantum channel to create the secret key information. Following this, the key information is exchanged over a public channel, resulting in the acquisition of the sifted key. Subsequently, the error correction and privacy amplification steps are employed. The purpose of the error correction step is twofold: it corrects any errors in the received information bits and provides an estimation of the error rate. On the other hand, privacy amplification is implemented to distill a shorter yet significantly more secure final key. When assessing the effectiveness of different QKD systems, two important criteria are considered: the QBER and the keyrate. The QBER serves as an indicator of security and is crucial for evaluating the performance of the link after error correction. If an unauthorized third party gains any knowledge about the exchanged key, it leads to an increase in the QBER. A higher QBER means that the eavesdropper can gather more information about the transmitted key, compromising the security of the legitimate recipient. It is important to note that higher QBER values in QKD systems lower the keyrate during the error correction stage of the protocol. Therefore, a low QBER and a higher keyrate are desirable in order to ensure effective and secure communication. These are the parameters on which we have performed the comparison between the well-known BB84 and B92 protocols, along with the entanglement-based BBM92 and E91 protocols. We have calculated the QBER and keyrate for both the uplink and downlink scenarios, considering different types of losses. The paper is structured as follows: Sec. 2 provides a concise analysis of the BB84, B92, BBM92, and E91 protocols. In Sec. 3, we review different types of losses and their impact on the QBER for various protocols. Next, in Sec. 4, we introduce and analyze the QBER for different protocols. Sec. 5 focuses on the keyrate calculations for different protocols and examines blinding attacks specifically targeting entanglement-based protocols. In Sec. 6, we present numerical results and discussions. The paper concludes with a summary of the key points.

## 2 QKD Protocols

QKD protocols are the foundation of secure communication enabled by quantum mechanics.
These protocols establish a shared secret key between two parties, typically referred to as Alice and Bob, while ensuring that any eavesdropping attempts are detectable. Several QKD protocols have been developed, each with its unique approach to key generation and security guarantees. Here we briefly discuss the QKD protocols used in this work:

### BB84

The BB84 protocol is a QKD protocol proposed in [8]. Its purpose is to enable two parties, traditionally named Alice and Bob, to securely generate and share a secret cryptographic key that can be used for subsequent communication. The protocol works by encoding information in quantum states, typically using photons as the carriers of information. Alice sends a sequence of photons to Bob, with each photon randomly prepared in one of four possible states, which are represented by two different pairs of orthogonal polarizations (e.g., horizontal/vertical or diagonal/anti-diagonal). Bob receives the photons and measures their polarization using a randomly chosen basis, either horizontal/vertical or diagonal/anti-diagonal. Due to the uncertainty principle of quantum mechanics, any attempt to eavesdrop on the communication would unavoidably introduce errors in the polarization measurements, allowing Alice and Bob to detect the presence of an eavesdropper. After the transmission is complete, Alice and Bob compare a subset of their measurement results to check for errors and potential eavesdropping. They discard any bits that have errors, and the remaining bits form a shared secret key that can be used for subsequent communication, such as encrypting and decrypting messages.

### B92

The B92 protocol is another QKD protocol [37]. The B92 protocol is a simplified version of the BB84 protocol, requiring fewer quantum states to transmit information. Alice sends a sequence of photons to Bob, with each photon randomly prepared in one of two possible states, represented by two non-orthogonal polarizations. Bob randomly chooses to measure the photons in either the horizontal/vertical or diagonal/anti-diagonal basis. If Bob measures a photon with the same polarization as Alice sent, he obtains a bit value of "0", while if he measures a photon with the orthogonal polarization, he obtains a bit value of "1". If Bob measures a photon with the same polarization as Alice sent, but with a different intensity, he does not obtain any bit value. After the transmission is complete, Alice and Bob publicly disclose the basis used for each photon transmission. Alice and Bob compare a subset of their measurement results to check for errors and potential eavesdropping. They discard any bits that have errors, and the remaining bits form a shared secret key that can be used for subsequent communication. The B92 protocol is simpler and faster than the BB84 protocol, but it is less secure, as an eavesdropper can obtain information by manipulating the photon intensity without being detected. Despite this limitation, the B92 protocol remains an important contribution to the field of quantum cryptography, as it illustrates the potential for quantum information processing with a minimal set of resources.

### E91

The E91 protocol is a QKD protocol based on the idea of entanglement, developed in [38]. The procedure operates as follows: a source distributes maximally entangled pairs of qubits to Alice and Bob, for example, states of the type \[|\psi^{-}\rangle_{AB}=\frac{1}{\sqrt{2}}(|01\rangle_{AB}-|10\rangle_{AB}).
\tag{1}\] Alice and Bob measure an observable that is randomly selected from the sets \(\{A_{i}\}\) and \(\{B_{i}\}\), respectively, for each of these bipartite states \(|\psi^{-}\rangle_{AB}\). Figure 1 shows these observables, which are spin components located in the Bloch sphere's \(x\)-\(z\) plane. In general, these operators are defined as \[A_{i}=\cos\varphi_{i}^{A}\,Z+\sin\varphi_{i}^{A}\,X \tag{2}\] \[B_{i}=\cos\varphi_{i}^{B}\,Z+\sin\varphi_{i}^{B}\,X \tag{3}\] with \(\varphi_{1}^{A}=0\), \(\varphi_{2}^{A}=\frac{\pi}{2}\), and \(\varphi_{3}^{A}=\frac{\pi}{4}\) for Alice and \(\varphi_{1}^{B}=0\), \(\varphi_{2}^{B}=\frac{-\pi}{4}\), \(\varphi_{3}^{B}=\frac{\pi}{4}\) for Bob. In terms of the measurement operators Z = \(|0\rangle\langle 0|-|1\rangle\langle 1|\) and X = \(|+\rangle\langle+|-|-\rangle\langle-|\), the measurements can also be written as \[\begin{array}{cc}A_{1}=Z&B_{1}=Z\\ A_{2}=X&B_{2}=\frac{1}{\sqrt{2}}(Z-X)\\ A_{3}=\frac{1}{\sqrt{2}}(Z+X)&B_{3}=\frac{1}{\sqrt{2}}(Z+X).\end{array} \tag{4}\] The measurements of Alice and Bob, \(A_{1}\) and \(B_{1}\) as well as \(A_{3}\) and \(B_{3}\), are taken along the same directions. The next stage is for Alice and Bob to reveal the measurement directions they decided upon. The pairs \((A_{1},B_{1})\) and \((A_{3},B_{3})\) are examples where the directions match, and they produce totally anti-correlated outputs. The results of these measurements thus constitute the sifted key, after inverting all bits on one side. To determine how much knowledge an eavesdropper has about the key, the results from the measurement pairs \((A_{1},B_{3})\), \((A_{1},B_{2})\), \((A_{2},B_{3})\), and \((A_{2},B_{2})\) are employed. This is accomplished by examining the so-called CHSH inequality. The CHSH inequality is a limit on the expectation values of certain classical correlations. It is a component of a wider collection of inequalities known as Bell inequalities. Assume you have four classical random variables \(A_{1}\), \(A_{2}\), \(B_{2}\), and \(B_{3}\), and that each of them can take either \(+1\) or \(-1\) as its value. It is simple to confirm that \(A_{1}(B_{3}+B_{2})+A_{2}(B_{3}-B_{2})=\pm 2\) by simply ruling out all other options. By taking the expectation value of these quantities over \(N\) assignments of the random variables, we get \[|\langle A_{1}(B_{3}+B_{2})+A_{2}(B_{3}-B_{2})\rangle|\leq 2, \tag{5}\] where \(\langle A_{i}B_{j}\rangle=\frac{1}{N}\sum A_{i}^{\nu}B_{j}^{\nu}\), and \(A_{i}^{\nu}\) and \(B_{j}^{\nu}\) represent the values assigned to the random variables \(A_{i}\) and \(B_{j}\) in the \(\nu\)-th trial. We can now consider \(A_{1},A_{2},B_{2},B_{3}\) to be quantum observables as described in the Ekert protocol. The expectation value for their products is then given by \[\langle A_{i}B_{j}\rangle=Tr(A_{i}\otimes B_{j}\rho). \tag{6}\]

Figure 1: Measurement directions for the Ekert protocol. The measurements are depicted in the \(x\)-\(z\) plane of the Bloch sphere. On the left side are the three different measurements that Alice can choose between, and on the right side Bob's possible measurement directions are shown.

Using the measurement directions defined in the Ekert protocol, we can evaluate their expectation values with respect to the state \(\rho=|\Psi^{-}\rangle\langle\Psi^{-}|\). For instance, the expectation value of \(A_{1}\) and \(B_{3}\) is \[\langle A_{1}B_{3}\rangle=\langle\Psi^{-}|(Z\otimes\frac{1}{\sqrt{2}}(Z+X))| \Psi^{-}\rangle=-\frac{1}{\sqrt{2}}. \tag{7}\] In this way we can evaluate all the terms in the sum of expectation values \(S\) and find that \[S=2\sqrt{2}. \tag{8}\] This indicates that Alice and Bob share a maximally entangled state, because it violates the CHSH inequality derived above. A maximally entangled bipartite state cannot be entangled with a third party; hence, Eve is unaware of the key in this situation. In actuality, this is the greatest possible value of \(S\). The CHSH inequality can generally be violated even with smaller values of \(S\), in which case Eve might know something about the key. As long as there is some violation of the CHSH inequality, it is still possible to extract a secret key from the data. It is impossible to create a secret key, as demonstrated in [39], if \(S\leq 2\), which indicates that Alice and Bob share separable states. If their measurement results pass the test, Alice and Bob can continue with the protocol and use privacy amplification along with error correction to obtain the final secret key.
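The violation \(S=2\sqrt{2}\) can be checked numerically. Below is a small didactic sketch with explicit \(2\times 2\) operators for the singlet state — an illustration of Eqs. (4)-(8), not part of the protocol itself:

```python
import numpy as np

# Pauli operators and the singlet state |psi-> = (|01> - |10>)/sqrt(2)
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

A1, A2 = Z, X                                        # Alice's settings
B2, B3 = (Z - X) / np.sqrt(2), (Z + X) / np.sqrt(2)  # Bob's settings

def corr(A, B):
    # <psi| A (x) B |psi>, i.e., Eq. (6) with rho = |psi-><psi-|
    return psi @ np.kron(A, B) @ psi

S = corr(A1, B3) + corr(A1, B2) + corr(A2, B3) - corr(A2, B2)
print(abs(S), "==", 2 * np.sqrt(2))   # |S| = 2*sqrt(2) > 2, violating Eq. (5)
```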
### BBM92

BBM92 is another entanglement-based protocol, first proposed in 1992 [40]. The protocol makes use of pairs of entangled particles, known as EPR pairs, which are distributed between Alice and Bob. The BBM92 protocol works as follows. Alice creates a set of EPR pairs and sends one particle from each pair to Bob. Bob randomly measures each incoming particle using one of two bases: the standard basis (Z-basis) or the Hadamard basis (X-basis). Alice announces which basis she used for each particle. Bob discards all the measurements that were made in the wrong basis and keeps the rest. Alice and Bob publicly compare a subset of their remaining measurements to estimate the error rate. They then use the remaining measurements to generate a secret key by applying a reconciliation procedure and a privacy amplification procedure. Alice and Bob can now use their secret key for secure communication. The BBM92 protocol is secure against eavesdropping because any attempt to measure the particles in transit will disturb the entanglement and be detected by Alice and Bob. Additionally, the use of entangled pairs ensures that any attempt to clone the particles will also be detected.

## 3 Different types of losses

In QKD, various types of losses can occur during the transmission and processing of quantum information. These losses can have an impact on the performance and security of the QKD system. We describe here various losses, such as geometrical loss, optical loss, and atmospheric losses including turbulence loss and scattering loss. These losses are considered when calculating the QBER and key rate, and are factored into the evaluation to determine their influence on the system's performance and security.

### Geometrical loss

Geometrical loss is a measure of the reduction in signal power as it propagates from a transmitter to a receiver. It is determined by the ratio of the surface area of the receiver aperture to the surface area of the transmitter beam at the receiver [41, 42]. The surface area of the transmitter beam is affected by its divergence, which causes it to spread out as it travels through space. Therefore, geometrical loss is primarily influenced by the divergence of the beam and the distance between the transmitter and receiver.
The geometrical loss can be determined by the formula: \[\text{Geometrical loss}=10\log\left[\frac{d_{r}^{2}}{\left(d_{t}+\left(L\theta \right)\right)^{2}}\right], \tag{9}\] where \(d_{r}\) is the diameter of the receiver (in m), \(d_{t}\) is the diameter of the transmitter (in m), \(\theta\) is the divergence angle of the beam (in mrad), and \(L\) is the distance between the transmitter and receiver (in km, so that \(L\theta\) is in m).

### Turbulence Loss-Induced Scintillation Loss for Uplink and Downlink

Scintillations are characterized by sudden and rapid changes in the phase and amplitude of a wave. These fluctuations occur due to the local and rapid variations in the refractive index of the medium through which the wave is traveling. When laser radiation propagates through a turbulent medium, it experiences both temporal and spatial fluctuations in irradiance, which are referred to as scintillation or atmospheric-turbulence-induced irradiance fluctuations [43]. This phenomenon is a consequence of the random and irregular changes in the refractive index of the atmosphere that the laser beam encounters during propagation. The scintillation index is the most commonly used measure to quantify the magnitude of scintillation: \[\sigma_{I}^{2}=\frac{\langle I^{2}\rangle-\langle I\rangle^{2}}{\langle I \rangle^{2}}=\frac{\langle I^{2}\rangle}{\langle I\rangle^{2}}-1, \tag{10}\] where \(I\) signifies the optical wave's irradiance and \(\langle\,\rangle\) stands for the ensemble average, which is equivalent to the long-time average if the process is assumed to be ergodic. The scintillation index in weak fluctuation theory is proportional to the Rytov variance given by: \[\sigma_{1}^{2}=1.23C_{n}^{2}k^{7/6}L^{11/6}, \tag{11}\] where \(C_{n}^{2}\) measures the turbulence strength, \(k=\frac{2\pi}{\lambda}\) is the optical wave number, and \(L\) is the path length between the communication transmitter and receiver. The value of \(C_{n}^{2}\) is assumed to remain constant for horizontal paths up to a few kilometres, whereas for a downlink (space-to-ground) or uplink (ground-to-space) the altitude-dependent profile \(C_{n}^{2}(h)\) has to be used. For vertical or slant paths, the so-called Hufnagel-Valley model is generally regarded as representative of continental conditions [44].

#### 3.2.1 Hufnagel-Valley Model

\[C_{n}^{2}(h)=0.00594\left(\frac{v}{27}\right)^{2}(10^{-5}h)^{10}\exp\left(- \frac{h}{1000}\right)+2.7\times 10^{-16}\exp\left(-\frac{h}{1500}\right)+A \exp\left(-\frac{h}{100}\right), \tag{12}\] where \(A=C_{n}^{2}(0)\) is the ground-level value of \(C_{n}^{2}\) and \(v\) is the rms wind speed. For our numerical calculations, we have assumed \(v=21\,m/s\) and \(A=1.7\times 10^{-14}\) for the \(C_{n}^{2}(h)\) model. The scintillation loss of the transmission system (in dB) is given by [45]: \[a_{sci}=[3.3-5.77(-\ln p_{thr})^{\frac{1}{2}}](\sigma_{I}^{2}(D))^{0.4}, \tag{13}\] where \(p_{thr}\) is the probability that the received power falls below the threshold, and \(\sigma_{I}^{2}(D)\) is the aperture-averaged scintillation index; this is the factor through which the uplink and downlink cases differ.
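As an illustrative sketch of Eqs. (11)-(13) — using, for simplicity, the unaveraged plane-wave Rytov variance in place of the aperture-averaged index, with assumed link parameters:

```python
import numpy as np

def cn2_hv(h, v=21.0, A=1.7e-14):
    """Hufnagel-Valley turbulence profile C_n^2(h), Eq. (12); h in metres."""
    return (0.00594 * (v / 27.0) ** 2 * (1e-5 * h) ** 10 * np.exp(-h / 1000.0)
            + 2.7e-16 * np.exp(-h / 1500.0)
            + A * np.exp(-h / 100.0))

def rytov_variance(cn2, wavelength, L):
    """Plane-wave Rytov variance, Eq. (11); wavelength and L in metres."""
    k = 2.0 * np.pi / wavelength
    return 1.23 * cn2 * k ** (7.0 / 6.0) * L ** (11.0 / 6.0)

def scintillation_loss_db(sigma2, p_thr=1e-6):
    """Scintillation loss term of Eq. (13), in dB."""
    return (3.3 - 5.77 * np.sqrt(-np.log(p_thr))) * sigma2 ** 0.4

# Illustrative horizontal 1 km link at 850 nm with ground-level turbulence
sigma2 = rytov_variance(cn2_hv(0.0), 850e-9, 1000.0)
print(f"Rytov variance: {sigma2:.3f}")
print(f"a_sci: {scintillation_loss_db(sigma2):.1f} dB")
```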
For downlink, the aperture-averaged scintillation index for a plane wave is given by: \[\sigma_{I}^{2}(D)=\exp\left[\frac{0.49\sigma_{1}^{2}}{\left(1+0.65d^{2}+1.11\sigma_{1}^{12/5}\right)^{7/6}}+\frac{0.51\sigma_{1}^{2}(1+0.69\sigma_{1}^{12/5})^{-5/6}}{1+0.90d^{2}+0.62d^{2}\sigma_{1}^{12/5}}\right]-1, \tag{14}\] where \(\sigma_{1}^{2}\) is the Rytov variance for a plane wave, given by Eq. (11) above. Also, \[d=\sqrt{\frac{Kd_{r}^{2}}{4L}}, \tag{15}\] where \(d_{r}\) is the receiver aperture diameter. For a slant path at zenith angle \(\theta\), the link reaches altitude \(H=h_{0}+L\cos(\theta)\), where \(L\) is the propagation length and \(h_{0}\) is the ground-station altitude. The other parameter, \(r_{0}\), is the atmospheric coherence length (or Fried parameter), defined by: \[r_{0}=\left[0.423K^{2}\sec(\theta)\int_{h_{0}}^{H}C_{n}^{2}(h)dh\right]^{-3/5}, \tag{21}\] where \(K\) is the optical wave number. The beam wander effect leads to an effective pointing error of the beam, \(\sigma_{Pe}\), given as [48]: \[\sigma_{Pe}^{2}=<r_{c}^{2}>\left[1-\left(\frac{C_{r}^{2}W_{0}^{2}/r_{0}^{2}}{1+C_{r}^{2}W_{0}^{2}/r_{0}^{2}}\right)^{1/6}\right]. \tag{22}\] In the above equation, the parameter \(C_{r}\) is a scaling constant, typically in the range from 1 to \(2\pi\). The pointing error induced by beam wander contributes to an increase in the scintillation index, which differs from the prediction of conventional Rytov theory [49]. In the first-order Rytov theory, the scintillation index, denoted as \(\sigma_{I}^{2}\), is expressed as the sum of longitudinal \(\sigma_{I,l}^{2}\) and radial \(\sigma_{I,r}^{2}(r,L)\) components. However, when accounting for the beam wander effect in FSO uplink scenarios, the increase in \(\sigma_{I,l}^{2}(L)\) should be considered, because beam wander can cause a slight flattening of the beam and an increase in the long-term beam profile near the bore-sight. The scintillation produced by the beam wander effect is expressed as: \[\sigma_{I,l}^{2}(L)=5.95(H-h_{0})^{2}\sec^{2}(\theta)\left(\frac{2W_{0}}{r_{0}}\right)^{5/3}\left(\frac{\alpha_{Pe}}{W}\right)^{2}. \tag{23}\] In the above equation, \(\alpha_{Pe}=\sigma_{Pe}/L\) is the angular pointing error due to the beam wander effect.

### Scattering loss

The primary factors contributing to loss in the atmospheric channel are absorption and scattering processes [50]. Absorption in the atmosphere occurs when the photons of the beam interact with dispersed particles such as water vapor, dust, ice, and organic molecules. Atmospheric absorption depends on the wavelength of the beam; to minimize absorption effects, the wavelength range of FSO communication systems is carefully selected. Scattering refers to the phenomenon where a beam of radiation is dispersed into various directions due to physical interactions. Rayleigh scattering occurs when molecules and atmospheric gases scatter light, their sizes being significantly smaller than the wavelength of the incident light. Mie scattering occurs when the diameter of the particles is equal to or larger than one-tenth of the wavelength of the incident laser beam. In the context of FSO at terrestrial altitudes, Mie scattering is the primary cause of attenuation at the laser wavelengths used. Fog and haze droplets, which dominate the Mie-scattering effect, are responsible for the significant attenuation of transmitted optical beams in free space. Non-selective scattering refers to the scattering caused by rainfall, where the radius of the raindrops is considerably larger than the wavelength of typical FSO systems.
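Before moving on to scattering, the turbulence quantities above (Eqs. 14-15 and 21-23) can be collected in a similar sketch. This is a simplified illustration under the same caveats as before; the \(C_{n}^{2}\) profile is passed in as a callable (e.g., the hufnagel_valley_cn2 helper from the previous sketch), and the beam radii \(W_{0}\) and \(W\) are inputs the user must supply.

```python
import numpy as np

def downlink_scint_index(sigma1_sq, K, d_r, L):
    """Aperture-averaged plane-wave scintillation index (Eqs. 14-15)."""
    d2 = K * d_r**2 / (4.0 * L)              # d^2 from Eq. (15)
    s125 = sigma1_sq**(12.0 / 5.0)
    term1 = 0.49 * sigma1_sq / (1.0 + 0.65 * d2 + 1.11 * s125)**(7.0 / 6.0)
    term2 = (0.51 * sigma1_sq * (1.0 + 0.69 * s125)**(-5.0 / 6.0)
             / (1.0 + 0.90 * d2 + 0.62 * d2 * s125))
    return np.exp(term1 + term2) - 1.0

def fried_parameter(K, zenith, h0, H, cn2_profile, n=2000):
    """Atmospheric coherence length r0 (Eq. 21); cn2_profile is a
    callable C_n^2(h), integrated numerically from h0 to H."""
    h = np.linspace(h0, H, n)
    integral = np.trapz(cn2_profile(h), h)
    return (0.423 * K**2 * integral / np.cos(zenith))**(-3.0 / 5.0)

def uplink_beam_wander_scint(H, h0, zenith, W0, W, r0, sigma_pe, L):
    """Longitudinal scintillation induced by beam wander (Eq. 23),
    with angular pointing error alpha_pe = sigma_pe / L."""
    alpha_pe = sigma_pe / L
    return (5.95 * (H - h0)**2 / np.cos(zenith)**2
            * (2.0 * W0 / r0)**(5.0 / 3.0) * (alpha_pe / W)**2)
```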
Because the radius of the raindrops is much larger than the laser wavelength, the beam passes through raindrops or such particles with minimal scattering; hence the attenuation is mainly due to Mie scattering. The measurement of atmospheric visibility provides a valuable indication of the prevailing environmental conditions in the atmosphere. Visibility is the distance a parallel luminous beam can travel through the atmosphere until its intensity falls to 2% of its original value. This measurement is conducted at a wavelength of 550 nm, which corresponds to the wavelength to which the human eye is most sensitive. Mie scattering theory can be utilized to predict the attenuation caused by fog. The specific attenuation of fog is given by the common empirical model for Mie scattering: \[\beta_{fog}(\lambda)=\frac{3.91}{V}\left(\frac{\lambda}{550}\right)^{-P}, \tag{24}\] where V (km) stands for the visibility range, \(\lambda\) (nm) is the operating wavelength, and P is the size-distribution coefficient of scattering. According to the Kruse model [51], \[P=\left\{\begin{array}{cc}1.6&V>50\,km\\ 1.3&6\,km<V<50\,km\\ 0.585V^{1/3}&V<6\,km\end{array}\right\}. \tag{25}\] The formula above for the specific attenuation of fog does not account for dense fog conditions. Recent investigations have revealed that the wavelength dependency vanishes when visibility decreases below 500 m. As a result, the parameter "P" in the Kim formula [51] has been adjusted accordingly to accommodate these findings. The modified formula is as follows: \[P=\left\{\begin{array}{cc}1.6&V>50\,km\\ 1.3&6\,km<V<50\,km\\ 0.16V+0.34&1\,km<V<6\,km\\ V-0.5&0.5\,km<V<1\,km\\ 0&V<0.5\,km\end{array}\right\}. \tag{26}\] The atmospheric transmittance is \(T_{a}=\exp(-\sigma L)\); in terms of the fog scattering coefficient, this can be written as: \[T_{a}=\exp(-\beta_{fog}L). \tag{27}\]

### Optical loss

The optical losses in FSO systems are primarily attributed to imperfections in the optical elements utilized at the transmitter (\(\eta_{t}\)) and receiver (\(\eta_{r}\)). The losses are expressed in decibels (dB) and can be calculated using the formula described in [52]: \[L_{opt}=10\log(\eta_{t}\eta_{r}). \tag{28}\]

## 4 QBER

The QBER is the ratio of incorrect bit counts to the total number of received bit counts. It quantifies the probability of obtaining a false detection relative to the total probability of detection per pulse. The QBER is influenced by two main components: the signal component and the dark-count component. It is assumed that the channel is an exponentially decaying function of distance; thus, the channel transmission \(T_{chan}\) can be written as \[T_{chan}=10^{-\sigma L/10}, \tag{29}\] where \(\sigma\) is the loss coefficient, which includes all the types of losses calculated above.

### QBER for BB84

In the BB84 protocol, the QBER can be calculated as [53]: \[e_{84}=p_{pol}+\frac{p_{dark}}{2\,\mu\,T_{chan}\,\eta_{det}}, \tag{30}\] where \(p_{pol}\) is the probability of a photon arriving at the wrong detector, leading to a false identification of the polarization, given by \(p_{pol}=\frac{1-V_{f}}{2}\) with \(V_{f}\) the fringe visibility; \(p_{dark}\) is the probability of a dark count registered in a detector; \(T_{chan}\) is the transmittance of the free-space channel; \(\mu\) is the mean photon number of the signal; and \(\eta_{det}\) is the quantum efficiency of the detector.

### QBER for B92

The B92 protocol utilizes basis states to determine the code values. This results in 50% of cases using the same basis for coding and decoding, and 50% detecting differences in the used bases.
The number of usable bits thus equals 25% of the total received bits. The QBER for the B92 protocol is given by [53]: \[e_{92}=p_{pol}+\frac{p_{dark}}{\mu\,T_{chan}\,\eta_{det}}. \tag{31}\] Here \(p_{dark}\) is the probability of a dark count, \(\mu\) is the average number of photons in a pulse, \(T_{chan}\) is the transmittance of the channel, \(\eta_{det}\) is the quantum efficiency of the detector, and \(p_{pol}\) is the probability of a photon arriving at the wrong detector, with \(p_{pol}=\frac{1-V_{f}}{2}\).

### QBER for BBM92

The QBER for BBM92 depends on various factors, such as the properties of the quantum channel, the quality of the detectors used, and the presence of any eavesdroppers. In the assumed model, the channel is characterized as an exponential decay function with respect to distance. As a result, the transmission of the channel, denoted as \(T_{chan}\), can be expressed as \[T_{chan}=10^{-\sigma L/10}, \tag{32}\] where \(\sigma\) is the loss coefficient. We lump all losses to each receiver (from the channel, the detectors, and the optics) into one beam splitter with transmission \[\alpha_{L}=\eta_{det}\,T_{chan}(L). \tag{33}\] The parameter \(\eta_{det}\) represents the cumulative effect of the distance-independent losses within the system. The coincidence probability is divided into two components: \(p_{true}\), the probability of a genuine coincidence between a pair of entangled photons, and \(p_{false}\), the probability of a false coincidence. For an ideal source, false coincidences can only arise from a combination of a photon and a dark count or of two dark counts [11]. When dual-fire events are insignificant, the following expression holds in this limit: \[p_{coin}=p_{true}+p_{false}. \tag{34}\] We must choose a location for the source. Setting the source at a distance of \(L-x\) from Bob and \(x\) from Alice, we get \[p_{true}=\alpha_{x}\alpha_{L-x}=\eta_{det}\alpha_{L}, \tag{35}\] \[p_{false}=4\alpha_{x}d+4\alpha_{L-x}d+16d^{2}. \tag{36}\] Keeping only terms which are second order in \(\alpha_{x}\) and \(d\), it can be observed that the probability of a true coincidence remains constant with respect to \(x\), while the false coincidence rate changes. A straightforward optimization shows that the false coincidence rate reaches its minimum when the source is placed halfway between Alice and Bob. This minimum false coincidence rate is \[p_{false}=8\alpha_{L/2}\;d+16\;d^{2}. \tag{37}\] The QBER is given by \[e_{M92}=\frac{p_{false}/2+b\;p_{true}}{p_{coin}}. \tag{38}\]

### QBER for E91

The QBER for E91 is given by [11]: \[e_{91}=\frac{p_{false}/3+b\;p_{true}}{p_{coin}}, \tag{39}\] where the \(\frac{1}{3}\) factor appears because in E91 we use three bases, while in BBM92 we use two, hence the \(\frac{1}{2}\) factor in Eq. (38).

## 5 Keyrate

The key rate, in the context of QKD, is the rate at which a secure cryptographic key can be generated and shared between two communicating parties, typically referred to as Alice and Bob. It quantifies the speed at which error-free key bits can be securely exchanged, and determines the practicality and effectiveness of a QKD system. The key rate is measured in bits per second (bps) and is influenced by factors such as the quality of the transmitted quantum states, the detection efficiency, channel losses, and potential eavesdropping attempts.
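Before turning to the individual key rates, the QBER expressions assembled above (Eqs. 30, 31 and 37-39) can be collected into a short Python sketch. This is illustrative only: \(b\) is used here for the baseline error parameter appearing in Eqs. (38)-(39), and the numbers in the example call are assumptions.

```python
import numpy as np

def t_chan(sigma_db_per_km, L_km):
    """Channel transmission of Eqs. (29)/(32); sigma in dB/km."""
    return 10.0 ** (-sigma_db_per_km * L_km / 10.0)

def qber_bb84(p_pol, p_dark, mu, T, eta_det):
    """BB84 QBER, Eq. (30)."""
    return p_pol + p_dark / (2.0 * mu * T * eta_det)

def qber_b92(p_pol, p_dark, mu, T, eta_det):
    """B92 QBER, Eq. (31)."""
    return p_pol + p_dark / (mu * T * eta_det)

def qber_entangled(L_km, sigma_db_per_km, eta_det, d, b, n_bases=2):
    """BBM92 (n_bases=2, Eq. 38) or E91 (n_bases=3, Eq. 39) QBER,
    with the source halfway between Alice and Bob (Eq. 37)."""
    alpha_half = eta_det * t_chan(sigma_db_per_km, L_km / 2.0)
    alpha_full = eta_det * t_chan(sigma_db_per_km, L_km)
    p_true = eta_det * alpha_full                 # Eq. (35)
    p_false = 8.0 * alpha_half * d + 16.0 * d**2  # Eq. (37)
    p_coin = p_true + p_false                     # Eq. (34)
    return (p_false / n_bases + b * p_true) / p_coin

# Illustrative call (assumed numbers): 20 dB of total channel loss
T = t_chan(sigma_db_per_km=0.2, L_km=100.0)
print(qber_bb84(p_pol=0.01, p_dark=4e-8, mu=1.0, T=T, eta_det=0.5))
```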
The key rate serves as a benchmark for evaluating the effectiveness and practicality of QKD protocols. Its calculation uses the QBER, which includes all the types of losses discussed earlier. The QBER, denoted by \(e\), incorporates the effects of the various losses, including geometrical loss, optical loss, atmospheric losses such as turbulence loss and scattering loss, and the other factors discussed above that impact the security and performance of the QKD system. By accounting for these losses in the QBER, the key rate provides a comprehensive assessment of the utility and efficiency of QKD protocols.

### Keyrate for BB84

The secure key generation rate against the PNS attack for the BB84 protocol is given by [54]: \[R_{BB84}=\frac{1}{2}P_{click}(1-\tau^{\prime}+F(e_{84})h(e_{84})), \tag{40}\] with \[\tau^{\prime}=\tau\left(\frac{e_{84}}{\beta}\right), \tag{41}\] where \(\beta\) is a security parameter defined as \(\beta=\frac{P_{click}-P^{\prime}}{P_{click}}\) and \[P^{\prime}=1-\left(1+\mu+\frac{\mu^{2}}{2}+\frac{1}{2}\frac{\mu^{3}}{6}\right)\exp(-\mu). \tag{42}\] \(F(e_{84})\) is the error correction efficiency and \(\tau\) is the fraction of the key to be discarded during privacy amplification, with \(\tau(e_{84})=\log_{2}(1+4e_{84}-4e_{84}^{2})\) if \(e_{84}<1/2\) and \(\tau(e_{84})=1\) if \(e_{84}>1/2\). \(h(e_{84})\) is the binary Shannon entropy, given by \[h(e_{84})=-e_{84}\log_{2}(e_{84})-(1-e_{84})\log_{2}(1-e_{84}). \tag{43}\] In the context of the standard BB84 protocol, an essential parameter of interest is the system's signal, commonly referred to as \(P_{click}\). This quantity represents the overall anticipated probability that Bob will observe the detection of a photon during a specific pulse. Typically, \(P_{click}\) is determined by considering two distinct sources that can independently trigger a detection event: photons transmitted by Alice and background dark counts [55]. \[P_{click}=p_{signal}+p_{dark}-p_{signal}p_{dark}. \tag{44}\] The probability of Bob's detector firing due to a photon emitted by Alice's source is denoted as \(p_{signal}\), while \(p_{dark}\) represents the probability of a dark count occurring in Bob's detector. Since each of Bob's detectors has a given probability of a dark count per time slot in the absence of a real signal, the cumulative contribution of dark counts to the detection event is determined by the following relationship: \[p_{dark}=4d. \tag{45}\] The presence of dark counts in the detection process is primarily determined by the characteristics of the detectors; dark counts become more significant when \(p_{signal}\) is small. They can arise from various sources, such as thermal fluctuations in the detector and stray counts. The coefficient 4 in the equation above appears because there are four detectors in the passive module, so the total dark-count probability is four times the per-detector value \(d\). The dark count per measurement time window is in turn given by \[d=Dt_{w}, \tag{46}\] which relates it to the dark count rate of the detectors, \(D\), and the duration of the measurement time window, \(t_{w}\).
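Putting Eqs. (40)-(46) together, a minimal Python sketch of the BB84 rate might look as follows. It models \(p_{signal}=1-\exp(-\mu\eta_{tot})\), the form derived later in Eq. (54), evaluates Eq. (40) exactly as printed, and assumes an error-correction efficiency \(F(e)=1.16\) and a 10 MHz repetition rate purely for illustration.

```python
import numpy as np

def binary_entropy(e):
    """Binary Shannon entropy h(e), Eq. (43)."""
    return -e * np.log2(e) - (1 - e) * np.log2(1 - e)

def tau(e):
    """Privacy-amplification fraction tau(e) to be discarded."""
    return np.log2(1 + 4 * e - 4 * e**2) if e < 0.5 else 1.0

def p_click(mu, eta_tot, d):
    """Detection probability per pulse (Eqs. 44-45), with
    p_signal = 1 - exp(-mu * eta_tot) as in Eq. (54)."""
    p_signal = 1.0 - np.exp(-mu * eta_tot)
    p_dark = 4.0 * d
    return p_signal + p_dark - p_signal * p_dark

def rate_bb84(mu, eta_tot, d, e84, F=1.16, rep_rate=10e6):
    """BB84 secure rate per Eqs. (40)-(42), in bits per second."""
    pc = p_click(mu, eta_tot, d)
    P_prime = 1 - (1 + mu + mu**2 / 2 + 0.5 * mu**3 / 6) * np.exp(-mu)  # Eq. (42)
    beta = (pc - P_prime) / pc          # security parameter
    tau_prime = tau(e84 / beta)         # Eq. (41)
    return 0.5 * pc * (1 - tau_prime + F * binary_entropy(e84)) * rep_rate

# Illustrative parameters (assumptions): mu=0.1, 10 dB total loss, QBER 3%
print(rate_bb84(mu=0.1, eta_tot=0.1, d=4e-8, e84=0.03))
```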
It is important to note that Eq. (44) neglects simultaneous occurrences of signal and dark-count events when both \(p_{signal}\) and \(p_{dark}\) are small. Additionally, it is important to emphasize that QKD systems are implemented using either fiber-optic or free-space channels. As the channel of interest here, we consider the free-space link. The atmospheric channel is susceptible to a variety of undesirable transmission phenomena, including atmospheric scattering, absorption, and turbulence. These phenomena can result in photon losses during propagation, leading to a condition known as decoherence, which poses a significant challenge to successful free-space QKD. Therefore, in addition to the \(P_{click}\) parameter, the total transmission efficiency becomes another important figure of interest. In the case of a free-space channel with relatively high link loss, the contribution of the signal to the detection event is largely determined by the total transmission efficiency \(\eta_{tot}\), which can be formulated as: \[\eta_{tot}=T_{chan}\eta_{det}, \tag{47}\] where \(T_{chan}\) is the quantum channel transmission and \(\eta_{det}\) is the detector efficiency. Depending on the link scenario, \(T_{chan}\) can be \(A_{atm}^{GS-GS}\), \(A_{atm}^{GS-SL}\), or \(A_{atm}^{SL-SL}\). In the context of satellite-to-ground (downlink), the light signal emitted from the satellite travels through a relatively long vacuum distance before encountering the unpredictable and troublesome atmosphere. On the other hand, in the case of ground-to-satellite (uplink), the beam-spreading effect caused by turbulence occurs primarily in the initial part of the path. In contrast, in satellite-to-satellite links, turbulence does not occur at all. The atmospheric loss in a point-to-point link can be given by [55]: \[A_{atm}^{GS-GS}=\exp(-\sigma L), \tag{48}\] where \(\sigma\) denotes the attenuation coefficient of the light signal after passing through the atmosphere. The quantum channel transmission of both ground-to-satellite and satellite-to-ground links, excluding the point-to-point link, can be calculated as: \[A_{atm}^{GS-SL}=T_{chan}^{B_{\theta}}, \tag{49}\] where \(T_{chan}\) is the atmospheric transmission at zenith and the exponent \(B_{\theta}\) accounts for the zenith angle \(\theta\) of the link. In the scenario of a satellite-to-satellite link, where there is no atmosphere, the channel attenuation is \[A_{atm}^{SL-SL}=1. \tag{50}\] In general, photon sources are governed by the Poisson probability distribution. Utilizing the fact that laser pulses follow a Poissonian number distribution, the distribution of photon pulses can be mathematically represented as follows: \[P(n,\mu)=\frac{\mu^{n}}{n!}\exp(-\mu). \tag{51}\] The Poisson probability distribution \(P(n,\mu)\) characterizes the probability of finding \(n\) photons in a weak laser pulse emitted by the transmitter, where the parameter \(\mu\) represents the average number of photons per pulse. During communication, the transmitted photons encounter various disturbances and alterations in the channel, including reflection, absorption, and scattering. In order to understand how these undesirable channel characteristics affect the transmitted photon pulses, the binomial probability distribution is applied.
Accordingly, when photons propagate through the channel, the probability of registering at least one photon at the receiver can be expressed as \[P_{n\geq 1}=\sum_{k=1}^{n}C_{n}^{k}\,\eta_{tot}^{k}(1-\eta_{tot})^{n-k}=1-(1-\eta_{tot})^{n}. \tag{52}\] The quantum channel efficiency \(\eta_{Qchann}\) can then be calculated by combining Eqs. (51) and (52): \[\eta_{Qchann}=\sum_{n=1}^{\infty}P(n,\mu)P_{n\geq 1}=\exp(-\mu)\sum_{n=1}^{\infty}\frac{\mu^{n}}{n!}[1-(1-\eta_{tot})^{n}]=1-\exp(-\mu\eta_{tot}). \tag{53}\] Accordingly, \[p_{signal}=1-\exp(-\mu\eta_{tot}), \tag{54}\] so that \(P_{click}\) diminishes with increasing distance between the remote communicating parties. In the above expression, the symbol \(\mu\) represents the average number of photons per pulse. For an ideal single-photon source, \(\mu\) is equal to 1; for a Poisson source, \(\mu\) becomes a variable that requires optimization. The arrival of single-photon signals at Bob's detector site is governed by the overall probability \(\eta_{tot}\), reflecting the losses in the quantum channel; these single-photon signals, when detected, contribute to the detection process.

### Keyrate for B92

The secure key generation rate of the B92 protocol against the photon-number-splitting attack can be formulated as [54]: \[R_{B92}=\frac{1}{4}P_{click}\{(1-\tau^{\prime}+F(e_{92})h(e_{92}))\}. \tag{55}\] In B92, only 25% of the transmitted bits will be detected by Bob, i.e., only 25% of the raw key bits are kept; hence the sifting factor is \(\frac{1}{4}\). All the other expressions are defined in Sec. 5.1.

### Keyrate for BBM92

The keyrate for the BBM92 protocol against the double-blinding attack is given by [11]: \[R_{BBM92}=\frac{p_{coin}}{2}\{\tau(e_{M92})+f(e_{M92})(e_{M92}\log_{2}(e_{M92})+(1-e_{M92})\log_{2}(1-e_{M92}))\}, \tag{56}\] where \(\tau\) is the fraction of the key to be discarded during privacy amplification, \(f(e_{M92})\) is the error correction factor, and \(p_{coin}\) is the coincidence probability, already explained in Sec. 4.3.

#### 5.3.1 Blinding Attacks (Single and Double Blinding Attacks)

In the context of the BBM92 protocol, the existing blinding attacks are of the intercept-and-resend type. In this type of attack, a malicious entity, often referred to as Eve, intercepts the signal that was originally intended for Bob. Eve then performs measurements in random bases in order to obtain the raw key, just as Bob would have done in the intended communication process. To conceal her presence, Eve forwards a signal to Bob whenever she successfully obtains a measurement result. This signal ensures that Bob receives an identical outcome, while in the case of diagonal alignment no detection occurs at all. In practical implementations using QKD devices [56], Eve employs techniques to blind Bob's detectors to single-photon detection. She achieves this by forcing the detectors from Geiger mode into linear mode, where a detector only registers a click if the incoming signal intensity exceeds a preset discriminator threshold, denoted as \(I_{th}\). After each detection, Eve sends a bright pulse with linear polarization aligned to her own measurement result. When Eve and Bob randomly select identical measurement bases, the pulse deterministically generates a click in one of Bob's detectors.
This ensures that Bob's measurement outcomes match those of Eve, because the pulse is either fully reflected or fully transmitted at Bob's polarizing beamsplitter. However, to prevent double counting and incorrect results when Eve and Bob randomly select bases that are diagonal to each other, Eve adjusts the intensity of the pulses to be lower than twice the threshold intensity of the detectors. Consequently, such a pulse is split in half at Bob's polarizing beamsplitter, and the resulting output is insufficient to surpass the threshold and produce a click in either of Bob's detectors. The objective of the attack is for Eve to obtain an exact replica of Bob's key at the conclusion of the raw key distribution process. If Alice and Bob are sufficiently satisfied with the measured QBER on a subset of the key, Eve can eavesdrop on the error correction protocol that Alice and Bob employ. By performing the same operations as Bob during the error correction phase, Eve can successfully acquire an exact copy of the sifted key. One limitation of single-blinding attacks is that, on average, Bob's resulting key size is reduced by half compared to what he would have obtained without the attack. This reduction occurs because approximately half of the time the randomly chosen measurement bases of Eve and Bob turn out to be diagonal to each other, in which case Bob's detectors do not register any clicks. Therefore, the efficiency of this attack is, by design, fundamentally limited to 50% on Bob's side. The proposed double-blinding attack has a similar implementation to the single-blinding attack, with the key difference that Eve blinds all detectors on both sides instead of just Bob's. Under the double-blinding attack, Alice and Bob are unable to detect the presence of Eve, resulting in a complete elimination of information leakage; in other words, the measure of information leakage, denoted as \(\tau\), becomes zero in this scenario.

### Keyrate for E91

\[R_{E91}=\frac{p_{coin}}{3}\{\tau(e_{91})+f(e_{91})(e_{91}\log_{2}(e_{91})+(1-e_{91})\log_{2}(1-e_{91}))\}, \tag{57}\] where \(\frac{1}{3}\) is the sifting factor, because three bases are used in E91. The parameters used in Eq. (57) have been described above.

## 6 Numerical Results and Discussion

We have compared the BB84 and B92 protocols for both the uplink and downlink communication scenarios. Our focus is on the relationship between the channel length and two key metrics: the QBER and the keyrate. The simulated results, depicted in Fig. 2 and Fig. 3, showcase these dependencies for two different zenith angles, \(0^{\circ}\) and \(45^{\circ}\), for both the uplink and downlink scenarios. Further, we have compared the BBM92 and E91 protocols, which are entanglement-based, for both the uplink and downlink scenarios at the same two zenith angles, \(0^{\circ}\) and \(45^{\circ}\). The outcomes of this comparison are illustrated in Fig. 4 and Fig. 5 below.

Figure 3: (a) Comparison of keyrate for BB84 and B92 protocols with distance for uplink at \(0^{\circ}\) and \(45^{\circ}\) zenith angles, (b) Comparison of keyrate for BB84 and B92 protocols with distance for downlink at \(0^{\circ}\) and \(45^{\circ}\) zenith angles. The bold curves represent the \(0^{\circ}\) zenith angle and the dashed curves the \(45^{\circ}\) zenith angle.
Figure 2: (a) Comparison of QBER for BB84 and B92 protocols with distance for uplink at \(0^{\circ}\) and \(45^{\circ}\) zenith angles, (b) Comparison of QBER for BB84 and B92 protocols with distance for downlink at \(0^{\circ}\) and \(45^{\circ}\) zenith angles. The bold curves represent the \(0^{\circ}\) zenith angle and the dashed curves the \(45^{\circ}\) zenith angle.

Figure 4: (a) Comparison of QBER for BBM92 and E91 protocols with distance for uplink at \(0^{\circ}\) and \(45^{\circ}\) zenith angles, (b) Comparison of QBER for BBM92 and E91 protocols with distance for downlink at \(0^{\circ}\) and \(45^{\circ}\) zenith angles. The bold curves represent the \(0^{\circ}\) zenith angle and the dashed curves the \(45^{\circ}\) zenith angle.

Figure 5: (a) Comparison of keyrate for BBM92 and E91 protocols with distance for uplink at \(0^{\circ}\) and \(45^{\circ}\) zenith angles, (b) Comparison of keyrate for BBM92 and E91 protocols with distance for downlink at \(0^{\circ}\) and \(45^{\circ}\) zenith angles. The bold curves represent the \(0^{\circ}\) zenith angle and the dashed curves the \(45^{\circ}\) zenith angle.

### QBER Performance:

Based on the QBER expressions for the QKD link, we have performed numerical simulations for laser links between a ground station and a satellite in low Earth orbit. Our simulations were carried out at an operating wavelength of 800 nm, with an average photon number \(\mu=1\) for the BB84 and B92 protocols, and with an ideal entangled-photon source for the BBM92 and E91 protocols. The plots shown in Fig. 2 and Fig. 4 demonstrate that the QBER tends to increase as the distance between the ground station and the satellite increases. The QBER exhibits a rising trend with increasing distance for both the uplink and downlink scenarios. Moreover, when comparing the uplink and downlink scenarios, it is evident that the QBER values are consistently higher in the uplink for all the aforementioned protocols. As expected, the introduction of additional losses in the quantum channel leads to an increase in the QBER values; this relationship is clearly observed in Fig. 2 and Fig. 4 by varying the communication distance. We have calculated the QBER for \(0^{\circ}\) and \(45^{\circ}\) zenith angles and observed that the QBER is lower for the \(0^{\circ}\) zenith angle than for the \(45^{\circ}\) zenith angle. This difference can be attributed to the shorter path length associated with the \(0^{\circ}\) zenith angle, as opposed to the longer path at the \(45^{\circ}\) zenith angle. The reduced distance at the \(0^{\circ}\) zenith angle results in lower losses and improved signal quality, leading to a lower QBER. By analyzing the obtained QBER values, we observe that the BB84 protocol exhibits greater stability against the channel losses than the B92 protocol. Similarly, a comparison between the BBM92 and E91 protocols reveals that E91 has a higher QBER than BBM92.

### Keyrate Performance:

In Fig. 3 and Fig. 5, the keyrate is plotted as a function of distance. In our numerical analysis, different parameters are carefully considered, among which are the dark count rate of the detector, set to \(4\times 10^{-8}\), and the repetition rate of the laser source, \(10\,\)MHz. As previously stated, this repetition rate is chosen because it is the maximum achievable with the existing APD detectors available. A comparison between the BB84 and B92 protocols reveals that BB84 exhibits a higher key rate and is hence more stable.
Fig. 5(a) depicts the key generation rate as a function of distance for ground-to-satellite communication, and Fig. 5(b) shows the key generation rate as a function of distance for satellite-to-ground communication, for the BBM92 and E91 protocols. The obtained keyrates show that the BBM92 protocol is more stable against channel loss than the E91 protocol.

## 7 Conclusion

We have compared the performance of the BB84 and B92 protocols, as well as of the entanglement-based BBM92 and E91 protocols, for laser links between a ground station and a satellite in low Earth orbit. The QBER calculation was done using the above-mentioned losses for all four protocols. Expressions for the quantum keyrate were given for ideal single-photon sources for the BBM92 and E91 protocols, with some modifications, and similarly for single-photon sources for the BB84 and B92 protocols. On this basis, an evaluation of the quantum keyrate, including the various losses, for the laser links between a ground station and a satellite in low Earth orbit was performed for both uplink and downlink at \(0^{\circ}\) and \(45^{\circ}\) zenith angles. It was observed that the \(0^{\circ}\) zenith angle yields a higher keyrate than the \(45^{\circ}\) zenith angle in both uplink and downlink scenarios. This indicates that the shorter distance and the associated reduced losses at the \(0^{\circ}\) zenith angle contribute to an increased key rate in the communication between the ground station and the satellite. The presented theoretical analysis also shows that, for a given distance, the BB84 protocol ensures the distribution of a higher secure keyrate than B92, and BBM92 ensures a higher keyrate than E91.

## Acknowledgements

The author would like to thank CSIR for the fellowship support. SB acknowledges support from the Interdisciplinary Cyber Physical Systems (ICPS) programme of the Department of Science and Technology (DST), India, Grant No.: DST/ICPS/QuST/Theme-1/2019/6. SB also acknowledges the valuable contribution of the Defense Research and Development Organization (DRDO).

### Declarations

The authors declare no conflicts of interest related to this research.

### Data Availability Statement

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
2305.08788
Examining transitional galaxies to understand the role of clusters and their dynamical status in galaxy quenching
In this work, we consider four different galaxy populations and two distinct global environments in the local Universe (z $\leq 0.11$) to investigate the evolution of transitional galaxies (such as star-forming spheroids and passive discs) across different environments. Our sample is composed of 3,899 galaxies within the R$_{200}$ radius of 231 clusters and 11,460 field galaxies. We also investigate the impact of the cluster's dynamic state, as well as the galaxy's location in the projected phase space diagram (PPS). We found that although the cluster environment as a whole influences galaxy evolution, the cluster dynamical state does not. Furthermore, star-forming galaxies represent recent cluster arrivals in comparison to passive galaxies (especially in the case of early-types). Among the ETGs, we find that the D$_n(4000)$ and H$_\delta$ parameters indicate a smooth transition between the subpopulations. In particular, for the SF-ETGs, we detect a significant difference between field and cluster galaxies, as a function of stellar mass, for objects with Log $M_*$/M$_{\odot} > 10.5$. Analyzing the color gradient, the results point toward a picture where field galaxies are more likely to follow the monolithic scenario, while the cluster galaxies the hierarchical scenario. In particular, if we split the ETGs into lenticulars and ellipticals, we find that the steeper color gradients are more common for the lenticulars. Finally, our results indicate the need for galaxy pre-processing in smaller groups, before entering clusters.
Douglas Brambila, Paulo A. A. Lopes, André L. B. Ribeiro, Arianna Cortesi
2023-05-15T16:47:13Z
http://arxiv.org/abs/2305.08788v1
# Examining transitional galaxies to understand the role of clusters and their dynamical status in galaxy quenching

###### Abstract

In this work, we consider four different galaxy populations and two distinct global environments in the local Universe (z \(\leq\) 0.11) to investigate the evolution of transitional galaxies (such as star-forming spheroids and passive discs) across different environments. Our sample is composed of 3,899 galaxies within the R\({}_{200}\) radius of 231 clusters and 11,460 field galaxies. We also investigate the impact of the cluster's dynamic state, as well as the galaxy's location in the projected phase space diagram (PPS). We found that although the cluster environment as a whole influences galaxy evolution, the cluster dynamical state does not. Furthermore, star-forming galaxies represent recent cluster arrivals in comparison to passive galaxies (especially in the case of early-types). Among the ETGs, we find that the D\({}_{n}\)(4000) and H\({}_{\delta}\) parameters indicate a smooth transition between the subpopulations. In particular, for the SF-ETGs, we detect a significant difference between field and cluster galaxies, as a function of stellar mass, for objects with Log \(M_{*}\)/M\({}_{\odot}>10.5\). Analyzing the color gradient, the results point toward a picture where field galaxies are more likely to follow the monolithic scenario, while the cluster galaxies the hierarchical scenario. In particular, if we split the ETGs into lenticulars and ellipticals, we find that the steeper color gradients are more common for the lenticulars. Finally, our results indicate the need for galaxy pre-processing in smaller groups, before entering clusters.

keywords: galaxies: clusters: general - galaxies: clusters - galaxies: evolution - galaxies: clusters: environment

## 1 Introduction

The existence of a bi-modality in different galaxy properties has been well established in past years (Strateva et al., 2001; Kauffmann et al., 2003; Baldry et al., 2004; Balogh et al., 2004; Baldry et al., 2006). We can separate galaxies into two main categories: blue cloud (BC) galaxies, mostly blue star-forming spiral galaxies, and red sequence (RS) objects, mainly red early-types with no significant star-formation activity. Despite that, we have found galaxies that defy the simplistic view that elliptical galaxies are red and dead while spiral galaxies are blue and lively (e.g., Wolf et al., 2009; Masters et al., 2010; Cortese, 2012; Crossett et al., 2014; Lopes et al., 2016; Kuchner et al., 2017; Mahajan et al., 2018). We have also established the existence of an intermediary region between the BC and the RS, called the Green Valley (GV), which works as a transition region between the BC and RS (Martin et al., 2007; Salim et al., 2009; Mendez et al., 2011; Schawinski et al., 2014). These two points together paint a clear scenario in which galaxies evolve from the BC to the RS as their lives progress. However, the mechanisms responsible for this transformation are still open to debate. Dressler (1980) showed that there is a relation between galaxy morphology and the environment that the galaxy inhabits (the Morphology-Density Relation). Early-type galaxies (ETGs; the dominant morphology in the RS) show a preference for inhabiting higher-density environments, and are therefore primarily found in galaxy groups or clusters.
On the contrary, late-type galaxies (LTGs; the dominant morphology in the BC) show a preference for inhabiting lower-density environments, being mainly found in the field or in the outskirts of groups and clusters. Alongside that relation, we also have evidence that the fraction of blue galaxies inside clusters increases, while the fraction of red galaxies decreases, towards higher redshift regimes (Butcher & Oemler, 1984). Different mechanisms impact a galaxy as it enters a cluster. Those can be responsible for quenching the galaxy's star formation or, at least, accelerating this process. One of those effects is ram-pressure stripping (RPS), where the gas inside the galaxy is removed due to the pressure caused by the hot intracluster medium (Gunn & Gott, 1972). Others are strangulation and starvation, when the hot intracluster medium removes gas from the galaxy halo and also prevents a new supply of gas from falling into the galaxy (Larson et al., 1980). We also have events like mergers (when two galaxies undergo a fusion process and become one single entity; Toomre & Toomre, 1972), harassment (the cumulative effect of several fly-by encounters of a galaxy with others; Moore et al., 1996, 1998), and tidal forces from the gravitational potential well of the cluster (capable of driving gas to the center of the galaxy and triggering a central starburst and bar instabilities; Byrd & Valtonen, 2001; Lokas et al., 2016). All of those processes are, in one way or another, capable of altering the gas content of a galaxy inside or entering the cluster. On the other hand, one cannot ignore that galaxies also evolve when found in isolation in the Universe. This isolated evolution points toward a myriad of processes happening inside those galaxies, falling under the umbrella of mass quenching. In lower-mass systems (M\({}_{*}<10^{9}M_{\odot}\)), stellar and AGN feedback are capable of heating and, in some cases, even removing galactic gas (e.g., Ponman et al., 1999), while for higher-mass systems the importance of stellar feedback decreases, but the AGN feedback remains very important (Larson, 1974; Dalla Vecchia & Schaye, 2008; Croton et al., 2006; Fabian, 2012). Several studies show that mass quenching and environmental quenching have distinct effects on different galaxy types. According to Peng et al. (2010), satellite galaxies are more likely to be quenched by environmental action, whereas central galaxies are more likely to be quenched by mass-related effects. It is not an easy task to separate the influence of internal and external mechanisms in galaxy evolution. Besides showing that mass and environmental quenching have a distinct impact on different types of galaxies, Peng et al. (2010) also verified that environmental quenching acts independently of the mass, and vice-versa. Clear evidence of the cluster influence on galaxy evolution is presented by Jaffe et al. (2015). The authors analyzed galaxies of the relaxed cluster Abell 963 (\(z=0.203\)) observed with the Blind Ultra Deep HI Environmental Survey (BUDHIES). They showed that galaxies lose their gas content, down to the BUDHIES detection limit, in their first passage through the cluster center. The authors also found a significant fraction of galaxies that arrive at the cluster already without gas content. This last result raises the question of the role of the cluster in the gas removal of those galaxies.
In the present work, we try to shed some light on the role of clusters in galaxy quenching, investigating the properties of different galaxy populations at different locations within clusters. We focus on the comparison of transitional galaxies, such as the star-forming spheroids and red discs, with the more regular populations. That is done at different locations in the projected phase space diagram. We also investigate possible dependencies on the cluster's dynamical state and compare the cluster results to what is found in the field. The paper is organized as follows: in §2, we introduce our data, describing the galaxy morphological and star-formation activity classification, as well as the cluster dynamical state, while in §3, we show our results. In §4, we discuss the main results and present our conclusions. The cosmology assumed in this work is \(\Omega_{\rm m}=0.3\), \(\Omega_{\lambda}=0.7\), and H\({}_{0}=100\) h km s\({}^{-1}\) Mpc\({}^{-1}\), with h set to 0.7.

## 2 Data

In this work, we consider galaxies from two very different environments, as we have a sample of cluster galaxies and one of field objects. The cluster sample is a combination of objects selected at different wavelengths. We have clusters from the supplemental version of the Northern Sky Optical Cluster Survey (NoSOCS, Lopes et al., 2004, 2009a), the Cluster Infall Regions in the SDSS (CIRS, Rines & Diaferio, 2006), the HIghest X-ray FLUx Galaxy Cluster Sample (HIFLUGCS, Reiprich & Bohringer, 2002; Andrade-Santos et al., 2017), the Planck Early Sunyaev-Zel'Dovich sample (ESZ, Planck Collaboration et al., 2011), and the SPIDERS catalog (Kirkpatrick et al., 2021). The NoSOCS clusters comprise the only optically selected catalog of those listed above. HIFLUGCS is an X-ray cluster catalog. The CIRS and SPIDERS samples are composed of X-ray selected clusters with SDSS spectroscopic data; the latter is actually based on an extensive follow-up effort of SDSS-IV. The Planck ESZ catalog consists of clusters selected through the Sunyaev-Zel'Dovich effect. All the clusters we use in the current work are within the SDSS DR7 spectroscopic footprint, so that we can uniformly derive cluster properties for these systems. We have previously worked with the NoSOCS+CIRS clusters in Lopes et al. (2009a,b, 2014, 2016, 2017) and Ribeiro et al. (2013). In Lopes et al. (2018), we compared substructure estimates and BCG properties of those clusters to the ones from the HIFLUGCS and Planck ESZ lists (as provided by Andrade-Santos et al., 2017). In the current paper, we combined all these catalogs with the SPIDERS sample. Our goal is to minimize possible biases due to different selection techniques and wavelengths. One example of such a bias is related to the substructure fraction, discussed in Andrade-Santos et al. (2017) and Lopes et al. (2018), that can be found when comparing X-ray and SZ selected samples (also seen when comparing to optically selected systems). However, in order to have a more uniform data set, we only consider systems at \(z\leq 0.10\), containing at least 10 members within R\({}_{200}\) and with LOG M\({}_{200}/M_{\odot}\geq 13.5\) (membership, R\({}_{200}\), and M\({}_{200}\) estimates are explained below). Doing so, we are able to work with a complete spectroscopic sample from the SDSS DR7 main sample and derive reliable estimates of membership and cluster physical parameters. Figure 1 shows the distribution of Log M\({}_{200}\) for each cluster sample mentioned in the previous paragraph.
The number of objects with (dashed line) or without (solid line) substructure is also indicated. The distributions are for the original data sets, before we check for duplicates and remove them, but we do consider the redshift and mass cuts described above (\(z\leq 0.10\) and LOG M\({}_{200}/M_{\odot}\geq 13.5\)). This figure shows that the combination of the samples is an important step towards a more complete distribution in cluster mass. We are also able to mitigate the impact of cluster catalogs heavily affected by substructure (such as the Planck ESZ list). Note that systems with substructure generally have higher masses in comparison to the more regular clusters, a result consistent with the findings of Ribeiro et al. (2011). All the clusters in our sample have the same redshift limits and had a minimum number of member galaxies and a minimum cluster mass imposed. However, the main factor that could still affect our results is the cluster mass cut, as we also consider groups, or low-mass clusters. We verified that applying a higher mass cut (at LOG M\({}_{200}/M_{\odot}=14\), instead of 13.5) does not affect our main results. For instance, the fractions of the galaxy populations displayed in Table 1 (see below) do not show a large variation. Note also that the fraction of all galaxies within the groups (LOG M\({}_{200}/M_{\odot}<14\)) is small (23%). Hence, we decided to keep the analysis with all systems in our sample, as described above (with LOG M\({}_{200}/M_{\odot}\geq 13.5\)). For each cluster from the above catalogs, we used SDSS-DR7 photometric and spectroscopic data to select members (and exclude interlopers), estimate the velocity dispersion (\(\sigma_{c1}\)), the physical radii (R\({}_{500}\) and R\({}_{200}\)), and the masses (M\({}_{500}\) and M\({}_{200}\)). As in Lopes et al. (2009a), we select members and exclude interlopers after applying the shifting gapper technique (Fadda et al., 1996) to all galaxies with available redshifts around each cluster. We only use the members within 2.5 h\({}^{-1}\) Mpc to derive an initial estimate of the velocity dispersion (\(\sigma_{c1}\)). Differently from what we did in previous works, when we estimated physical radii and cluster masses from a virial analysis (following the approach of Girardi et al., 1998), we now follow the procedure described in Ferragamo et al. (2020). First, we apply the corrections proposed by Ferragamo et al. (2020) to the velocity dispersion estimate (initially derived within 2.5 h\({}^{-1}\) Mpc). Next, we obtain an estimate of M\({}_{200}\) adopting equation 1 (listed below) of Ferragamo et al. (2020) (also see Munari et al., 2013). The corrections considered by Ferragamo et al. (2020) for the mass estimate are also employed: \[\frac{\sigma_{\rm 1D}}{\rm km~{}s^{-1}}=A\left[\frac{h(z)M_{200}}{10^{15}\,\rm M\odot}\right]^{\alpha}, \tag{1}\] where \(A\) is 1177.0 km s\({}^{-1}\) and \(\alpha=0.364\). R\({}_{200}\) is assumed to be the radius at which the averaged density reaches 200 times the critical mass density of the Universe at redshift \(z\). Hence, considering that \(M_{200}=(4\pi/3)200\rho_{c}\left(z\right)R_{200}^{3}\) is the total mass within R\({}_{200}\), we can derive an estimate of R\({}_{200}\) from the above mass estimate. We then obtain a final mass estimate, but now considering only members within R\({}_{200}\) (instead of 2.5 h\({}^{-1}\) Mpc). Having this updated member list, we estimate \(\sigma_{c1}\) and the mass again, using equation 1 of Ferragamo et al. (2020) and Munari et al. (2013).
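A compact sketch of this mass pipeline (Eq. 1 together with the \(M_{200}\)-\(R_{200}\) definition) is given below, assuming the Munari et al. (2013) 1D normalization quoted above; the critical density value used in the example is an approximate \(z\sim 0\) number included only for illustration.

```python
import numpy as np

def sigma_1d(m200, hz):
    """Velocity dispersion [km/s] from Eq. (1)."""
    A, alpha = 1177.0, 0.364  # Munari et al. (2013) 1D normalization
    return A * (hz * m200 / 1e15) ** alpha

def m200_from_sigma(sig, hz):
    """Invert Eq. (1) to estimate M200 [Msun] from sigma_1D [km/s]."""
    A, alpha = 1177.0, 0.364
    return 1e15 / hz * (sig / A) ** (1.0 / alpha)

def r200(m200, rho_crit_z):
    """R200 [Mpc] from M200 = (4 pi / 3) * 200 * rho_c(z) * R200^3."""
    return (3.0 * m200 / (4.0 * np.pi * 200.0 * rho_crit_z)) ** (1.0 / 3.0)

# Illustrative: a sigma ~ 700 km/s cluster, rho_c(0) ~ 1.4e11 Msun/Mpc^3
m = m200_from_sigma(700.0, hz=0.7)
print(f"M200 ~ {m:.2e} Msun, R200 ~ {r200(m, 1.4e11):.2f} Mpc")
```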
The corrections from Ferragamo et al. (2020) are again applied, both to \(\sigma_{c1}\) and to M\({}_{200}\). R\({}_{500}\) and M\({}_{500}\) estimates are derived after assuming an NFW profile and interpolating to the appropriate radius. For more details on the estimates above, we refer the reader to Lopes et al. (2009a, 2014, 2018) and Ferragamo et al. (2020).

Figure 1: The distribution of Log M\({}_{200}\) for the cluster samples considered for the present work. In the top left we show the NoSOCS+CIRS clusters, in the top right the HIFLUGCS objects are displayed, while in the bottom we have the Planck ESZ (left) and SPIDERS (right) samples. The distributions are for the original data sets, before we check for duplicates and remove them, but we do consider the redshift and mass cuts described above (\(z\leq 0.10\) and LOG M\({}_{200}/M_{\odot}\geq 13.5\)). On each panel, we display objects with no substructure in red (solid lines) and those with substructure in blue (dashed lines). The number of systems with or without substructure is also listed in the panels.

Next, we eliminate common clusters from the different catalogs, arriving at a cluster sample with 231 objects. They lie between \(0.018\leq z\leq 0.100\) and have \(13.5\leq\) LOG M\({}_{200}/M_{\odot}\leq 15.13\). In order to keep a cluster, we also impose a minimum number of 10 members within R\({}_{200}\). The upper redshift limit (\(z=0.100\)) corresponds to the completeness limit of the SDSS main spectroscopic sample, limited at \(m_{r}^{Petro}=17.77\). That translates to an absolute magnitude limit of \(M_{r}\sim M^{*}+1=-20.58\). We call galaxies more luminous than this absolute magnitude limit bright; these are the ones studied in the current work. In total, we have 3899 bright galaxies (\(M_{r}\leq M^{*}+1\)) inside R\({}_{200}\) of those 231 galaxy clusters, ranging between \(0.015\leq z\leq 0.107\) and \(10^{9.0}\leq M_{*}/M_{\odot}\leq 10^{12}\). For all those galaxies, we have morphological information from Dominguez Sanchez et al. (2018, hereafter DS18) or Huertas-Company et al. (2011, hereafter HC11); see §2.1. Although most of our work based on the cluster sample considers only galaxies within R\({}_{200}\), we still have member galaxies up to 5\(\times R_{200}\) (approximately the turn-around radius). In total, we have 9931 members within 5\(\times R_{200}\) (which are used in Fig. 11). It is important to note that our sample includes low-mass galaxy systems (M\({}_{200}<1\times 10^{14}\,M_{\odot}\)), which should be referred to as groups. However, for ease of reading, we decided to call all the objects in our sample clusters. Groups correspond to \(\sim 40\%\) of our sample, containing \(\sim 23\%\) of all the galaxies in our _cluster_ sample. The field sample was selected by comparing the photometric and spectroscopic SDSS-DR7 data with a sample of about \(15,000\) groups and clusters provided by Gal et al. (2009), combined with our cluster sample described above. To select field galaxies, we excluded any galaxy within a projected distance of 4 Mpc and \(\Delta z<0.06\) of any object from the combined cluster catalog. To ensure a lower level of contamination from clusters and groups that may be missing from the comparison cluster sample, we also remove galaxies with a local galaxy density LOG(\(\Sigma_{5}\)) \(<-0.5\).
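A schematic version of this field-selection step is sketched below. It is not the authors' code: the array names are illustrative, and \(\Sigma_{5}\) is the local density estimator defined in the next paragraph.

```python
import numpy as np

def select_field(gal_dist_mpc, gal_dz, gal_log_sigma5):
    """Field-galaxy mask following the cuts quoted in the text:
    keep a galaxy only if it is farther than 4 Mpc from every cluster
    (or offset by more than dz = 0.06) and has LOG(Sigma_5) >= -0.5.
    gal_dist_mpc: (N_gal, N_cl) projected distances to each cluster;
    gal_dz:       (N_gal, N_cl) redshift offsets to each cluster;
    gal_log_sigma5: (N_gal,) local density estimates."""
    near_cluster = (gal_dist_mpc < 4.0) & (np.abs(gal_dz) < 0.06)
    isolated = ~near_cluster.any(axis=1)
    return isolated & (gal_log_sigma5 >= -0.5)
```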
The parameter \(\Sigma_{5}\) (described in La Barbera et al., 2010; Lopes et al., 2014, 2017) is given by \(5/\pi d_{n}^{2}\), where \(d_{n}\) was selected to be the projected distance to the fifth-nearest galaxy with a maximum velocity offset of \(1,000\) km s\({}^{-1}\). The total number of field galaxies selected according to the criteria above is \(55,842\) sources. Our sample was reduced to \(\sim 30,000\) galaxies after we imposed redshift (\(0.01\leq z\leq 0.11\)), stellar mass (\(10^{9.0}\leq M_{*}/M_{\odot}\leq 10^{12}\)), and luminosity cuts (\(M_{r}<M^{*}+1\)). Note that we also require the galaxies to have morphological information provided by DS18 (or HC11). Finally, we construct a sample of field galaxies that follows the mass distribution of the cluster sample. To do so, we select a random sample (of 3899 galaxies) from the \(30,000\) field galaxies, forcing it to have the same pattern of stellar mass distribution as the cluster sample. We then apply Kolmogorov-Smirnov (KS) and Anderson-Darling (AD) tests to check if the samples are statistically similar. If so, we keep the selection and repeat the process, removing the already selected galaxies from the \(30,000\) pile. We repeat the process until the KS and AD tests indicate that the sample is statistically distinct. In the end, we retain the largest sample of field galaxies that has a stellar mass distribution similar to that of the cluster sample, resulting in \(11,460\) field objects. The KS \(p\)-value is 0.13 and the AD test confidence is 99.5%.

### Galaxy morphology

Galaxy morphology was mainly extracted from DS18. They adopt a deep-learning technique to classify galaxies from the sample described in Meert et al. (2015, 2016). This data set was constructed from the SDSS-DR7 spectroscopic main sample, taking into account redshift (\(0.005<z<1.0\)) and magnitude (\(14<m_{r}^{Petro}<17.77\)) cuts. DS18 provides morphological information for about 670,000 galaxies. Their Convolutional Neural Network was trained to obtain the T-type values and the probability of the galaxies containing certain features, like a disc, inclination (face/edge-on), a bar signature, bulge prominence, roundness, and mergers. They also provide the probability of a galaxy having a lenticular morphology. To create a robust classification, we combine the T-type information and the probabilities provided by DS18 to obtain a morphological classification for elliptical and lenticular galaxies. We proceed as below (a code sketch is given after the list):

* **Elliptical (E):** T-type \(<-1.5\)
* **Lenticular (S0):** \(-1.5<\) T-type \(<0.0\) and P\({}_{\rm S0}>0.6\)

We combined E and S0 galaxies into one class, which we named early-type galaxies (ETGs). Objects that are not ETGs are called late-type galaxies (LTGs). When morphological information was not obtainable through DS18, we supplemented the data with the classification provided by HC11. HC11 provided morphological information for the SDSS-DR7 spectroscopic sample (up to \(z=0.25\)) via a Machine Learning technique (in their case, a Support Vector Machine). HC11 gives probabilistic information to classify galaxies as ellipticals, early Spirals (ab), and late Spirals (cd).
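A minimal sketch of the DS18 decision rule above, assuming per-galaxy T-type and \(P_{\rm S0}\) values as inputs:

```python
import numpy as np

def classify_morphology(t_type, p_s0):
    """DS18-based classes used in the text:
    'E' for T-type < -1.5;
    'S0' for -1.5 < T-type < 0 with P_S0 > 0.6;
    everything else is treated as LTG. E and S0 together form the ETGs."""
    t_type = np.asarray(t_type, dtype=float)
    p_s0 = np.asarray(p_s0, dtype=float)
    labels = np.full(t_type.shape, "LTG", dtype=object)
    labels[t_type < -1.5] = "E"
    labels[(t_type > -1.5) & (t_type < 0.0) & (p_s0 > 0.6)] = "S0"
    return labels

# Example with three toy galaxies:
print(classify_morphology([-2.0, -0.5, 1.2], [0.1, 0.8, 0.2]))
# -> ['E' 'S0' 'LTG']
```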
From the 11,460 field galaxies, only 13 (\(\sim 0.1\%\)) were absent from the DS18 sample and required HC11 information, while from the 3899 cluster galaxies, only 38 (\(\sim 1\%\)) do not have DS18 information and were classified using HC11 data. For these 51 sources, we utilize HC11 as follows:

* **Elliptical (E):** Prob(E) \(>0.8\)
* **Lenticular (S0):** Prob(S0) \(>0.8\)

We added the morphological classification from HC11 as it is based on a similar data set (also built from the SDSS-DR7 spectroscopic main sample) and is a classification provided by the same research group as DS18. Nonetheless, it is important to mention that we consider less than 1% of galaxies from HC11, and removing those galaxies does not affect our main conclusions.

Table 1: Morphological division for the galaxy cluster sample (top) and for the field sample (bottom). We also show the division between passive, green-valley, and star-forming galaxies (and also a combination of the GV and SF populations). This separation was made using eqs. 2 and 3 (see §2.2 for more details).

Cluster sample:

| | Passive | GV | SF | GV+SF | Total |
| --- | --- | --- | --- | --- | --- |
| ETG | 2432 (94.81%) | 77 (3.00%) | 56 (2.18%) | 133 (5.18%) | 2565 (65.79%) |
| Ellipticals (E) | 2003 | 39 | 24 | 63 | 2066 |
| Lenticulars (S0) | 429 | 38 | 32 | 70 | 499 |
| LTG | 827 (61.99%) | 116 (8.70%) | 391 (29.31%) | 507 (38.01%) | 1334 (34.21%) |
| Total | 3259 (83.58%) | 193 (4.95%) | 447 (11.46%) | 640 (16.41%) | 3899 (100%) |

Field sample:

| | Passive | GV | SF | GV+SF | Total |
| --- | --- | --- | --- | --- | --- |
| ETG | 4385 (80.05%) | 378 (6.90%) | 715 (13.05%) | 1093 (19.95%) | 5478 (47.80%) |
| Ellipticals (E) | 3461 | 188 | 239 | 427 | 3888 |
| Lenticulars (S0) | 924 | 190 | 476 | 666 | 1590 |
| LTG | 1596 (26.68%) | 827 (13.82%) | 3559 (59.50%) | 4386 (73.32%) | 5982 (52.20%) |
| Total | 5981 (52.19%) | 1205 (10.51%) | 4274 (37.29%) | 5479 (47.81%) | 11460 (100%) |

### Separation of passive and star-forming galaxies

Galaxy properties such as the star formation rate (SFR) and stellar masses were calculated by the Max Planck Institute for Astrophysics and Johns Hopkins University group (MPA-JHU), following the methods of Brinchmann et al. (2004), Kauffmann et al. (2003a), and Tremonti et al. (2004). The MPA-JHU sample provides this information for over \(1,800,000\) galaxies up to \(z\sim 0.3\) for the SDSS-DR8 release. Figure 2 shows, in the left panels, the M\({}_{*}\) _vs._ SFR plane for our cluster sample (top) and field sample (bottom). The dashed lines in the panels are the visual representation of equations (1) and (2) of Trussler et al. (2020) (equations 2 and 3 here). Equation 2 gives the boundary between star-forming (SF) and green-valley (GV) galaxies, and Equation 3 gives the boundary between GV and passive (Pas) galaxies: \[\log(SFR)=0.70\,\log(M_{*})-7.52 \tag{2}\] \[\log(SFR)=0.70\,\log(M_{*})-8.02 \tag{3}\] Using the above equations, we separate our data sets (cluster and field galaxies) into different subpopulations regarding morphology and star-formation activity. Table 1 summarizes the result of this separation.
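This classification can be expressed compactly; the sketch below applies Eqs. (2) and (3) to Log \(M_{*}\) and Log SFR values and is a minimal illustration, not the authors' pipeline.

```python
import numpy as np

def sf_class(log_mstar, log_sfr):
    """Classify galaxies as 'SF', 'GV' or 'Pas' using the
    Trussler et al. (2020) boundaries (Eqs. 2 and 3 of this paper)."""
    log_mstar = np.asarray(log_mstar, dtype=float)
    log_sfr = np.asarray(log_sfr, dtype=float)
    sf_boundary = 0.70 * log_mstar - 7.52   # above this line: star-forming
    pas_boundary = 0.70 * log_mstar - 8.02  # below this line: passive
    labels = np.full(log_mstar.shape, "GV", dtype=object)
    labels[log_sfr > sf_boundary] = "SF"
    labels[log_sfr < pas_boundary] = "Pas"
    return labels

# Example: a Log M* = 10.5 galaxy with Log SFR = -0.4 lies in the GV,
# since the boundaries there are -0.17 (SF) and -0.67 (Pas).
print(sf_class([10.5], [-0.4]))
```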
To illustrate the impact of environment and morphology on the star formation activity, we also display in Figure 2 the stellar mass (central panels) and SFR distributions (right panels) of ETGs (red lines) and LTGs (blue lines). The distributions for all galaxies are shown by the black-filled histograms. As before, the cluster results are in the top panels, while the field results are shown at the bottom. The LTG distributions show fewer massive galaxies (in comparison to the ETGs), both for clusters and the field; however, this difference does not look very large. Despite that, the SFR distributions of LTGs are remarkably different from those of the ETGs, and those differences are enhanced with the environment (if we compare the cluster and field results). Note that the LTG distribution of SFR is very different between the cluster and field datasets. We can also detect an increase of SF ETGs when comparing the cluster and field distributions.

Figure 2: Stellar mass _vs._ SFR diagram. In red, we have the plot for the cluster sample, and in blue we have the field sample. In both, the dashed lines are equations (1) and (2) of Trussler et al. (2020) (equations 2 and 3 here), delineating the passive and the star-forming regions. This figure already gives a good indication of the predominance of passive galaxies in the cluster sample, and of the bi-modality observed for the field sample in Table 1.

### Cluster dynamical state

To investigate the possible impact of the cluster's dynamical state on the galaxy properties, we also separated the clusters according to their degree of substructure. Objects with no significant substructure are named relaxed, while the others (with strong signs of substructure) are named non-relaxed. To perform this classification, we applied the Dressler & Shectman test (Dressler & Shectman, 1988). For more details, we refer the reader to Pinkney et al. (1996) and Lopes et al. (2006, 2009, 2018). In our cluster sample, within R\({}_{200}\), we have a total of 2878 galaxies in 197 relaxed clusters and 1021 galaxies in 34 non-relaxed clusters. A summary of the different galaxy populations found within the relaxed and disturbed systems is given in Table 2.

## 3 Results

The environmental dependence of galaxy populations is clear from Table 1. As expected, the ETGs dominate within clusters (\(\sim 66\%\) of all galaxies). Different results are found for the field galaxies, where LTGs comprise \(\sim 52\%\) of all objects. As for the star-formation activity, we have the following. Inside clusters, the ETGs are overwhelmingly dominated by passive galaxies (\(\sim 95\%\)); in the field, this fraction is reduced to \(\sim 80\%\). For the LTGs, we have a similar trend, but with different values: within clusters, \(\sim 62\%\) of all LTGs are passive, while in the field this fraction decreases to \(\sim 27\%\). This behavior is a strong indication of the environmental influence, regardless of the morphological classification. To gain further insight into these results, we investigated the location of the galaxies within the projected phase-space (PPS) diagram. We also compare the cumulative distributions of different galaxy properties.
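Since the relaxed/non-relaxed split above rests on the Dressler & Shectman statistic, a schematic implementation is sketched below. This is the generic form of the test (with the significance usually calibrated against velocity-shuffled realizations), not the exact pipeline of Pinkney et al. (1996) or Lopes et al. (2018).

```python
import numpy as np

def ds_delta(ra, dec, v, n_nn=None):
    """Dressler & Shectman (1988) Delta statistic: for each galaxy,
    compare the mean velocity and dispersion of its n_nn nearest
    neighbours (commonly ~sqrt(N)) with the global cluster values."""
    ra, dec, v = map(np.asarray, (ra, dec, v))
    n = len(v)
    if n_nn is None:
        n_nn = int(np.sqrt(n))
    v_mean, sigma = np.mean(v), np.std(v)
    deltas = np.empty(n)
    for i in range(n):
        d2 = (ra - ra[i])**2 + (dec - dec[i])**2   # small-angle approximation
        idx = np.argsort(d2)[:n_nn + 1]            # galaxy i plus neighbours
        v_loc, s_loc = np.mean(v[idx]), np.std(v[idx])
        deltas[i] = np.sqrt((n_nn + 1) / sigma**2
                            * ((v_loc - v_mean)**2 + (s_loc - sigma)**2))
    return deltas.sum()

def ds_pvalue(ra, dec, v, n_shuffle=1000, seed=0):
    """Significance of Delta against velocity-shuffled realizations."""
    rng = np.random.default_rng(seed)
    delta_obs = ds_delta(ra, dec, v)
    shuffled = [ds_delta(ra, dec, rng.permutation(np.asarray(v)))
                for _ in range(n_shuffle)]
    return np.mean(np.array(shuffled) >= delta_obs)
```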
## 3 Results

The environmental dependence of galaxy populations is clear from Table 1. As expected, the ETGs dominate within clusters (\(\sim 66\%\) of all galaxies). Different results are found for the field galaxies: LTGs comprise \(\sim 52\%\) of all objects in this environment. As for the star-formation activity, we have the following. Inside clusters, the ETGs are overwhelmingly dominated by passive galaxies (\(\sim 95\%\)). In the field, we found this fraction to be reduced to \(\sim 80\%\). For the LTGs, we have a similar trend, but with different values: within clusters, we found that \(\sim 62\%\) of all LTGs are passive, while in the field this fraction decreases to \(\sim 27\%\). This behavior is a strong indication of the environmental influence, regardless of the morphological classification. In order to obtain further information about these results, we investigated the location of the galaxies within the projected phase-space (PPS) diagram. We also compare the cumulative distributions of different galaxy properties.

\begin{table}
\begin{tabular}{l c c c c c} & **Passive** & **GV** & **SF** & **GV+SF** & **Total** \\ \hline \multicolumn{6}{c}{Relaxed cluster sample} \\ **ETG**: & 1864 & 62 & 45 & 107 & 1971 \\ & (94.57\%) & (3.15\%) & (2.28\%) & (5.43\%) & (68.49\%) \\ Ellipticals & 1542 & 29 & 21 & 50 & \\ Lenticulars & 322 & 33 & 24 & 57 & \\ **LTG**: & 544 & 89 & 274 & 363 & 907 \\ & (59.98\%) & (9.81\%) & (30.21\%) & (40.02\%) & (31.51\%) \\ \hline \multicolumn{6}{c}{Non-relaxed cluster sample} \\ **ETG**: & 568 & 15 & 11 & 26 & 594 \\ & (95.62\%) & (2.53\%) & (1.85\%) & (4.38\%) & (58.18\%) \\ Ellipticals & 461 & 10 & 3 & 13 & \\ Lenticulars & 107 & 5 & 8 & 13 & \\ **LTG**: & 283 & 27 & 117 & 144 & 427 \\ & (66.28\%) & (6.32\%) & (27.40\%) & (33.72\%) & (41.82\%) \\ \hline **Total** & **3259** & **193** & **447** & **640** & 3899 \\ & (83.58\%) & (4.95\%) & (11.46\%) & (16.41\%) & \\ \hline \end{tabular}
\end{table} Table 2: Division of the 3899 bright galaxies (\(M_{r}\leq M^{*}+1\)) in our cluster sample according to the cluster dynamical state, galaxy morphology, and star-formation activity.

Figure 3: Projected phase-space diagrams for ETGs in relaxed clusters (left side panels) and non-relaxed clusters (right side panels). The regions of Rhee et al. (2017) are delineated by the dashed black lines and, as discussed in that work, indicate different times since infall. From top to bottom, we show passive (Pas), _green valley_ (GV), star-forming (SF), and a combination of GV and SF galaxies, respectively.

### Location in the Phase-Space Diagram and the impact of the dynamical state

As discussed by many authors (e.g., Mahajan et al., 2011; Oman et al., 2013; Rhee et al., 2017; Pasquali et al., 2019), the projected phase-space diagram (PPS) can be used as a good indicator of the cluster assembly process. Hence, we can use it to estimate how long a galaxy has been inside a cluster. Based on a cosmological hydrodynamic simulation, Rhee et al. (2017) verified that, although the positions of galaxies in the PPS are not as precise as would be the case for the real phase-space diagram, it is still possible and reliable to use the PPS as an indicator of the infall time. The authors created a PPS from mock data and divided the diagram into five regions (A to E), each dominated by one of four major groups: first infallers (A), recent infallers (B, C), intermediate infallers (D), and ancient infallers (E). Fig. 6 of Rhee et al. (2017) gives good visual support to the understanding of these regions. Each one of these categories corresponds to a time interval for the time since infall:
* First Infallers (A): not fallen in yet
* Recent Infallers (B and C): \(0.00<t_{inf}<3.63\) Gyr
* Intermediate Infallers (D): \(3.63<t_{inf}<6.45\) Gyr
* Ancient Infallers (E): \(6.45<t_{inf}<13.7\) Gyr

In Figures 3 and 4, we show the PPS of our data for ETGs and LTGs, respectively. The data is divided into relaxed (left side panels) and non-relaxed clusters (right side panels), and also subdivided into the subpopulations present for each morphology: Pas (first row), GV (second row), and SF (third row). We further added a combination of GV and SF galaxies (fourth row). The dashed lines in each diagram delimit the regions introduced by Rhee et al. (2017). Figure 3 shows the differences in the distributions of ETGs in the PPS as a function of their star-formation activity. Pas-ETGs have a higher concentration in the E region (dominated by galaxies in the ancient infall time interval).
On the contrary, the GV-ETGs and SF-ETGs tend to avoid this region. These galaxies are predominantly found in the regions labeled C and D (dominated by galaxies in the recent/intermediate infall time interval). The tendency is also present when we compare the Pas-ETGs with the combined sample of GV+SF-ETGs. The distinction present in Figure 3 indicates that Pas and GV and/or SF-ETGs inhabit different regions of the PPS, thus having distinct times since infall: shorter periods for the GV/SF-ETGs in comparison to their Pas-ETG counterparts. As those galaxies have the same morphology but different infall times and different levels of star-formation activity, we can formulate some hypotheses: (i) the GV/SF-ETGs enter the clusters as such, and we are witnessing their residual star-formation; (ii) the GV/SF-ETGs are going through a new star-formation event, bringing them back from the passive phase - a rejuvenation process that could be triggered by gas infusion (Yi et al., 2005; Marino et al., 2009; Kaviraj et al., 2009, 2011).

Another aspect worth noticing here is the apparent absence of similar behavior for the LTGs in Figure 4. There is a large spread in the distribution of LTGs in the PPS for the three subpopulations (Pas, GV, and SF). The regions they occupy are similar (mainly C and D), pointing to similar infall times. It is also interesting to note that these regions are similar to those occupied by the GV and SF-ETGs. That could indicate that the GV and SF ETGs and LTGs are subject to similar environmental effects, despite their morphology. On the other hand, the similar location of the passive and GV/SF-LTGs may indicate that the former did not have time yet to go through a morphological transformation but may have already quenched their star formation.

Figure 4: Projected phase-space diagrams for LTGs in relaxed clusters (left side panels) and non-relaxed clusters (right side panels). The regions of Rhee et al. (2017) are delineated by the dashed black lines and, as discussed in that work, indicate different times since infall. From top to bottom, we show passive (Pas), _green valley_ (GV), star-forming (SF), and a combination of GV and SF galaxies, respectively.

Note that although the GV and SF-LTG subpopulations occupy regions in the PPS similar to those of the Pas-LTGs, they do show small distinctions. For instance, we find different fractions of Pas/GV/SF-LTGs in region E. In the case of relaxed clusters, the Pas-LTGs have \(35.85\%\ \pm\ 2.06\%\) of their galaxies inside region E, while the SF-LTGs have \(24.82\%\ \pm\ 2.61\%\). For the non-relaxed clusters, the scenario is similar: Pas-LTGs have \(33.92\%\ \pm\ 2.81\%\), and the SF-LTGs have \(23.08\%\ \pm\ 3.90\%\).
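The uncertainties quoted above are consistent with the binomial standard error for proportions, \(\sqrt{p(1-p)/N}\), the same error model used for the bars of Figure 11. A minimal check in Python; the region-E count of 195 is the value implied by the quoted fraction and the Table 2 sample size, not a number reported explicitly in the text.

```python
import numpy as np

def fraction_with_error(k, n):
    """Fraction k/n with the standard error for proportions."""
    p = k / n
    return p, np.sqrt(p * (1.0 - p) / n)

# 544 Pas-LTGs in relaxed clusters (Table 2); ~195 of them lie in region E
p, err = fraction_with_error(195, 544)
print(f"{100 * p:.2f}% +/- {100 * err:.2f}%")  # -> 35.85% +/- 2.06%
```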
Besides the location in the PPS shown in Figures 3 and 4, in Figure 5 we compare the local density parameter (\(\Sigma_{5}\)). We do so to investigate whether the cluster's action on the GV and SF-ETGs is a local influence rather than a global one. For ETGs in clusters, the passive galaxies inhabit the same environment as their GV counterparts, while being statistically distinct from the SF counterparts; the GV and the SF, in turn, seem to inhabit the same environment. Pas-LTGs occupy regions with higher local densities than SF-LTGs and GV+SF-LTGs. For the Pas-LTGs and GV-LTGs, the distinctions are not so strong, with the KS and AD tests very close to the confidence level. That could indicate that these passive objects may have been inside the clusters for slightly longer periods compared to the GV and SF galaxies, especially for ETGs. Another possibility is that these passive galaxies have already been pre-processed in less massive groups before infalling into the clusters (Jaffe et al., 2012, 2015; Haines et al., 2015; Mahajan, 2013). Despite that, for the cluster sample, the results point towards a transition of environments, where the GV galaxies are in a local environment very similar to the Pas ones (but not quite identical, since the KS and AD tests give p-values close to the threshold for accepting the null hypothesis), and the SF galaxies are located in a less dense environment than both of them. Figures 3 and 4 already hint in this direction by showing that Pas-ETGs preferentially inhabit the central regions of the cluster, where the local density is higher than in the outer parts, more than their SF counterparts do.

Having established that the infall time of the passive galaxies (even more so for Pas-ETGs, but also true for Pas-LTGs) is different from that of the other populations, and that there may be small differences among the latter, we then proceed to investigate the distributions of different galaxy and environmental properties. To do so, we compare different properties by applying the Kolmogorov-Smirnov (KS) test to our sample (Table 3). The p-values obtained by comparing the different subpopulations in the relaxed and non-relaxed samples (displayed in Table 3) indicate that the dynamical state of the cluster bears no significant influence on the properties studied in this work (with the few exceptions that can be observed in the table). This result is also supported by Table 2: although Table 2 shows a small discrepancy in the GV and SF fractions of galaxies in relaxed and non-relaxed clusters, it is not a major difference, even more so because the number of galaxies in the non-relaxed clusters is much smaller than in the relaxed sample. The indication provided by the p-values obtained with the KS test points to a scenario where, if the cluster environment influences galaxy evolution, the conditions for this influence exist independently of the cluster dynamical state.

\begin{table}
\begin{tabular}{l|c c c c c c} \hline & \multicolumn{6}{c}{ETGs} \\ & M\({}_{*}\) & \(\Sigma_{5}\) & D\({}_{n}(4000)\) & H\(\delta\) & sSFR & \(\nabla(g-i)\) \\ \cline{2-7} Pas\({}^{cluster}\) vs. Pas\({}^{field}\) & \(2.978e-6\) & \(1.163e-10\) & \(6.661e-15\) & \(1.110e-16\) & \(1.110e-16\) & \(4.219e-15\) \\ GV\({}^{cluster}\) vs. GV\({}^{field}\) & \(0.078\) & \(0.000\) & \(0.4692\) & \(1.331e-4\) & \(0.9039\) & \(8.523e-8\) \\ SF\({}^{cluster}\) vs. SF\({}^{field}\) & \(0.2319\) & \(1.110e-16\) & \(0.0554\) & \(0.0159\) & \(0.073\) & \(0.0132\) \\ GV+SF\({}^{cluster}\) vs. GV+SF\({}^{field}\) & \(0.087\) & \(3.331e-16\) & \(4.806e-6\) & \(1.489e-20\) & \(1.521e-7\) & \(4.922e-8\) \\ \hline & \multicolumn{6}{c}{LTGs} \\ & M\({}_{*}\) & \(\Sigma_{5}\) & D\({}_{n}(4000)\) & H\(\delta\) & sSFR & \(\nabla(g-i)\) \\ \cline{2-7} Pas\({}^{cluster}\) vs. Pas\({}^{field}\) & \(1.228e-6\) & \(2.109e-15\) & \(1.631e-6\) & \(2.109e-15\) & \(2.109e-15\) & \(5.815e-38\) \\ GV\({}^{cluster}\) vs. GV\({}^{field}\) & \(7.741e-5\) & \(9.992e-16\) & \(0.012\) & \(0.6756\) & \(0.014\) & \(2.781e-7\) \\ SF\({}^{cluster}\) vs. SF\({}^{field}\) & \(5.551e-16\) & \(5.551e-16\) & \(5.145e-6\) & \(0.016\) & \(8.978e-5\) & \(7.923e-10\) \\ GV+SF\({}^{cluster}\) vs. GV+SF\({}^{field}\) & \(2.442e-15\) & \(3.331e-16\) & \(7.488e-4\) & \(1.196e-6\) & \(0.003\) & \(4.108e-15\) \\ \hline \end{tabular}
\end{table} Table 3: P-values for the KS tests comparing our different sub-samples. The first batch of tests compares galaxies inside relaxed clusters with galaxies inside non-relaxed clusters (comparing the stellar mass, the local galaxy density (\(\Sigma_{5}\)), the \(4000\) Å break, H\(\delta\), the specific star formation rate, and the color gradient). The second batch of tests compares galaxies in the combined cluster sample (relaxed+non-relaxed clusters) with galaxies in the field. For p-values \(\leq 0.05\) the null hypothesis is refuted **with \(2\sigma\) significance** (distinguishable samples – red background), and for p-values \(>0.05\) the null hypothesis is accepted (indistinguishable samples – green background).
In order to give more confidence to this result, we also made the same comparison shown in Table 3 using the Anderson-Darling (AD) test. The Anderson-Darling test is a variation of the KS test that gives more weight to the tails of the distributions (the KS test is more centrally focused); a sketch of both tests is given below. The results were almost identical, at a 5% significance level, to the ones obtained from the KS test. For the ETGs, the only divergence was for D\({}_{n}\)(4000) when comparing Pas-ETGs in the relaxed and non-relaxed clusters, where the KS test gives a p-value consistent with the samples being drawn from the same distribution, but the AD test does not. For the LTGs, the discrepancy is present in three cases. First, in the comparison of stellar mass between GV-LTGs in the relaxed and non-relaxed samples, the KS test rejects the null hypothesis, but the AD test accepts it. The second discrepancy for LTGs is for the local density in the comparison between GV+SF-LTGs in relaxed and non-relaxed clusters, where the KS test accepts the null hypothesis, but the AD test does not. The third discrepancy for LTGs is for the color gradient in the comparison between Pas-LTGs in relaxed and non-relaxed samples, where the KS test rejects the null hypothesis and the AD test accepts it.

Figure 6 shows the distribution of stellar mass for the cluster sample in the top panels (considering the different subpopulations of ETGs). Since the KS test indicates that the samples are statistically similar, we combine both relaxed and non-relaxed samples to compare the mass distribution of each one of the subpopulations. For the cluster sample, we can see that the GV and SF galaxies (also the GV+SF galaxies) are systematically less massive than their passive counterparts.

Two caveats should be considered when looking at the comparisons in Figures 3 to 6. The first is the large time intervals for each of the Rhee et al. (2017) regions. Naturally, a finer division of those regions would give a better grasp of the galaxies' infall intervals and their preferred locations in the PPS as a function of morphology and star-formation activity. The second is that, when we compare galaxies in clusters with different dynamical states, we have to be careful, since in a few cases we have just a few objects (especially for the GV and SF-ETGs in non-relaxed clusters), leading to low statistical confidence. In this sense, it is also possible that we are only ruling out large distinctions, not the subtle ones. Taking these caveats into account, it seems that the positioning of different galaxy populations in the PPS could point to small differences between the relaxed and non-relaxed clusters, but a larger sample is needed in order to reach definitive conclusions.
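Both tests are available in SciPy, so comparisons in the style of Table 3 are straightforward to reproduce. A minimal sketch in Python; the helper name and the 5% decision rule are ours. Note that `scipy.stats.anderson_ksamp` reports critical values at the 25%, 10%, 5%, 2.5%, 1%, 0.5%, and 0.1% levels, so index 2 corresponds to the 5% level used in the text (the same "statistic vs. critical value" convention quoted in the caption of Figure 10).

```python
import numpy as np
from scipy import stats

def compare_distributions(sample_a, sample_b, alpha=0.05):
    """Two-sample KS test plus the k-sample Anderson-Darling test,
    e.g. cluster vs. field D_n(4000) for a given subpopulation."""
    a = np.asarray(sample_a, dtype=float)
    b = np.asarray(sample_b, dtype=float)

    ks = stats.ks_2samp(a, b)
    ad = stats.anderson_ksamp([a, b])

    return {
        "ks_pvalue": ks.pvalue,
        "ks_rejects_null": bool(ks.pvalue <= alpha),
        # AD rejects when the statistic exceeds the 5%-level critical value
        "ad_rejects_null": bool(ad.statistic > ad.critical_values[2]),
    }
```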
To further investigate the influence of the cluster environment on galaxy evolution, in the next section we compare the cluster and field samples.

### The impact of the global environment - cluster _vs._ field

In contrast to the comparison between the relaxed and non-relaxed environments, the comparison between the cluster and field samples shows significant differences (see also Table 3). When we compare the galaxies in the field with those in the combined cluster sample (a combination of the relaxed and non-relaxed samples), the great majority of the p-values for the KS test have values below 0.05 (displayed at the bottom of Table 3). There are some cases where the p-value is \(>0.05\), and it is important to highlight them: the stellar mass comparisons for the GV, SF, and GV+SF-ETGs accept the null hypothesis, and the same occurs for the D\({}_{n}\)(4000) and sSFR comparisons for the GV and SF-ETGs. Overall, the field and cluster samples are distinguishable from each other. We applied the Anderson-Darling test to these samples to confirm the results obtained by the KS test. At a 5% significance level, the great majority of the KS tests are confirmed by the AD test, with four discrepancies for the ETGs and one for the LTGs. The discrepancies for the ETGs occur for M\({}_{*}\), both for the GV-ETGs and the GV+SF-ETGs, as well as for D\({}_{n}\)(4000) and sSFR in the case of the SF-ETGs. For the LTGs, the discrepancy occurs for the sSFR of GV galaxies: while the KS test rejects the null hypothesis, the AD test accepts it.

Figure 5: Cumulative Distribution Functions of the local galaxy density (\(\Sigma_{5}\)) comparing the cluster subpopulations of ETGs and LTGs. In the first row of panels, the cluster ETGs are the ones being compared: red is Pas-ETGs, green is GV-ETGs, orange is SF-ETGs, and salmon is GV+SF-ETGs. In the second row of panels, the cluster LTGs are the ones being compared: in blue are the Pas-LTGs, in green the GV-LTGs, in purple the SF-LTGs, and, finally, in salmon the combined GV+SF-LTGs. Since the p-values for the local density parameter in Table 3 show that the distribution of log(\(\Sigma_{5}\)) is drawn from the same distribution for all subpopulations in the relaxed and non-relaxed samples (with an exception for the SF-LTGs), the present plot was made by combining each subpopulation from the relaxed sample with its counterpart in the non-relaxed sample. The numbers in the top part of the CDF plots are the p-values for those properties (see Table 3).

Besides the summary displayed in Table 3, we also show the comparison of some properties between the field and cluster populations in Figures 7 and 8. Of the properties shown in Table 3, two give us a clue about the star-formation history of the galaxies: the D\({}_{n}\)(4000) and H\(\delta\) parameters. In this case, we are using the MPA-JHU DR7 (Kauffmann et al., 2003a) calculation of the H\(\delta\) Lick index, and the D\({}_{n}\)(4000) (also from the MPA-JHU catalog), considering the Balogh et al. (1999) definition. In both cases, the calculation was done taking into account corrections for emission lines. D\({}_{n}\)(4000) is a measurement of the 4000 Å break, while H\(\delta\) is a measurement of the Balmer absorption. The first is related to how long ago the galaxy stopped its star formation (a big break indicates a longer time).
This is the case since the break is created by photon absorption by metals in the stellar atmosphere, and in hot massive stars the metals are ionized, so the absorption does not happen. The second, H\(\delta\), measures bursts of star formation that ended within an interval of \(0.1-1\) Gyr. After a star-formation burst, the galaxy's light is dominated by O and B stars, which have weak Balmer absorption. Once their main-sequence lifetime is completed, the galaxy light becomes dominated by A-F stars, which have prominent Balmer absorption. It is important to keep in mind that both indexes depend on metallicity, but that is relevant only for old ages (more than \(10^{9}\) yr after the burst; Kauffmann et al., 2003). The authors also verified that, when those indexes are used together, they provide a powerful probe of the recent star-formation history of a galaxy. Hence, we can use both indexes to discriminate between young and old stellar populations. Galaxies with D\({}_{n}\)(4000) \(<\) 1.55 and H\(\delta\) \(>\) 2.0 are characterized by young stellar populations, while the opposite is true for old stellar populations (see Kauffmann et al., 2003); the sketch below encodes this criterion.

In Figure 7 we display the distributions of those two parameters in the bottom panels, while at the top we show their variation with stellar mass. The left panels contain the results regarding D\({}_{n}\)(4000), while on the right we have the results for H\(\delta\). The analysis is displayed for ETGs, classified as passive, GV, or SF, located in the field and in clusters. Independent of the environment, we can see a clear variation from the SF to the Pas population, with the GV in the middle. That suggests the Pas-ETGs stopped their star formation longer ago, showing no signs of recent activity. The opposite is true for the SF-ETGs, according to their D\({}_{n}\)(4000) and H\(\delta\) values. In between, we have the GV population, with residual star formation.

Regarding the field _vs._ cluster comparison, we can see that for the transitional galaxies (GV and SF-ETGs) there is a small difference in the D\({}_{n}\)(4000) values, but a significant offset in the H\(\delta\) _vs._ M\({}_{*}\) plane. That reinforces the scenario of strong (residual) recent activity for the SF (GV) ETGs, which is reduced according to their global environment. However, it is important to notice that the field H\(\delta\) distributions (bottom right panel, with no mass distinction) always show higher values than the cluster ones. These results are confirmed by the KS test (the p-values are listed on the right side of the bottom panel, and in Table 3). Regarding the Pas-ETGs, although there is a significant difference between their distributions in the two environments (field and cluster), they are consistent with quenched galaxies that stopped their star formation long ago. And, as shown by Kauffmann et al. (2003b), for older stellar populations the metallicity starts to be an important factor. So, for passive galaxies, it is difficult to know if the distinction between the cluster and field samples is driven by age or metallicity (or both).

Figure 6: Cumulative distribution function for stellar masses. The top panels compare the cluster ETGs (continuous lines), while the bottom panels compare the field ETGs (dashed lines). In both cases, red tones correspond to Pas-ETGs, green tones are GV-ETGs, orange tones are SF-ETGs, and salmon tones the combined GV+SF-ETGs. The numbers in the top part of the CDF plots are the p-values for those properties (see Table 3).
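A compact version of that criterion, useful when flagging subpopulations in catalog work (a sketch; the function name and the "intermediate" label for galaxies that satisfy only one of the two cuts are our own choices):

```python
import numpy as np

def stellar_population_class(dn4000, h_delta):
    """Young/old classification from the (emission-corrected) D_n(4000)
    and H-delta indices, using the thresholds quoted above."""
    dn4000 = np.asarray(dn4000, dtype=float)
    h_delta = np.asarray(h_delta, dtype=float)
    young = (dn4000 < 1.55) & (h_delta > 2.0)
    old = (dn4000 >= 1.55) & (h_delta <= 2.0)
    return np.where(young, "young", np.where(old, "old", "intermediate"))
```

Keep in mind the strong mass dependence discussed next: the same cuts select different fractions of galaxies above and below M\({}_{*}\sim 3\times 10^{10}\) M\({}_{\odot}\).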
Another important aspect we detect in the top panels of Figure 7 is the variation with stellar mass. As pointed out by Kauffmann et al. (2003b), there is a strong dependence of D\({}_{n}\)(4000) and H\(\delta\) on M\({}_{*}\), with a clear distinction at M\({}_{*}\sim 3\times 10^{10}\) M\({}_{\odot}\). We see that this is true in our case, for all the populations shown. However, the effect becomes stronger according to the population, being steeper for the SF-ETGs, especially for H\(\delta\). Regarding the environment, we can see that for low-mass (Log M\({}_{*}\)/M\({}_{\odot}<10.5\)) SF-ETGs there is no environmental distinction; that appears only for the higher-mass galaxies. For the most massive SF-ETGs, we actually find that cluster objects show no sign of recent activity (according to H\(\delta\)), but that is not the case for their field counterparts.

Figure 7: D\({}_{n}(4000)\) _vs._ M\({}_{*}\) (top left panel) and \(H\delta\) _vs._ M\({}_{*}\) (top right panel) planes. The contour curves illustrate the cluster distribution. The dots are the median values for the binned mass for clusters, while the triangles are the median values for the binned mass for the field sample. As can be seen, for fixed stellar mass, the median values of D\({}_{n}(4000)\) (H\(\delta\)) for GV and SF galaxies are always lower (bigger) than for Pas. The same behavior is observed for the SF in comparison to the GV. In what regards the cluster _vs._ field comparison, small distinctions are observed in this plane. In the bottom panels, we show the Gaussian fit for each of the distributions. The distributions clearly show a distinction between the Pas, GV, and SF-ETGs. On the other hand, when comparing cluster _vs._ field, for the D\({}_{n}(4000)\) the distinction is nonexistent for GV and SF-ETGs (but present for Pas-ETGs), while for H\(\delta\) they are all distinct from each other. This result is given by the p-value of the KS test.

We have also investigated the distributions of the color gradient (provided by the KIAS catalog; Choi et al., 2010), which gives the color variation inside the galaxy. Although there is a degeneracy between age and metallicity in regard to color, some authors believe that the main driver of the color gradient is the metallicity gradient inside the galaxy (e.g., Peletier et al., 1990; Saglia et al., 2000; Tamura et al., 2000; Tamura and Ohta, 2000; Kobayashi, 2004). In any case, the nature of the formation of the color gradient is not up for debate in the current work, as the important aspect here is the existence of the color gradient _per se_. It is important because we can use the color gradient as a proxy to identify the different types of formation and evolution regimes: monolithic _vs._ hierarchical. In a monolithic scenario, the potential well of the galaxy's central region acts as a retainer of gas. The accumulation of gas in the inner part causes a greater enrichment in the region, leading to a more negative gradient. In a hierarchical scenario, due to the mixture and infusion of gas related to merger processes, there is a dilution of the chemical content, causing a less steep gradient (Tortora et al., 2010; Kobayashi, 2004). In the case where age is the main factor in the formation of the color gradient in galaxies, the scenario depicted above is still valid: the center of the galaxy will be older than the outskirts in the monolithic scenario, and in the hierarchical scenario the infusion of new gas will lead to new star formation, equalizing the color gradient in the same manner we observe in the case where metallicity is the main factor for the color gradient formation.

Figure 8 displays the distribution of the color gradient for our ETG sample. As in other figures, the continuous lines give the distribution of the cluster samples, while the dashed lines show the distribution of the field samples. The red tones represent the Pas-ETGs, the green tones the GV-ETGs, and the orange tones the SF-ETGs. This figure shows that GV and SF-ETGs have systematically more negative values of \(\nabla(g-i)\) than Pas-ETGs, for both environments. From Figure 8 it is also possible to compare the different ETG subpopulations according to the environment, cluster _vs._ field. In all cases, the distribution of \(\nabla(g-i)\) for field galaxies displays systematically more negative values than for cluster galaxies. This result indicates that the environment in which galaxies inhabit acts as a regulator of their evolutionary paths. Field ETGs are more likely to experience monolithic evolution, while cluster ETGs probably have their evolution explained by the hierarchical scenario. Of course, the evolution probably does not follow exclusively one of the scenarios, it being more likely that the evolution of galaxies needs both of them to explain all of their properties. However, the distinction in the color gradient can indicate a higher likelihood that the galaxies, in the recent past, had been evolving monolithically or hierarchically.

Figure 8: Distribution of \(\nabla(g-i)\) for each of the subpopulations in the cluster (continuous lines) and field (dashed lines) samples. Red tones represent Pas-ETGs, green tones GV-ETGs, and orange tones SF-ETGs. The numbers in the legend are the mean values of the Gaussian fits, while the top right numbers are the p-values for the comparison between the cluster and field subpopulations. As can be easily seen, the color gradient distributions are different when we compare Pas, GV, and SF ETGs between themselves, and also when we compare the cluster populations with their counterparts in the field.

The same trends demonstrated in Figure 8 for ETGs are observed for LTGs. The comparison of \(\nabla(g-i)\) between cluster and field LTGs is displayed in Table 3. As for the ETGs, the GV and SF-LTGs (and the combined sample) have more negative color gradient values than Pas-LTGs. We also found that field LTGs have more negative values than their cluster counterparts. In Figure 9 we again show the \(\nabla(g-i)\) distributions for the ETGs inside clusters and in the field; however, now we distinguish between elliptical (black lines) and lenticular (gray lines) galaxies. For Pas-ETGs in clusters, the distributions of ellipticals and S0s are very similar, displaying only a small difference in the median value of the distributions. A similar behavior is observed for the Pas-ETGs in the field, but the median values of the distributions show a slightly larger difference. For the GV, SF, and GV+SF-ETGs, the scenario is quite different. For each of these subpopulations, the lenticular and elliptical galaxies show distinct distributions, having different peaks. In the field, the separation is even larger. Figure 9 demonstrates that lenticular galaxies are responsible for the systematic shift in the distribution of \(\nabla(g-i)\) for the GV, SF, and GV+SF-ETG subpopulations in relation to the Pas-ETGs. It also shows that most of the shift observed in the comparison between cluster and field ETGs is due to the lenticular galaxies having a stronger tail in the field compared to clusters. That indicates that the likelihood of elliptical and lenticular galaxies following a similar evolutionary path in both environments is small.

Figure 9: Distributions of the color gradient for the ETGs, split into elliptical (thin black dashed lines) and S0 (thin gray dashed lines) galaxies. The colored histograms represent the distribution for the whole sample. The top panels present the distribution for cluster ETGs, and the bottom panels present the distribution for field ETGs. The vertical lines represent the median values (listed in all panels) of the Gaussian fit for each distribution.

In Figure 10 we follow up the discussion of the hierarchical _vs._ monolithic scenario observed in Figures 8 and 9 by comparing the distributions of Asymmetry (A) and Smoothness (S) of our subsamples (measured by Barchi et al., 2020). A is a measurement of how symmetrical a galaxy is; in general, spiral galaxies present higher values of asymmetry than elliptical galaxies (Conselice, 2003). Recent merger remnants (especially from major mergers) may present the highest values of asymmetry (Palmese et al., 2017), and the value decreases with time as the merger remnant reaches a new dynamical equilibrium. On the other hand, smoothness, S, is a measurement of how much of the galaxy's light is contained in small clumps (Conselice, 2003). Therefore, smoothness is higher for unperturbed early-type galaxies.

Figure 10 shows the CDFs of these two properties, comparing the cluster (solid lines) and the field (dashed lines) samples. The red tones are for Pas-ETGs, the green tones for GV-ETGs, the orange tones for SF-ETGs, and the salmon tones for GV+SF-ETGs. Figure 10 indicates a subtle but statistically significant difference in the value of A between cluster and field ETGs, for both passive and star-forming ones - confirmed by the KS and the Anderson-Darling tests at a 95% confidence level - but not for the GV-ETGs. For this case, the null hypothesis is accepted. That is also true for S.

Figure 10: Comparison of CDFs for Asymmetry and Smoothness for all subpopulations. As in previous figures, continuous lines represent cluster samples, and dashed lines represent field galaxies. Pas-ETGs are in red (first column), SF-ETGs in orange (second column), GV-ETGs in green (third column), and GV+SF-ETGs in salmon (fourth column). The top numbers show the p-value of the KS test and the statistic and critical value for the 95% confidence level for the Anderson-Darling test (the null hypothesis is accepted when \(stats\ <\ C_{value}\)).
These results indicate that cluster ETGs are more asymmetrical and have a less smooth light distribution than field objects. Due to the nature of hierarchical formation, it is expected that such galaxies will tend to be more asymmetric and have their light more concentrated in small pockets across the galaxy (consequently, less smooth). This will cause higher values of A and lower values of S. Thus, the results observed in Figure 10 partially support the scenario where cluster early-type galaxies have a higher likelihood of having followed a hierarchical formation scenario in comparison to those in the field, by showing that cluster ETGs are more asymmetrical and less smooth (i.e., more clumpy). We reinforce here that these statements are purely based on galaxy morphological parameters, which might be biased by the object's apparent magnitude (i.e., mass) and apparent size, as well as being affected by the presence of companions (i.e., the environment they live in). More detailed galaxy maps are needed to put stronger constraints on the galaxies' evolutionary paths and their relation with the environment. In a follow-up study, we intend to extend this analysis to IFU data, to retrieve the spatially resolved star-formation history.

To further attest to the cluster environmental influence on galaxy evolution, we display in Figure 11 the variation of the fraction of star-forming galaxies with normalized clustercentric distance, up to 5\(\times\)R\({}_{200}\). As mentioned in §2, we have 9931 cluster members within 5\(\times\)R\({}_{200}\). In this case, we consider as SF all galaxies above Eq. 3, which separates the passive from the green-valley galaxies in Figure 2. The black continuous curve describes the fractional variation of all star-forming galaxies without morphological distinction. The purple dashed curve displays the results for the SF-LTGs, while the orange dashed curve shows the SF-ETG sample. The field fraction (for each case) is displayed by the squares on the right side of the figure, for comparison. We can see a positive variation of the fraction of star-forming cluster galaxies up to \(\sim\) 3\(\times\)R\({}_{200}\) (reaching a plateau at larger distances). As the SF fractions within clusters are always smaller than the field fractions (even when we go to large distances), these results suggest that galaxies are pre-processed in smaller units (groups) before being accreted by large clusters (Haines et al., 2015).

Figure 11: Star-forming fraction variations with clustercentric distance (normalized by R\({}_{200}\)). The continuous black line shows the variation of the overall fraction of star-forming galaxies without morphological distinction. In this figure, we are considering as SF galaxies the combination of GV and SF galaxies. The orange dashed curve only considers SF-ETGs, and the purple dashed line displays the results for SF-LTGs. The gray square in the extreme right part of the figure is the fraction for all star-forming galaxies in the field. We have also indicated the field values for SF-ETGs and SF-LTGs, with the orange and purple squares, respectively. The error bars indicate the standard error for proportions.
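The profile of Figure 11 is essentially a binned proportion with the same error model used above. A minimal sketch in Python; the bin edges (steps of 0.5 R\({}_{200}\) out to 5 R\({}_{200}\)) and the function name are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def sf_fraction_profile(r_over_r200, is_sf, edges=np.linspace(0.0, 5.0, 11)):
    """Fraction of star-forming galaxies (those above eq. 3) in bins of
    normalized clustercentric distance, with binomial error bars."""
    r = np.asarray(r_over_r200, dtype=float)
    sf = np.asarray(is_sf, dtype=bool)
    centers, fracs, errs = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (r >= lo) & (r < hi)
        n = in_bin.sum()
        if n == 0:
            continue  # skip empty bins
        p = sf[in_bin].mean()
        centers.append(0.5 * (lo + hi))
        fracs.append(p)
        errs.append(np.sqrt(p * (1.0 - p) / n))
    return np.array(centers), np.array(fracs), np.array(errs)
```

Running this separately for all galaxies, the ETGs, and the LTGs gives the three curves of Figure 11; the corresponding field fractions are single proportions computed the same way.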
## 4 Discussion and Conclusions

One of the main goals of the present work is to use transitional galaxies, such as GV and SF-ETGs, as a proxy for galaxies migrating from the BC to the RS, and to investigate the influence of different environments on this transition. In general, we expect early-type galaxies to be red and dead objects. Nonetheless, several works have been showing that this is not the whole truth, and the presence of SF-ETGs needs to be accounted for (Lee et al., 2006; Kannappan et al., 2009; Schawinski et al., 2009). Hence, a proper understanding of these populations can shed some light on the evolutionary path of galaxies. To do so, we have analyzed a sample of 3,899 cluster galaxies and 11,460 field galaxies in the local Universe. We further divided the cluster sample between galaxies belonging to relaxed clusters (2,878 galaxies) and non-relaxed clusters (1,021 galaxies). According to Vulcani et al. (2015), SF-ETGs comprise about 6% of the general field ETG population. Kaviraj et al. (2011) point out that _ETGs account for around half the stellar mass budget in the local Universe_. They also state that SF-ETGs contribute \(\sim\)14 percent of the cosmic star-formation budget.

We have shown in Figure 3 that the SF-ETGs arrived at the cluster at a later epoch than the Pas-ETGs (for both relaxed and non-relaxed clusters). This is also true for the GV-ETGs and for the combined GV+SF-ETG sample. They also have an infall period similar to the one observed for the late-type population (star-forming or quenched). While the GV and SF-ETGs are dominated by galaxies with recent or intermediate times since infall (the same as the LTGs), the Pas-ETGs are dominated by galaxies with ancient infall times. This behavior is consistent with other results in the literature (Jaffe et al., 2015, 2016; Lotz et al., 2019), which indicate that the interstellar gas of the galaxies is removed after the first passage into the center of the cluster, preventing further star-formation activity from happening.

While the segregation in the PPS according to the star-formation activity is clear for the ETGs, that is not the case for the LTGs. Figure 4 indicates small differences between the locations of Pas, GV, SF, and GV+SF-LTGs in the PPS. We find that the fractions of galaxies inside the E region are higher for Pas-LTGs, in comparison to what was observed for the GV-LTGs, the SF-LTGs, and the GV+SF-LTGs (Figure 4). As we saw for the ETGs, this result indicates that the Pas-LTGs may have been within clusters for slightly longer periods than the other LTG subpopulations.

We also show in §3.1 that the dynamical state of the cluster does not have a major influence on the properties studied in this work (Table 3). This result agrees with the findings of Sampaio et al. (2021), who, using a different methodology to classify the dynamical state of clusters with M\({}_{200}>10^{14}M_{\odot}\), found that the investigated properties are globally similar in both environments, although the galaxy infall rate of non-relaxed clusters is larger than the one of relaxed clusters.

Unlike what was obtained in §3.1, we have shown in §3.2 (comparison of cluster and field galaxies) that the cluster environment influences most of the galaxies' properties. As shown in Table 3, for the LTGs the scenario is very clear: with the exception of H\(\delta\), all the comparisons made in Table 3 rejected the null hypothesis that the samples were drawn from the same distribution. On the other hand, for the ETGs the scenario is more complex, especially for the D\({}_{n}\)(4000) and for the sSFR. According to the KS test, the GV and the SF-ETGs from the cluster and the field are drawn from the same distribution, meaning that the stellar population age and the star formation taking place in the cluster and field GV/SF-ETGs are similar. However, when the comparison considers the combined sample (GV+SF-ETGs), the scenario changes. For this enlarged sample, all the properties are distinct between cluster and field (except only for stellar mass). The difference is probably mostly due to the reduced number of galaxies in the individual GV and SF-ETG samples, yielding low statistics for the individual comparisons. This result indicates that the combined cluster sample of GV+SF-ETGs has an older stellar population (given by the D\({}_{n}\)(4000)), a more relevant recent star formation (given by H\(\delta\)), and a more suppressed ongoing star formation (given by the sSFR).
Since their stellar masses are statistically similar, we cannot attribute this difference to a mass effect. This result is in line with several works that have pointed out that different phenomena impact infalling galaxies, affecting their star-formation activity (Haines et al., 2013; Zinger et al., 2018; Wang et al., 2018; Lotz et al., 2019).

In Figure 7, we show evidence that the environment and mass act in combination to quench the star-formation activity. We see that for SF-ETGs the environment matters above Log M\({}_{*}\)/M\({}_{\odot}\)\(\sim\) 10.5. The recent star-formation activity (indicated by H\(\delta\)) shows different results between the field and cluster only above this mass. In general, the SF-ETGs reduce their activity as they grow in mass, but this process is accelerated for cluster galaxies.

Another result suggested by Table 3 and demonstrated by Figures 8 and 9 is the difference in the color gradient distributions, related to the galaxy environment. As previously argued, the color gradient can give clues to the formation scenario followed by a galaxy. We interpret the results from these two figures (8 and 9) as an indication that cluster galaxies show a higher likelihood of having followed the hierarchical formation scenario, while field galaxies are more likely to follow the monolithic scenario. On top of that, the results from Figure 10 (regarding the Asymmetry and Smoothness parameters) suggest that the pre-processing of galaxies is an important step before they enter the cluster environment. Before a galaxy becomes a cluster member, it could be part of a galaxy group, and as a group member, the galaxy can suffer processes similar to the ones in the cluster environment (e.g., starvation, fly-bys, mergers, harassment, and others; Haines et al., 2015). Due to the high velocity dispersion, it is less likely that galaxies go through mergers inside clusters (within R\({}_{200}\)). Thus, it is more probable that the results we are observing are the outcome of mergers that took place before the galaxies became cluster members, in environments where the velocity dispersion is lower than that observed in the interior of the cluster (Hickson, 1997). In a subsequent project, we intend to investigate the regions where such influences occur by extending our investigation out to several R\({}_{200}\). A preliminary confirmation of the pre-processing effect is displayed here, in Figure 11.

Figure 9 offers an attempt to explain why the field and cluster samples have different distributions of the color gradient, especially for star-forming ETGs. The bulk of the distinction comes from what we classified as lenticular galaxies. According to this classification, lenticular galaxies pull the distribution of the field samples to a stronger negative color gradient, indicating a preference for monolithic formation. Our result diverges from other recent works in the literature, like those of Coccato et al. (2020, 2022). In both papers, using IFU data, the authors investigate the different formation scenarios that lenticular galaxies can evolve from. Using the kinematic information obtained from the IFU data, they point out the existence of two types of lenticular galaxies: rotationally-supported and pressure-supported ones. They interpreted each of these types as an indication of the formation processes. Rotationally-supported lenticulars would have formed through fast processes that consume their gas (like ram-pressure stripping and starvation).
On the contrary, pressure-supported lenticular galaxies would have formed through minor-merger processes that modified their kinematic characteristics. The authors found an environmental dependency for each of the different types of lenticular galaxies: cluster lenticulars are more rotationally-supported, while field lenticulars are more pressure-supported (Coccato et al., 2020). Furthermore, as pointed out in Coccato et al. (2022), _the "faded spiral" pathway is the most efficient channel to produce S0s, and it becomes more efficient as the mass of the group or cluster or the local density of galaxies increases. The merger pathway is also a viable channel, and its efficiency becomes higher with decreasing local density or environment mass._

It is important to consider that the morphology classification used in Coccato et al. (2020, 2022) is not the same as in the present work. Also, their stellar mass and luminosity intervals are more constrained to higher luminosities/stellar masses. Furthermore, in Coccato et al. (2020, 2022), the authors do not distinguish the lenticular galaxies into passive and star-forming galaxies as we did in the present work. This consideration is relevant when comparing both works since, as was demonstrated in Figures 8 and 9, the distinction observed for the evolutionary paths (according to the color gradient distributions) is connected to the star-formation level of the galaxies.

The combination of the results obtained with the color gradient (Figures 8 and 9), the Asymmetry and Smoothness (Figure 10), and the star-forming/passive fraction variation with clustercentric distance (Figure 11) offers a strong argument for the need for pre-processing of galaxies before they enter the cluster. The same argument was pointed out by Haines et al. (2015), where the authors investigate a sample of star-forming galaxies coming from 30 massive clusters in the redshift interval \(0.15<z<0.30\). The authors found that the fraction of star-forming galaxies increases with clustercentric radius, but remains below the field value even at 3R\({}_{200}\). This result cannot be reproduced by a scenario where star-formation suppression only occurs in infalling field galaxies, justifying the need for the galaxies to undergo pre-processing before the cluster infall.

As we have shown, the cluster environment has important consequences for the evolution of galaxies, but it is not the only one responsible for the differences observed within the galaxy population. The pre-processing of galaxies must be taken into account when studying galaxy evolution. We intend to continue this work by extending the present analysis into the outer parts of clusters.

## Acknowledgements

This work would not be possible without the funding of Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), which supported DB with a Ph.D. fellowship. PAAL thanks the support of CNPq, grants 433938/2018-8 and 312460/2021-0. ALBR thanks the support of CNPq, grant 316317/2021-7, and FAPESB INFRA PIE 0013/2016. AC acknowledges the FAPERJ grant E_407/2021 - Apoio ao Jovem Pesquisador Fluminense sem vínculo em ICTs do Estado do RJ - 2021 - E-26/200.607 and 210.371/2022(270993). We acknowledge the anonymous referee for the very helpful suggestions.

## Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author.
2310.14117
ZTD$_{JAVA}$: Mitigating Software Supply Chain Vulnerabilities via Zero-Trust Dependencies
Third-party software components like Log4J accelerate software application development but introduce substantial risk. These components have led to many software supply chain attacks. These attacks succeed because third-party software components are implicitly trusted in an application. Although several security defenses exist to reduce the risks from third-party software components, none of them fulfills the full set of requirements needed to defend against common attacks. No individual solution prevents malicious access to operating system resources, is dependency-aware, and enables the discovery of least privileges, all with low runtime costs. Consequently, they cannot prevent software supply chain attacks. This paper proposes applying the NIST Zero Trust Architecture to software applications. Our Zero Trust Dependencies concept applies the NIST ZTA principles to an application's dependencies. First, we assess the expected effectiveness and feasibility of Zero Trust Dependencies using a study of third-party software components and their vulnerabilities. Then, we present a system design, ZTDSYS, that enables the application of Zero Trust Dependencies to software applications and a prototype, ZTDJAVA, for Java applications. Finally, with evaluations on recreated vulnerabilities and realistic applications, we show that ZTDJAVA can defend against prevalent vulnerability classes, introduces negligible cost, and is easy to configure and use.
Paschal C. Amusuo, Kyle A. Robinson, Tanmay Singla, Huiyun Peng, Aravind Machiry, Santiago Torres-Arias, Laurent Simon, James C. Davis
2023-10-21T21:23:09Z
http://arxiv.org/abs/2310.14117v2
# Preventing Supply Chain Vulnerabilities in Java with a Fine-Grained Permission Manager

###### Abstract

Integrating third-party packages accelerates modern software engineering, but introduces the risk of software supply chain vulnerabilities. Vulnerabilities in applications' dependencies are being exploited worldwide. Often, these exploits leverage features that are present in a package, yet unneeded by an application. Unfortunately, the current generation of permission managers, such as SELinux, Docker containers, and the Java Security Manager, are too coarse-grained to usefully support engineers and operators in mitigating these vulnerabilities. Current approaches offer permissions only at the application's granularity, lumping legitimate operations made by safe packages with illegitimate operations made by exploited packages. This strategy does not reflect modern engineering practice -- we need a permission manager capable of distinguishing between actions taken by different packages in an application's supply chain. In this paper, we describe Next-JSM, the first fine-grained ("supply chain aware") permission manager for Java applications. Next-JSM supports permission management at package-level granularity. Next-JSM faces three key challenges: operating on existing JVMs without access to application or package source code, minimizing performance overhead in applications with many packages, and helping operators manage finer-grained permissions. We show that these challenges can be addressed through bytecode rewriting; appropriate data structures and algorithms; and an expressive permission notation plus automated tooling to establish default permissions. In our evaluation, we report that Next-JSM mitigates 11 of the 12 package vulnerabilities we evaluated and incurs an average 2.72% overhead on the Dacapobench benchmark. Qualitatively, we argue that Next-JSM addresses the shortcomings of the (recently deprecated) Java Security Manager (JSM).

## 1 Introduction

The Software Supply Chain in Java has become a devastating vector for Remote Code Execution (RCE) and other critical vulnerabilities [17]. These supply chain vulnerabilities give malicious actors access to alter a program's execution, execute shell scripts, or exfiltrate sensitive files [16, 17, 31, 53]. Within the last 2 years, critical vulnerabilities in popular Java packages such as Log4J, Spring, and Apache commons-text have exposed millions of Java applications to malicious exploits [28, 41, 45]. We need systems that enable engineers and operators to protect their applications from vulnerabilities in their software supply chains.

Software supply chain vulnerabilities cannot easily be mitigated by the current generation of permission managers. This class of software, embodied in access control systems [52, 50, 26, 29] and sandboxes [14, 23, 30, 35, 49], is a popular approach for mitigating the impact of application vulnerabilities. However, existing systems for Java applications enforce permissions at an inappropriate granularity, _e.g._, at the granularity of an application [7] or a Docker container [13]. They are unable to enforce the Principle of Least Privilege _within_ an application's dependencies, which is required to prevent supply chain vulnerability exploitation. We posit that the insufficiency of existing permission managers is due to their design at the wrong level of granularity (the application level).
We believe a fine-grained ("supply-chain-aware") permission manager, _i.e._, with permissions at the granularity of packages, is necessary to protect modern applications. A permission manager at the package granularity should improve an application's security by enforcing the least privilege principle on the application's supply-chain components (dependencies). This goal introduces three challenges: **(C1:)** It needs to operate on existing JVMs and function without access to the source code of Java applications or their dependencies; **(C2:)** It needs to minimize performance overhead in applications with many dependencies; and **(C3:)** It needs to help operators identify and manage these fine-grained permissions.

We propose Next-JSM, the first supply-chain-aware permission manager for Java applications. Next-JSM allows specifying permissions that are scoped to specific dependencies in an application. It features an extendable bytecode rewriting engine that intercepts classes within the JDK and instruments their security-sensitive methods to monitor API usage and enforce the specified permissions (**C1**). Next-JSM uses appropriate data structures and algorithms to facilitate efficient retrieval of a package's permissions in constant time with respect to the number of packages (**C2**). For permission management, Next-JSM supports tracing to determine the default minimum permissions of an application, as well as an expressive permission notation (**C3**).

We evaluated Next-JSM in several ways. Next-JSM can prevent many real-world supply chain vulnerability exploits (11/12 protected). Next-JSM incurs substantial performance overhead on microbenchmarks, but low performance overhead on the Dacapobench (average 2.72% overhead). Next-JSM achieves constant-time overhead regardless of the number of packages being monitored. Using Next-JSM's capability tracing system, we show that several real applications use packages that can make use of more security-sensitive operations than the applications need -- our expressive permission language allows application engineers to deny those behaviors in their context. We compared Next-JSM to the nearest related tool, the Java Security Manager (JSM), and report greater usability along the dimensions for which JSM was deprecated in 2021.

In summary, our main contributions are:

* We designed Next-JSM, the first software supply chain-aware permission manager for Java. Next-JSM offers permissions on a dependency basis rather than an application basis, and addresses usability flaws of the deprecated JSM.
* We demonstrated Next-JSM's defense capabilities on a collection of real-world supply chain vulnerabilities in Java packages and evaluated its performance overheads.

## 2 Background

In this section, we review the general concept of software supply chain vulnerabilities (§2.1) and the specific case of the Log4J/Log4Shell vulnerability (§2.2).

### Software supply chain vulnerabilities

The software supply chain involves a collection of resources (hardware and software), storage, and distribution mechanisms that contribute to the development of a final software product [12]. From an application's perspective, its supply chain is the set of software packages included as dependencies and the various channels through which it obtains its dependencies [40]. In modern commercial software development, supply chains are the norm. Large portions of a typical software project are implemented by combining third-party components.
For example, in 2023 Synopsys analyzed 1703 commercial software projects across 17 sectors of industry [47]. They reported that 96% of the surveyed projects contained third-party (open-source) software, with an average of 595 open-source components imported as dependencies per project. Similarly, Kumar reported that over 90% of the top one million Alexa-ranked websites rely on external dependencies [33]. Modern software projects depend, directly or indirectly, on hundreds or thousands of distinct packages developed and maintained by different engineers.

Beyond the risk posed by vulnerable dependencies, these vulnerable dependencies persist in the supply chain long after patches are available. This increases the exposure timeline of the affected applications. For example, in 2023 Synopsys reported that, despite the publicity received by the Log4J vulnerability, 11% of the Java codebases they scanned contained vulnerable versions of Log4J one year after the patch was available [47]. Corroborating this finding, Zhang _et al._[54] measured patch lag across Java's Maven ecosystem. In the case of Log4J, they found it took 308 days for 50% of Log4J dependents to get patched, and after 15 months of exposure, newly affected Log4J dependents were still being published, with 94% importing Log4J transitively. Similar behaviors were reported across other vulnerable Maven packages. They attributed the persistence of these vulnerabilities to the failure of direct dependents to either update or release patched versions of their packages. Given the risks of vulnerable dependencies and the challenge of vulnerability patch propagation, our work is directly in line with industry experts' call for "zero trust" for software dependencies [25].

### Overreach in Dependency Behavior: The Case of Log4Shell

Our thesis is that many software supply chain vulnerabilities result from a gap between the integrator's expectation of a package's behavior and the package's _potential_ behavior. We term this phenomenon **dependency behavioral overreach**. Here we illustrate it through the Log4J vulnerability, based on the analysis conducted by the US CISA [18].

The Log4J vulnerability, CVE-2021-44228, was published in 2021. Nicknamed Log4Shell, the vulnerability occurred in the popular Log4J logging library. It allowed remote code execution, enabling malicious actors to take control of affected systems. The severity of the vulnerability was compounded by its ubiquity -- Log4J was used, directly or indirectly, by thousands of other packages and millions of applications worldwide.

The Log4J vulnerability was caused by Log4J's support for the JNDI lookup functionality while logging externally-supplied data. JNDI allows a Java application to interact with and retrieve data from an LDAP server. Given an attacker-controlled LDAP or RMI server, JNDI has the capability to execute malicious code returned by that server. The Log4J library is usually used for logging attacker-controlled data, such as components of an HTTP request header or the user names of an application's users. Hence, if such data contains a JNDI URL, the Log4J library accesses the URL and can execute any code returned by the server.

Our key observation about the Log4J vulnerability is that many users did not need the JNDI functionality -- and the corresponding code execution behavior -- in their logging library. Such functionality could be disabled without adverse effect on the application's functionality.
However, preventing the application from issuing any network accesses would be too aggressive, since other components of the application might rely on the network. Unfortunately, existing permission managers for Java do not permit application engineers and operators to distinguish permissions at the package level of granularity. We discuss this and related work next.

## 3 Related Work

### Sandboxing and Permission Managers

Sandboxing, process isolation, permission managers, and access control systems form a well-studied area that has contributed to the security of operating systems, web and mobile systems, programming languages, and other software systems. Figure 1 summarizes different types of permission managers based on their targets.

Operating-system-level isolation and access control techniques restrict an application's access to the file system and to system calls. SELinux [8] and AppArmor [1] involve using policy files to define what resources an application should access. Containers such as Docker [13] build on these OS-level mechanisms to offer similar guarantees. These approaches do not have insight into application internals, limiting the enforcement of the least privilege principle on the dependencies.

Several other sandboxing mechanisms create an isolation layer between the application and third-party code. The Firefox renderer [37] uses RLBox to isolate third-party libraries to render audio, video, images, and other content. NativeGuard [46] isolates native libraries of Android applications into a second application where unnecessary privileges are eliminated. Gama _et al._ [27] proposed a self-healing sandbox for executing third-party code. Cage4Deno [14] and Sandcrust [35] isolate the execution of subprocesses and untrusted C programs within the Deno and Rust runtimes, respectively. These techniques are all designed for C/C++ code and are tailored to protect the application from C/C++-related vulnerabilities such as crashes and memory corruption.

More recently, intra-process isolation techniques have been applied to non-C/C++ code. BreakApp [49] decomposes a Javascript app and runs each library in its own sandbox, following defined policies. This is conceptually related to Next-JSM, but differs substantially in detail. Another family of approaches for Javascript prevents the introduction of new functionality at runtime [26, 39, 50], but such approaches are inappropriate when an exploit depends on functionality built into the application. Next-JSM's dynamic analysis approach allows it to support finer-grained permissions and is tailored to use cases where there is legitimate but unused functionality in a Java package that can be exploited.

For Java applications, the Java Development Kit (JDK) provides the Java Security Manager (JSM) to restrict the access of Java applications to sensitive lower-level APIs. Like other application-level sandboxing techniques, the JSM's design makes it unsuitable for enforcing the least privilege principle on an application's dependencies or protecting the application from its supply chain vulnerabilities. Unfortunately, the JSM was recently deprecated due to its lack of use and its insufficiency in protecting against relevant vulnerabilities. We compare our approach qualitatively to the JSM in §6.3.

### Automated Policy Generation

Once a permission model is established, a policy must be developed and maintained for an application. Various automated approaches have been proposed for generating policies for sandboxes.
The Linux Systrace functionality monitors system call executions and generates a policy based on the observed system call invocations. Wan _et al._ [51], Jamrozik _et al._ [32], Pailoor _et al._, and Bapan _et al._ [15] applied similar methods to monitor system calls and generate policies for various other software systems such as Android applications and containers. Monitoring processes at the system call level is effective for generating policies for operating-system-level sandboxing approaches.

Figure 1: Selected permission managers at the operating-system, application, and package granularity. The red bars with question marks demonstrate the gap this project fills. While Javascript has techniques combating supply-chain vulnerabilities, there are no corresponding techniques in Java.

However, information about the application's third-party dependencies is unavailable at the operating system or application level (a "semantic gap"). Hence, the available automated policy generation approaches are insufficient for generating policies for a supply-chain-aware permission manager. To automatically generate policies for intra-application sandboxing, MIR [50], Ferreira _et al._ [26], and Ohm _et al._ [39] employ static analysis to identify usage of sensitive APIs by the individual packages. Such techniques are unsuitable for Java, where third-party dependencies' source code is unavailable. Our work proposes instead a runtime policy generation approach to specify the capabilities that Java dependencies exhibit in the context of the application.

### Other Supply Chain Security Approaches

Beyond privilege reduction, many other approaches have been proposed to reduce the risk from vulnerable or malicious components of the software supply chain [40]. As discussed by Okafor _et al._ [40], these works focus on promoting transparency [4, 6, 10, 11], validity [29, 34, 48, 38], and separation [20, 24] within the supply chain. These techniques reduce the risk of introducing malicious or vulnerable dependencies. However, they cannot protect the application from zero-day vulnerabilities in its current packages. By enforcing a least-privilege principle on dependencies, Next-JSM helps application engineers mitigate their zero-day risk.

## 4 Motivation and Threat Model

### Vulnerabilities in the Java Ecosystem

We assessed the _prevalence_ of supply chain vulnerabilities by examining the vulnerability growth trend over the last 20 years. Figure 2 shows a graph of CVEs disclosed against Maven packages, based on CVE publication year. We observe a jump in vulnerabilities in the Java supply chain within the last 4 years. The actual risk faced by applications is greater still, as many vulnerabilities are not disclosed as CVEs [12, 36, 44].

### System and Threat Model

Our **system model** is that an application depends on a vulnerable package, either directly or transitively. User-supplied data from the application is input to the vulnerable package. The vulnerable package has the built-in capability to perform some security-sensitive operation(s) on the host system, such as accessing the local file system, connecting to a remote server, or executing executable files, shell commands, or arbitrary functions located in the classpath. The package can either perform such an action directly (such as invoking the responsible APIs to execute the action) or indirectly (_e.g._, calling another package to do so).
The input received by the vulnerable package from the application can influence the conduct of any of these sensitive operations. Given this system model, our **threat model** is that a malicious actor can interact with the application and control the data passed to the vulnerable package. This model is realistic, occurring regularly when applications depend on third-party packages. For example, in the Log4J case, applications either depend on Log4J for logging or on a package that uses Log4J. Log4J, through its use of the JNDI APIs, can connect to a remote server and instantiate arbitrary classes returned by the server. The server URL it connects to can be controlled by the log message the application sends to it. Applications commonly log externally supplied data, such as HTTP header fields and form entries. Hence, given all these properties, the Log4J vulnerability represents a threat vector that can be mitigated by Next-JSM.

However, this threat model does not consider native libraries written in lower-level languages, such as C/C++, which a package may use directly to perform sensitive operations. This is not a conventional programming practice in Java, as the Java Development Kit provides a vast set of APIs for interacting with operating system resources. Furthermore, this threat model does not include threats posed by intentionally malicious packages. As discussed in §3, the software supply chain community has proposed many approaches for preventing the introduction of malicious packages to the supply chain.

Next-JSM shares a similar threat model with other analogous works designed for the Javascript ecosystem [49, 50]. It assumes vulnerable packages in the application's dependencies and the package's capability to perform security-sensitive operations. However, the granularity of protection it provides to the application is a significant difference. BreakApp [49] and MIR [50] block access to specific Javascript APIs and do not consider the arguments of these APIs. Hence, if a package has access to read a single file, it gets access to read all files. Next-JSM provides fine-grained permissions such that it can still protect access to specific files even when the package has access to some other files.

Figure 2: Annual CVEs in Maven packages, until Aug. 2023.

## 5 Next-JSM: Design and Implementation

### Design Requirements

In addition to supply chain vulnerability protection, a supply-chain-aware permission manager should also be usable. Here, we identify and discuss five vulnerability protection and usability requirements that such a permission manager should meet. While the first three requirements address challenges and concerns introduced by a finer-grained supply-chain-aware permission manager, the last two requirements directly address the reported usability flaws [43] of the deprecated Java Security Manager.

1. **Stable and Acceptable Performance Overhead (§6.2):** A supply-chain-aware permission manager should introduce only an acceptable worst-case impact on an application's latency and throughput. Such an impact should be stable relative to the number of packages in the application.
2. **Package Capability Inference (§5.3):** A supply-chain-aware permission manager should be able to infer a package's capabilities and help the application maintainer understand the consequence of restricting a specific package's permissions.
3. **Non-fatal Enforcement Option (§5.4.3):** A supply-chain-aware permission manager should provide a safe (non-fatal) enforcement option to ensure application availability. In such a mode, access violations are reported to the application maintainer without interrupting the application's execution.
4. **Easy Programming Model (§5.2.2):** To mitigate the _difficult programming model_ flaw of the JSM, a supply-chain-aware permission model should not require the explicit specification of permissions for all packages in an application in order to guarantee a successful application runtime.
5. **Flexible Permission Model (§5.2.3):** To mitigate the _brittle permission model_ flaw of the JSM and ensure usability, a supply-chain-aware permission model should support partial security and negative permissions. An application maintainer should be able to enforce only the set of permissions they deem necessary. They should also have the option to grant permissions for all but specifically unwanted operations.

Next, we discuss the relevant aspects of our design.

### A Supply Chain Aware Permission Model

Next-JSM aims to protect applications from the exploitation of vulnerabilities in the packages they depend on. At its core is a permission model expressing the permissions that should be granted to each package in the application. The model comprises specific properties that simplify its specification and improve its usability.

#### 5.2.1 Components of the Permission Model

Following the Access Matrix terms defined by Sandhu _et al._ [42], Next-JSM's permission model comprises four components that enable it to clearly specify a package's permissions: Permission Subjects, Resource Types, Resource Operations, and Resource Objects. Each permission consists of a subject-operation-object tuple, while the resource types define the properties of the objects and specify the set of operations allowed on them. The relationship between these components is shown in the abstract permission model section of Figure 3.

Permission Subjects: Permission subjects refer to the packages that an application depends on. In Java, the names of classes are prefixed by the names of the packages they belong to. For example, Log4J's Logger class handles logging and has the fully qualified name _org.apache.logging.log4j.Logger_. The package name is _org.apache.logging.log4j_, and the class name is _Logger_.

Resource Types: Resource types refer to the different kinds of operating system resources that an application or package can access. They also determine the set of permissions that can be enforced by Next-JSM. Next-JSM currently supports three types of resources: File System, Network, and Runtime. These resource types are _configurable_, allowing an application maintainer to enforce only the set of permissions they consider necessary. They are also _extendable_, as Next-JSM can be extended to support and enforce new permissions.

Resource Objects: These are the individual items the subject aims to access. They include files or directories, network URLs, or shell commands.

Resource Operations: These are the various actions that can be performed on the individual resource types.

#### 5.2.2 A Context-sensitive Permission Model

Next-JSM exhibits a context-sensitive permission model. This means that a package's access to a specific resource depends on its set of permissions and the permissions of its callers. This is achieved through two properties of Next-JSM's permission verification process, as demonstrated in Listing 1.
Callstack-based Permission Verification: Next-JSM's permission model requires that **all** packages involved in an attempt to perform an operation on a controlled resource have the appropriate permissions to perform the operation on the specific resource object. Hence, as shown in the permission verification algorithm, Next-JSM walks the stack at the point of invocation, performs verification checks on each class's package, and denies the permission request if any package fails the verification.

Package Permission Inheritance: To prevent the need to grant an explicit subject-operation-object permission tuple to each package, Next-JSM's permission model supports **permission inheritance**. The permissions defined for a package are inherited by all transitive dependencies used by the package, unless the transitive dependency has its own overriding set of permissions defined. Hence, as shown in Figure 3, P3 is able to access the specific file when invoked by P1, even though it has no explicit file permissions. On the other hand, the operation is denied when P2 invokes P3. Permission inheritance may have the unwanted consequence of enabling a confused deputy attack, where a vulnerable package uses an authorized package to access a sensitive resource. However, it is still necessary to enable permission inheritance, as the set of transitive dependencies that would otherwise require permission specification grows exponentially. To mitigate the risk of confused deputy attacks, Next-JSM's permission inheritance property is configurable.

Figure 3: Overview of the design of Next-JSM, a supply-chain-aware permission manager for Java. It comprises two components. The Capability Tracing component monitors an instrumented application's usage of sensitive APIs and generates a Capabilities Specification (CS) file. The CS file aids the specification of a permissions file. The Permissions Enforcement component uses the permissions file to authorize access to sensitive operating system resources.

```
func checkPermission(resourceType, resourceObject, operation):

    packagePerms = {}
    hasPermEnabled = False
    hasPermRestricted = False

    callStack = getCallStack()

    for class in callStack:
        packageName, perms = getPackagePermissions(class)
        packagePerms[packageName] = perms

    for packagePerm in packagePerms:
        hasPerm = packagePerm.contains(resourceType, resourceObject, operation)
        if hasPerm == 0:        # 0: permission explicitly granted
            hasPermEnabled = True
        else if hasPerm == 1:   # 1: permission explicitly restricted
            hasPermRestricted = True

    if hasPermEnabled and not hasPermRestricted:
        return True

    return False
```

Listing 1: Permission verification operation. Permissions are checked for all classes in the stacktrace. Authorization requires \(\geq\)1 class be permitted and no class be restricted.
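The getCallStack() step in Listing 1 maps naturally onto the JDK's StackWalker API (Java 9+). The following is a minimal sketch of that step under our own naming; it is an illustration, not Next-JSM's actual implementation.

```
import java.util.List;
import java.util.stream.Collectors;

public final class CallStacks {
    private static final StackWalker WALKER =
        StackWalker.getInstance(StackWalker.Option.RETAIN_CLASS_REFERENCE);

    // Returns the declaring class of every frame on the current stack,
    // from which each frame's package name can be derived.
    public static List<Class<?>> getCallStack() {
        return WALKER.walk(frames ->
            frames.map(StackWalker.StackFrame::getDeclaringClass)
                  .collect(Collectors.toList()));
    }
}
```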
#### 5.2.3 Permission Specification

Next-JSM's permission model supports the use of a reader-friendly format for specifying permissions. Listing 2 shows a sample permissions file with permissions specified for a single package. The provided format supports three levels of permission specification, with varying granularity.

* Resource-type level: A package can be granted total access or total restriction to a specific resource type. In Listing 2, package _com.foo.baz_ has no access to the network system.
* Resource-operation level: A package is granted access to perform specific operations on a resource type. In Listing 2, package _com.foo.baz_ can read files but cannot write to any file.
* Resource-object level: At the most fine-grained level, the specific items a package can or cannot access are specified. This enables the use of both positive and negative permissions, as defined by Sandhu _et al._ [42]. In Listing 2, package _com.foo.baz_ can only execute the _whoami_ command at runtime.

```
{
  "com.foo.baz": {
    "fs": false,
    "fs.read": true,
    "fs.write": false,
    "fs.allowedPaths": [],
    "fs.deniedPaths": [],
    "net": false,
    "net.connect": false,
    "net.accept": false,
    "net.allowedUrls": [],
    "net.deniedUrls": [],
    "runtime": false,
    "runtime.exec": false,
    "runtime.allowedCommands": ["whoami"],
    "runtime.deniedCommands": []
  },
  ...
}
```

Listing 2: Permission specification file. It supports three levels of permission granularity: resource-type level (_e.g._, "fs"), resource-operation level (_e.g._, "fs.read"), and resource-object level (_e.g._, "fs.allowedPaths").

### Capability Tracing

Next-JSM's capability tracing component is designed to aid system administrators and security personnel in the specification of appropriate permissions to protect their applications. The output of capability tracing is a Capability Specification file. This file contains the specific resource objects accessed by each package and the operations carried out on each item. The file can either be used as a permissions file for permission enforcement or can be used as a source of data to understand consequences and guide decision-making on permission restriction. The capability tracing operation comprises three stages.

Monitoring: Capability tracing uses the instrumentation hooks attached by Next-JSM to controlled APIs to monitor these APIs' usage and the corresponding capabilities. It collects information about the resource object being accessed and the packages in the stack trace at the point of invocation. This data is first stored in an in-memory buffer to optimize performance and routinely written to a file for long-term storage.

Merging: Next-JSM's capability tracing component maintains a single and simple representation of an application's recorded capabilities by merging each new recording with the existing capability specification file. This stage is triggered immediately after the API usage recording is written to a file. It takes the new API usage recording and the existing capability specification file, and for each package in the new API usage recording, it updates the package's capabilities in the capability specification file.

Minimization: To support long-term capability tracing and avoid excessive use of disk space for storing the capability specification document, Next-JSM's capability tracing component features an optional minimization stage that performs file path reduction on the various file and directory paths in the capability specification file. When enabled, if a package accessed \(k\) or more files from the same directory, this stage replaces these files with the directory path in the capability specification file. This aims to reduce the size of the capability specification file when there are packages making frequent and unique file accesses in the application.
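The minimization step can be realized as a simple grouping pass over the recorded paths. The sketch below is our own illustration under that assumption, not Next-JSM's actual code.

```
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public final class PathMinimizer {

    /** Replace each group of k or more recorded files that share a parent
     *  directory with the parent directory path itself. */
    public static Set<String> minimize(Set<String> recordedPaths, int k) {
        Map<Path, List<Path>> byParent = new HashMap<>();
        Set<String> result = new TreeSet<>();

        for (String raw : recordedPaths) {
            Path path = Paths.get(raw);
            Path parent = path.getParent();
            if (parent == null) {
                result.add(path.toString()); // no parent: keep as-is
            } else {
                byParent.computeIfAbsent(parent, p -> new ArrayList<>()).add(path);
            }
        }

        for (Map.Entry<Path, List<Path>> entry : byParent.entrySet()) {
            if (entry.getValue().size() >= k) {
                result.add(entry.getKey().toString()); // collapse to directory
            } else {
                for (Path p : entry.getValue()) {
                    result.add(p.toString());
                }
            }
        }
        return result;
    }
}
```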
### Permission Enforcement

Next-JSM is designed to enforce the set of permissions specified for each package in an application. Similar to other permission managers, Next-JSM intercepts access to instrumented methods and uses a security monitor to verify whether access is permitted. For each resource type, we refer to the class in the core JDK library that provides access to the protected resource as a security-sensitive class. We refer to the APIs in the specific classes the package interacts with as instrumented methods, due to the presence of instrumentation. Table 1 shows the instrumented classes and methods for each resource type and operation pair.

\begin{table}
\begin{tabular}{l l l}
\hline \hline
**Operation** & **Sensitive Class** & **Instrumented Method** \\
\hline
File Read & FileInputStream & \textless{}Constructor\textgreater{} \\
File Write & FileOutputStream & \textless{}Constructor\textgreater{} \\
Network Connect & Socket & connect() \\
Runtime Execution & ProcessBuilder & start() \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Sensitive classes and instrumented methods used to enforce different permissions.

Next-JSM's permission enforcement approach has three phases.

_Bytecode Transformation:_ At an application's startup, Next-JSM intercepts security-sensitive classes as they are being loaded into the JVM and uses its bytecode rewriting engine to instrument the methods performing sensitive operations with calls to its permission verification component.

_Permission Context Generation:_ Also at startup, it reads the provided permission specification file and builds a permissions context for the application comprising the packages and the sets of permissions they own.

_Permission Verification:_ At runtime, during the invocation of any instrumented method, Next-JSM performs the verification of permissions following the algorithm described in Listing 1.

#### 5.4.1 The Bytecode Rewriting Engine

Next-JSM instruments the sensitive methods using its Bytecode Rewriting Engine (BRE). The BRE is designed to be configurable and extendable and comprises the following stages.

Setup and Configuration: The set of permissions enforced by Next-JSM is designed to be configurable. An application maintainer may choose to enforce only the Runtime execution permission while delegating the file and network access permissions to other complementary permission managers at the operating system level. As a result, Next-JSM configures the BRE to instrument only the classes and methods related to the configured set of enforced permissions. To configure the BRE, Next-JSM requires the name of the class and method to instrument, as well as some details about the properties of the class that will aid its instrumentation.

Interception of Security-Sensitive Classes: The BRE intercepts classes as they are loaded into the JVM and checks whether the intercepted class is a security-sensitive class. It passes the bytecodes of security-sensitive classes to the bytecode transformer subcomponent for instrumentation. The instrumented bytecode is then loaded into the JVM in place of the original bytecodes.

Bytecode Transformation: For each security-sensitive class, the BRE identifies the set of configured methods in the class. For each method, it locates the position of the method argument representing the resource object using the information provided at the configuration stage. Finally, it adds a sequence of bytecodes representing an invocation of the permission verification component at the top of the method. It passes the located method argument as an argument to the permission verification component, together with arguments specifying the resource type and resource operation.

#### 5.4.2 The Permissions Context

Given a permission specification document containing the set of permissions specified for each package, we create a permissions context that aids the efficient retrieval of a package's permissions during the permission verification stage. This is necessary because a packaged Java application does not contain mapping information between the classes in its dependencies and the packages they belong to. Instead, as discussed in §5.2.1, the names of classes in the bytecode are prefixed by their package names. As shown in Figure 4, we store each package's permissions at the leaf node of a modified trie data structure, where each node on the path to that leaf node represents a dot-separated component of the package's name.
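A minimal sketch of such a trie, under our own naming (an illustration, not Next-JSM's actual code), is:

```
import java.util.HashMap;
import java.util.Map;

/** Minimal sketch of a permissions trie keyed on dot-separated name parts. */
public final class PermissionTrie<P> {
    private final Map<String, PermissionTrie<P>> children = new HashMap<>();
    private P permissions; // set only on nodes where a package is defined

    public void put(String packageName, P perms) {
        PermissionTrie<P> node = this;
        for (String part : packageName.split("\\.")) {
            node = node.children.computeIfAbsent(part, k -> new PermissionTrie<>());
        }
        node.permissions = perms;
    }

    /** Walks the components of a class name (e.g. "com.foo.bar.ClassA") and
     *  returns the deepest permission set found along the way. The cost is
     *  linear in the number of name components and independent of how many
     *  packages are stored. */
    public P lookup(String className) {
        PermissionTrie<P> node = this;
        P found = null;
        for (String part : className.split("\\.")) {
            node = node.children.get(part);
            if (node == null) {
                break;
            }
            if (node.permissions != null) {
                found = node.permissions;
            }
        }
        return found;
    }
}
```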
During permission verification, we attempt to retrieve the set of permissions for each class on the stack trace. We use the dot-separated components of the class name to search for the permission set defined for the class's package. Due to our use of the modified trie data structure, **the search time complexity for a package's set of permissions is constant with respect to the number of packages used by the application**. This is because the permission verification operation does not need to iterate through the list of packages in the application to map a package to a class name or to search for the package's set of permissions. In Figure 4, the permission set for the class _com.foo.bar.ClassA_ in the _com.foo.bar_ package can be retrieved in at most three steps: searching for com, foo, and bar through the tree and, if they are all present, retrieving the permission set saved at bar. The lookup is linear in the number of dot-separated components in the class name. In our threat model (§4.2), the attacker sends a malicious payload that exploits an existing package and so does not control static properties such as the class name.

Figure 4: Illustration of the permissions context, indicating how package permission sets are stored for efficient retrieval.

#### 5.4.3 Permission Verification and Enforcement

Permission verification and enforcement occur when an instrumented method is invoked. The instrumentation invokes the permission verification component, using the name of the resource being accessed and the intended resource operation as arguments. The permission verification component is implemented following the algorithm described in Listing 1. It verifies that every class in the stack trace is from a package that has the required permission to perform the intended operation. If any class's package lacks the necessary permission, the violation is flagged, and the access is denied.

To limit the disruption of an access denial to an application's execution, the enforcement component provides two configurable modes. In the block mode, the enforcement component is designed to throw an exception that the calling package is expected to handle. In Java, programs attempting to interact with the operating system are forced to catch and handle specific exceptions that the operation may throw. Next-JSM's enforcement component is designed to throw one of these safe exceptions during an access violation. In the report mode, the enforcement component does not interfere with the application's execution and, instead, invokes a callback that the application has configured.

#### 5.4.4 Implementation

Next-JSM is implemented as a Java agent [3] with 2,284 lines of Java code. It is supplied as a command-line argument when running the Java application. As a result, Next-JSM does not require a modification of the application or the Java Virtual Machine, which ensures its portability and usability. Next-JSM's arguments determine the choice between capability tracing and permission enforcement. We use Java's instrumentation API to intercept and transform security-sensitive classes as they are being loaded into the JVM.
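A minimal shape for such an agent is sketched below. The java.lang.instrument calls are standard JDK API and the class list mirrors Table 1, but the agent class name and the rewriting-engine stub are our own assumptions, not Next-JSM's actual code.

```
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;
import java.util.Set;

public final class NextJsmStyleAgent {

    // Security-sensitive classes (internal JVM names), mirroring Table 1.
    private static final Set<String> SENSITIVE = Set.of(
            "java/io/FileInputStream", "java/io/FileOutputStream",
            "java/net/Socket", "java/lang/ProcessBuilder");

    // Named by the Premain-Class manifest attribute and run before main()
    // when the JVM is started with -javaagent:agent.jar=<args>.
    public static void premain(String args, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> redefined, ProtectionDomain pd,
                                    byte[] classfile) {
                if (className == null || !SENSITIVE.contains(className)) {
                    return null; // null means "leave the class unchanged"
                }
                return BytecodeRewritingEngine.instrument(className, classfile);
            }
        }, /* canRetransform = */ true);
        // A production agent would also retransform sensitive JDK classes
        // already loaded before premain() ran.
    }

    // Hypothetical stand-in for the ASM-based engine, which would prepend
    // a permission-verification call to each configured method.
    static final class BytecodeRewritingEngine {
        static byte[] instrument(String className, byte[] classfile) {
            return classfile; // real engine returns rewritten bytecode
        }
    }
}
```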
We implemented the Bytecode Rewriting Engine using the ASM library. Compared to other bytecode transformation libraries like Javassist and ByteBuddy, ASM provided us with the flexibility that Next-JSM required. Finally, our instrumentation in the core JDK classes uses the reflection API to invoke the permission verification method, as the core JDK classes cannot directly invoke application methods.

## 6 Evaluation

Using the following research questions, we evaluate Next-JSM's utility, effectiveness, and performance.

* Does Next-JSM prevent real-world Java RCE exploits?
* How does Next-JSM impact an application's performance?
* Does Next-JSM address the shortcomings of the Java Security Manager?

### RQ1: Protection against Supply Chain Vulnerabilities

We evaluate Next-JSM's effectiveness in mitigating the exploitation of RCE vulnerabilities in Java packages.

#### 6.1.1 Methodology

We evaluated Next-JSM's capability to protect applications from supply chain vulnerabilities using a sample of 12 vulnerabilities selected from the supply chain vulnerabilities analyzed in §4.1. We first sorted the vulnerabilities according to their year of publication and severity score (CVSSv3). Starting from the top, we selected the first 12 vulnerabilities for which we could find publicly available exploits on GitHub. We believe this sample represents the characteristics of vulnerabilities that are recent, have critically severe consequences, and are easily exploitable. We built a sample application for each selected vulnerability that uses the vulnerable dependency. We first exploited each application by executing the available exploits. Then, we tried performing the same exploits while running the application with Next-JSM.

#### 6.1.2 Results

Table 2 shows the results of our exploitation of the selected vulnerabilities. The fourth column indicates the channel through which malicious code can be delivered to the application, the fifth column indicates the capabilities required for the exploit to succeed, and the last column shows whether an exploit was successfully mitigated by Next-JSM. As shown in the table, these vulnerabilities provide attackers with code execution capabilities (through the execution of shell commands, executable files, or Java code), network access, and file write capabilities. Next-JSM is designed to prevent these exploitations by controlling access to the APIs the attacker will depend on.

Our results show that Next-JSM can effectively block the execution of 11/12 exploits. We are unable to prevent the exploitation of CVE-2023-38889. The package inherently requires the capability to execute the _bash_ command with externally provided arguments. The exploit involves a code injection attack where a malicious shell command is inserted into the expected _bash_ command as an argument. Next-JSM does not process the contents of the bash command being executed and does not detect the presence of the injected malicious bash command.

### RQ2: Next-JSM Performance Overheads

We report the performance cost of Next-JSM on various microbenchmarks and profiling measurements.

#### 6.2.1 Performance -- Methodology

**Cost by operation:** We measured the cost of each security-sensitive operation with and without Next-JSM's instrumentation.
We used the Java Microbenchmark Harness (JMH) library [5], a popular microbenchmarking library developed and maintained by OpenJDK to address many microbenchmarking pitfalls [22].

**Predicted performance trends:** We measured the performance impact when increasing the number of packages in the application (_expected: constant_) and the depth of the call stack during method invocation (_expected: linear_).

**Real applications:** To investigate the impact of our permission model on real applications, we conducted profiling measurements on the applications in the Dacapobench suite. For each application, we measured the load time and runtime overhead introduced. We ran each application 10 times and report the average measurement.

\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
**CVE ID** & **Vulnerable Package** & **Severity** & **Exploit Medium** & **Capabilities Required** & **Exploit Mitigated** \\
\hline
CVE-2023-38889 & Alluxio & 9.8 & Bash command & Shell Execution & ✗ \\
CVE-2022-23221 & H2 Console & 9.8 & SQL Init Script & Network Access \& Shell Execution & ✓ \\
CVE-2023-39021 & Wix-embedded-mysql & 9.8 & Executable file & Shell Execution & ✓ \\
CVE-2023-26119 & HtmlUnit & 9.8 & XML file & Shell Execution & ✓ \\
CVE-2023-39020 & Stanford Parser & 9.8 & Executable file & Shell Execution & ✓ \\
CVE-2022-33980 & Apache Commons-Configuration & 9.8 & Java code & Code Execution & ✓ \\
CVE-2022-22963 & Spring Core & 9.8 & HTTP request data field & Code Execution & ✓ \\
CVE-2022-22963 & Spring Cloud & 9.8 & HTTP header containing Java code & Code Execution & ✓ \\
CVE-2022-25914 & Jib-core & 9.8 & Executable file & Shell Execution & ✓ \\
CVE-2022-36944 & Scala-library & 9.8 & Serialized byte stream & File Write & ✓ \\
CVE-2022-42889 & Apache Commons-text & 9.8 & Java code & Code Execution & ✓ \\
CVE-2021-44228 & Log4J & 10.0 & LDAP URL providing Java class file & Network Access \& Code Execution & ✓ \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Summary of Next-JSM's performance in mitigating supply chain vulnerability exploits. It prevented the execution of sensitive Java APIs during each failed exploitation attempt. It mitigated exploits of 11/12 critical and recent vulnerabilities.

#### 6.2.2 Performance -- Results

**Cost by operation:** Table 3 shows the results of our microbenchmark tests on the file read, file write, socket connection, and shell execution operations. The results show that the overhead introduced by the instrumentation is significant for operations with very short durations. As it is a constant overhead, it is negligible for operations with longer durations, usually not differentiable from the JVM noise.

\begin{table}
\begin{tabular}{l l l}
\hline \hline
**Operation** & **Without Next-JSM (\(\mu\)s)** & **With Next-JSM (\(\mu\)s)** \\
\hline
File Read & \(7.66\pm 0.07\) & \(24.15\pm 0.50\) \\
File Write & \(31.4\pm 0.50\) & \(30.51\pm 0.50\) \\
Socket Connect & \(76.73\pm 6.00\) & \(78.71\pm 5.80\) \\
Shell Execution & \(304.33\pm 1.76\) & \(353.66\pm 2.27\) \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Microbenchmark of security-sensitive operations. Uncertainty is reported over 5 warmup runs, 5 measurement runs, and 3 forks, with each run executing for 10 seconds and the average execution time returned.
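For reference, the per-operation measurements in Table 3 correspond to a JMH harness of roughly the following shape (5 warmup and 5 measurement iterations of 10 s each, 3 forks). This is our own sketch of the setup, not Next-JSM's actual benchmark code; the benchmarked file path is a placeholder.

```
import java.io.FileInputStream;
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Warmup;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Warmup(iterations = 5, time = 10)       // 5 warmup runs of 10 s each
@Measurement(iterations = 5, time = 10)  // 5 measurement runs of 10 s each
@Fork(3)                                 // 3 forked JVMs
public class FileReadBenchmark {

    // The instrumented FileInputStream constructor is the operation whose
    // cost is compared with and without the Next-JSM agent attached.
    @Benchmark
    public int openAndRead() throws IOException {
        try (FileInputStream in = new FileInputStream("/tmp/bench-input")) {
            return in.read();
        }
    }
}
```

The same benchmark JAR would be run twice, once plainly and once with the agent supplied via -javaagent, to obtain the two columns of Table 3.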
**Predicted performance trends:** Figure 5 shows the microbenchmark results for Next-JSM's permission verification function. The execution time is constant with the number of packages and linear with the depth of the call stack.

Figure 5: Microbenchmarking results for the permission verification function, varying the number of packages in the application and the call stack sizes. As predicted, the execution time is constant with respect to the number of packages, and linear with respect to the call stack lengths.

**Real applications:** Table 4 shows the runtime results of our profiling measurements on applications from the Dacapobench suite. The runtime overhead on most applications is negligible, with an average runtime overhead of only 2.72%. Eclipse is the only outlier, with an overhead of 51% due to intensive file system access.

\begin{table}
\begin{tabular}{l l l|l l}
\hline \hline
**App Name** & **\# of Perms** & **Without Next-JSM (s)** & **Overhead (With Next-JSM)** & **Overhead (Shell execution only)** \\
\hline
avrora & 6 & 10223.7 & 0.01\% & -0.32\% \\
batik & 16 & 3936.8 & -0.69\% & -1.30\% \\
biojava & 6 & 11560.5 & -0.93\% & -1.21\% \\
eclipse & 23852 & 24464.5 & **51.28\%** & 0.31\% \\
fop & 94 & 2988.2 & 3.27\% & -0.07\% \\
graphchi & 97 & 14936.1 & 2.22\% & 0.77\% \\
h2 & 10 & 25890.7 & -8.02\% & -2.93\% \\
jme & 14 & 8326.3 & 0.07\% & 0.14\% \\
jython & 1570 & 10581.2 & 5.15\% & -0.09\% \\
kafka & 182 & 10059.5 & 1.09\% & 0.28\% \\
luindex & 11 & 30184.5 & 0.00\% & 5.68\% \\
lusearch & 6142 & 7468.1 & -2.13\% & -2.43\% \\
pmd & 12 & 4986.5 & -0.61\% & 0.82\% \\
sunflower & 13 & 4385.7 & 1.26\% & 0.75\% \\
tomcat & 1196 & 8377.8 & 1.78\% & 0.92\% \\
tradebeans & 42 & 30699 & -0.21\% & -0.06\% \\
tradesoap & 37 & 22345.4 & 0.87\% & 1.01\% \\
xalan & 8718 & 2240.8 & -0.22\% & -3.37\% \\
zxing & 2505 & 3246.6 & -2.60\% & -4.85\% \\
\hline
**Medians** & 40 & — & 0.01\% & 0.01\% \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Execution time overheads (Dacapobench-Java). The fourth column indicates overhead with Next-JSM configured for permission enforcement. The final column indicates overhead when monitoring only shell execution. These measurements include noise, indicated by the "performance benefits" of Next-JSM on several applications. Only Eclipse had overhead substantially beyond this noise, related to its heavy use of file system operations.

We measured a start-up cost of \(\sim\)250ms for a small permission file. This is the overhead incurred when parsing the permissions file, building the permissions context, and modifying the bytecodes of the configured classes. This overhead does not depend on the size of the application or on how frequently it uses sensitive permissions.

### RQ3: Addressing the JSM's Shortcomings

We assessed Next-JSM's ability to address the reported flaws of the Java Security Manager. Our analysis is qualitative. We rely on Next-JSM's design, our analysis of the JSM source code, and the official documentation of the JSM's flaws.

#### 6.3.1 Vulnerability Mitigation

The JSM is designed to mitigate the exploitation of Java vulnerabilities by preventing an application's access to dangerous Java APIs. The protection it provides depends on the application's policy. To prevent supply chain vulnerabilities with the JSM, the policy must prevent dependencies from accessing capabilities they do not need (without restricting the capabilities they do need). Hence, permissions must be restricted to the various classes performing each operation.
With access to the source code, this is a hard problem, as the application maintainer must know the classes in their dependencies and the resources they access. _Without_ access to the source code, achieving this level of precision requires inspecting JAR files. As a result, JSM users typically fall back to specifying application-level permissions or granting all permissions [21, 43], which do not protect against supply chain vulnerabilities. Next-JSM's supply-chain-aware permission model enables package-level permission specification that prevents packages from possessing permissions they do not need. By enforcing least privilege on the application's dependencies, Next-JSM can protect against supply chain vulnerability exploitation.

#### 6.3.2 Usability

Table 5 shows a usability comparison between the deprecated JSM and Next-JSM. The comparison characteristics were collected from the JSM deprecation document, JEP 411 [43]. We reference the relevant sections where we discuss how Next-JSM satisfies each requirement.

As shown in the table, the JSM possesses many flaws that make crafting adequate policies challenging. Application maintainers are forced to specify permissions for all permission types and objects, even when they do not pose any threat to the application. The absence of library-specific permissions implies that code in dependencies runs with the full permissions granted to the application by default, violating the least privilege principle. In addition, the absence of a capability tracing mechanism implies that an application maintainer requires knowledge of a package's implementation to specify the necessary permissions.

A supply-chain-aware permission model allows Next-JSM to mitigate most of these flaws in its design. The application maintainer can enforce only the permission types they consider necessary, reducing the performance overhead introduced. Furthermore, the capability tracing component serves as a helpful aid in specifying the permissions needed by the packages. By mitigating these flaws, Next-JSM is positioned to adequately replace the Java Security Manager.

## 7 Discussion and Future Work

### Limitations

Next-JSM is the first permission manager designed to prevent the exploitation of Java supply chain vulnerabilities. Our evaluation in §6.1 shows that Next-JSM effectively protects against typical supply chain vulnerabilities. However, we acknowledge several limitations in the approach.

* **If exploit leverages necessary functionality**: Our thesis is that supply chain exploits often leverage package functionality that an application does not need. This permits the application owner to disable that functionality with no negative effect. However, some exploits may make use of package functionality on which the application depends. This was the case in CVE-2023-38889, in which a package's main use case involves shell execution and the attacker injects shell commands into the argument string. While permission management systems can in principle be extended to consider not just the system call issued but also its arguments, doing so entails substantial per-call implementation effort and a more complex permission language. Alternatively, the application engineer may determine that the benefit of disabling the functionality outweighs the cost, _e.g.,_ if they have appropriate error handling for failures in the package.
* **If indirect exploit**: Our design assumes that the exploit will be conducted within the execution context of the vulnerable package.
It is possible that an attacker could propagate the exploit to another thread or process, _e.g.,_ via inter-process communication or a side channel. Next-JSM does support -- to a limited degree -- the case of asynchronicity (_e.g.,_ as supported natively by Java file system APIs). The current approach restricts the use of thread-creation APIs. A more advanced implementation would propagate permissions to the new thread.
* **If application-level sensitive operations**: Next-JSM is focused on security-sensitive system calls, as these are the primary exploit vector of current software supply chain vulnerabilities. However, if an attacker has knowledge of the target application, then there may be application-specific APIs that would be similarly harmful (_e.g.,_ debiting a bank account). As Next-JSM uses binary rewriting, our design would allow an application engineer to add arbitrary classes and function calls to the permission scheme.

\begin{table}
\begin{tabular}{l l l}
\hline
**Usability Properties** & **JSM** & **Next-JSM** \\
\hline
Permission Model & _Brittle_ & _Flexible_ (§5.2.3) \\
 & Must grant all permission types & Can enforce specific permission types \\
 & Must specify all accessible objects & Supports negative permissions \\
\hline
Programming Model & _Difficult_ & _Easy_ (§5.2.2) \\
 & Application requires package's permissions by default & Application requires only its own permissions \\
 & Package inherits all application's permissions & Package has its own permissions \\
 & All classes require explicit permission & Support for permission inheritance \\
\hline
Capabilities Inference & _Not Available_ & _Available_ (§5.3) \\
\hline
\end{tabular}
\end{table}
Table 5: Usability comparison between Next-JSM and the (deprecated) JSM.

### Emergency Permission Restriction

Several research studies have shown the time gap between vulnerability discovery and the availability and propagation of patches [19, 54] (cf. §2.1). With the discovery of the Log4J vulnerability, many application maintainers were reported to have shut down their applications while waiting for a patch, resulting in customer dissatisfaction and revenue loss. Furthermore, application maintainers may hesitate to install a patch immediately, as it may be incomplete.

Next-JSM can serve as _emergency first aid_ in this scenario. Permissions can be immediately specified to restrict the vulnerable package's access to sensitive resources, reducing the possibility of exploitation. Other forms of commercial _emergency first aid_ exist to protect applications from zero-day vulnerabilities. For example, Chainguard [2] and Snyk [9] offer patched versions of packages. However, the patches take time to develop, and they are usually provided only for popular packages or high-impact vulnerabilities. While Next-JSM does not fix the vulnerability, it ensures that the vulnerability is not exploitable.

### Integration with Java Security Manager

Next-JSM uses its Bytecode Rewriting Engine to insert hooks that enable its permission manager to interact with the Java application. However, until recently, such hooks were provided by the core Java classes as part of the Java Security Manager. We suggest that the Java team deprecate only the Policy and AccessController classes, which represent their implementation of a permission manager, and leave the Security Manager APIs and the hooks as a framework that supports the use of custom permission managers.
When implemented this way, Next-JSM can be refactored to rely on the JSM's already-existing hooks while providing a permission manager that protects against supply chain vulnerabilities. Beyond supporting integration with the JSM, the presence of hooks in the core classes of other JVM-based languages will also enable Next-JSM's support for these languages. Due to Next-JSM's compatibility with the Java Virtual Machine, the Capability Tracing and Permission Enforcement components will also be compatible with JVM-based languages like Scala, Groovy, or Kotlin. Applications built using these languages can invoke Next-JSM to prevent the exploitation of supply chain vulnerabilities.

The use of bytecode rewriting provides an additional performance benefit. It enables the selective instrumentation of sensitive methods depending on the configured permissions. This avoids the overhead of instrumenting an application's use of a controlled resource when the application maintainer deems that resource not security-sensitive.

### Integrate Permissions into Supply Chains

If package-aware permission managers such as Next-JSM were widely adopted, Software Bills of Materials (SBOMs) could become more powerful. SBOMs are a first step toward greater visibility into an application's dependencies and supply chain. At present, an SBOM for a package or application informs downstream users only of the upstream dependencies involved. Application engineers may be unaware of a package's full range of functionality and the risks entailed, because SBOMs omit descriptions of potential capabilities. Supported by automated capability tracing, package developers could document the minimal set of permissions they require, possibly coupled to configurations that enable more powerful features with richer permission sets.

## 8 Conclusion

Existing defenses for Java do not allow an application engineer to control the security-sensitive system calls accessed by the application's dependencies. We argue that in the modern era of software development, engineers need to be able to manage the accesses made not only by their business logic but also by the third-party components they have integrated. To address this need, we evaluate the feasibility of a package-granularity permission management scheme for Java. We design, implement, and evaluate Next-JSM, the first package-granularity permission manager for Java. Our evaluation shows that Next-JSM is effective at mitigating CVEs (11/12 tried), offers low performance overheads (average \(\sim\)3%), and supports permission management through automation and a flexible permission language. The software supply chain accelerates engineering, but requires supply-chain-aware security management tools. Next-JSM fills this gap for Java applications.
2303.02895
The Multiview Observatory for Solar Terrestrial Science (MOST)
We report on a study of the Multiview Observatory for Solar Terrestrial Science (MOST) mission that will provide comprehensive imagery and time series data needed to understand the magnetic connection between the solar interior and the solar atmosphere/inner heliosphere. MOST will build upon the successes of SOHO and STEREO missions with new views of the Sun and enhanced instrument capabilities. This article is based on a study conducted at NASA Goddard Space Flight Center that determined the required instrument refinement, spacecraft accommodation, launch configuration, and flight dynamics for mission success. MOST is envisioned as the next generation great observatory positioned to obtain three-dimensional information of large-scale heliospheric structures such as coronal mass ejections, stream interaction regions, and the solar wind itself. The MOST mission consists of 2 pairs of spacecraft located in the vicinity of Sun-Earth Lagrange points L4 (MOST1, MOST3) and L5 (MOST2 and MOST4). The spacecraft stationed at L4 (MOST1) and L5 (MOST2) will each carry seven remote-sensing and three in-situ instrument suites, including a novel radio package known as the Faraday Effect Tracker of Coronal and Heliospheric structures (FETCH). MOST3 and MOST4 will carry only the FETCH instruments and are positioned at variable locations along the Earth orbit up to 20° ahead of L4 and 20° behind L5, respectively. FETCH will have polarized radio transmitters and receivers on all four spacecraft to measure the magnetic content of solar wind structures propagating from the Sun to Earth using the Faraday rotation technique. The MOST mission will be able to sample the magnetized plasma throughout the Sun-Earth connected space during the mission lifetime over a solar cycle.
N. Gopalswamy, S. Christe, S. F. Fung, Q. Gong, J. R. Gruesbeck, L. K. Jian, S. G. Kanekal, C. Kay, T. A. Kucera, J. E. Leake, L. Li, P. Makela, P. Nikulla, N. L. Reginald, A. Shih, S. K. Tadikonda, N. Viall, L. B. Wilson III, S. Yashiro, L. Golub, E. DeLuca, K. Reeves, A. C. Sterling, A. R. Winebarger, C. DeForest, D. M. Hassler, D. B. Seaton, M. I. Desai, P. S. Mokashi, J. Lazio, E. A. Jensen, W. B. Manchester, N. Sachdeva, B. Wood, J. Kooi, P. Hess, D. B. Wexler, S. D. Bale, S. Krucker, N. Hurlburt, M. DeRosa, S. Gosain, K. Jain, S. Kholikov, G. J. D. Petrie, A. Pevtsov, S. C. Tripathy, J. Zhao, P. H. Scherrer, S. P. Rajaguru, T. Woods, M. Kenney, J. Zhang, C. Scolini, K. S. Cho, Y. D. Park, B. V. Jackson
2023-03-06T05:10:54Z
http://arxiv.org/abs/2303.02895v3
# The Multiview Observatory for Solar Terrestrial Science (MOST)

\({}^{1}\)NASA Goddard Space Flight Center, Greenbelt, MD, United States \({}^{2}\)The Catholic University of America, Washington, DC, United States \({}^{3}\)Center for Astrophysics \(|\) Harvard & Smithsonian, Cambridge, MA, United States \({}^{4}\)NASA Marshall Space Flight Center, Huntsville, AL, United States \({}^{5}\)Southwest Research Institute, Boulder, CO, United States \({}^{6}\)Southwest Research Institute, Boulder, CO, United States \({}^{7}\)Jet Propulsion Laboratory, Pasadena, CA, United States \({}^{8}\)Planetary Science Institute, Tucson, AZ, United States \({}^{9}\)University of Michigan, Ann Arbor, MI, United States \({}^{10}\)Naval Research Laboratory, Washington, DC, United States \({}^{11}\)University of Massachusetts Lowell, Lowell, MA, United States \({}^{12}\)University of California, Berkeley, CA, United States \({}^{13}\)Lockheed Martin Advanced Technology Center, Palo Alto, CA, United States \({}^{14}\)National Solar Observatory, Boulder, CO, United States \({}^{15}\)Stanford University, Stanford, CA, United States \({}^{16}\)University of Colorado, Boulder, CO, United States \({}^{17}\)George Mason University, Fairfax, VA, United States \({}^{18}\)University of New Hampshire, Durham, NH, United States \({}^{19}\)Korea Astronomy and Space Science Institute, Daejeon, Republic of Korea \({}^{20}\)University of California San Diego, La Jolla, CA, United States

**Correspondence:** Nat Gopalswamy, [email protected]

_Submitted to Journal of Atmospheric and Solar Terrestrial Physics, February 9, 2023_

## Abstract

We report on a study of the Multiview Observatory for Solar Terrestrial Science (MOST) mission that will provide comprehensive imagery and time series data needed to understand the magnetic connection between the solar interior and the solar atmosphere/inner heliosphere. MOST will build upon the successes of SOHO and STEREO missions with new views of the Sun and enhanced instrument capabilities. This article is based on a study conducted at NASA Goddard Space Flight Center that determined the required instrument refinement, spacecraft accommodation, launch configuration, and flight dynamics for mission success. MOST is envisioned as the next-generation great observatory positioned to obtain three-dimensional information on large-scale heliospheric structures such as coronal mass ejections, stream interaction regions, and the solar wind itself. The MOST mission consists of two pairs of spacecraft located in the vicinity of Sun-Earth Lagrange points L4 (MOST1, MOST3) and L5 (MOST2 and MOST4). The spacecraft stationed at L4 (MOST1) and L5 (MOST2) will each carry seven remote-sensing and three in-situ instrument suites. MOST will also carry a novel radio package known as the Faraday Effect Tracker of Coronal and Heliospheric structures (FETCH). FETCH will have polarized radio transmitters and receivers on all four spacecraft to measure the magnetic content of solar wind structures propagating from the Sun to Earth using the Faraday rotation technique. The MOST mission will be able to sample the magnetized plasma throughout the Sun-Earth connected space during the mission lifetime over a solar cycle.

## 1 Introduction

The Sun is an ordinary star, but it is unique and vital to life on Earth. The magnetic variability of the Sun affects human technology in space and on the ground.
The Sun is the only star that can be observed in detail through both remote-sensing and in-situ techniques and hence contributes toward the understanding of stellar phenomena. Unprecedented advances in heliophysics made possible by great observatories such as the Solar and Heliospheric Observatory (SOHO, Domingo et al. 1995) and the Solar Terrestrial Relations Observatory (STEREO, Kaiser et al. 2008) have demonstrated the need for comprehensive observations that can enable the science of a large swath of the community. These observatories helped us accumulate a wealth of knowledge on solar and heliospheric structures. However, many fundamental questions remain unanswered: What are the changes that occur in the convection zone before active regions emerge? Why does flux emerge on a large scale, forming active regions? How do magnetic fields become energized to erupt, and what processes initiate the eruptions? How do solar eruptions result in particle acceleration, alone and in combination with flare reconnection? How do shock geometry and magnitude evolve, and how does this relate to solar energetic particles (SEPs) and radio bursts? What is the radial profile of shock-driving coronal mass ejection (CME) density and shock strength from the nose to the trailing edge? How do CMEs and corotating interaction regions (CIRs) evolve in the inner heliosphere? What are the implications of the interchange reconnection taking place between open and closed field lines? What is the internal magnetic structure of CMEs that cause geomagnetic storms at Earth?

Clearly, many of these questions involve solar magnetic fields at various layers of the solar atmosphere, and we do not have sufficient knowledge about them. Simulations show a dramatic improvement in accurately capturing solar wind structure when provided with improved magnetic observations, including observational coverage of the poles (Petrie et al., 2018; Pevtsov et al., 2020). Clear improvements are already achieved when the Sun can be observed from Sun-Earth L1, L4, and L5, providing coverage of \(>\)65% of the solar surface. Wider and longer-duration Doppler coverage of the solar surface from Sun-Earth L1, L4, and L5 will provide the necessary signal-to-noise ratio for helioseismic localization of non-axisymmetric changes in flow patterns in the convection zone. In the photosphere, plasma controls the magnetic field, while the control switches to the magnetic field in the chromosphere. Thus, extending the magnetic field measurements to the chromosphere provides information on the magnetic roots of large-scale coronal structures and adds fidelity to coronal/heliospheric models. Currently, we obtain the magnetic flux over only a 60\({}^{\circ}\) to 90\({}^{\circ}\) wedge observable from the Sun-Earth line, while what is ideally needed is coverage over the entire solar surface. While coronal magnetic field measurement techniques are maturing, they remain largely lacking; substantial progress can still be made with routine photospheric and chromospheric magnetic field measurements. Far away from the Sun, magnetic fields are measured in situ by spacecraft at Sun-Earth L1. Parker Solar Probe and Solar Orbiter provide information at several locations in the inner heliosphere, but not systematically. Faraday rotation (FR) provides a different and unique way to measure the magnetic field in large-scale coronal and heliospheric structures by transmitting and receiving spacecraft radio signals through such structures.
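For reference (our addition, not part of the study report), the FR technique rests on the wavelength-squared dependence of the polarization position angle of a linearly polarized signal traversing a magnetized plasma:

\[
\Delta\chi = \mathrm{RM}\,\lambda^{2},\qquad
\mathrm{RM} \approx 0.81\int \left(\frac{n_{e}}{\mathrm{cm^{-3}}}\right)\left(\frac{B_{\parallel}}{\mu\mathrm{G}}\right)\frac{dl}{\mathrm{pc}}\ \ \mathrm{rad\ m^{-2}},
\]

where \(n_{e}\) is the electron density and \(B_{\parallel}\) is the line-of-sight component of the magnetic field. Measuring \(\Delta\chi\) at two or more frequencies yields the rotation measure RM, an integral constraint on the magnetic content of the intervening CME, SIR, or solar wind plasma.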
By suitable frequency and antenna choices, one can probe structures over the Sun-Earth distance. This paper outlines the concept of a mission called the Multiview Observatory for Solar Terrestrial Science (MOST) that will provide the comprehensive imagery and time series data needed to understand the magnetic connection between the solar interior and the atmosphere. MOST will build upon the successes of SOHO and STEREO with new views from Sun-Earth L4 and L5 and from the vicinities of those points. In this paper, we present the results of a mission study undertaken at NASA's Goddard Space Flight Center (GSFC) that focused on the optimized science payload, instrument accommodation, flight dynamics, and launch system.

This paper is organized as follows. We describe MOST goals, objectives, and the science traceability matrix in section 2. An overview of the MOST mission is given in section 3, with the scientific payload described in section 4. Synergy among instruments and modeling is presented in section 5. The payload accommodation is described in section 6. Section 7 highlights flight dynamics and orbital selection. Standard subsystems not included in this study are mentioned in section 8. The project life cycle is discussed in section 9, followed by summary and conclusions in section 10.

## 2 Materials and Methods

In this section, we identify the science questions and develop the objectives that need to be achieved to answer these questions. To achieve the objectives, we identify the instrument and mission requirements. These tasks are performed by identifying gaps in past measurements and characterizing the optimal set of instruments. We develop high-level designs of the required instruments, improving on past designs and employing new technologies that have become available in the recent past. We design the spacecraft to accommodate the instruments and the fairing to accommodate the spacecraft, and we select the launch vehicle. We perform flight dynamics analysis and select the orbit for the mission. Finally, we estimate the cost of the mission based on a previous study.

### Goals, Objectives, and the Science Traceability Matrix

The MOST mission concept draws heavily on the success of great observatories such as SOHO and STEREO and combines their capabilities to build the next-generation great observatory. SOHO and STEREO have demonstrated the value of sustained observations that have greatly added to our knowledge of the variable solar-terrestrial system (Duberstein, 2020). This advance can be accelerated by devising a new mission that implements additional capabilities that were not included in SOHO and STEREO. Since the primary cause of variability in the solar-heliospheric system is solar magnetism, measuring the magnetic field at the Sun and in the surrounding heliosphere is of utmost importance. Therefore, the primary science goal of MOST is to understand the magnetic coupling of the solar interior to the heliosphere. As noted in the introduction, there are many unanswered fundamental questions that form the basis for formulating the science objectives of the MOST mission. The fundamental questions can be grouped into a set of three high-level science questions related to solar and heliospheric magnetic fields, solar eruptions, and the solar wind. The mission objectives and the underlying science questions are listed in the MOST Science Traceability Matrix (STM, see Table 1).
**Table 1.** MOST Science Traceability Matrix

| Science Question | Objectives | Measurement Requirements | Instrument Requirements | Mission Requirements |
|---|---|---|---|---|
| 1. How do active regions evolve before and after emerging to the solar surface? | 1.1 Derive the physical properties of the convection zone | Dopplergrams (velocities better than 20 m/s in each 0.5 Mm\({}^{2}\) pixel) from viewing angles separated by 30\({}^{\circ}\)-90\({}^{\circ}\) | Sun-pointed telescope to obtain full-disk Dopplergrams | Identical telescopes on MOST1&2 |
| | 1.2 Determine the complete life cycle of active regions | Photospheric and chromospheric line-of-sight (LOS) magnetograms from viewing angles separated by 30\({}^{\circ}\)-90\({}^{\circ}\) | Sun-pointed telescope to obtain full-disk magnetograms | Identical telescopes on MOST1&2 |
| | 1.3 Determine the global magnetic field distribution on the Sun | LOS magnetograms to cover at least 65% of the solar surface | ... pixels; 90-min cadence | Identical telescopes on MOST1&2; Earth/L1 assumed |

Each question in the STM (column 1) can be answered by achieving a set of science objectives listed in column 2. The measurement requirements towards achieving the objectives are listed in column 3, including the nature of the sensor to be used. The requirements on the scientific instruments that make the necessary measurements are listed in column 4. Finally, column 5 sets the mission requirements.

### Mission Overview

MOST will be a 4-spacecraft mission with one spacecraft each at L4 (MOST1) and L5 (MOST2), and the other two (MOST3 and MOST4) at variable locations along Earth's orbit (see Figure 1). MOST1 and MOST2 will each carry seven remote-sensing and three in-situ instruments. All four spacecraft will carry a novel radio package known as the Faraday Effect Tracker of Coronal and Heliospheric structures (FETCH) that will systematically probe the magnetic content of transient interplanetary structures including coronal mass ejections (CMEs) and stream interaction regions (SIRs). The Faraday rotation measurements will provide the magnetic content of these structures at various heliocentric distances from the outer corona to Earth's vicinity. Photospheric and/or chromospheric magnetograms will cover \(>\)70% of the solar surface, providing the synchronic maps needed for accurately modeling the corona and solar wind. EUV, coronagraph, radio spectrograph, and heliospheric imager observations from multiple viewpoints provide 3-D information on CMEs/CME-driven shocks, SIRs, and other solar wind structures. Hard X-ray imagers will provide the flare aspects of solar eruptions to complement the CME aspects. In-situ instruments provide ground truth for the remote-sensing observations. MOST will generate the following science data products: magnetograms, Dopplergrams, EUV images, hard X-ray images, coronagraph images, heliospheric images, radio dynamic spectra and time series, Faraday rotation time series, time series of solar wind plasma parameters, solar wind magnetic field vectors, and solar energetic particle intensity and spectra.
These data products have proven to be the optimal set needed to track the flow of energy from the Sun into the heliosphere and the various physical processes that result from the energy flow. MOST, a large 10-year mission, is well aligned with NASA's Heliophysics objectives and will provide an unprecedented opportunity to achieve the scientific objectives with broad participation from the heliophysics community. The MOST mission assumes that imagery and time-series data will be available from the Sun-Earth line (ground-based observatories and space-based observatories at Sun-Earth L1). If not available, a spacecraft similar to MOST1 or MOST2 can optionally be deployed at Sun-Earth L1.

## 3 Results

In this section, we describe the optimal set of instruments, their placement on the spacecraft, the launch configuration and vehicle, flight dynamics, the life cycle of the mission, and costs.

### The Science Payload

The seven remote-sensing and three in-situ instruments to be carried by each of MOST1&2 are listed in Table 2. In each case, improvements over previously flown instruments are noted as "New" in column 1. The instruments are optimized to obtain maximum information on the Sun-heliosphere system in accordance with the STM given in Table 1. The instrument suites provide imagery and time series data to reveal magnetic connectivity across solar and heliospheric domains. Actively probed Faraday rotation studies form a hybrid between in-situ methods, which provide detailed field information at each sampled point, and imaging methods, which provide mostly distributions of material density across space. Data from a combination of MOST instruments are needed for the investigations that lead to achieving the science objectives. We note that most of the instruments trace their heritage to SOHO and STEREO. These instruments will be refined and improved by incorporating new developments in sensor technology. There are new instruments, such as the Magnetic and Doppler Imager (MaDI), that were not included in STEREO. The Hard X-ray Imager (HXI) and the FETCH instrument are the other remote-sensing instruments not included in the SOHO or STEREO instrument suites.

Figure 1: Overview of the MOST mission with the four constituent spacecraft at L4 (MOST1), L5 (MOST2), ahead of L4 (L4', MOST3), and behind L5 (L5', MOST4). MOST1&2 will have identical remote-sensing and in-situ instrument suites. MOST3&4 will carry only radio equipment for Faraday rotation measurements. The approximate MOST1-MOST2 and MOST3-MOST4 distances are shown at the left, indicating the long signal paths for spacecraft radio signals (red numbers) and their closest approach to the Sun (blue numbers). The red lines on the right indicate FETCH signal paths. The yellow double arrows indicate communication links. The five Lagrange points (L1-L5) of the Sun-Earth gravitational system are shown for reference.
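The separations annotated in Figure 1 follow from simple chord geometry for spacecraft on a circular 1-au orbit. The minimal Python sketch below illustrates this; the 160\({}^{\circ}\) case assumes MOST3&4 at their maximum 80\({}^{\circ}\) Earth-Sun-spacecraft angles, and the numbers are geometric estimates rather than values taken from the mission design.

```python
import math

def chord_geometry(sep_deg, r_au=1.0):
    """Straight-line signal path length and its closest approach to the Sun
    for two spacecraft on a circular r_au orbit separated by sep_deg."""
    half = math.radians(sep_deg) / 2.0
    chord = 2.0 * r_au * math.sin(half)   # spacecraft-to-spacecraft distance
    closest = r_au * math.cos(half)       # perpendicular distance to the Sun
    return chord, closest

for label, sep in [("MOST1-MOST2 (L4-L5, 120 deg apart)", 120.0),
                   ("MOST3-MOST4 (80 deg from Earth each, 160 deg)", 160.0)]:
    chord, closest = chord_geometry(sep)
    print(f"{label}: path {chord:.2f} au, closest approach {closest:.2f} au")
```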
**Table 2.** Science instruments and their purpose

| Instrument (heritage and improvements) | Purpose |
|---|---|
| Magnetic and Doppler Imager (MaDI); _SOHO, Solar Orbiter, SDO_ | To study surface (photosphere, chromosphere) and subsurface magnetism by combining magnetic and Doppler measurements; also routinely obtain chromospheric magnetograms |
| Inner Coronal Imager in EUV (ICIE); _SWAP, SUVI_; New: overlap with coronagraph FOV | To study active regions, coronal holes, post-eruption arcades (PEAs), coronal waves, and coronal dimming by capturing the magnetic connection between the photosphere and the corona |
| Hard X-ray Imager (HXI) | To image the thermal and non-thermal components of flares and study the relationship with radio bursts and CME flux ropes |
| White-light Coronagraph (WCOR) | To track quiescent and transient coronal structures seamlessly from the ICIE FOV and connect to the heliospheric imager FOV |
| Heliospheric Imager with Polarization (HIP) | To track solar features into the heliosphere and their impact on Earth, and provide line-of-sight electron column densities for FETCH analysis |
| Faraday Effect Tracker of Coronal and Heliospheric structures (FETCH) | To determine the magnetic field structure and evolution of solar wind structures in the Sun-Earth connected space |
| Radio and Plasma Wave instrument for MOST (M/WAVES) | To track shocks and electron beams from the Sun to 1 au, and determine the source region configuration of type III storms and the implications of seed particles accelerated at the storm source |
| Solar Wind Plasma Instrument (SWPI); New: CME speeds up to 2500 km/s | To infer solar magnetic structures at 1 au and CIR evolution |
| Solar Wind Magnetometer (MAG) | To infer solar magnetic structures at 1 au and CIR evolution |
| Solar High-energy Ion Velocity Analyzer (SHIVA) | To determine spectra of electrons and ions from H to Fe at multiple spatial locations and use energetic particles as tracers of magnetic connectivity |

### The Magnetic and Doppler Imager (MaDI)

The Magnetic and Doppler Imager (MaDI) will measure the photospheric/chromospheric magnetic and velocity fields, map the photospheric magnetic field, and help study the magnetic field (active region) evolution and its connections to physical conditions in the tachocline through seismology. The Doppler images from MaDI at L4 and L5 can be combined with those obtained by similar instruments on the Sun-Earth line (ground-based and at Sun-Earth L1) for localized probing of the whole of the convection zone. The advantage of multiview observations is that two of the three field components of both the velocity and magnetic fields can be obtained over the common area using just the line-of-sight component. Further, we can resolve the ambiguity in field directions in the area overlapping between two magnetographs. Magnetograms obtained from the L5 view can help space weather forecasting by observing active regions and coronal holes well before they rotate into the Earth view. Magnetograms from the L4 view will provide a more direct view of SEP source regions. A significant fraction of SEP events that affect the near-Earth environment originate from active regions near or even behind the solar west limb as observed from Earth. Having SEP source region observations from L4 would also allow extending a so-called safe zone for spacecraft traveling to Mars; here, a safe zone is an area in the heliosphere covered by robust modeling of space weather. Surface magnetic field measurements from all vantage points will extend the coverage to more than 70% of the entire solar surface, compared to less than a quarter of the surface at present.
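The coverage figures quoted here can be checked with a quick Monte Carlo estimate. The sketch below assumes viewpoints at heliographic longitudes 0\({}^{\circ}\) (Earth/L1) and \(\pm\)60\({}^{\circ}\) (L4, L5) and treats the usable-disk limit (how far from disk center a magnetogram remains useful against foreshortening) as a free assumption; it is an illustration, not the analysis behind the mission requirement.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
lon = rng.uniform(-np.pi, np.pi, n)            # uniform in longitude
lat = np.arcsin(rng.uniform(-1.0, 1.0, n))     # uniform on the sphere
viewpoints = np.radians([0.0, 60.0, -60.0])    # Earth/L1, L4, L5 sub-observer lons

for limit_deg in (90.0, 60.0):                 # full hemisphere vs. assumed limit
    cos_limit = np.cos(np.radians(limit_deg))
    seen = np.zeros(n, dtype=bool)
    for v in viewpoints:
        # heliocentric angular distance from the sub-observer point (equatorial)
        seen |= np.cos(lat) * np.cos(lon - v) > cos_limit
    print(f"usable-disk limit {limit_deg:.0f} deg: coverage = {seen.mean():.1%}")
```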
Combined observations from L4, L1/Earth, and L5 will also significantly improve the visibility of the solar poles, which is critical for modeling of the steady-state solar wind. Magnetographs such as the Michelson Doppler Imager (MDI, Scherrer et al., 1995) on SOHO or the Helioseismic and Magnetic Imager (HMI, Scherrer et al., 2012) on the Solar Dynamics Observatory (SDO) are traditional instruments with complex optical systems. There have been efforts to reduce the SWaP (Size, Weight, and Power) of these traditional designs, including the Photospheric Magnetic Field Imager (PMI, Staub et al., 2020) based on the design of Solar Orbiter's Polarimetric and Helioseismic Imager (SO/PHI, Solanki et al., 2020), and the Compact Magnetic Imager (see Fig. 2 of Hurlburt and Berger, 2021) based on the HMI design. More recently, the Compact Doppler Magnetograph (CDM) instrument (Hassler et al., 2022; Gosain et al., 2022), based on the Global Oscillations Network Group instrument (GONG, Harvey et al., 1996; Hill, 2018), was proposed for the Solaris mission (Hassler et al., 2020). CDM has been demonstrated to be at TRL 6 with a mass estimate of only 16 kilograms with 20% margin (Hassler et al., 2022). CDM uses an innovative design in which a group of three solar lines is used to increase the signal-to-noise ratio (SNR) of the measurements while simultaneously providing immunity to the large spectral shifts resulting from high spacecraft velocity. While the aforementioned traditional instrument designs are well understood and have proven very successful in Doppler-magnetography, the mass constraints typical of deep space missions require exploring new alternative designs with a total mass of only a few kilograms. One such technology for MaDI is based on photonic chips and is described below.

MaDI will make use of the latest developments in magnetography based on recent progress in photonics and electronics. The Imaging Photonic Spectropolarimeter for Observing the Sun (IPSOS, Hurlburt, Vasudevan and Chintzoglou, 2022) shown in Figure 2 is based on a recently demonstrated laboratory prototype (Hurlburt, 2021; Hurlburt et al., 2023) in which the bulk of the optical elements are contained in a single multilayer wafer instead of traditional mechanical filter components. Instead of using a telescope to guide the solar image through a spectropolarimeter (SP), IPSOS first feeds the solar signals into an array of heterodyne SPs on a photonic integrated chip (PIC) fed by a tunable laser. The laser also maintains coherence between the relative phases of the SPs. The outputs of the spectropolarimeters are then combined computationally to create a magnetogram. The optical package is reduced to a single wafer, while the electronics exploit compact, low-power RF Systems on a Chip (RFSoCs). Given the deep-space locations of the MOST instruments, it is advantageous to have reductions in cost, mass, and risk. The IPSOS concept provides these reductions because there will be no assembling of major components; instead, the components will be printed using standard lithographic techniques. The IPSOS instrument meets all of MaDI's requirements with a spatial resolution and cadence similar to SOHO/MDI. IPSOS has a maximum baseline of 18 cm, which fits on a standard 8-inch silicon wafer. The speed at which sufficient u-v samples can be collected drives the number of apertures and the overall power and mass of the instrument. Table 3 displays the projected SWaP of the IPSOS instrument for MOST, while Figure 2a shows what it would look like if built today.
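A quick check that the 18 cm maximum baseline supports MDI-like resolution: the diffraction-limited angular resolution of an interferometric baseline scales as \(\lambda/B\). The sketch below applies that rough criterion (an order-of-magnitude estimate only, not the full u-v coverage analysis) at the Fe I working wavelength listed in Table 3.

```python
ARCSEC_PER_RAD = 206265.0

def baseline_resolution_arcsec(wavelength_nm, baseline_cm):
    """Rough diffraction-limited resolution ~ lambda/B for a baseline B."""
    return (wavelength_nm * 1e-9) / (baseline_cm * 1e-2) * ARCSEC_PER_RAD

for b in (9.0, 18.0):   # first-generation aperture vs. the full MaDI baseline
    res = baseline_resolution_arcsec(1564.8, b)
    print(f"B = {b:4.1f} cm -> ~{res:.1f} arcsec at Fe I 1564.8 nm")
```

With the full 18 cm baseline this comes out near 1.8 arcsec, consistent with the 1.0 arcsec/pixel spatial scale quoted in Table 3 at roughly Nyquist sampling.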
Since the SWaP of the optical components is negligible, IPSOS can easily support multiple arrays for different spectral bands. The two additional apertures are tuned to capture data in the 1083 nm He I and 854 nm Ca II chromospheric lines. Subsequent generations will have even lower SWaP as the technology matures, leading to the wafer-like vision in Figure 2b. Data products will include magnetic and Doppler imaging in the photosphere and chromosphere.

**Table 3.** IPSOS Instrument Characteristics

| Parameter | Value | Comment |
|---|---|---|
| Mass | 6 kg | Estimated |
| Volume | 6 liters | Estimated |
| Average power | 20 W | Estimated |
| Real-time data rate | 0.14 Mbits/sec | Requirement |
| Field of view | 53 arc-min | Requirement |
| Maximum baseline | 18 cm | Required for spatial resolution |
| Measurement type | Solar magnetic fields and Doppler velocity | Spatial scale of 1.0 arc-sec/pixel; SNR \(>\) 2000 |
| Measurement wavelength | 1564.8 nm | Fe I 1564.8 nm, He I 1083 nm, and Ca II 854.2 nm |
| TRL | 4 | Based on MICRO HTIDeS project |

Figure 2: (a, left) First- and (b, right) second-generation concepts for the Imaging Photonic Spectropolarimeter for Observing the Sun (IPSOS). This version uses 9 cm apertures, which can be scaled up to 18 cm to meet MaDI requirements.

### The Inner Coronal Imager in EUV (ICIE)

Inner coronal imagers such as Hinode's X-ray Telescope (XRT, Golub et al., 2007), SOHO's Extreme-ultraviolet Imaging Telescope (EIT, Delaboudiniere et al., 1995), and SDO's Atmospheric Imaging Assembly (AIA, Lemen et al., 2012) have demonstrated the importance of observing the Sun close to the solar surface.
SOHO's CME watch program, which combined EIT images with Large Angle and Spectrometric Coronagraph (LASCO, Brueckner et al., 1995) images, has contributed enormously to understanding the early evolution of CMEs (Dere et al., 1997; Gopalswamy and Thompson, 2000). STEREO's Extreme Ultraviolet Imager (EUVI, Wuelser et al. 2007) further confirmed the usefulness of EUV images in understanding the 3-D structure of quiescent and transient coronal structures and their relation to heliospheric structures such as CMEs and SIRs. Recent instruments such as the Sun Watcher using Active Pixel system detector and image processing (SWAP, Berghmans et al., 2006) and the Solar Ultraviolet Imager (SUVI, Seaton and Darnel, 2018) have demonstrated that the extended corona can be imaged in EUV in a much wider field of view (FOV, out to \(\sim\)3 Rs). ICIE extends the wide-FOV EUV imager concept, and the design is similar to the ISS Coronal Spectrographic Imager in the EUV (COSIE, Golub and Savage, 2016). ICIE will identify coronal structures from the solar limb/disk into the coronagraph FOV, changes in open field connectivity from 1-3 Rs, streamer plasma inhomogeneities, filaments, CMEs, EUV waves/shocks, coronal dimmings, and current sheets associated with CMEs. Figure 3 shows the ICIE optical design. It is a compact (70 cm \(\times\) 20 cm \(\times\) 20 cm), light-weight (\(\sim\)40 kg) design with a passband in the wavelength range 17.1 to 20.5 nm. ICIE will use a CMOS camera (3k \(\times\) 3k, 10 \(\upmu\)m pixels).

Figure 3: Optical design of the ICIE instrument showing the entrance filter, primary mirror, fold mirror, and the camera.

### The Hard X-ray Imager (HXI)

The Hard X-ray Imager (HXI) investigates solar flares by providing diagnostics of the hottest (\(>\)8 MK) flare plasmas and of flare-accelerated electrons above 10 keV. The hard X-ray images help clarify the flare structure thought to be associated with the PEA observed in EUV. The two views provide more opportunities to observe loop-top hard X-ray sources (Masuda et al., 1994). The HXI design is based on the Spectrometer/Telescope for Imaging X-rays (STIX) on board Solar Orbiter (Krucker et al., 2020). There is no major difference between HXI and STIX, but the use of two views from L4 and L5 will help obtain the 3-D structure of flare features and their relation to core dimming and the PEA observed in EUV. Figure 4 shows the STIX design to be adapted for HXI on MOST. HXI consists of three major elements from the front to the back of the instrument: (i) a pair of X-ray transparent entrance windows, (ii) the imager consisting of two widely separated grids for Fourier-transform bigrid imaging, and (iii) the Detector Electronics Module containing the electronics, cadmium telluride detectors, and an X-ray attenuator. Details on the instrument can be found in Krucker et al. (2020).

Figure 4: The HXI design based on Solar Orbiter's STIX showing the three major components: X-ray window, imager, and the detector electronics module. (Adapted from Krucker et al., 2020).

### The White-light Coronagraph (WCOR)

The White-light Coronagraph (WCOR) will build upon the success of the SOHO and STEREO coronagraphs by improving the instrument with recent technology. White-light coronagraphs have become a key instrument in heliophysics investigations because of their ability to image the extended solar atmosphere using Thomson-scattered photospheric light (e.g., Koutchmy, 1988). Coronagraphs provide the essential observations of the structure and dynamics of the outer corona and near-Sun interplanetary medium.
WCOR will obtain polarized and total brightness images of the Sun's corona with a FOV in the heliocentric range of 2 to 15 Rs. WCOR data determine the 3-D geometry, morphology, kinematics, and mass of expanding CMEs and provide the global configuration of the outer corona. The design of WCOR is based on the science objectives and key measurement requirements discussed above. The optical and mechanical designs are shown in Figure 5. WCOR is designed to provide performance similar to, but improved over, the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI) COR2. The improvements include: (1) a reduced external occulter (EO) cutoff, which reduces the vignetting of the field near the inner edge of the FOV; (2) an added field occulter to further improve the diffraction suppression; and (3) a larger-format polarization detector array, which not only eliminates the need for a polarization wheel mechanism but also captures all polarization information simultaneously. The polarization detector overcomes the image smear introduced by wheel-based polarization mechanisms and has been successfully used in the Balloon-borne Investigation of the Temperature and Speed of the Electrons in the corona (BITSE, Gopalswamy et al., 2021).

**Table 4.** WCOR specifications

| Parameter | Value |
|---|---|
| Pixel size (\(\upmu\)m) | 10 |
| Detector (Teledyne E2V) | 4k x 4k |
| Chip size (mm) | 40 x 40 |
| FOV (Rs) | 2.5-15.0 |
| FOV (\({}^{\circ}\)) | 4 x 4 |
| Effective focal length (mm) | 273.7 |
| Plate scale (''/super pixel, 20 \(\upmu\)m) | 15 |
| Wavelength range (nm) | 650-750 |
| EO inner cutoff (Rs) | 2.0 |
| IO inner cutoff (Rs) | 2.5 |
| Distance A0-A1 (mm) | 600 |
| A1 diameter (mm) | 34 |

Figure 5: The optical (top) and mechanical (bottom) designs of WCOR. The coronagraph is externally occulted, the occulter being a threaded, tapered right frustum. The filter wheel switches between broadband (650-750 nm) and H-alpha (656 nm) filters. The overall length of the coronagraph is \(\sim\)1 m. The detector is a 4k \(\times\) 4k CCD with 10 \(\upmu\)m pixels.

WCOR specifications are given in Table 4. Use of the polarization detector results in a spatial resolution of 15"/super pixel (2 x 2 polarizer bin). The Nyquist resolution is 30", as in COR2. The modulation transfer function is above 0.5 except in the FOV severely vignetted by the EO. The diffraction brightness B (relative to the mean solar brightness Bs) at 2.5 Rs is \(\sim\)\(4\times 10^{-9}\), which is about an order of magnitude lower than the F-corona brightness. The SNR analysis considered the external brightness contributions from the K-corona, F-corona, diffraction, and internal scattering, and the internal contributions from read noise and dark current, which together generate the total photoelectrons per second in the CCD that fill 80% of the full-well depth of 10,000 electrons. It was determined that at 3 Rs an integration time of 6 seconds satisfied this requirement, with an SNR of 23 in the brightest pixel (aligned with the tangential K-corona) and 5 in the least bright pixel (aligned with the radial K-corona) of the super-pixel comprising four pixels.
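Two quick checks of the detector design in Table 4 can be sketched in a few lines: the plate scale implied by the pixel size and effective focal length, and the extraction of total and polarized brightness from one 2\(\times\)2 polarizer super-pixel. The 0/45/90/135-degree orientation set used below is an assumption typical of micropolarizer arrays, not a stated WCOR specification, and the brightness values are illustrative.

```python
import math

ARCSEC_PER_RAD = 206265.0

# 1) Plate scale from the Table 4 design values (10-um pixel, 273.7 mm EFL).
pixel_um, efl_mm = 10.0, 273.7
scale = (pixel_um * 1e-6) / (efl_mm * 1e-3) * ARCSEC_PER_RAD
print(f"{scale:.1f} arcsec/pixel -> {2*scale:.1f} arcsec/super pixel (2x2 bin)")

# 2) Total and polarized brightness from one micropolarizer super-pixel,
#    assuming the four pixels sample linear polarization at 0/45/90/135 deg.
def brightness_from_superpixel(i0, i45, i90, i135):
    total = i0 + i90                  # total brightness
    q, u = i0 - i90, i45 - i135       # linear Stokes parameters
    return total, math.hypot(q, u)    # (B, pB)

print(brightness_from_superpixel(120.0, 100.0, 80.0, 100.0))  # -> (200.0, 40.0)
```

The computed scale of about 7.5 arcsec/pixel doubles to the 15 arcsec/super pixel listed in Table 4, with a Nyquist resolution of 30 arcsec as stated in the text.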
### The Heliospheric Imager with Polarization (HIP)

Heliospheric imaging pioneered by STEREO/SECCHI has revolutionized our understanding of the large-scale structure of the inner heliosphere (Socker et al., 2000). The Heliospheric Imager (HI) instrument has been successfully used in several missions such as STEREO (Eyles et al., 2009), Parker Solar Probe's Wide-Field Imager for Solar Probe (WISPR, Vourlidas et al., 2016), and the Solar Orbiter Heliospheric Imager (SoloHI, Howard et al., 2020). The Wide Field Imager (WFI) currently under development for the Polarimeter to Unify the Corona and Heliosphere (PUNCH, DeForest et al., 2022) has added a new dimension to heliospheric imagers: polarization. The Heliospheric Imager with Polarization (HIP) will follow the design reported in Lavraud et al. (2016). In addition to the polarization capability, HIP will have better sensitivity and a steady view from Sun-Earth L4 and L5. The better sensitivity will help distinguish between the flux rope and the shock in fast CME events at large distances from the Sun. Polarimetry is critical to identifying feature chirality and substructure and to tracking event trajectories in 3-D; it also adds precision to overall background subtraction, improving line-of-sight (LOS) density estimates, which are important for Faraday-rotation measurements of the interplanetary magnetic field (IMF) using HIP and FETCH together. HIP is envisioned to have a wider FOV so as to include Earth within the HIP FOV. Figure 6 shows the two cameras (HIP-1 and HIP-2), similar to their STEREO counterparts, with a combined FOV whose outer boundary is beyond Earth and whose inner boundary overlaps with the coronagraph FOV (see Figure 7). HIP makes use of extensive heritage from the polarimetry, background subtraction, and post-processing techniques developed for PUNCH, and the HIP data pipeline provides polarimetric background-subtracted images of the solar wind.

Figure 6: A schematic showing the two cameras HIP-1 and HIP-2 with 30\({}^{\circ}\) and 50\({}^{\circ}\) FOV, respectively. Key subsystems such as the baffle systems, electronics box, and radiators are noted. The cameras will use 2k x 2k detectors similar to those being developed for PUNCH. The spatial resolution is \(\sim\)4 arcmin and the image cadence is \(\sim\)10 min. [Adapted from Lavraud et al., 2016].

Figure 7: Overlapping FOVs of HIP-1 and HIP-2 with the WCOR FOV. HIP will do polarized heliospheric imaging from 10-20 Rs to 65\({}^{\circ}\) from the Sun. The HIP-2 FOV extends beyond Earth.

### The Faraday Effect Tracker of Coronal and Heliospheric structures (FETCH)

The Faraday effect refers to the rotation of the plane of polarization of a linearly polarized wave traveling through a magnetized plasma (Collett, 1992). The extent of rotation (\(\Delta\chi\)) depends on the electron density distribution (\(n\)) and the magnetic field component (\(B_{\parallel}\)) along the line of sight (\(s\)); in cgs units,

\[\Delta\chi=\lambda^{2}\left(\frac{e^{3}}{2\pi m_{e}^{2}c^{4}}\right)\int\limits_{0}^{S}n(s)\,B_{\parallel}(s)\,ds=\lambda^{2}\,\mathrm{RM}, \tag{1}\]

where RM is the rotation measure determined by the line-of-sight integration of the product \(nB_{\parallel}\). Thus, by measuring the FR angle, we can invert equation (1) to estimate the density and magnetic field along the line of sight.
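As an illustration of the magnitudes involved, the sketch below evaluates equation (1) for a uniform slab at the two FETCH frequencies. The physical constants are standard cgs values; the plasma parameters (density, parallel field, path length, loosely representing a CME crossing near \(\sim\)10 Rs) are purely illustrative assumptions, not mission requirements.

```python
import math

E_CGS, M_E, C = 4.8032e-10, 9.1094e-28, 2.9979e10   # esu, g, cm/s
K = E_CGS**3 / (2.0 * math.pi * M_E**2 * C**4)       # ~2.63e-17 in cgs units

def faraday_rotation_rad(freq_hz, n_cm3, b_par_gauss, path_cm):
    """Equation (1) for a uniform slab: delta-chi = lambda^2 * RM."""
    rm = K * n_cm3 * b_par_gauss * path_cm           # rotation measure, rad/cm^2
    lam = C / freq_hz                                # wavelength, cm
    return lam**2 * rm

n, b_par, path = 1.0e4, 3.0e-3, 1.4e11   # cm^-3, G (3 mG), cm (~2 Rs): assumed
for f in (165e6, 225e6):                 # the two FETCH frequencies
    dchi = faraday_rotation_rad(f, n, b_par, path)
    print(f"{f/1e6:.0f} MHz: rotation {dchi:.2f} rad ({math.degrees(dchi):.0f} deg)")
# The lambda^2 scaling between the two frequencies, (225/165)^2 ~ 1.86, is what
# lets the n*pi ambiguity of a single-frequency measurement be resolved.
```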
Since the HIP field of view overlaps with the spatial domain where FETCH makes measurements, one can obtain the line-of-sight integrated density from HIP observations independent of the magnetic field, so that the magnetic field structure can be deduced. The technique is well known and has been extensively used in the past to observe FR of signals from distant radio sources and from spacecraft (see Kooi et al., 2022, and references therein). FR measurements at 1 au have been shown to be an excellent tool for inferring the magnetic field of the solar corona, including CMEs (Mancuso and Garzelli, 2013) and the background solar wind (Bird, 2007). Detailed knowledge of the magnetic field content of solar transients such as CMEs and CIRs, as they propagate along the Sun-Earth line, is crucial for effectively forecasting space weather. The estimated CME magnetic field and its orientation, obtained well before the CME reaches 1 au, can be used to determine the geo-effectiveness of the CME. Many background radio sources emitting linearly polarized signals can help determine the FR in CMEs (Howard et al., 2016). However, the existing methods for measuring FR through a CME are currently limited to ground-based measurements with powerful radio telescopes such as the Very Large Array (VLA). Even though FR measurements have been made for decades for probing the solar wind (and CMEs) to obtain plasma densities and magnetic fields using external linearly polarized sources in the sky, the multiple-LOS observations envisioned to be performed by MOST are yet to be carried out. Liu et al. (2007) demonstrated a method to measure the magnetic field orientation of CMEs using FR measurements. The authors proposed time-dependent FR mapping for calculating CME propagation away from the Sun to resolve its geometry. Jensen and Russell (2008) showed that by fitting a force-free flux rope model to observations from a spacecraft, one could obtain various information about the flux rope, such as its orientation, size, and velocity. Combining this with electron density measurements (see section 5), one can obtain the magnetic field strength. These authors also emphasized the need for multiple-LOS FR observations to obtain the proper flux rope geometry and remove structural ambiguity. Bird (2007) utilized satellite signals as background radio sources; however, those observations were also carried out with large ground-based radio telescopes. Ground-based radio observations are affected by ionospheric plasma, which introduces additional FR of its own. Furthermore, because the Sun is a bright source of radio emission, only a few powerful antennas such as the Green Bank Telescope or the VLA are capable of viewing distant radio sources on the flank of an expanding CME structure. The structures behind the nose of the CME cannot be sampled by pointing directly at the Sun. FETCH will overcome these shortcomings by transmitting and receiving in space. While Faraday rotation provides information about the column-integrated density and magnetic field, recent state-of-the-art modeling of FETCH observations demonstrates that information about the distribution of these parameters can also be derived. The flow of plasma across the FETCH LOS produces a small offset in the sampled plasma between signals simultaneously crossing in opposite directions between two spacecraft. Signals take as long as 16 minutes to traverse the path between MOST3 and MOST4, for example, sufficient time for a half-solar-radius difference to develop; the scale size of the radio observation column is a tenth of this.
This difference in the total electron content is measurable and can be used in a coarse tomographic analysis of the resulting time series. Tomography can also be applied across the four lines of sight by constraining an MHD model or with simplifying assumptions regarding plasma outflow and structural coherence characteristics. Spacecraft-to-spacecraft FR was first demonstrated by using the radio transmissions from the Radio Plasma Imager (RPI; Reinisch et al., 2000) on the IMAGE satellite to the Wind and Cluster satellites (Cummer et al., 2001; 2003). The distance scales in that experiment were up to 15 Earth radii in the Earth's magnetosphere, several orders of magnitude smaller than the \(>\)1 au distances over which FETCH will make measurements.

Figure 8: FETCH block diagram showing the notional digital backend electronics design and the signal transmission-reception scheme. FETCH will transmit in a single polarization (vertical V or horizontal H) and receive both polarizations simultaneously while the transmitter is off. Transmission will be in one polarization at 165 MHz and in the other at 225 MHz. Reception will be at both polarizations and frequencies.

The main FETCH subsystems for the MOST mission are shown in Figure 8. The current baseline FETCH design includes a log-periodic dipole antenna (LPDA) operating at two frequencies: 165 and 225 MHz. The antenna will have a length of 3.3 m and a maximum width of 1 m. The antenna can be stowed in a canister for launch and then deployed. The transmitted FETCH signal will be chirp-compressed to boost the signal gain (Bernfeld et al., 1965). A detailed list of parameters for the FETCH system is given in Table 5, including the transmitter (Tx) and receiver (Rx) elements.

**Table 5.** List of FETCH system parameters

| Parameter | Value | Note |
|---|---|---|
| Frequency (MHz) | 165.0, 225.0 | |
| Tx peak power (W) | 200.0 | |
| Tx chirp pulse width (s) | 2.0 | |
| Tx bandwidth (Hz) | 1000.0 | |
| Duty cycle (%) | up to 50.0 | |
| Antenna base element size (m) | 1.0 | Largest dipole element dimension of the LPDA |
| Antenna boom length (m) | 3.3 | Dimension along boresight |
| Antenna beamwidth (deg) | \(\pm\)30.0 | |
| Antenna gain (dB) | 10.5 min | |
| Integrated cross-pol isolation (dB) | 40.0 | |
| Rx bandwidth (Hz) | 1000.0 | |
| Rx noise figure (dB) | 2.0 | |
| Integration time (s) | 100.0 | |
| Signal-to-noise ratio (dB) | 8.0 | |
| Data rate (kbps) | 100.0 max | |
| Power (W) | 515.0 | Assuming two 200 W SSPAs with 50% efficiency, 50% Tx duty cycle, and 100% contingency |
| Weight (kg) | 88.0 | Assuming 10 kg for the deployable antenna and 20 ... |

### The Radio and Plasma Wave instrument for MOST (M/WAVES)

Solar radio emission at frequencies below the ionospheric cutoff cannot be observed from the ground. These frequencies contain the most important information on eruptive phenomena and on the interplanetary medium through which solar disturbances propagate. The radio and plasma wave experiment (WAVES, Bougeret et al., 1995) onboard the Wind spacecraft demonstrated the importance of observing at all frequencies below the ionospheric cutoff down to \(\sim\)20 kHz (i.e., from decameter to hectometer to kilometer wavelengths). The lowest frequency corresponds to the local plasma frequency at the observing spacecraft, while the highest frequency corresponds to \(\sim\)2 Rs (the middle corona). The opening of the 1-14 MHz frequency range by Wind/WAVES resulted in a number of discoveries, especially because of the coronagraph images provided by SOHO in the overlapping spatial domain (see Gopalswamy, 2011, for a review). STEREO/WAVES (Bougeret et al., 2008) provided similar spectral coverage with a slightly higher upper cutoff (\(\sim\)16 MHz) and a different antenna system (three monopole stacer antennas, Bale et al., 2008) on each of the two STEREO spacecraft. The major advantage of the two views is that triangulation can be used to identify the location of a shock or electron beam emitting radio waves (Krupar et al., 2012; Makela et al., 2016).
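A minimal sketch of the triangulation idea: each spacecraft measures only a direction of arrival, and intersecting the two rays locates the emitter. The geometry below is restricted to the ecliptic plane (units of au), and the observer positions and source location are illustrative, not actual event data.

```python
import numpy as np

def triangulate(p1, az1_deg, p2, az2_deg):
    """Intersect two rays p_i + t_i * u_i, with bearings az_i measured in
    degrees counterclockwise from the +x axis."""
    u1 = np.array([np.cos(np.radians(az1_deg)), np.sin(np.radians(az1_deg))])
    u2 = np.array([np.cos(np.radians(az2_deg)), np.sin(np.radians(az2_deg))])
    t = np.linalg.solve(np.column_stack([u1, -u2]), np.asarray(p2) - np.asarray(p1))
    return np.asarray(p1) + t[0] * u1

l4 = np.array([np.cos(np.radians(60)), np.sin(np.radians(60))])    # MOST1
l5 = np.array([np.cos(np.radians(-60)), np.sin(np.radians(-60))])  # MOST2
source = np.array([0.3, 0.1])    # assumed radio burst location, au
az1 = np.degrees(np.arctan2(*(source - l4)[::-1]))  # bearing seen from L4
az2 = np.degrees(np.arctan2(*(source - l5)[::-1]))  # bearing seen from L5
print(triangulate(l4, az1, l5, az2))                # recovers [0.3 0.1]
```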
WAVES observations help track type II radio bursts, type III radio bursts, and radio noise storms that provide information on the disturbances as well as on the magnetic and density structure of the heliosphere. The MOST/WAVES (M/WAVES) experiment will closely follow the design of S/WAVES. Figure 9 shows the three mutually orthogonal antenna elements to be used for M/WAVES, in the stowed configuration and with one of the antennas deployed. The antennas will make electric-field radio measurements from 10 kHz to \(\sim\)25 MHz in at least 3 channels. Plasma wave measurements of the quasi-thermal noise spectrum will be made with \(<\mu\)V sensitivity and \(\Delta\)f/f \(<\) 4% spectral resolution to resolve the electron plasma frequency and thermal plateau. Rapid waveform measurements of individual antenna voltages will also be made; these will be useful in characterizing plasma waves associated with SEP electron events and dust impact signatures on the MOST spacecraft. One improvement in M/WAVES is that the antenna elements will be redesigned with reduced surface area to minimize the effect of dust impacts. Another improvement is increased sensitivity at low frequencies, so that the plasma line due to the quasi-thermal noise can be observed better to obtain the plasma density in the vicinity of the observing spacecraft.

### Solar Wind Plasma Instrument (SWPI)

The Solar Wind Plasma Instrument (SWPI) is based on the Ion and Electron Sensor (IES, Burch et al. 2006) that completed operations successfully on the Rosetta mission. SWPI has a compact dual-measurement sensor to measure ion and electron velocity distribution functions. Figure 10 shows the main components of IES along with the instrument block diagram. Particles enter the grounded entrance grid and are deflected by energy- and elevation-angle-dependent curved bipolar deflector electrodes into field-free apertures. Particles then enter the top-hat electrostatic analyzer (ESA) segments and get focused onto microchannel plates (MCPs) with delay-line anodes.

Figure 9: (left) Stowed configuration of the three mutually orthogonal antenna elements and the pre-amplifier enclosure. The red block is the pre-deployment retaining cover.
(right) One of the antenna elements in the deployed state (only part of the 6-m antenna element is shown). [Adapted from Bale et al., 2008; Bougeret et al., 2008]

The sensor heritage is from the Rosetta IES, while the electronics are based on the Solar Wind Plasma Sensor (SWiPS) instrument on the NASA/NOAA SWFO-L1 mission and the Magnetic Anomaly Plasma Spectrometer (MAPS) on the Lunar Vertex lander mission. Flight model builds for both instruments are underway, with completion expected in summer 2023.

Figure 10: Cross-sectional view of the SWPI sensor (left) and the instrument block diagram (right). The sensor head is made up of the common entrance grid and deflector plates, as well as electrostatic analyzers (ESA), microchannel plate (MCP) detectors, and front-end electronics (FEE) for electrons and ions. The sensor head is attached to an electronics box (Ebox). The Ebox uses a low-voltage power supply (LVPS) to distribute incoming power from the spacecraft to all instrument components. A command and data handling board that includes an FPGA with embedded flight software controls the subsystems and processes data from the detectors. The high-voltage power supply (HVPS) provides high voltage to the ESAs, deflectors, and MCPs. It also includes an FPGA that uses tables to control the sweeping of the ESA and deflector voltages necessary to detect particles across the angular and energy ranges.

### Solar Wind Magnetometer (MAG)

For accomplishing the MOST scientific objectives, it is critical to measure the magnetic fields of CMEs, shocks, and other magnetic field structures reaching 1 au. The magnetic field indicates the structure of the CMEs arriving at the spacecraft at L4 and L5, including the differences between the two spacecraft locations. The magnetic field also indicates the magnitude variation of the shocks arriving at the spacecraft, including the difference between the two locations. Three-component vector measurements of the magnetic field indicate changes that have occurred relative to the Parker spiral. Measurements of ion and electron density and solar wind speed provide additional information about the magnetic field structures. CIRs arrive at L5 first, then at Earth, and finally at L4, thus helping us understand the evolution of CIRs. The MAGs on MOST will be duplicates of the Parker Solar Probe (PSP) magnetometers, which are part of the FIELDS experiment (Bale et al., 2016). The PSP MAGs are triaxial fluxgate magnetometers built by Goddard Space Flight Center, similar to those successfully flown aboard MAVEN, the Van Allen Probes, and GOES-18. The PSP MAGs operate at a maximum cadence of 297.97 samples/sec. Four different dynamic ranges provide a full-scale resolution of +/- 1024 nT, +/- 4096 nT, +/- 16384 nT, and +/- 65536 nT, determined by the ambient magnetic field. The smaller dynamic ranges provide finer sampling resolution, starting with 0.03125 nT/ADU in the +/- 1024 nT range, and 0.125 nT/ADU, 0.5 nT/ADU, and 2.0 nT/ADU in the respectively larger dynamic ranges. The PSP MAGs are functioning quite well, providing numerous observations of Alfvenic magnetic field switchbacks (e.g., Kasper et al., 2019; Bale et al., 2021) and observations of the structure of the near-Sun magnetic field (Bale et al., 2019). The MAG instruments for MOST will be near identical and build-to-print, with only minimal changes due to spacecraft accommodations, making them very cost-effective and low risk. Figure 11 shows the PSP MAG sensor. MOST will use two magnetometers, one inboard and one outboard.
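A minimal sketch of the inboard/outboard calibration idea follows, assuming the spacecraft's own field falls off as a dipole (1/r\({}^{3}\)) along the boom so that two sensors at different distances can separate it from the ambient field. The sensor distances and field values are illustrative, not the actual PSP or MOST accommodation.

```python
def remove_spacecraft_field(b_in, r_in, b_out, r_out):
    """Solve b_i = b_ambient + s / r_i**3 (per axis) for the ambient field,
    where s lumps the spacecraft dipole moment along this axis."""
    s = (b_in - b_out) / (1.0 / r_in**3 - 1.0 / r_out**3)
    return b_in - s / r_in**3

b_ambient, dipole = 5.0, 40.0        # assumed ambient field (nT), moment term (nT m^3)
r_in, r_out = 2.0, 3.5               # assumed inboard/outboard boom positions (m)
b_in = b_ambient + dipole / r_in**3  # synthetic inboard measurement
b_out = b_ambient + dipole / r_out**3
print(remove_spacecraft_field(b_in, r_in, b_out, r_out))   # -> 5.0 nT
```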
Using two magnetometers gives us one more calibration technique, allowing the contribution of the spacecraft field to be disentangled, and is similar to how PSP, MAVEN, Juno, and GOES have flown recently.

Figure 11: (left) Drawing of the two PSP MAG sensors, showing the sensor assembly. (right) Photograph of one of the PSP MAGs. As in the PSP MAGs, the MOST MAG will have a data rate of \(\sim\)256 samples per second.

### Solar High-energy Ion Velocity Analyzer (SHIVA)

The Solar High-energy Ion Velocity Analyzer (SHIVA) is MOST's energetic particle detector, needed to understand the origins, energization, and transport of charged particles from the Sun and inner heliosphere. SHIVA will characterize the energy spectra, event classes, longitudinal features, and composition (from He to Fe) of SEP events. Furthermore, SHIVA will also characterize SEP electrons from \(\sim\)tens of keV to ultra-relativistic energies, as well as Anomalous Cosmic Rays (ACRs); the latter will advance our understanding of solar influences on the distant heliosphere. Additional science addressed by SHIVA includes measurement of the variability of primary galactic cosmic rays and Forbush decreases. Finally, SHIVA will help monitor the radiation environment at Earth and in the inner heliosphere, an important space weather contribution.

Figure 12 shows a schematic of the SHIVA instrument. SHIVA comprises two sensor heads, each with a detector stack made of solid-state detectors (SSDs) behind space-facing avalanche photodiodes (APDs). The APD-SSD combination enables measurement of electrons from \(\sim\)20 keV to \(\sim\)5 MeV, protons from \(\sim\)200 keV to \(\sim\)100 MeV, and heavier ions (He to Fe) from 2 to 200 MeV/nuc in multiple differential energy channels. The energy range can be extended using individual SSD pulse-height analysis (PHA), e.g., up to 500 MeV for protons. The energy resolution is <30%, and the time resolution is software-selectable, typically \(\sim\)1 min. SHIVA will closely follow the design of the Miniaturized Electron pRoton Telescope (MERiT), which is a low-mass, low-power, compact instrument using an innovative combination of particle detectors, sensor electronics, and onboard processing. MERiT flew on the Compact Radiation belt Explorer (CeREs), a 3U CubeSat in \(\sim\)500 km low Earth orbit (LEO, Kanekal et al., 2019), and on the CUSP (CubeSat to measure Solar energetic Particles) CubeSat. Another version of MERiT is under development for the Heliophysics Environmental and Radiation Measurement Experiment Suite (HERMES) on the Lunar Gateway. In addition to CeREs, CUSP, and HERMES, SHIVA has heritage from the Relativistic Electron Proton Telescope (REPT) instrument on the Van Allen Probes. As mentioned earlier, SHIVA comprises two sensor heads, one viewing along the nominal Parker spiral interplanetary magnetic field lines and the other perpendicular to the field lines. The instrument has configurable onboard processing to set the cadence and energy resolution.

Figure 12: Schematic of the SHIVA instrument with the two sensor heads at the top (left) and the cross-sectional view of one of the sensors (right). The geometry factor is 31 cm\({}^{2}\) sr (SSD) and 0.05 cm\({}^{2}\) sr (APD). A cut-away rendering of the sensor head is shown at the right, with each detector stack surrounded by an inner tungsten and an outer aluminum shielding (APDs are not shown). The electronics box below the sensor heads comprises front-end electronics cards and the onboard processor.

### Payload Accommodation

The spacecraft MOST1&2, each with the ten scientific instruments, the high-gain antenna (HGA), and the solar panels, are shown in Figure 13 in stowed and deployed configurations. The spacecraft design used for the Earth-Affecting Solar Causes Observatory (EASCO) mission (Gopalswamy et al., 2011a,b) has been adapted for MOST. The spacecraft bus will be a rectangular composite honeycomb structure with a 62-inch separation system. The spacecraft are three-axis stabilized.
The cluster of remote-sensing telescopes (MaDI, ICIE, HXI, and WCOR) is placed together on the Sun-facing side of the spacecraft and is actively pointed at the Sun. The HIP instrument is mounted on a platform to clear the HGA. FETCH's log-periodic dipole antenna will be mounted on a boom to prevent any light scattered from the antenna from entering the WCOR aperture. MOST3&4 carry only the FETCH equipment, so the spacecraft bus is very simple. Figure 14 shows the stowed and deployed configurations. The log-periodic dipole antenna is attached directly to the spacecraft.

Figure 13: MOST1&2 with the full instrument suite, showing stowed and deployed configurations. All the instruments are marked. The high-gain antenna points to Earth. The remote-sensing instruments except HIP and FETCH point to the Sun. The MAGs will be mounted on the MAG boom shown. The FETCH antenna is \(\sim\)3.3 m long; the longest dipole is \(\sim\)1 m long and fits into the canister (0.25 m diameter and 0.3 m long) shown in purple.

Figure 14: MOST3&4 with the FETCH instrument shown in deployed and stowed configurations. The solar electric propulsion assembly and the deployed FETCH antenna are indicated on the deployed spacecraft. The purple cylinder is the stowed FETCH antenna. The FETCH antenna is mounted directly on the spacecraft (no boom).

The stowed MOST spacecraft are mounted in a Vulcan dual-manifest fairing in two groups, as shown in Figure 15. The upper pair consists of MOST1&3, while MOST2&4 form the lower pair. We considered several options for the launch vehicle; their performances are shown in Figure 15 (right). Based on this plot, we selected Vulcan as MOST's launch vehicle.

### Flight Dynamics and Orbital Selection

The flight dynamics (FD) analysis started with the requirement that MOST1 and MOST2 be parked at L4 and L5, respectively, while MOST3 and MOST4 drift beyond L4 and L5 to a maximum Earth-Sun-spacecraft angle of 80\({}^{\circ}\). The locations of MOST3 and MOST4 are denoted by L4' and L5', respectively, to indicate that these locations change over the mission lifetime. The analysis considered two cases that differ in their arrival times at L4 and L5 by \(\sim\)1 year. The spacecraft are placed in the desired locations by performing lunar flybys, similar to how the STEREO spacecraft were placed into their desired heliocentric drift-away orbits. The main difference is that MOST1 and MOST2 are stopped for a sit-and-stare at L4 and L5, respectively. MOST3 and MOST4 keep drifting but are bounded by the maximum Earth-Sun-spacecraft angle of \(\sim\)80\({}^{\circ}\). This study assumed a wet mass of 600 kg for MOST1&2 and 400 kg for MOST3&4. The analysis considered two types of propulsion: high-thrust (chemical) and low-thrust (solar electric propulsion). From the FD point of view, there are four phases to the MOST mission: 1. launch and lunar flyby, 2. transfer phase toward the desired locations (L4, L4') and (L5, L5'), 3. dwell phase when all spacecraft are in place for a steady one year of observations, and 4.
drift phase when MOST3&4 drift toward Earth and, at mission's end, each of these two spacecraft occupies the original position of the other. In the launch and lunar flyby phase, two initial conditions were considered, with the arrival times at the final points differing by \(\sim\)1 year. For this study, chemical propulsion was used for the lunar flyby phase, but the same can be accomplished using low-thrust electric propulsion. In the drift phase, the drift back towards Earth was modeled with an impulsive burn but could be modeled with a low-thrust architecture as well. All four spacecraft are launched together into a direct lunar transfer orbit (5-day transfer period), as illustrated in Figure 16. Each spacecraft will perform a trajectory correction maneuver (TCM) in order to target a particular lunar flyby. The flybys place the upper and lower constellations into heliocentric drift-away orbits: toward the L4 point (upper constellation) and toward the L5 point (lower constellation).

Figure 15: (left and middle) Two views of the MOST launch configuration in Vulcan's split-manifest fairing. MOST3 is on top of MOST1 in the upper pair; MOST4 is on top of MOST2 in the lower pair. (right) Characteristic energy vs. mass for several launch vehicles. We selected Vulcan based on the preliminary estimate of mass.

The constellation \(\Delta\)V for the various maneuvers, along with the time since launch (\(\Delta\)T, in units of days, d, and years, y), is listed in Table 6 for a drift of 15\({}^{\circ}\) per year between the upper and lower constellations; the initial configuration is established in 5.72 years. For MOST1-4, the \(\Delta\)V requirements for the full mission are \(\sim\)464, \(\sim\)494, \(\sim\)522, and \(\sim\)521 m/s, respectively. When the initial configuration is desired to be established a year earlier, the \(\Delta\)V requirements nearly double for each spacecraft. The impulsive maneuver necessary for MOST3&4 to turn around from the dwell location consists of a series of two maneuvers: the first adjusts the semi-major axis of the orbit to induce the necessary drift rate; the second circularizes the orbit to stabilize the drift rate with respect to the constellation and Earth. MOST3 and MOST4 will exchange their initial positions after \(\sim\)9.5 years with a \(\Delta\)V cost of 496 m/s (MOST3) and 499 m/s (MOST4). The turnback and drift achieved by the impulsive maneuvers can also be accomplished by a low-thrust architecture.
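The insertion \(\Delta\)V values in Table 6 can be sanity-checked with Kepler's third law: a 15\({}^{\circ}\)/yr drift orbit differs from Earth's orbit by a few percent in semi-major axis, and stopping the drift costs roughly the circular-speed difference. The sketch below is a scale estimate only, not the trajectory optimization used in the study.

```python
import math

V_EARTH = 29.78   # km/s, Earth's mean heliocentric orbital speed

def drift_orbit(drift_deg_per_yr):
    """Semi-major axis of a circular heliocentric orbit drifting at the given
    rate relative to Earth, and the circular-speed difference as a rough
    delta-V scale for nullifying the drift."""
    period = 1.0 / (1.0 - drift_deg_per_yr / 360.0)   # yr (slower, trailing orbit)
    a = period ** (2.0 / 3.0)                          # au, Kepler's third law
    dv = V_EARTH * abs(1.0 - 1.0 / math.sqrt(a))       # km/s, v ~ 1/sqrt(a)
    return a, dv

a, dv = drift_orbit(15.0)
print(f"a = {a:.4f} au, |delta-V| ~ {dv*1e3:.0f} m/s")
# Yields ~1.029 au and ~420 m/s, the same order as the ~456-516 m/s
# insertion burns in Table 6.
```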
**Table 6.** Constellation \(\Delta\)V budget (\(\Delta\)T: time since launch; \(\Delta\)V in m/s)

| Maneuver Number | Maneuver Purpose | MOST1 \(\Delta\)T | MOST1 \(\Delta\)V | MOST2 \(\Delta\)T | MOST2 \(\Delta\)V | MOST3 \(\Delta\)T | MOST3 \(\Delta\)V | MOST4 \(\Delta\)T | MOST4 \(\Delta\)V |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Target ideal lunar flyby | 1 d | 6.9 | 1 d | 32.0 | 1 d | 6.4 | 1 d | 32.0 |
| 2 | Powered flyby | - | - | 3.5 d | 5.9 | - | - | 3.5 d | 5.9 |
| 3 | Insertion (nullify drift) | 3.88 y | 456.8 | 4.67 y | 456.5 | 4.84 y | 515.9 | 5.72 y | 482.8 |
| | **Total** | | **463.7** | | **494.4** | | **522.3** | | **520.7** |

Figure 16: Results of a flight dynamics analysis for a separation angle of 15\({}^{\circ}\) per year between the upper and lower constellations: (a) early after launch, (b) 10 months into the mission, and (c) when the initial configuration of all the spacecraft is established. MOST1-4 arrive at their respective locations at 3.88, 4.67, 4.84, and 5.72 years, respectively (see Table 6).

### MOST Project Life Cycle

Figure 17 shows the project life cycle of the MOST mission, including the notional phases. It takes about 8 months for phase A studies (preliminary analysis and mission definition), 11 months for phase B (system definition, preliminary design, and review), one year for phase C (final design and critical design review), 26 months for phase D-1 (subsystem development, spacecraft integration and testing), 1 month for phase D-2 (launch and checkout), and 10 years for phase E/F (science operations). During phases E and F, the constellations will drift toward L4 and L5, taking about 5 years; all the spacecraft will then be in the dwell phase for a year, followed by MOST3 and MOST4 drifting toward Earth while MOST1 and MOST2 are placed in halo orbits around L4 and L5, respectively. In the extended mission, MOST3 and MOST4 will occupy each other's dwell position after another 5 years (\(\sim\)11 years from launch). All instruments will start scientific operations in the cruise phase (en route to L4 and L5). Reclosable doors will be opened and closed as needed.

Figure 17: Project life cycle of the MOST mission, showing the extents of the various phases and the tasks to be completed.

As summarized in Table 7, MOST is a large mission with a total cost of \(\sim\)$900 M. MOST will be a Great Observatory with a cost less than half that of the BepiColombo mission. Given the large swath of the heliophysics community that will use the data, the benefit far outweighs the cost.

**Table 7.** Estimated mission cost

| Item | Description | Cost (M$) | Remark |
|---|---|---|---|
| Instrument cost | 9 instruments \(\times\) 2, FETCH \(\times\) 4 | 128 | All but FETCH: EASCO heritage |
| MOST1&2 | Buses | 168 | 150 \(\times\) 1.12, EASCO ... |

## 4 Discussion

### Synergy among MOST instruments and Modeling

The instrument suites provide imagery and time series data to reveal magnetic connectivity across solar and heliospheric domains.
Data from different combinations of multiple, complementary MOST instruments are needed to bring closure on major mission-wide, global-scale science objectives. Table 8 summarizes the specifications of the MOST instruments, which indicate that the data coverage is quite extensive and that different combinations can be used for different investigations. For example, to achieve objective 1.2 (determine the complete life cycle of active regions) we need to use a combination of data from MaDI, ICIE, and HXI to characterize the complete life cycle of active regions from emergence to dissipation. It has been shown that farside seismic signatures are correlated with the nearside magnetic signatures (Gonzalez Hernandez et al., 2007) observed when the active regions are on the nearside, in spite of the expected evolution over a period of 1-2 weeks before they reach the farside. MOST can greatly improve the situation because farside imaging becomes possible from three views: E60 (L5), W00, and W60, instead of just W00 (Earth view). In addition, the magnetograms at L4 and L5 will greatly reduce the time between the magnetogram and farside helioseismic observations to just a couple of days. Identifying and assigning magnetic structure from such combinations will greatly improve solar wind modeling and solar irradiance forecasting (Fontenla et al., 2009). To characterize the global coronal magnetic connectivity from the solar surface to the heliosphere and its slow evolution over timescales ranging from the solar rotation to the solar cycle, we need data from MaDI, WCOR, HIP, FETCH, MAG, and SWPI. In order to characterize the origin and energetics of solar eruptions as they propagate to the heliosphere, we need to use data from all ten MOST instruments. The FETCH instrument is unique in that it provides detailed information on the magnetic field at each sampled point, in contrast to imaging methods, which provide mostly distributions of material density across space. The synergy between FETCH and HIP provides an opportunity to constrain the density and line-of-sight magnetic field. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline **Instrument** & **FOV** & **Spatial resolution** & **Temporal resolution** & **Mass (kg)** & **Average power (W)** & **Data rate (kbps)** \\ \hline \end{tabular} \end{table} Table 8: High-level specifications of the science instruments ### Modeling Numerical modeling and simulations are essential in achieving the objectives of the MOST mission. Models of the background solar wind and the transients propagating in the solar wind are actively pursued (see e.g., Holst et al., 2014; Jin et al., 2017, 2018; Manchester et al.; Sachdeva et al., 2020) because they provide the global context needed for a better understanding of the Sun-heliosphere system. As noted, the photospheric magnetic field is a key driver of the solar wind models. In principle, the corona and solar wind models need the instantaneous magnetic field distribution over the entire solar surface, including the poles. Although MOST will not cover the whole 4\(\pi\) steradians, the combination of photospheric magnetograms from L1, L4, and L5 is very close to that ideal objective (Pevtsov et al., 2020). The STEREO mission demonstrated the importance of multiple views of solar features such as prominences, streamers, and CMEs. 
As major players in the Sun-Earth system variability, CMEs need to be characterized as early as possible, especially in the coronagraph FOV. CME modeling will help with both interpreting the WCOR observations and using them to simulate CME behavior at farther distances. The first part is a critical but often overlooked aspect of CME modeling. We cannot make any direct measurements of CME properties from coronagraph images. Coronagraphs integrate along the line of sight, compressing 3-D information into a 2-D plane. Therefore, CME properties must be reconstructed using some form of geometric modeling, such as a cone model (e.g., Fisher and Munro, 1984; Zhao et al., 2002; Na et al., 2013, 2017) or the graduated cylindrical shell (GCS; Thernisien et al., 2006, 2009). If one attempts to reconstruct a CME using a single viewpoint, there is often a degeneracy of plausible CME parameters that lead to "suitable" visual agreement between a wireframe model and the coronagraph image. Figure 18 shows three different fits to the same synthetic coronagraph image (from Verbeke et al., 2022). Despite appearing nearly the same visually, the CME parameters corresponding to these reconstructions vary by 34\({}^{\circ}\) in angular width and 5\({}^{\circ}\) in latitude. Global heliospheric models solving time-dependent MHD equations are popular and powerful tools to investigate the propagation, evolution, and space weather potential of CMEs throughout interplanetary space (e.g. [17, 16, 18, 19, 20]). These models, particularly those designed for space weather research and forecasting purposes, typically initiate CMEs near 0.1 au (21.5 Rs), i.e. beyond the Alfvén point, to limit computational costs while retaining a realistic description of plasma structures in the solar wind (e.g. [17, 16, 18]). Performing accurate estimations of the complete set of CME initial parameters near 0.1 au is critical for realistic modeling of CME propagation through interplanetary space ([19, 20, 21]). Geometric reconstruction of CME flux ropes is routinely done using EUV, coronagraph, and heliospheric imaging. The flux rope's magnetic properties can be derived from the Flux Rope from Eruption Data (FRED) technique, which assigns the total reconnected flux derived from the photospheric magnetogram and EUV images of the PEA ([19]). The derived flux rope parameters can be converted to the inputs required by these heliospheric models, either by assuming no change between 15-21.5 Rs or by assuming some sort of scaling with distance. Figure 19 shows the synergy among various MOST instruments. While MaDI, ICIE, and WCOR contribute key input parameters to the global MHD models, HIP and FETCH provide key constraints in validating the models. As summarized in Figure 19, WCOR observations will enable routine determination of the morphology, kinematics, geometry, and thermodynamics of CMEs in the middle-to-upper corona and will be pivotal to the interpretation of low coronal (i.e. from MaDI and ICIE) and heliospheric (i.e. from HIP and FETCH) observations, allowing investigations of CME magnetic structures through stereoscopic observations obtained across various heliocentric distances. Figure 18: Three different fits to the same synthetic coronagraph image. Despite appearing nearly the same visually, the CME parameters such as flux rope width and latitude corresponding to these reconstructions vary significantly (from [16]). ### Other Considerations We have not discussed standard items like the power system and avionics. 
The power system will be designed with the solar panels sized for normal observatory operations and the solar electric propulsion. MOST will use star trackers (roll knowledge) and guide telescopes (pointing accuracy) similar to the ones on STEREO. The MOST avionics includes (i) an Integrated Avionics Unit, which uses typical command and data handling cards, handles data storage using solid-state recorders, and manages attitude control and spacecraft power/battery management; (ii) a Redundancy Management Unit, which manages primary and redundant power interfaces to the battery/solar array; and (iii) Gimbal Control Electronics, which control dual-axis gimbals for propulsion and antennas. The flight hardware/software is based on proven in-house or commercial off-the-shelf (COTS) systems, and no credible technical risk has been identified. Radiation analysis will be performed in the future to reduce the radiation dose to an acceptable level using aluminum shielding. Also to be done in the future is validating all designs with appropriate reliability analyses: Fault Tree Analysis, Failure Mode and Effects Analysis, Parts Stress Analysis, Probabilistic Risk Analysis, and Worst Case Analysis. The Mission Operation Center (MOC) and the Science Operation Center (SOC) of the MOST mission will be located at NASA/GSFC. The MOC will handle the following functions: mission planning and scheduling, orbit determination/control, network and contact scheduling, commanding, spacecraft monitor/control, real-time health/safety processing, trending/analysis, instrument data handling, level 0 product processing, and level 0 data archive. The MOC implementation will use existing tools and software. There are no mission requirements that drive technology; the technology required is readily available and operational today on several spacecraft. The MOC will also handle infrequent calibration rolls, momentum dumps, closing instrument doors where needed, deployments (solar array, MAG boom, and FETCH antennas), and orbital maneuvers (solar electric propulsion thrusts). Figure 19: Schematics of the input parameters typically required by heliospheric MHD models initiating CMEs near 0.1 au (21.5 Rs), including contributions from WCOR observations, and possible synergies with other MOST instruments. DSN will be used to cover all the critical events of the mission: separation from the launch vehicle, attitude acquisition, solar array deployment, and propulsion system tests. If the separation is not in the view of the ground station, a portable ground station will be used. The mission operation plan includes the following elements: (i) Nominal Sequence Planning and Commanding: receive instrument commands from the SOC five days per week; uplink command sequences every weekday from the MOC. (ii) Operations Staffing: 8 hours per day, five days per week operations by MOC staff; autonomous monitoring when unstaffed; designated operations team members will be alerted in the event of a problem or opportunity. (iii) Operations Training: the operations team will participate in spacecraft integration and testing and will perform mission simulations prior to launch to verify readiness. (iv) Operations Center Development: reuses existing facility and software. ## 5 Summary and Conclusions We presented the MOST mission concept that will build upon the successes of the SOHO and STEREO missions with multiple new views of the Sun and enhanced instrument capabilities. 
The MOST mission is envisioned as the next-generation Great Observatory to provide the necessary imagery and time-series data of the Sun and heliosphere to understand the magnetic coupling between the solar interior and the extended atmosphere. The MOST mission is focused on understanding the global impact of flux emergence from the solar interior - from the inner corona out to 1 au. MOST is a multi-spacecraft mission in Earth's orbit around the Sun, positioned to obtain three-dimensional information on solar wind structures such as coronal mass ejections, stream interaction regions, and the solar wind itself. MOST will consist of two pairs of spacecraft located in the vicinity of the Sun-Earth Lagrange points L4 and L5. The spacecraft stationed at L4 and L5 will carry seven remote-sensing and three in-situ instrument suites. MOST will also carry a novel radio package, FETCH, with transmitters and receivers on all four spacecraft to measure the magnetic content of solar wind structures using the Faraday rotation technique. The MOST mission will be able to sample the magnetized plasma between the Sun and Earth during the mission lifetime. It is expected that MOST will be a significant part of the next-generation Heliophysics System Observatory, benefiting a large swath of the heliophysics community. The main conclusions of this study can be summarized as follows. 1. Only a Great Observatory with an optimal set of remote-sensing and in-situ instruments can provide all the imagery and time-series data needed for system science. 2. The Sun-Earth system variability is driven by solar magnetism, so it is necessary to measure the magnetic field in the photosphere, chromosphere, corona, and interplanetary medium, leading to breakthroughs on critical questions. 3. FETCH is a novel concept requiring the analysis of spacecraft-to-spacecraft radio signals to provide magnetic field measurements from the outer corona to 1 au. 4. Most of the instruments have high heritage and TRL \(>\)6, except FETCH, which needs further study to optimize the signal-to-noise ratio for FR measurements and to minimize mass and power. MaDI based on the traditional magnetograph concept (e.g., CDM) is also at TRL \(>\)6, but a magnetograph based on the revolutionary IPSOS concept requires further study. 5. The instrument FOVs are optimized to provide continuous spatial coverage from the Sun to 1 au. 6. The mechanical assembly of the instruments on the spacecraft closely follows the STEREO mission, except for the boom requirement for FETCH. 7. The launch vehicle appropriate to the MOST mission has been found to be Vulcan with a split manifest fairing. MOST1&3 and MOST2&4 will be paired in the launch configuration. 8. Flight dynamics studies indicate that electric propulsion is a viable option. More trade studies will be performed between chemical and electric propulsion. 9. The prime mission has a duration of \(\sim\)11 years (cruise, dwell, and drift). The extended mission will prolong the mission for another five years, during which MOST3 and MOST4 will switch their dwell positions. 10. MOST will be a large mission costing about $900 M. ## 6 Conflict of Interest The authors have no conflict of interest to declare. ## 7 Author Contributions NG contributed to the conception and design of the mission study and wrote the first draft of the manuscript. NH and AP organized the MaDI group and provided the instrument design. LG contributed the design of ICIE. SK provided the HXI design. PN, QG, JZ, and NG contributed to the WCOR design. 
CD contributed to the HIP instrument design. LJ, SF, LL, and NG contributed to the development of the FETCH concept. SB and NG contributed to the update of the WAVES instrument. MD and PM developed the SWPI design. JG provided the MAG design. SK developed SHIVA. WM, CK, and CS contributed to the modeling section. All authors contributed to manuscript revision, read, and approved the submitted version. ## 8 Funding The MOST concept study was funded by NASA Goddard Space Flight Center's Heliophysics Line of Business (LOB), the Internal Research and Development (IRAD) program, and the STEREO project. ## 9 Acknowledgments The team thanks K. Parsay, L. Purves, G. Voellmer, M. Deshpande, and M. Shelton for engineering support. NG thanks the NASA Goddard Space Flight Center's Heliophysics Line of Business (LOB), IRAD program, and the STEREO project for support. The National Solar Observatory (NSO) is operated by the Association of Universities for Research in Astronomy, Inc. (AURA), under cooperative agreement with the National Science Foundation. K-SC is supported by the KASI Qrontier (Promising for the future) L4 project.
2310.13678
Long-Form Speech Translation through Segmentation with Finite-State Decoding Constraints on Large Language Models
One challenge in speech translation is that plenty of spoken content is long-form, but short units are necessary for obtaining high-quality translations. To address this mismatch, we adapt large language models (LLMs) to split long ASR transcripts into segments that can be independently translated so as to maximize the overall translation quality. We overcome the tendency of hallucination in LLMs by incorporating finite-state constraints during decoding; these eliminate invalid outputs without requiring additional training. We discover that LLMs are adaptable to transcripts containing ASR errors through prompt-tuning or fine-tuning. Relative to a state-of-the-art automatic punctuation baseline, our best LLM improves the average BLEU by 2.9 points for English-German, English-Spanish, and English-Arabic TED talk translation in 9 test sets, just by improving segmentation.
Arya D. McCarthy, Hao Zhang, Shankar Kumar, Felix Stahlberg, Ke Wu
2023-10-20T17:31:39Z
http://arxiv.org/abs/2310.13678v2
# Long-Form Speech Translation through Segmentation with Finite-State Decoding Constraints on Large Language Models ###### Abstract One challenge in speech translation is that plenty of spoken content is long-form, but short units are necessary for obtaining high-quality translations. To address this mismatch, we adapt large language models (LLMs) to split long ASR transcripts into segments that can be independently translated so as to maximize the overall translation quality. We overcome the tendency of hallucination in LLMs by incorporating finite-state constraints during decoding; these eliminate invalid outputs without requiring additional training. We discover that LLMs are adaptable to transcripts containing ASR errors through prompt-tuning or fine-tuning. Relative to a state-of-the-art automatic punctuation baseline, our best LLM improves the average BLEU by 2.9 points for English-German, English-Spanish, and English-Arabic TED talk translation in 9 test sets, just by improving segmentation. ## 1 Introduction With the proliferation of long-form audiovisual content online, translation and captioning become paramount for accessibility. Cascade models remain the dominant approach for speech translation Arivazhagan et al. (2020); Li et al. (2021), decomposing the problem into automatic speech recognition (ASR), post-processing of the transcript, and machine translation (MT). The cascade's MT component typically operates on sentence-like units, with each sentence translated independently of the others. When asked to translate long passages, models regularly fail or degenerate Cho et al. (2014); Pouget-Abadie et al. (2014); Koehn and Knowles (2017). This differs considerably from the expectations for automatic speech recognition models (e.g. Graves, 2012) that can process inputs of unbounded lengths. MT models must either be able to cope with potentially long, multi-sentence inputs or, alternatively, they must be able to determine cutpoints at which the transcript can be segmented into compact, independently translatable units. This work introduces a new, effective approach for the latter. While numerous text segmentation techniques have been proposed to improve spoken language translation (§6), the problem remains hard and unsolved. Indeed, Li et al. (2021) demonstrate that poor sentence segmentation degrades performance almost twice as much as transcript lexical errors. We cast sentence segmentation as a sequence-to-sequence task, rather than a traditional structured prediction task that tags sentence-final tokens. While this lets us leverage large language models, such models' outputs can be ill-formed. Even by using additional data for fine-tuning, residual adapters Tomanek et al. (2021); Chronopoulou et al. (2022), or future discriminators Yang and Klein (2021), simple syntactic constraints can be difficult to enforce. Moreover, all three require modifying the model or storing additional learned parameters. In light of these concerns, we introduce a simple, flexible, and modular approach to generating well-formed task-specific strings at inference time without any additional training. We compactly express constraints on the output format as finite-state machines, then efficiently enforce these via composition. While the approach is simple, it remains unexplored for large language models, and it yields automatic gains on downstream performance, advancing the state of the art for speech translation and thereby being applicable to existing systems. 
Moreover, the approach is sufficiently general that it can be applied to other domains in a _plug-and-play_ manner. We benchmark our approach as a component in a speech translation cascade. Experiments in three language pairs indicate that our approach outperforms both a baseline cascade system that predicts punctuation marks before inferring sentence boundaries and a strong neural structured prediction model. Overall, we improve the BLEU score on the IWSLT test sets by 2.9 points, closing \(\nicefrac{{3}}{{4}}\) of the gap between the previous best and the oracle system. Our contributions are three-fold: 1. We propose a novel LLM-based approach for long-form speech translation, which can be applied to any ASR-MT speech translation cascade system and yield a significant increase in translation quality. 2. To the best of our knowledge, we are the first to investigate the use of finite-state decoding constraints in combination with LLMs to produce consistent improvements. 3. We report additional small but consistent improvements by prompt-tuning or fine-tuning LLMs on ASR transcripts containing lexical and grammatical errors. ## 2 Windowing Approach One major challenge in modeling and inference of long-form transcript segmentation is that the input sequences can be very long. For example, a TED talk can contain more than one thousand words Li et al. (2021). We take a divide-and-conquer approach that operationalizes two straightforward principles in modeling. First, words on the left and right are both useful for deciding if a sentence delimiter should be present at the current word position. Second, distant words are less useful than nearby words. From these two principles, we design a top-level sliding window algorithm to balance the need for bidirectional modeling and efficiency of computation. We divide the passage into windows at both training and test time, with a small context window on each side to inform decisions at window edges (Figure 1). With this top-level inference algorithm, the sequence-to-sequence machine learning problem is now reduced to the window level. The problem is now to predict a sequence of segmentation decisions \(\mathbf{y}=y_{1},\dots,y_{w}\) for each text _window_ of size at most \(w\) tokens: \(\mathbf{x}=x_{1},\dots,x_{w}\). ## 3 Modeling Approaches A classic approach to discriminative sequence modeling is the conditional random field (CRF) Lafferty et al. (2001); Liu et al. (2005). This conditional graphical model allows incorporating arbitrary features of the transcript, including linguistic variables and word embeddings. ### Structured Prediction Baseline: Bidirectional RNN Model The limitation of the CRF is in the Markov assumption it makes, considering only the immediately previous word's segmentation decision. Even higher-order CRFs can only consider a fixed-size history within \(\mathbf{y}\). Instead, we introduce a neural autoregressive segmenter. It is an encoder-decoder neural network with monotonic hard attention to the bidirectionally encoded input at the current word position, admitting the same rich featurization of \(\mathbf{x}\) as the CRF; its likelihood is \[p_{\theta}(\mathbf{y}\mid\mathbf{x})=\prod_{t=1}^{w}p_{\theta}(y_{t}\mid\mathbf{y}_{<t},\mathbf{x}) \tag{1}\] \[:=\prod_{t=1}^{w}p_{\theta}\big{(}y_{t}\mid\mathbf{y}_{<t},\mathbf{BiRNN}(\mathbf{x})_{t}\big{)} \tag{2}\] where \(p_{\theta}\) is parameterized by a recurrent neural network followed by a linear projection layer and a softmax to obtain a locally normalized distribution. 
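A minimal PyTorch sketch of this segmenter follows. It is an illustration of Eqs. (1)-(2) under assumed shapes and hyperparameters (pre-computed word embeddings as input, two labels for delimiter/no-delimiter), not the exact architecture used in the experiments.

```python
import torch
import torch.nn as nn

class AutoregressiveSegmenter(nn.Module):
    """Sketch of Eq. (2): p(y_t | y_<t, BiRNN(x)_t), with hard monotonic
    attention realized by reading the encoding at position t."""
    def __init__(self, emb_dim: int = 128, hidden: int = 256, n_labels: int = 2):
        super().__init__()
        self.hidden = hidden
        self.encoder = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.label_emb = nn.Embedding(n_labels, hidden)
        # Decoder input: bidirectional encoding (2*hidden) + previous label (hidden).
        self.decoder = nn.GRUCell(3 * hidden, hidden)
        self.out = nn.Linear(hidden, n_labels)

    def forward(self, x_emb: torch.Tensor, y_prev: torch.Tensor) -> torch.Tensor:
        """x_emb: (B, w, emb_dim) featurized window; y_prev: (B, w) gold labels
        shifted right (teacher forcing). Returns logits of shape (B, w, n_labels)."""
        enc, _ = self.encoder(x_emb)                     # (B, w, 2*hidden)
        h = x_emb.new_zeros(x_emb.size(0), self.hidden)  # initial decoder state
        logits = []
        for t in range(x_emb.size(1)):
            step_in = torch.cat([enc[:, t], self.label_emb(y_prev[:, t])], dim=-1)
            h = self.decoder(step_in, h)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)
```

At inference time the gold labels are unavailable, so `y_prev` must come from the model's own earlier decisions.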
Exact inference here is intractable (unlike a CRF); we approximate it with beam search. This model and a QRNN-based Bradbury et al. (2017) automatic punctuation model will serve as baselines. ### Large Language Models for Segmentation More recently, the paradigm of pre-training followed by fine-tuning or few-shot learning has achieved great successes across many NLP tasks. The pre-training task is typically a variant of a language model Brown et al. (2020); Chowdhery et al. (2022) or an autoencoder Raffel et al. (2020) where a corrupted version of a sentence is mapped to its uncorrupted counterpart. We can encode segmentation as such a task: reproducing the input with inserted sentence delimiters. Concretely, we encode \(\mathbf{y}\) as \(z_{1},\dots,z_{w}\) where \(z_{t}=\mathrm{Concat}(d_{t},x_{t})\) and \(d_{t}\in\{\epsilon,\blacksquare\}\). For example, we feed i am hungry i am sleepy to the model, and it produces the sentence-delimited string i am hungry \(\blacksquare\) i am sleepy. We use the publicly available T5 (Text-to-Text Transfer Transformer) model Raffel et al. (2020) and the GPT-style Brown et al. (2020) PaLM model Chowdhery et al. (2022) as the foundation for our text-based segmenters. #### 3.2.1 Prompting and Fine-tuning Training examples for this task look like the input-output pairs in Figure 2. In fine-tuning, we update the full set of parameters for a given model on such examples to minimize the cross entropy on the output. For T5 models, the input sequence will be fed to the encoder, and the output sequence will be fed to the decoder through teacher forcing. For PaLM models, the input sequence and the output sequence are concatenated and fed to the decoder with an optional prompt as the prefix. For decoder-only PaLM models, a text prompt like the one in Figure 2 or a fine-tuned soft prompt (Lester et al., 2021) in the embedding space prompts the decoder to enter the state for the segmentation task. When we fine-tune PaLM, the entire model is updated for this task so that no prompting is necessary. #### 3.2.2 Decoding Constraints A deficiency of generation with an LLM is that the output might not only fail to correctly segment the passage; it might not even contain the same tokens as the passage. We shall say that an output is _well-formed_ if it contains the same token sequence as the input, with zero or one sentence delimiters before each token. While the rich parameterization of such large Transformer models might _learn_ the inherent structure of the output, we provide two solutions to _enforce_ well-formedness. Both approaches share the attractive quality of being _plug-and-play_: they require no additional parameter-learning, and they can be coupled with an already-trained language model. **Levenshtein Alignment for Post-processing** The generation models' ability to produce arbitrary outputs may be seen as a strength: the model could correct transcription errors and remove disfluencies, if so trained. Therefore, we can let the model generate freely without enforcing structural constraints, then enforce well-formedness post-hoc. Kumar and Byrne (2002) describe a WFST for _Levenshtein alignment_ between two strings. We use it to align the generated string with \(\mathbf{x}\). We then project segment boundaries across alignment links from the generated string onto \(\mathbf{x}\) to determine \(\mathbf{y}\). In this way, annotations can be salvaged when the LLM does not precisely recreate the input. 
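A plain-Python sketch of this salvage step is given below; it assumes the delimiter is emitted as a standalone "■" token and implements the alignment-and-projection logic directly rather than via the WFST formulation of Kumar and Byrne (2002).

```python
def project_boundaries(generated: list[str], x: list[str]) -> list[int]:
    """Align the model output to the input x by edit distance, then project
    delimiter positions onto x. Returns indices of tokens in x that start
    a new segment."""
    # Separate content tokens from delimiters; a boundary precedes the
    # next content token.
    gen_tokens, gen_bounds = [], set()
    for tok in generated:
        if tok == "■":
            gen_bounds.add(len(gen_tokens))
        else:
            gen_tokens.append(tok)
    # Standard Levenshtein dynamic program.
    n, m = len(gen_tokens), len(x)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1): D[i][0] = i
    for j in range(m + 1): D[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i-1][j] + 1, D[i][j-1] + 1,
                          D[i-1][j-1] + (gen_tokens[i-1] != x[j-1]))
    # Trace back, recording which input position each generated token aligns to.
    align, i, j = {}, n, m
    while i > 0 and j > 0:
        if D[i][j] == D[i-1][j-1] + (gen_tokens[i-1] != x[j-1]):
            align[i-1] = j-1; i -= 1; j -= 1
        elif D[i][j] == D[i-1][j] + 1:
            i -= 1
        else:
            j -= 1
    # Project boundaries across the alignment links.
    return sorted({align[g] for g in gen_bounds if g in align})
```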
**Finite-State Constraints in Decoding** A natural strategy to force well-formed outputs is _constrained decoding_ (e.g. Zhang et al., 2019). In it, we compose the input FSA \(\mathbf{x}\) and a special FST \(\mathcal{T}\) encoding all possible segmentation decisions, then project the FST to the output tape to obtain a determinized FSA for the output space. The FST \(\mathbf{x}\circ\mathcal{T}\) is shown in Figure 3. An advantage of the finite-state approach is that _any_ constraint expressible as a regular language is possible. Consequently, our implemented system is applicable to a large class of tagging and parsing problems in NLP, not just sentence segmentation. For instance, NP chunking (Ramshaw and Marcus, 1995) and BIO tagging, truecasing (Lita et al., 2003), retokenization, tetra-tagging for syntactic parsing, and constrained decoding (Hasler et al., 2018) can all be framed as finite-state transformations of an input sequence. Figure 1: Processing overlapping windows instead of entire transcript passages. \(w\) is the window size used in both training and inference. \(b\) is the total context window size. \(r\) (\(\leq b\)) is the right context window size. The underlines below the windows indicate which local segmentation decisions are taken as global decisions. Portions not underlined (i.e., the context window) are still provided to the segmentation model to inform segmentation of underlined portions. Figure 2: Prompting PaLM to segment a text window (red) based on three examples. ## 4 Experiments We evaluate our proposed method for using large language models for long-form speech translation with three sets of experiments: (1) analysis of hyperparameters, (2) comparison with competing methods, and (3) robustness to speech recognition errors. In each case, we are concerned with translation quality as measured by BLEU. We also assess the LLM output directly by qualitative analysis, well-formedness percentage, and (for diagnostic purposes, following Goldwater et al., 2009) segmentation \(F_{1}\) score against the sentence-segmented reference. Our experiments are carried out on the IWSLT speech translation data sets, subjected to the same pre-processing as described in Li et al. (2021). We use the 2014 data for dev and 2015 and 2018 for test. The fourteen reference transcripts in our dev set range from 861 to 1234 words; by contrast, the median length of a sentence in written English is close to 17 words (Kucera and Francis, 1970). We use the publicly available Speech-to-Text Google API1 to generate ASR transcripts. We remove the automatically predicted punctuation, lowercase the ASR transcripts, and use English-{German,Spanish,Arabic} MT models trained with the same preprocessing on the source side as Li et al. (2021). The MT model is a Transformer with a model dimension of 1024, hidden size of 8192, 16 attention heads, 6 encoder layers, and 8 decoder layers. We decode with a beam size of 4. In our experiments, the three MT model instances and the ASR model (and thereby its transcripts) are fixed while we vary the sentence segmentation policies. Footnote 1: [https://cloud.google.com/speech-to-text](https://cloud.google.com/speech-to-text) ### Context Window Size In §2, we introduced the top-level sliding window inference algorithm that sits above all modeling choices. To compare different models fairly, we fix the hyperparameters \((w,b,r)=(40,10,5)\) for the algorithm throughout the experiments. This choice is guided by a linear search over the window lengths \(w\) in the range of \([20,100]\). 
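For concreteness, a minimal sketch of this top-level procedure follows; `segment_window` stands in for any window-level segmenter from §3, and its interface is an assumption made for illustration.

```python
def segment_long_transcript(tokens, segment_window, w=40, b=10, r=5):
    """Sliding-window inference (Figure 1): feed overlapping windows of up
    to w tokens to a window-level segmenter and keep only the decisions
    for each window's central region. segment_window(window) returns one
    0/1 decision per token (1 = sentence boundary before that token)."""
    left = b - r        # left context inside each window
    stride = w - b      # number of tokens decided per window
    boundaries = []
    pos = 0             # first global position to decide in this window
    while pos < len(tokens):
        lo = max(0, pos - left)
        window = tokens[lo:lo + w]
        decisions = segment_window(window)
        # Keep only central-region decisions; context positions are
        # decided by their own windows.
        for k in range(pos - lo, min(len(window), pos - lo + stride)):
            if decisions[k]:
                boundaries.append(lo + k)
        pos += stride
    return boundaries
```

Each token's decision is taken exactly once, by the window in which it falls inside the central (underlined) region of Figure 1.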
The overlapping buffer size for both ends is set to 5 based on findings of segmentation for _punctuated_ text (Wicks and Post, 2021). According to Figure 4, translation quality degrades slightly as the window size approaches 20, but very large windows do not appear to be beneficial. The observation validates the two guiding principles of our sliding window approach. ### Choice of Prompt The manual prompt in Figure 2 is the one we selected from a few variants for the decoder-only PaLM models. Instead of exploring the unbounded space of prompts, we resorted to the more principled method of prompt tuning (Lester et al., 2021) to optimize the prompt in the embedding space for the segmentation task. For prompt tuning, the only hyperparameter is the length of the embedding prompt (the embedding size is tied to the corresponding model). In Figure 5, we show that for the PaLM models of 62B and 540B, an embedded prompt as short as 10 tokens can achieve much higher \(F_{1}\) than our hand-written prompt. But it is also notable that the gap between prompt tuning and manual prompting shrank from 25 percent to 10 percent as the model size increased from 62B to 540B, indicating the increasingly stronger generalization capability of extremely large language models. Based on Figure 5, we use 30-token soft prompts in the main results. Figure 4: BLEU for English–German as the context window size for segmentation increases. Each dot represents a T5 segmentation model trained with the same window size used at inference time. Figure 3: FST representing all possible segmentations for the transcript “i came i saw i conquered”. ### Effect of Finite-State Constraints We contrast greedy search and beam search, with either the segmentation FST constraint (§3.2.2) inside the decoder or post-hoc Levenshtein alignment (§3.2.2) for repairing invalid output. We also vary the model types and model sizes to analyze the impact of constrained decoding in different situations. Table 1 shows that constraints are crucial for smaller models in prompt-tuned scenarios. For example, the rate of output being well-formed is only 14.5% using greedy search for the PaLM 8B model. Even when the model size is increased to 62B, the well-formedness rate is still below 90%. The Levenshtein post-alignment algorithm is effective, but the more general finite-state constraint is even more effective. For the 8B model, the improvement in \(F_{1}\) is 1-2% absolute. For the 62B model, the improvement is nearly 3% absolute. On the other hand, if the cost of fine-tuning is acceptable, LLMs can adapt to this task very well. The fine-tuned T5 base model has a well-formedness rate of 99.4% (the rate is even higher for the T5 11B model: 99.8%). But we shall point out that for the results to be useful to downstream applications, either of the two types of constraints is necessary to completely eliminate hallucinations from LLMs. And the FST constraints are more general and more effective, as they affect beam search by rejecting non-well-formed hypotheses during search. ### Main Results: LLMs against Structured Prediction Models Using the IWSLT TED datasets as preprocessed by Li et al. (2021), we compare LLM models against their approach and two strong custom structured prediction baselines. We also report the performance of an oracle segmenter. **FixedLength**: Separates the transcript into disjoint segments with the same number of tokens. While this requires no external segmentation model, the resulting segments are non-sentential Tsiamas et al. (2022). 
**Oracle**: Uses punctuation from the reference transcripts to segment. The segmentation is projected onto Levenshtein-aligned words in the noisy ASR transcripts (§3.2.2).2 Footnote 2: A true oracle would optimize corpus-level BLEU over all \(2^{n}\) segmentations, but this is intractable. **Punctuate**: An interpretable two-pass segmentation that first infers punctuation Soboleva et al. (2021), then uses a fixed set of inference rules to differentiate sentence-terminal punctuation marks from sentence-internal ones, as in “St. John” and “The end.” **BiGRU f.t.**: On the IWSLT data, fine-tunes a shallow BiGRU model (§3.1) trained on the C4 data set Raffel et al. (2020), using the same rules as in Punctuate to derive sentence boundaries as supervision. The model has 1 left-to-right GRU layer, 1 right-to-left GRU layer, and 1 GRU layer in the decoder. It uses embeddings of character \(n\)-gram projections Zhang et al. (2019). **T5-{base,11B}**: Fine-tunes the base or 11B (xxl) T5 model Raffel et al. (2020) on the IWSLT data. \begin{table} \begin{tabular}{l|l c c c} \hline \hline \multirow{2}{*}{model} & constraint & search & wellformed & F1 \\ \hline & unconstrained & greedy & 99.4\% & – \\ & & beam=4 & 99.4\% & – \\ T5 base & Levenshtein & greedy & 100.0\% & 0.786 \\ Fine Tuned & & beam=4 & 100.0\% & **0.788** \\ & FST & greedy & 100.0\% & 0.786 \\ & & beam=4 & 100.0\% & **0.788** \\ \hline \hline & unconstrained & greedy & 14.5\% & – \\ & & beam=4 & 52.7\% & – \\ PaLM 8B & Levenshtein & greedy & 100.0\% & 0.715 \\ Prompt Tuned & & beam=4 & 100.0\% & 0.689 \\ & FST & greedy & 100.0\% & 0.717 \\ & & beam=4 & 100.0\% & **0.727** \\ \hline & unconstrained & greedy & 85.9\% & – \\ & & beam=4 & 89.0\% & – \\ PaLM 62B & Levenshtein & greedy & 100.0\% & 0.735 \\ Prompt Tuned & & beam=4 & 100.0\% & 0.737 \\ & FST & greedy & 100.0\% & 0.761 \\ & & beam=4 & 100.0\% & **0.764** \\ \hline \hline \end{tabular} \end{table} Table 1: Effect of finite-state decoding constraints and Levenshtein post-alignment on segmentation \(F_{1}\). Figure 5: Segmentation \(F_{1}\) on the dev set as prompt size varies. **T5-11B-ASR**: Fine-tunes the 11B T5 model on the ASR output of the IWSLT train and dev sets. Sentence boundaries are projected from reference transcripts in the same way as Oracle. **PaLM-PromptTuned-{62B,540B}{,-ASR}**: Prompt-tunes the PaLM model (Chowdhery et al., 2022) on the IWSLT data. **PaLM-FineTuned-62B{,-ASR}**: Fine-tunes the 62B PaLM model. The peer-reviewed state of the art for long-form speech translation is Li et al. (2021) on the IWSLT data set for en-de. Compared to Oracle, there is still a large gap of 3 BLEU points, which can be closed by improving segmentation alone. Table 2 lists the complete set of results. BiGRU f.t. already beats Li et al. (2021) by more than 1 BLEU point for en-de, proving itself a strong structured prediction baseline. T5 and PaLM models improve the results further. Within the T5 group, T5-11B improves over T5-base by 3% in segmentation \(F_{1}\), which translates to consistent BLEU score improvements in almost all data sets. Within the PaLM group, the prompt-tuned 540B model is about 5% more accurate than its 62B counterpart. Given the large number of parameters, fine-tuning PaLM models is very expensive. For completeness of comparison, we include the fine-tuning result for PaLM 62B. Its result is on par with the T5 11B model. 
This fact indicates that T5's encoder-decoder architecture has an inductive-bias advantage over the PaLM model's decoder-only architecture for this task, from a parameter-efficiency point of view. But the strength of the PaLM family lies in its largest member. The 540B model with a tiny tuned prompt is as effective as the fully fine-tuned T5 11B or PaLM 62B. ### Robustness to Speech Recognition Errors One key difference between cascade speech translation and typical document-level translation is that transcription errors can be introduced, which propagate into the translation. When the input to segmentation models contains speech recognition errors, can such models still predict sentence boundaries accurately? The answer is yes, to a certain extent. To test this, we replace the tuning data: instead of ground-truth transcripts with punctuation-derived sentence boundaries, we use ASR transcripts with sentence boundaries projected from their parallel ground-truth counterparts. For example, we will tune the models to predict the segmentation for the passage: _this train leaves at for <SENT> the next train will arrive in ten minutes_, even though there is a lexical error (_for_ versus _four_). Table 2 shows that training on the ASR transcripts is indeed beneficial. On top of the strong results of the T5 11B model trained on ground-truth transcripts, the ASR version obtains another 1% \(F_{1}\) improvement. The same is true for the PaLM 62B prompt-tuned and fine-tuned models. The relative improvement is consistent across different prompt sizes and fine-tuning methods (Figure 6). \begin{table} \begin{tabular}{l r r r r r r r r r r r} \hline \hline & F1 & \multicolumn{3}{c}{en-de} & \multicolumn{3}{c}{en-es} & \multicolumn{3}{c}{en-ar} & \\ \cline{3-11} Policy & TED 2014 & 2014 & 2015 & 2018 & 2014 & 2015 & 2018 & 2014 & 2015 & 2018 & Avg \\ \hline \multicolumn{12}{l}{_Baselines and Oracle:_} \\ Oracle & 1.000 & 26.66 & 30.24 & 25.21 & 40.38 & 41.72 & 41.84 & 15.66 & 18.18 & 17.59 & 29.62 \\ FixedLength & 0.041 & 20.82 & 23.45 & 19.66 & 32.76 & 34.03 & 34.01 & 12.64 & 14.79 & 13.92 & 23.66 \\ Li et al. (2021) & – & – & 27.00 & 22.00 & – & – & – & – & – & – & – \\ \multicolumn{12}{l}{_Small Structured Prediction Models:_} \\ Punctuate & – & 22.80 & 26.30 & 21.60 & 35.70 & 36.90 & 36.70 & 13.70 & 15.80 & 15.40 & 25.81 \\ BiGRU f.t. 
& 0.697 & 24.55 & 28.10 & 23.14 & 37.31 & 39.08 & 38.64 & 14.41 & 16.77 & 16.19 & 27.39 \\ \multicolumn{12}{l}{_LMs:_} \\ T5-base & 0.788 & 25.28 & 29.14 & 24.05 & 38.75 & 40.23 & 39.96 & 14.94 & 17.32 & 16.57 & 28.33 \\ T5-11B & 0.821 & 25.63 & **29.63** & **24.27** & **39.16** & 40.64 & **40.05** & **15.31** & 17.60 & 16.48 & **28.66** \\ T5-11B-ASR & **0.836** & 25.71 & 29.28 & 24.22 & 39.11 & 40.47 & 40.02 & 15.24 & 17.58 & 16.66 & 28.59 \\ PaLM-PromptTuned-62B & 0.764 & 25.10 & 28.69 & 23.92 & 38.52 & 40.01 & 39.22 & 15.03 & 17.13 & 16.58 & 28.08 \\ PaLM-PromptTuned-62B-ASR & 0.781 & 25.15 & 29.09 & 23.71 & 38.69 & 40.07 & 39.31 & 15.13 & 17.21 & 16.76 & 28.17 \\ PaLM-FineTuned-62B & 0.820 & 25.71 & 29.19 & 23.97 & 38.96 & 40.56 & 39.74 & 15.07 & 17.66 & **16.90** & 28.51 \\ PaLM-FineTuned-62B-ASR & 0.832 & **25.84** & 29.37 & 24.13 & 39.02 & 40.46 & 39.89 & 15.17 & **17.80** & 16.65 & 28.61 \\ PaLM-PromptTuned-540B & 0.816 & 25.44 & 29.29 & 24.23 & 38.95 & **40.70** & 39.74 & 15.03 & 17.61 & 16.86 & 28.49 \\ PaLM-PromptTuned-540B-ASR & 0.835 & 25.52 & 29.37 & 24.15 & 39.08 & 40.67 & 39.98 & 15.11 & 17.64 & 16.61 & 28.56 \\ \hline \hline \end{tabular} \end{table} Table 2: Segmentation \(F_{1}\) scores on the dev set and BLEU scores on dev and test sets, translating into German, Spanish, and Arabic. Still, the small segmentation improvement does not translate into significant BLEU score improvements. ## 5 Error Analysis ### Segment Length Histogram Analysis To understand the improvements and the remaining errors, we first compare the length distributions of the oracle, the small model BiGRU, T5-11B, and PaLM-540B. Figure 7 indicates that the more very long (\(\geq 50\)) segments a model has, the lower its \(F_{1}\) and BLEU scores tend to be. Both LLM models were able to reduce the number of very long segments, bringing it closer to the oracle. ### Qualitative Analysis Table 3 shows examples where the T5-11B-ASR model outperforms competing models. In the first two examples, the LLM model is able to capture the larger context and therefore make the correct prediction. The third example typifies the cases where T5-11B, which is fine-tuned on ground-truth transcripts without ASR errors, tends to make more wrong predictions when the input text is not fluent. Table 4 shows typical errors the T5-11B-ASR model makes. In the first two, ASR errors make the transcript difficult to parse. The third one is linguistically ambiguous. In the last one, the model's prediction is actually closer to the ground-truth segmentation than the Levenshtein-(mis)aligned ASR transcript. Overall, LLMs such as T5-11B-ASR made real progress in predictions requiring longer context. However, even though fine-tuning on ASR transcripts improved robustness to disfluent input, overcoming ASR errors remains challenging. ## 6 Related Work Speech translation. While end-to-end systems for speech translation have exceeded the performance of cascade models on short sequences Weiss et al. (2017), even on public data McCarthy et al. (2020), long-form audio is typically translated with cascades. Previous work uses tagging approaches to separate text into independently translatable units. Segmenting long texts into units suitable for translation has been a recurring topic in MT research Li et al. (2021); Tien and Thi Minh (2019); Pouget-Abadie et al. (2014); Doi and Sumita (2003); Goh and Sumita (2011). To bridge the gap between ASR and MT, Li et al. 
(2021) address long-form speech translation. Claiming that segmentation is the bottleneck, they adapt their MT model to work _with_ automatic segmentations, however inaccurate they may be. We instead train our models to minimize the loss of source sentence segmentation; the ultimate objective is improving the downstream translation quality. It is interesting to explore reinforcement learning for segmentation Srinivasan and Dyer (2021), but the state space is vast for the long-form segmentation problem compared to prior work on RL-based segmentation. Finally, one may consider additional sources of data or training examples to improve modeling. Using prosodic features when they are available is viable Tsiamas et al. (2022); however, we show that LLMs close most of the accuracy gap without them. As a contrasting approach, Kumar and Byrne (2002) focus on segmenting an ASR _lattice_, rather than the decoded transcript. Finally, data augmentation Pino et al. (2019); McCarthy et al. (2020); Li et al. (2021) can complement our approach. Figure 6: Contrast of segmentation \(F_{1}\) on the dev set between models trained on gold and ASR transcripts. Figure 7: Histograms of segment lengths for Oracle, BiGRU, PaLM PromptTuned 540B ASR, and T5 11B ASR. Text normalization and segmentation. Mansfield et al. (2019) model text normalization as a sequence-to-sequence problem, using <self> tags to bias toward copying, but they place no search constraints to ensure well-formedness. Zhang et al. (2019) also use finite automata intersected with a neurally generated lattice during decoding. Wicks and Post (2021) provide a unified solution for segmenting punctuated text in many languages; however, ground-truth punctuation is not present in speech recognition output. Structured prediction as sequence-to-sequence. Vinyals et al. (2015) show that attention-enhanced sequence-to-sequence models can be trained for complex structured prediction tasks such as syntactic parsing. Raffel et al. (2020) take a step further to model all text-based language problems in a text-to-text format. Paolini et al. (2021) framed many NLP tasks as translation between augmented natural languages. Constrained decoding. Hokamp and Liu (2017) and Post and Vilar (2018) introduced lexical constraints in neural machine translation beam search. Anderson et al. (2017) formulated lexical constraints as finite-state machines. Deutsch et al. (2019) used an active set method to efficiently compose many automata with beam search. ## 7 Conclusion We have presented new methods for long-form speech translation by coupling source-side large language models with finite-state decoding constraints, allowing large language models to be used for structured prediction with a guarantee of well-formedness in the output space. Finite-state constraints are especially effective when the model is decoder-only, relatively small, or has not been completely fine-tuned (only prompt-tuned or few-shot-learned) for the structured prediction task. We also observe that even though complete fine-tuning and enlarging the model size can reduce the rate of invalid output, models alone are not capable of completely eliminating invalid output. Fine-tuning on in-domain ASR transcripts containing recognition errors and disfluency improves segmentation accuracy over training on clean transcripts. Our qualitative analysis shows the largest category of remaining errors is ASR errors, which make transcripts difficult to parse and segment. 
The fact that LLMs are capable of adapting to ASR errors points to future research directions of contextualized ASR error recovery. \begin{table} \begin{tabular}{p{227.6pt}} \hline \hline **Reference**: designers can materialize their ideas directly in 3d and surgeons can practice on virtual organs underneath the screen \\ **ASR**: designers can materialize their ideas directly in 3d surgeons can practice a virtual audience underneath the screen \\ **T5-11B-ASR**: designers can materialize their ideas directly in 3d<SENT>surgeons can practice a virtual audience underneath the screen. \\ **Reference**: But our two hands still remain outside the screen <SENT> how can you reach inside and interact with the digital information \\ **ASR**: what are two hands still we made outside the screen that <SENT> how can you reach inside and interact with the digital information \\ **T5-11B-ASR**: what are two hands still we made outside the screen that how can you reach inside and interact with the digital information \\ \hline **Reference**: this is really what brought me to using satellite imagery<SENT>for trying to map the past i knew that i had to see differently \\ **ASR**: this is really what brought me to using satellite imagery<SENT>for trying to map the past i knew that i had to see differently \\ **T5-11B-ASR**: this is really what brought me to using satellite imagery for trying to map the past<SENT>i knew that i had to see differently \\ **Reference**: the equivalent of locating a needle in a haystack blindfolded wearing baseball mitts<SENT>so what we did is \\ **ASR**: the equivalent of locating a needle in a haystack blindfolded wearing baseball<SENT>minutes so what \\ **T5-11B-ASR**: the equivalent of locating a needle in a haystack blindfolded wearing baseball minutes<SENT>so what we did is \\ \hline \hline \end{tabular} \end{table} Table 4: Cases where the T5-11B-ASR model’s prediction is wrong. 
\begin{table} \begin{tabular}{p{227.6pt}} \hline \hline **Reference**: this great renaissance for ancient egyptian art architecture and religion<SENT>egyptologists have always known the site \\ **ASR**: this great renaissance for ancient egyptian art architecture and religion<SENT>egyptologists have always known the site \\ **BiGRU**: this great renaissance for ancient egyptian art architecture and religion<SENT>egyptologists have always known the site \\ **T5-11B-ASR**: this great renaissance for ancient egyptian art architecture and religion<SENT>egyptologists have always known the site \\ **Reference**: looking for layers of human occupation<SENT>and five meters down underneath a thick layer of mud we found a dense layer of pottery \\ **ASR**: looking for layers of human occupation<SENT>and five meters down underneath a thick layer of mud we found a dense layer of pottery \\ **BiGRU**: looking for layers of human occupation and five meters down underneath a thick layer of mud<SENT>we found a dense layer of pottery \\ **T5-11B-ASR**: looking for layers of human occupation<SENT>and five meters down underneath a thick layer of mud we found a dense layer of pottery \\ \hline **Reference**: actually started in 1984 be at a not-lost-for-long city found from above \\ **ASR**: how actually actually started in 1984 be at a not lost for long city found from above \\ **T5-11B**: how<SENT>actually started in 1984 be at a not lost for long city found from above \\ **T5-11B-ASR**: how actually actually started in 1984 be at a not lost for long city found from above \\ \hline \hline \end{tabular} \end{table} Table 3: Cases where the T5-11B-ASR model is more accurate. ## Acknowledgments We thank Chu-Cheng Lin and Nicholas Tomlin for comments that improved the presentation of the work. A.D.M. is supported by an Amazon Fellowship and a Frederick Jelinek Fellowship. ## Limitations Large language models are more expensive and slower compared to dedicated smaller models for sentence segmentation. The additional latency introduced in the speech-to-text cascade by such models can be too high for online processing. We use a sliding window algorithm to combine segmentation outputs from adjacent fixed-size text windows. This choice is a heuristic efficiency-accuracy tradeoff and may be sub-optimal. Recently, large language models have become increasingly capable of handling long paragraphs. We can simplify the system by applying large language models directly to long paragraphs. However, as the output length increases, the likelihood of hallucinations increases, making decoding constraints more important. Moreover, there may always be long-form audio whose transcriptions exceed the context length of even the largest language models. Finally, adhering to the cascade architecture--speech recognition followed by text-to-text translation--introduces the problem of error propagation. Our error analysis has shown that speech recognition errors form the main category within the remaining errors made by our systems. Can text-only large language models systematically correct speech recognition errors without introducing hallucinations? Furthermore, can a speech recognition model that incorporates a large language model jointly recognize and segment the transcription better than a cascade system?
2310.04304
Coding by Design: GPT-4 empowers Agile Model Driven Development
Generating code from a natural language using Large Language Models (LLMs) such as ChatGPT, seems groundbreaking. Yet, with more extensive use, it's evident that this approach has its own limitations. The inherent ambiguity of natural language presents challenges for complex software designs. Accordingly, our research offers an Agile Model-Driven Development (MDD) approach that enhances code auto-generation using OpenAI's GPT-4. Our work emphasizes "Agility" as a significant contribution to the current MDD method, particularly when the model undergoes changes or needs deployment in a different programming language. Thus, we present a case-study showcasing a multi-agent simulation system of an Unmanned Vehicle Fleet. In the first and second layer of our approach, we constructed a textual representation of the case-study using Unified Model Language (UML) diagrams. In the next layer, we introduced two sets of constraints that minimize model ambiguity. Object Constraints Language (OCL) is applied to fine-tune the code constructions details, while FIPA ontology is used to shape communication semantics and protocols. Ultimately, leveraging GPT-4, our last layer auto-generates code in both Java and Python. The Java code is deployed within the JADE framework, while the Python code is deployed in PADE framework. Concluding our research, we engaged in a comprehensive evaluation of the generated code. From a behavioural standpoint, the auto-generated code aligned perfectly with the expected UML sequence diagram. Structurally, we compared the complexity of code derived from UML diagrams constrained solely by OCL to that influenced by both OCL and FIPA-ontology. Results indicate that ontology-constrained model produce inherently more intricate code, but it remains manageable and low-risk for further testing and maintenance.
Ahmed R. Sadik, Sebastian Brulin, Markus Olhofer
2023-10-06T15:05:05Z
http://arxiv.org/abs/2310.04304v1
# Coding by Design: GPT-4 Empowers Agile Model Driven Development ###### Abstract Generating code from a natural language using Large Language Models (LLMs) such as ChatGPT seems groundbreaking. Yet, with more extensive use, it's evident that this approach has its own limitations. The inherent ambiguity of natural language presents challenges for complex software designs. Accordingly, our research offers an Agile Model-Driven Development (MDD) approach that enhances code auto-generation using OpenAI's GPT-4. Our work emphasizes "Agility" as a significant contribution to the current MDD method, particularly when the model undergoes changes or needs deployment in a different programming language. Thus, we present a case-study showcasing a multi-agent simulation system of an Unmanned Vehicle Fleet. In the first and second layer of our approach, we constructed a textual representation of the case-study using Unified Modeling Language (UML) diagrams. In the next layer, we introduced two sets of constraints that minimize model ambiguity. Object Constraint Language (OCL) is applied to fine-tune the code construction details, while FIPA ontology is used to shape communication semantics and protocols. Ultimately, leveraging GPT-4, our last layer auto-generates code in both Java and Python. The Java code is deployed within the JADE framework, while the Python code is deployed in the PADE framework. Concluding our research, we engaged in a comprehensive evaluation of the generated code. From a behavioural standpoint, the auto-generated code aligned perfectly with the expected UML sequence diagram. Structurally, we compared the complexity of code derived from UML diagrams constrained solely by OCL to that influenced by both OCL and FIPA-ontology. Results indicate that the ontology-constrained model produces inherently more intricate code, but it remains manageable and low-risk for further testing and maintenance. GPT-4, Auto-generated Code, AI-Empowered Model Driven Development, Ontology-Constrained Class Diagram, Object Constraint Language, Cyclomatic Complexity ## 1 Introduction In the AI era, with Large Language Models (LLMs) trained on diverse code, new opportunities arise for innovation in Model-Driven Development (MDD). MDD is an evolving field that holds promise to improve the efficiency and robustness of software engineering practices [14]. This study introduces an agile MDD approach that leverages existing LLMs, such as OpenAI's ChatGPT, to auto-generate complete, deployment-ready software artifacts [1]. Our approach eliminates the intensive time and effort seen in conventional MDD, where a unique code generator must be crafted for each deployment and updated with every model alteration. Complete software implies that the auto-generated artifacts are not only intricate but also synergistically structured to collectively ensure their designated functionality and meet their specified requirements [17]. Code generation, especially from formal models such as Unified Modeling Language (UML), Systems Modeling Language (SysML), or Business Process Model and Notation (BPMN) diagrams, has emerged as an influential paradigm in modern software engineering practices [14]. Class diagrams, a primary component of most object-oriented design methodologies, capture the static structure of software systems by representing classes, their attributes, operations, and their interrelationships. 
When paired with the appropriate tools, these diagrams can be directly converted into executable code, facilitating a more streamlined software development process [2]. Such automation not only guarantees a solid alignment between the design and its corresponding implementation but also reduces manual coding errors. This leads to improved software quality and quicker market deployment [20]. The dynamic behaviour, interactions, and holistic views of the system play a fundamental role in comprehending its overall functionality. UML offers a suite of diagrams, each with a unique perspective on system modelling. For instance, UML sequence diagrams represent the interactions among objects in a time-sequential manner, capturing the intricacies of object communications [15]. Use case diagrams focus on the system's functionalities from an end-user's perspective, ensuring the system's relevance and usability. Moreover, state diagrams offer insights into the various states an object can have and the triggering events for state transitions. When code generation processes rely solely on the static structure offered by class diagrams, they might miss out on these dynamic and interactional aspects of system behaviour. Thus, for truly comprehensive and complete auto-generated code, there is an imperative need to synergistically combine the static semantic richness with the dynamic perspectives offered by other UML diagrams [16]. However, while class diagrams focus on the structural aspects of a system, they frequently miss capturing intricate rules, constraints, or the specifications inherent to a domain. This is where the Object Constraint Language (OCL) plays a pivotal role in adding fine detail to code construction, as it offers a declarative language to specify precise constraints and derived values that are vital for maintaining the integrity and consistency of the model [1]. Furthermore, dynamic aspects of the model, such as communication between the classes, can be constrained via a domain-specific ontology language such as the FIPA-ontology [18]. A domain-specific ontology enables a common understanding of the exchanged knowledge by bridging the semantic gap that conventional class diagrams do not cover [21]. The integration of class diagrams, combined with OCL constraints and domain-specific ontologies, could spearhead a novel epoch in code generation. This integrated approach would enable the production of code that is not merely structurally accurate but also enriched with semantic details, ensuring the resulting software mirrors both its foundational design and the detailed domain expertise. Evaluation of auto-generated code is an essential step in grasping its software quality [15]. Traditionally, criteria to assess this quality, such as testability, maintainability, and reliability, have been qualitative in nature. This inherent qualitative character has often rendered them relative and open to subjective interpretation. Accordingly, our research adopts more objective, quantifiable criteria. Given our primary objective to auto-generate seamless code from a model, our evaluation is sharply focused on the structural integrity of this auto-generated code. We utilized cyclomatic complexity as an instrumental metric to provide insights into its structural soundness.
Furthermore, a comparative analysis was executed, pitting the behaviours of the generated code, deployed in varied languages, against each other and against the expected behaviour from the model's perspective [1]. The paper is structured to guide the reader through our study. Section 2 provides a detailed problem statement, where we pinpoint the challenge that hinders the existing MDD approach from becoming agile. Section 3 breaks down the four layers of the proposed agile MDD approach. Section 4 applies the proposed approach to model a Multi-Agent System (MAS) of an Unmanned Vehicle Fleet (UVF) and Mission Control Center (MCC). After modeling the structure of the case-study using a comprehensive UML class diagram, we add two layers of constraints to constrain both the model construction and communication. We used OCL to describe the model details, such as invariants and pre- and post-conditions. Furthermore, we used the Foundation for Intelligent Physical Agents (FIPA) ontology to define the communication semantics among the agents. Then we modelled the case-study behaviour using UML use case, activity, and state machine diagrams. Finally, we auto-generated Java and Python code from the same model using GPT-4. To simulate the UVF-MCC multi-agent system, the Java code is deployed within the Java Agent Development (JADE) framework and the Python code within the Python Agent Development (PADE) framework. In Section 5, we evaluate the model behaviour by comparing the JADE and PADE simulation behaviours to each other and to the behaviour originally predicted by the model. Furthermore, we take a closer look at the auto-generated code structure via its cyclomatic complexity. In this part, we compare the complexity of code generated from a model with only OCL constraints to that of the same model constrained by both OCL and the FIPA-ontology. Finally, Section 6 wraps up our findings, discusses their implications, and suggests next steps for future research.

## 2 Problem Statement

Natural language inherently possesses ambiguity, which is not only a challenge for machines to comprehend but is also confusing for humans. When utilizing ChatGPT to auto-generate intricate software artifacts, defective code is often produced due to the uncertain and open-ended nature of the input prompt. This issue becomes significantly pronounced in cases where the software to be generated is complex, multi-dimensional, and cannot be effectively described using natural language. Yet, MDD promises an elevated level of software abstraction, where a high-level model is used as the primary artifact from which the final application is generated [1]. However, the process of designing and maintaining these models introduces challenges that can limit the effectiveness of MDD. Herein, two main problems can be identified. Firstly, traditional modeling techniques, such as UML diagrams, while being excellent for data structuring in software development, often lack the semantic richness and rule-based derivation of knowledge inherent to ontologies [1]. This leads to models that are accurate in terms of structure and behavior but lacking in semantic depth, making them less effective in modeling complex real-world scenarios. Secondly, transforming these models into executable code is not a straightforward process [16], as it involves manually scripting the code generator, which must be meticulously maintained and updated to keep pace with changes in the model and the underlying technology stack.
This is particularly true when alternating the deployment from one programming language to another [1]. To comprehensively address the challenge in code generation within the framework of the current MDD approach [10], it is essential to pinpoint the distinction between traditional coding and MDD code generation, as shown in Figure 1. Traditional coding tends to directly encode the software functionalities in code. This approach works well for smaller features that can be transcribed straightforwardly as code. Debugging, testing, and maintenance are also performed at the code level. In contrast, the model-and-code separation approach involves the use of models to abstract and understand the system better, separate from the code. Figure 1: Difference between traditional coding and MDD. Developers use models while coding the application, and once coding is done, models are often discarded due to the high cost of keeping them up to date. Code visualization involves creating models after the software is designed and built, to understand what a program does or to import libraries or other constructs from code to be used as elements in models. These models, however, are typically not used for implementing, debugging, or testing the software, since the code itself is available [22]. In MDD, by contrast, models are the primary artifacts in the development process. These source models are used instead of source code. The target code is automatically generated from these models, which raises the level of abstraction and hides complexity. Tools like Eclipse Papyrus, MagicDraw, Enterprise Architect, or IBM Rational Rhapsody have traditionally been used for this purpose [13]. Yet, every time there is a shift in the deployment language or a significant update in the model, these tools necessitate substantial alterations to the code generators, impeding agility in the development cycle. Moreover, creating a code generator is a demanding task, consuming considerable time and energy from modelers. Further complicating matters is the requirement to craft a unique code generator for each programming language, making the prevailing MDD approach less adaptive to different deployment languages; therefore, agile MDD fails to materialize. It is in this context that we see potential in leveraging LLMs like ChatGPT as universal code generators [14]. Accordingly, our study highlights the challenging issue that "Although MDD provides a structured methodology that overcomes the inadequacies of natural language for auto-generating deployable code, the existing MDD approach has not been adequately adapted to the present LLM capabilities in code auto-generation. This misalignment makes the current MDD approach unfit for the agile software development workflow."

## 3 Proposed Approach

To tackle the challenge outlined in the problem statement, our proposed MDD approach necessitates that ChatGPT fully understands the model and its associated views. Given that ChatGPT currently processes information through text prompts, we employed PlantUML to convert the visual UML diagrams to a formal textual representation that can be easily copied into the ChatGPT prompt. In the proposed approach in Figure 2, the modeller initially creates the different model layers, which are structural, behavioural, and constraints. The structural layer contains all the diagrams that reflect the static structure by illustrating the software components and the intricate relationships among them.
For example, class diagrams detail object relationships and hierarchies, while package diagrams group these objects, highlighting dependencies. Component diagrams then break down system functionality at a high level, capturing inter-component relationships. For the real-world physical layout, deployment diagrams depict hardware configurations and component distributions. Object diagrams offer runtime object snapshots, while profile diagrams tailor UML models to specific platforms. The behavioural layer models how the system operates and interacts. Sequence diagrams lay out events in a linear progression, giving a clear timeline of interactions. Activity diagrams present a flowchart-like representation of processes, detailing step-by-step actions. Interaction diagrams showcase the interplay between components, while timing diagrams emphasize the importance of timing and sequence. On the user side, use case diagrams illustrate how external entities engage with the system. Lastly, state diagrams capture the life cycle of entities, showing how they transition between different states. Figure 2: Proposed Agile Model Driven Development approach. Although the structural and behavioural diagrams provide a holistic architectural view, they often lack the rules that regulate the model's architectural semantics. Accordingly, in this research we propose the constraints layer, which fine-tunes the model by explicitly specifying the meta-values that cannot be expressed in UML notation. OCL, for example, is used to restrict the construction details of the structural and behavioural layers by specifying invariants on classes and stereotypes, describing pre- and post-conditions on methods and states, and limiting parameter values. Furthermore, communication constraints can be defined using a suitable formal method, such as an ontology language, to express the communication semantics and the protocols necessary to communicate and share knowledge among the software artifacts. Ultimately, in the code deployment layer, we employed ChatGPT, which is based on the GPT-4 architecture, to generate code. We chose GPT-4 over GPT-3.5 for its stronger reasoning capability, which is essential for our approach, as the LLM must understand the model semantics and rules encapsulated in the constraints layer to embed them in the generated code. Furthermore, after using ChatGPT to auto-generate the code, it is important that the modeler deploy the generated code onto the software platform and ensure that it is operational. However, it is important to be aware that ChatGPT's code generation capabilities are still evolving and not flawless [14]. As such, it is anticipated that bugs may be encountered during the deployment of the code. Consequently, it is necessary for the modeler to address these bugs, potentially with the assistance of ChatGPT, and repeatedly run the code until it successfully fulfills its intended purpose.
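To make the code deployment layer concrete, the following is a minimal sketch of how a PlantUML model and its constraints can be handed to GPT-4 programmatically, assuming the 2023-era OpenAI Python SDK (pre-v1); the prompt wording, file names, and API key placeholder are illustrative rather than the exact tooling used in this study:

```python
import openai  # pre-v1 SDK, contemporary with this study

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

# Formal textual model: PlantUML diagrams plus the OCL and
# FIPA-ontology constraints, concatenated into one prompt.
model_text = open("uvf_model.plantuml").read()    # hypothetical file
constraints = open("uvf_constraints.ocl").read()  # hypothetical file

response = openai.ChatCompletion.create(
    model="gpt-4",
    temperature=0,  # favor deterministic, reproducible output
    messages=[
        {"role": "system",
         "content": "You are a code generator. Produce complete, "
                    "deployment-ready JADE (Java) agent code."},
        {"role": "user",
         "content": f"UML model:\n{model_text}\n\nConstraints:\n{constraints}"},
    ],
)
generated_code = response["choices"][0]["message"]["content"]
print(generated_code)
```

Switching the deployment language then amounts to changing a single instruction in the prompt (for example, requesting PADE-compatible Python instead of JADE Java), which is precisely the agility that hand-written code generators lack.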
## 4 Case-Study Model

The chosen use-case involves a UVF, comprising various types of UVs that undertake specific missions and are coordinated by an MCC involving a human operator [1]. This case-study is intentionally distributed, enabling it to be modeled and simulated as a MAS [15]. Such a MAS often exhibits a high level of complexity, as each entity is represented as an agent and must communicate and share information with other entities (i.e., agents) to achieve a common goal (i.e., the fleet mission). To avoid overwhelming the reader with the intricacies of MAS, in the following sections we highlight only the essential model views that facilitate an understanding of the MAS operation concept [1].

### Model Structural Layer

The class diagram can be considered the most important view in the model structural layer. Every entity within the case-study is represented as an agent class, as shown in Figure 3. Figure 3: Case-study class diagram. A summary of these agents follows:

* **Operator**: models the human operator and contains attributes such as operator-ID. It also includes actions such as sending the mission-brief and receiving the mission-performance.
* **MCC**: models the command center, including attributes like MCC-ID. The MCC coordinates missions and monitors the fleet. It includes actions such as receiving the mission-brief, sending the fleet-plan, receiving the fleet-performance, and sending the mission-performance.
* **UVF-Manager**: models the UVF-manager, containing attributes like UVF-ID, the number of UVs, the fleet-plan, and the fleet-performance. It includes actions such as receiving the fleet-plan, sending UV-tasks, sending the fleet-performance, and receiving the UV-performance.
* **UV**: a generic class that models the UVs. It contains attributes such as UV-ID, the UV's Task, the UV's Status, and Performance. Actions include receiving the UV-task and sending the UV-performance.
* **UAV, UGV, USV**: these are subclasses of UV, each modelling a different type of UV.

The previously described class diagram has been employed to articulate the intricate internal details of each agent, encompassing attributes, operations, and visibility. Additionally, the class diagram is utilized to delineate all conceivable relationships among the agents, including composition, aggregation, and inheritance. Moreover, the multiplicity of the classes establishes the cardinality between the agents.

### Model Behavioural Layer

To maintain brevity and keep the article focused, we explain only two views, the activity and state diagrams, as they are the most important behavioural views for understanding the case-study model. The activity diagram refines and complements the class diagram by meticulously detailing aspects such as synchronization, parallel execution, and conditional flows, which are indispensable for effectively achieving the mission goals. In contrast, the state diagram offers a microscopic perspective, unveiling the life cycle of each agent class within the model and illuminating how the agents coordinate and respond to realize the overarching mission objectives.
The activity diagram in Figure 4 elaborates on the interplay of information and task flows among the agents. It illustrates the orchestration of processes and the sequence in which tasks are allocated, carried out, and assessed, providing an understanding of the temporal and logical dynamics of the MAS. Thus, the interaction begins when the operator agent sends the mission-brief to the MCC agent. The latter transforms the brief into a plan and conveys it to the UVF-manager, which, in turn, assigns the tasks to the available UVs. Subsequently, the UVF-manager agent awaits the completion of tasks by each UV and collates their performance, which is instrumental in assessing the overall UVF performance. This consolidated performance is relayed to the MCC, translated into mission-performance, and communicated back to the operator agent. Furthermore, the state diagram provides a granular exploration of the agents' internal behaviour, illustrating the transitions triggered by events and the corresponding actions undertaken by the agents. In the scenario presented, the operator agent, the MCC, and the UVF-manager are all modelled using a straightforward two-state diagram, representing states of being either busy or free. However, for the UV, modelling a more intricate state machine was imperative, as it is utilized later by the agent to assess its task performance. Figure 4: Case-study activity diagram. Figure 5: UV agent state diagram. Figure 5 explains the different UV states as follows:

* **Initial**: the UV is prepared and ready to operate.
* **Available**: the UV can be either registered or unregistered.
* **Unavailable**: the UV cannot be registered because it is out of service, possibly due to a failure or battery charging.
* **Unregistered**: the UV is available but has not yet been registered, as it is still configuring its parameters.
* **Registered**: the UV can be either controlled or uncontrolled.
* **Uncontrolled**: the UV is registered but has not been assigned any mission.
* **Controlled**: the UV is not only registered but also allocated a mission.

### Model Constraints Layer

The constraints layer in the proposed MDD approach acts as a meta-model that encapsulates all aspects of the technical requirements that cannot be formalized in the structural and behavioural layers. In the following sections we discuss in detail the different types of meta-model constraints that have been considered within the case-study modeling.

#### 4.3.1 Construction Constraints

OCL is a declarative language used primarily with UML to describe rules that apply to classes within a model. Incorporating OCL as construction constraints within UML diagrams serves as a key enabler for refining the model. Thus, OCL enhances model clarity by addressing the inherent ambiguities in its construction details. This precision is particularly beneficial in generating deployable code directly from model views. The added constraints ensure that the transition from model to code is more accurate and seamless. During our study, we categorized five different types of constraints and applied them to all the agent classes. Figure 6 shows an example of some of the constraints applied to the UV agent class. The five constraint types are summarised as follows:

* **Uniqueness**: ensures that every agent instance is unique. For example, each UV agent must have a unique identifier across the MAS.
* **Cardinality**: ensures the correct association of agent instances with each other; for example, each UVF-manager with a unique ID is associated with a group of UVs with distinct unique IDs.
* **Value**: ensures that certain agent class values are limited to a given range. For example, the performance value of any UV agent lies within the 0 to 100 range.
* **Pre-condition**: guarantees the state consistency of an agent instance before triggering the next state. For example, a UV agent can only receive a new task if its current status is 'Idle'. This ensures that an agent must complete its current task or be in a standby state before being assigned a new task, preventing overloading or task conflicts.
* **Post-condition**: mandates the new state of an agent instance after leaving its old state. For example, after a UV agent has received a new task, its status must be updated to 'Active'. This reflects that the agent is currently engaged with a task and helps in the accurate tracking and management of the agent's workload.

#### 4.3.2 Communication Constraints

OCL, while adept at specifying constraints on the static aspects and behaviors of classes within a UML model, is not inherently designed to manage or constrain communication among the classes themselves. Its limitations become especially pronounced when addressing our case-study requirement of achieving a mission in a MAS, where interaction and communication among agents are fundamental (Sadik & Urban, 2018). Multi-agent frameworks like JADE and PADE offer a robust and dynamic ontology language called the FIPA-ontology communication language (Foundation for Intelligent Physical Agents, 2023). FIPA offers a comprehensive set of interaction protocols that can achieve intricate patterns of interaction, negotiation, and knowledge exchange among diverse agents, a capability OCL does not naturally extend to. Figure 6: UV agent construction constraints in OCL. The FIPA-ontology is a standard that defines a set of fixed schemas that together form the MAS communication model, as shown in Figure 7. The first set of schemas in our model are the message communication schemas, listed as follows:

* **Mission-Brief**: holds information regarding the mission-brief, containing attributes like mission-ID, description, and status.
* **Fleet-Plan**: contains attributes that provide details about the fleet-plan, such as plan-ID, description, and status.
* **UV-Task**: includes attributes like task-ID, description, and status.
* **UV-Performance**: includes attributes like UV-performance-ID and performance-metric.
* **Fleet-Performance**: includes attributes like fleet-performance-ID and performance-metric.
* **Mission-Performance**: includes attributes like mission-performance-ID and performance-metric.

The second set of schemas are the predicates, which depict relationships between the agent classes, including the communication schemas:

* **(agent-x) <is-a> (agent-y)**: expresses the inheritance relationship between agents, e.g., a UAV is a type of UV.
* **(agent-x) <has-a> (agent-y)**: represents the composition relationship between agents, e.g., the MCC has a UVF-manager.
* **(agent-x) <owns> (agent-y)**: expresses the aggregation relationship, e.g., the UVF-manager owns multiple UVs.
* **(agent-x) <collaborates> (agent-y)**: defines the collaboration between agents, e.g., the operator collaborates with the MCC.
The third set of schemas are the actions, which stand for operations that an agent can perform, in our model specifically on a message schema:

* **Send (schema-x)**: represents the action of transmitting data, e.g., the operator agent uses this action to send a mission-brief to the MCC.
* **Receive (schema-x)**: represents the action of receiving data, e.g., the MCC agent uses this action to receive the mission-brief from the operator.

#### 4.3.3 Other Constraints

Other technical requirements must be considered in the constraints layer as well. Examples of these constraints are the auto-generated code quality, code privacy, cybersecurity, etc. One way to formalize these constraints is by using OCL. For instance, to ensure consistent indentation we used:

* `self.leadingSpaces.mod(spacePerIndent) = 0`

Other OCL constraints that we considered within this model regulate the maximum line length, whitespace, function length, and import statements. Applying these code quality constraints ensures that the auto-generated code is functionally accurate, readable, maintainable, and clear. Furthermore, formal constraints in this layer can be added to regulate other important aspects of the auto-generated code, such as data privacy and cybersecurity. However, in order not to diverge from the main topic of the paper, we defer that discussion to the future work section.

## 5 Code Evaluation

Following our proposed MDD approach, after modeling the system in a formal textual format such as PlantUML, we provide this model as input to the GPT-4 prompt and use the output code. In our case-study, GPT-4 produced a few bugs; after fixing them, the output code could be deployed. Since our focus in this study is the completeness of the auto-generated code rather than its correctness, we concentrated our evaluation on exploring and analyzing the structure and behavior of the auto-generated code, rather than on cataloguing the type and number of generated bugs. For this reason, we conducted two different experiments. The first experiment targets the behavior of the auto-generated code, while the second aims to analyze its structure and complexity. Figure 7: Case-study FIPA-ontology model.

### Experiment 1: Behavioural Dynamic Analysis

In the first experiment, we orchestrated the generation of two distinct deployments. The first, written in Java, is tailored to run on the JADE platform, while the second, crafted in Python, is designated to be implemented in PADE. The goal of the experiment is to compare the behaviour of the code running on JADE against that running on PADE, to ensure the consistency of the system dynamics regardless of the deployment language. Accordingly, we observed the agent interaction behavior on the JADE versus the PADE framework. We found that the agents' behavior as captured by the JADE Sniffer tool aligns with the sequence diagram plotted from the PADE agent interaction, as shown in Figure 8. Both the JADE and PADE deployments have three UV instances, which are a UAV, a UGV, and a USV. In both sequence diagrams in Figure 8, we see that the depicted process commences with the Operator transmitting a mission-brief to the MCC. On receiving this, the MCC solicits the UVF-manager to identify available UVs. Upon obtaining a list of accessible UVs, the MCC devises a fleet-plan and conveys it to the UVF-manager. After this, the UVF-manager dispatches specific tasks to the UAV, UGV, and USV.
Each UV, upon task completion, relays performance data to the UVF-manager. Collating this data, the UVF-manager formulates a comprehensive fleet-performance metric, which is relayed back to the MCC. The MCC, in turn, evaluates this metric in congruence with the mission objectives, compiling a definitive mission-performance report. This report, the culmination of the entire operation, is ultimately returned to the Operator. Two important remarks emerge from comparing these two sequence diagrams in Figure 8 with the original case-study activity diagram in Figure 4. First, we noticed that ChatGPT has enhanced the interaction by adding new behaviours to the MCC and UVF-manager agents. This new behaviour can be seen when the MCC sends a DiscoverUVs message to the UVF-Manager agent and waits for the UVList before forming a FleetPlan; logically, the MCC needs to know which UV resources are available before planning them based on the mission-brief. This new interaction behaviour was not explicitly mentioned in the case-study activity diagram. The second remark is that the timing of the interaction between the MCC and the UVs differs between JADE and PADE, most probably due to differences in the state machine of each UV instance. This is a good indication that the UV state machines can emulate the operation of the agents.

### Experiment 2: Structural Complexity Assessment

The second experiment focuses on exploring the structure and complexity of the auto-generated code. Therefore, in this experiment we used the cyclomatic complexity metric to measure and analyze the complexity of the auto-generated code. Cyclomatic complexity quantifies code complexity by counting the number of linearly independent paths through the source code. It is calculated using the control-flow graph of the code, such as the one shown in Figure 9. Figure 8: JADE vs PADE sequence diagram. Figure 9: Code control-flow graph example. In the control-flow graph example shown in Figure 9, the cyclomatic complexity (M) can be calculated from the formula:

\[\mathrm{M}=\mathrm{E}-\mathrm{N}+2\mathrm{P} \tag{1}\]

where:

* E is the number of edges in the flow graph
* N is the number of nodes
* P is the number of separate branches (connected components) of the graph

Thus, M in this case equals 3. Accordingly, M can be used to assess the difficulty of code testing, maintenance, understanding, refactoring, performance, reliability, and documentation, where the following values are considered in the assessment:

* M = 1-10: low risk
* M = 11-20: moderate risk
* M = 21-50: high risk; the code needs to be reviewed and perhaps split into smaller modules
* M > 50: severe risk; refactoring is required

Since our MDD approach emphasizes the effect of adding formal constraints on generating deployable code, our interest in this experiment is to understand the influence of the constraints layer on the auto-generated code. Therefore, in the experiment we auto-generated two distinct deployments that differ in the level of constraints involved in their models. The first model implements only the OCL constraints, while the second adds the FIPA-ontology to the model. When only OCL constraints are used, the agents communicate via string-based messages, as shown in Figure 10-a, while using OCL together with FIPA-ontology constraints results in agents that communicate via schema-based messages, as shown in Figure 10-b.
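The difference between the two message styles can be illustrated with a short sketch. The field names mirror the Mission-Brief schema attributes from Section 4.3.2, while the concrete values and the status vocabulary are hypothetical:

```python
from dataclasses import dataclass

# Figure 10-a style (OCL-only model): the message is a bare string,
# so its structure is implicit and cannot be checked before sending.
string_message = "MISSION_BRIEF;id=M-01;desc=coastal survey;status=pending"

# Figure 10-b style (OCL + FIPA-ontology model): the message mirrors
# the Mission-Brief schema, making its structure explicit and checkable.
@dataclass
class MissionBrief:
    mission_id: str
    description: str
    status: str

    def validate(self) -> None:
        # Hypothetical status vocabulary; the model's Value constraints
        # play an analogous role in the generated code.
        if self.status not in {"pending", "active", "completed"}:
            raise ValueError(f"invalid status: {self.status}")

schema_message = MissionBrief("M-01", "coastal survey", "pending")
schema_message.validate()  # fails fast on malformed content
```

The extra classes and checks introduced by the schema-based style are what the cyclomatic complexity comparison below registers as a modest increase in structural complexity.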
After generating the two distinct deployments, we transformed the agent classes into control-flow diagrams to calculate their M values, as shown in Table 1 and Table 2. Figure 10: String-based communication vs. schema-based communication.

Table 1: Cyclomatic complexity of the auto-generated code with OCL constraints only.

Table 2: Cyclomatic complexity of the auto-generated code with OCL and ontology constraints.

| | Operator | MCC | UVF-Manager | UV | Whole model |
| --- | --- | --- | --- | --- | --- |
| Edges (E) | 12 | 22 | 23 | 12 | |
| Nodes (N) | 11 | 19 | 19 | 11 | |
| Branches (P) | 1 | 1 | 1 | 1 | |
| Complexity (M) | 3 | 5 | 6 | 3 | 17 |

By comparing the M values of the auto-generated code in Table 1 and Table 2, we find that the code complexity increases slightly when the FIPA-ontology constraints are added in the second deployment. However, the complexity of all the agent classes in both deployments still falls under the low-risk category. This means that the auto-generated structure is adequate and does not need any further refactoring. Furthermore, the highest M value belongs to the UVF-manager in the second deployment, where we considered both the OCL and the FIPA-ontology constraints. This value equals 6, which means that there is still a large risk margin that allows us to add further constraints to our model without negatively influencing the complexity of the auto-generated code.

## 6 Discussion, Conclusion, and Future Work

In our research, we highlighted the difficulties in auto-generating deployable code from natural language using LLMs like ChatGPT, primarily due to language ambiguity. To address this, we employed formal modelling languages, such as UML, for better interpretation by ChatGPT. We found that current UML code generation practices do not fully exploit LLMs, revealing a gap in agility within the MDD process. To enhance this agility, we introduced "constraints" into UML models, adding semantic depth to ensure accurate code generation. These constraints improve various software aspects, such as structure and communication. In our case-study, we showcased our proposed MDD approach by modelling a multi-agent system of a UVF. We used class diagrams to outline the agents, while activity and state diagrams captured their interactions and internal behaviours. Detailed constraints were provided using the Object Constraint Language (OCL) for structure and the FIPA-ontology for agent communication. This model then served as a foundation for auto-generating code in both Java and Python using GPT-4, chosen for its advanced reasoning over GPT-3.5. The effectiveness of our MDD approach relies on the LLM's ability to accurately understand the model's constraints, ensuring code generation remains true to our design. In the first evaluation experiment, we examined the behaviour of the auto-generated code within simulation environments: Java's JADE and Python's PADE frameworks.
Both deployments effectively captured the intended agent interactions, though there were minor sequence variations between them. Remarkably, GPT-4 not only adhered to the specified agent logic but also enriched it by introducing two new behaviours in the MCC agent's communication sequence. This addition highlighted the power of communication constraints in guiding GPT-4 and its enhanced comprehension of agent interactions. While these improvements were impressive, they underscored a need for meticulous code review. Despite GPT-4's advancements, ensuring that the generated code remains consistent with design intentions is crucial to prevent unexpected behaviours. In the second experiment, we examined the structure of the auto-generated code, specifically by assessing its cyclomatic complexity. In this experiment, we created two separate deployments. The first deployment's code resulted from a model that involves only OCL constraints, while the second deployment's code resulted from a model that involves both OCL and FIPA-ontology constraints. Our analysis revealed the intriguing result that integrating FIPA-ontology constraints did not dramatically augment the complexity of the auto-generated code. This suggests that these constraints provide meaningful semantics without unduly complicating the resultant codebase. Furthermore, the analysis also hinted at a notable latitude in our approach. There appears to be a reasonable buffer allowing for the inclusion of additional constraints in the model in future iterations without triggering an immediate need for a code refactor. This is indicative of the robustness and scalability inherent in our MDD approach. In our exploration of integrating the advantages of LLMs into MDD, we have identified that using formal modelling languages can significantly bridge the gap between the challenges of natural language ambiguity and the precision of code generation. The incorporation of meta-modeling constraints not only refines the code generation process but also provides insights into its structural complexity, ensuring a more informed and resilient codebase. Combined, these advancements hint at a transformative path to achieving the elusive agility in current MDD practices. As the world of software development evolves, this seamless interplay between structured modeling, advanced LLM reasoning, and structural complexity assessments will be paramount in crafting agile, efficient, and robust software solutions. In upcoming research, we plan to assess the correctness of the auto-generated code by quantifying the bugs present and pinpointing whether certain defects consistently relate to the model. Given the influential role of constraints in refining the auto-generated code, we intend to incorporate new privacy and cybersecurity constraints and subsequently analyse the characteristics of the resultant code. It is also essential to compare our methodology with current MDD frameworks, evaluating factors like efficiency, accuracy, and reliability in various contexts. Through comprehensive enhancement and evaluation, we aim to pave the way for broader industry adoption.
2308.08774
Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models
Language models such as mBERT, XLM-R, and BLOOM aim to achieve multilingual generalization or compression to facilitate transfer to a large number of (potentially unseen) languages. However, these models should ideally also be private, linguistically fair, and transparent, by relating their predictions to training data. Can these requirements be simultaneously satisfied? We show that multilingual compression and linguistic fairness are compatible with differential privacy, but that differential privacy is at odds with training data influence sparsity, an objective for transparency. We further present a series of experiments on two common NLP tasks and evaluate multilingual compression and training data influence sparsity under different privacy guarantees, exploring these trade-offs in more detail. Our results suggest that we need to develop ways to jointly optimize for these objectives in order to find practical trade-offs.
Phillip Rust, Anders Søgaard
2023-08-17T04:13:26Z
http://arxiv.org/abs/2308.08774v1
# Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models

###### Abstract

Language models such as mBERT, XLM-R, and BLOOM aim to achieve multilingual generalization or compression to facilitate transfer to a large number of (potentially unseen) languages. However, these models should ideally also be private, linguistically fair, and transparent, by relating their predictions to training data. Can these requirements be simultaneously satisfied? We show that multilingual compression and linguistic fairness are compatible with differential privacy, but that differential privacy is at odds with training data influence sparsity, an objective for transparency. We further present a series of experiments on two common NLP tasks and evaluate multilingual compression and training data influence sparsity under different privacy guarantees, exploring these trade-offs in more detail. Our results suggest that we need to develop ways to jointly optimize for these objectives in order to find practical trade-offs.

## 1 Introduction

One of the open challenges in AI is bridging the widening digital language divide by providing technologies that work well for all languages. Multilingual language models such as mBERT (Devlin et al., 2019), XLM-R (Conneau et al., 2020), and BLOOM (Scao et al., 2022) facilitate transfer between closely related languages, enabling the roll-out of technologies for low-resource languages, and are used for a wide range of real-world applications in many languages, e.g., from named entity recognition (Khalifa et al., 2021) to legal document classification (Wang and Banko, 2021). Generalization across languages is challenged by typological divides, language families, or scripts (Singh et al., 2019; Dufter and Schutze, 2020), and finding architectures that best facilitate such transfer, achieving optimal **multilingual compression** (Ravishankar and Sogaard, 2021) through parameter sharing (rather than compartmentalization), remains an open research problem. With the widespread adoption of multilingual language models also comes responsibility and requirements that models are trustworthy (Pruksachatkun et al., 2021). What does trustworthiness amount to for multilingual language models? A crucial requirement is that multilingual NLP models perform equally well across languages, not favoring any languages over others. Choudhury and Deshpande (2021) refer to this property as **linguistic fairness**. Linguistic fairness is defined as zero variance across language-specific losses, typically estimated on held-out data.1 Footnote 1: This definition of linguistic fairness is an instantiation of _equal risk fairness_ or overall performance parity, i.e., equal model performance across groups (Berk et al., 2018; Verma and Rubin, 2018; Williamson and Menon, 2019), which balances precision-based and recall-based metrics and is considered more relevant than calibration-based metrics for standard NLP applications. Since the three are mutually exclusive (Miconi, 2017), we ignore calibration and balance precision and recall. Another crucial requirement is _transparency_, i.e., the ability to say _why_ models make particular predictions.
Methods to achieve transparency come in two flavors. Some methods, commonly referred to as feature attribution methods, present rationales behind predictions in terms of input token attributions, but such rationales are limited in that they cannot explain predictions motivated by the absence of input tokens or the presence of particular token combinations. Feature attribution methods have also been shown to be unreliable (Kindermans et al., 2019; Arun et al., 2020). Other methods highlight training data influence, i.e., provide influential data points as rationales for decisions. Often referred to as instance-based interpretability methods, they are argued to be more useful across different NLP tasks (Han et al., 2020; Han and Tsvetkov, 2021; Zhou et al., 2021). We refer to the objective of achieving sparse training data influence, i.e., strong instance-interpretability, as **training data influence sparsity**. Finally, for many NLP applications, we further need our models to be private, for which **differential privacy** (DP; Dwork, 2006) provides a theoretically rigorous framework. The trustworthiness objectives as defined above have primarily been considered in a monolingual context and are often (falsely) assumed to be independent (Ruder et al., 2022).2 Our paper investigates _the extent to which these objectives align or are at odds_. We do so in a multilingual setting and show how multilinguality presents options and challenges.3 Our theoretical contributions show that while privacy and linguistic fairness are compatible through multilingual compression, privacy and training data influence sparsity are not, and our empirical results indicate that these objectives interact in non-linear ways.4 Footnote 2: One exception is a growing body of work showing fairness and differential privacy are at odds (Bagdasaryan et al., 2019; Cummings et al., 2019; Chang and Shokri, 2021; Hansen et al., 2022). While Naidu et al. (2021) show that differential privacy and GradCAM (Selvaraju et al., 2019), a feature attribution method, are compatible, the interaction between differential privacy and training data influence remains unexplored.

**Contributions** We begin (in §2) with a theoretical exploration of differential privacy, training data influence, and linguistic fairness in the context of multilingual language models. We show that differential privacy and training data influence sparsity are fundamentally at odds, a result which is not limited to the multilingual setting. While differential privacy and fairness are often said to be at odds, we also show that differential privacy and linguistic fairness over languages are compatible in the multilingual setting, as a result of compression. Subsequently (in §3-§5), we present empirical results on the impact of differentially private fine-tuning on multilingual compression and training data influence: We analyze the effect of such fine-tuning on the multilingual compression of large LMs and find that it is possible to achieve (i) high compression with strong privacy at the cost of performance; (ii) high compression with high performance at the cost of privacy; or (iii) privacy and accuracy at the cost of compression. Since we show in §2 that performance, privacy, and compression _are theoretically_ compatible, this leaves us with an open problem: How do we practically optimize for performance, privacy, and compression jointly?
Furthermore, we compare four (proxy) metrics for quantifying multilingual compression, namely sentence retrieval, centered kernel alignment (CKA; Kornblith et al., 2019), IsoScore (Rudman et al., 2022), and representational similarity analysis (RSA; Kriegeskorte et al., 2008; Edelman, 1998), and discuss their usefulness for balancing these trade-offs. Finally, we show that LMs exhibiting high multilingual compression are less instance-interpretable in that they make highlighting training data influence more difficult. In sum, our work shows that _linguistically fair and private high-performance multilingual models are possible, even if learning them is challenging. However, training data influence methods will fail for such models_.

## 2 Theoretical Exploration

We consider language model learning and fine-tuning in a multilingual setting, in which our training data \(D=D_{1}\cup\ldots\cup D_{|L|}\) is the union of disjoint training data from \(|L|\) different languages. We consider the interaction of differential privacy, training data influence, and linguistic fairness with performance and compression in this setting.

**Preliminaries** We briefly introduce our formal definitions here: A randomized algorithm, here model, \(\mathcal{M}:\mathcal{D}\rightarrow\mathcal{Y}\) is _\(\varepsilon_{p}\)-differentially private_ (Dwork, 2006) iff for all adjacent datasets \(D,D^{\prime}\in\mathcal{D}\) and all \(Y\subset\mathcal{Y}\), \(\mathbb{P}(\mathcal{M}(D)\in Y)\leq\exp(\varepsilon_{p})\cdot\mathbb{P}(\mathcal{M}(D^{\prime})\in Y)\).5 Adjacent means that the datasets differ by exactly one example \(x_{\mathit{diff}}\). Footnote 5: Note how standard empirical risk minimization is not private, since it is a linear combination of training samples near the decision boundary, and if \(D\) and \(D^{\prime}\) differ in one of those, the classifier changes significantly. A model \(\mathcal{M}\) is said to be _\(\varepsilon_{i}\)-instance-interpretable_, i.e., having sparse training data influence, iff for any \(D,D^{\prime},D^{\prime\prime}\in\mathcal{D}\) with \(D^{\prime}=D\setminus\{x_{\mathit{diff}}\}\), \(D^{\prime\prime}=D\setminus\{x^{\prime}\}\), and \(x_{\mathit{diff}}\neq x^{\prime}\), where \(x_{\mathit{diff}}\) is the most influential training data point under leave-one-out influence,6 it holds that \(\mathbb{P}(\mathcal{M}(D)\in Y)-\mathbb{P}(\mathcal{M}(D^{\prime})\in Y)>\exp(\varepsilon_{i})\cdot(\mathbb{P}(\mathcal{M}(D)\in Y)-\mathbb{P}(\mathcal{M}(D^{\prime\prime})\in Y))\). In other words, \(x_{\mathit{diff}}\) had more influence on \(\mathcal{M}\) than any other data point \(x^{\prime}\) by some margin \(\exp(\varepsilon_{i})\) (Koh and Liang, 2017). Footnote 6: Leave-one-out here means \(D^{\prime}=D\setminus\{x_{\mathit{diff}}\}\) and is the gold standard for instance-based methods, which explains the close connection to DP, where we also deal with adjacent datasets. A model \(\mathcal{M}\) is said to be fair if, for a group partitioning \(g(D)\to D_{g_{1}},\ldots,D_{g_{n}}\) into smaller samples and for some loss function \(\ell\), e.g., 0-1 loss, \(\ell(\mathcal{M}(D_{g_{i}}))=\ell(\mathcal{M}(D_{g_{j}}))\) for all \(i,j\) (Williamson and Menon, 2019). A model that is fair for a group partitioning by languages is said to be linguistically fair (Choudhury and Deshpande, 2021). Finally, a model \(\mathcal{M}\) exhibits perfect multilingual compression when it outputs identical representations for semantically equivalent inputs, irrespective of the input language.
Formally, for a pair of translation-equivalent sentences (\(i_{j}\), \(i_{q}\)), the representations of \(i_{j}\) and \(i_{q}\) are identical at any layer \(l\) of the model, i.e., \(\mathcal{M}^{l}(i_{j})=\mathcal{M}^{l}(i_{q})\). In the following paragraphs, we discuss under what conditions DP, training data influence, linguistic fairness, and multilingual compression are at odds or are compatible, and how these conditions align with common scenarios in multilingual NLP.7 Footnote 7: Differential privacy meaningfully protects any individual training example. However, sensitive information may be repeated across many training examples, so \(\varepsilon\)-DP does not necessarily prevent leakage of such information at the granularity of individual people, real-world events, etc. For example, in our multilingual setting, an attacker may still gain access to a social security number learned by the model, but they will be unable to identify whether the number was leaked in a particular language.

**Differential Privacy and Training Data Influence Sparsity** We first show that differential privacy and training data influence sparsity are fundamentally at odds:

**Theorem 1**.: _A model \(\mathcal{M}\) becomes less \(\varepsilon_{i}\)-instance-interpretable as it becomes more \(\varepsilon_{p}\)-differentially private, and vice-versa._

Proof.: Let \(\mathbb{P}(\mathcal{M}(D)\in Y)\) be abbreviated as \(p\), let \(\mathbb{P}(\mathcal{M}(D^{\prime})\in Y)=\mathbb{P}(\mathcal{M}(D\setminus\{x_{\mathit{diff}}\})\in Y)\) be abbreviated as \(p_{d}\), and let \(\mathbb{P}(\mathcal{M}(D^{\prime\prime})\in Y)=\mathbb{P}(\mathcal{M}(D\setminus\{x^{\prime}\})\in Y)\) be abbreviated as \(p_{2}\). Assume that \(\mathcal{M}\) is \(\varepsilon_{i}\)-instance-interpretable and \(\varepsilon_{p}\)-differentially private. If \(\mathcal{M}\) is \(\varepsilon_{p}\)-differentially private, it holds that

\[p\leq\exp(\varepsilon_{p})\cdot p_{d}\quad\Rightarrow\quad\exp(\varepsilon_{p})\geq\frac{p}{p_{d}} \tag{1}\]

If \(\mathcal{M}\) is also \(\varepsilon_{i}\)-instance-interpretable, it also holds that

\[\begin{aligned}(i)\quad&p-p_{d}>\exp(\varepsilon_{i})(p-p_{2})\\ (ii)\quad\Rightarrow\quad&p>\exp(\varepsilon_{i})(p-p_{2})+p_{d}\\ (iii)\quad\Rightarrow\quad&\frac{p}{p_{d}}>\frac{\exp(\varepsilon_{i})(p-p_{2})+p_{d}}{p_{d}}\\ (iv)\quad\Rightarrow\quad&\exp(\varepsilon_{p})>\frac{\exp(\varepsilon_{i})(p-p_{2})}{p_{d}}+1\end{aligned} \tag{2}\]

Step \((iv)\) follows from Equation 1. We can now see from step \((iv)\) of Equation 2 that \(\varepsilon_{p}\) increases with increasing \(\varepsilon_{i}\), i.e., the model becomes less differentially private as it becomes more instance-interpretable, and vice-versa. This result is not limited to the multilingual setting.

**Differential Privacy and Linguistic Fairness** Fairness and differential privacy are occasionally at odds, as shown by Bagdasaryan et al. (2019); Cummings et al. (2019); Chang and Shokri (2021); Hansen et al. (2022),8 but in the multilingual setting, fairness and privacy can be compatible (for the common definitions above). We first note that there is a trivial solution to obtaining differential privacy and linguistic fairness (a joint optimum), namely randomness. This simply shows that the two objectives can be simultaneously satisfied. Next, imagine a perfectly compressed multilingual language model trained on a multi-parallel dataset. Footnote 8: Several authors have considered practical trade-offs between privacy and fairness, including Jagielski et al. (2019), Lyu et al.
(2020), Pannekoek and Spigler (2021), and Liu et al. (2021b).

**Theorem 2**.: _If a model \(\mathcal{M}_{D}\) trained on parallel data from \(|L|\geq 2\) languages, \(D=\{\ldots,i_{1},\ldots,i_{|L|},\ldots\}\), with \(i_{j}\) and \(i_{q}\) being translation equivalents, is perfectly multilingually compressed, then it is \(\varepsilon_{p}\)-differentially private._

Proof.: Since \(\mathcal{M}_{D}\) is perfectly compressed, the representation of \(i_{j}\) is identical to that of \(i_{q}\) at any layer \(l\), i.e., \(\mathcal{M}_{D}^{l}(i_{j})=\mathcal{M}_{D}^{l}(i_{q})\). This gives us strong \(k\)-anonymity (Li et al., 2012) in the representation space of \(\mathcal{M}_{D}\), with \(k=|L|\) and all dimensions as quasi-identifiers. Since \(k\)-anonymity is not obtained through a deterministic (reversible) procedure, but through a randomly initialized learning procedure with random sampling, and since our attributes are randomly initialized, \(k\)-anonymization entails differential privacy in our setting.9 \(\mathcal{M}_{D}\), given perfect compression and convergence, is 0-differentially private, i.e., the probability distribution of \(\mathcal{M}_{D}\) is unaffected by the removal of any single row. Footnote 9: The procedure also is not dependent on any individual input, because all individual data properties are either random (from initialization) or \(k\)-anonymous, by construction. It follows directly from perfect compression that \(\mathcal{M}_{D}\) is also linguistically fair, because identical representations imply identical performance across languages. It is therefore an immediate corollary of the above result that a linguistically fair model can be differentially private. While the assumptions of a perfectly compressed model and a clean multi-parallel dataset rarely hold up in practice, and there is no obvious way to satisfy them while maintaining utility, the practical significance of this result is a reminder that multilingual training converges toward \(k\)-anonymization, and that safe \(k\)-anonymization of the representation space, if obtained, would provide us differential privacy. In the absence of strong guarantees, increasing the number of training languages (larger \(k\)) would strengthen privacy (Li et al., 2012). Our empirical results below (§4) suggest that we can often obtain strong privacy and strong compression, but at the cost of performance.

## 3 Experimental Setup

In our experiments, we investigate the relation between the performance and multilingual compression of fine-tuned multilingual language models, and their privacy and training data influence. We rely on a commonly used multilingual pretrained language model, which we fine-tune with different levels of (\(\varepsilon\), \(\delta\))-differential privacy on two common NLP tasks and evaluate using metrics of compression and training data influence.10 This section presents the pretrained language model, the tasks, the training protocol, the metrics of compression and training data influence, and the evaluation procedure. Footnote 10: For completeness, we explain the difference between \(\varepsilon\)-DP and (\(\varepsilon\), \(\delta\))-DP in Appendix B.

**Model** We use a pretrained XLM-R Base (Conneau et al., 2020), which is a 12-layer encoder-only transformer with \(\sim\)277M parameters and a 250k vocabulary size, trained on CC-100 (100 languages) via masked language modeling.

**Tasks and Data** We fine-tune in a zero-shot cross-lingual transfer setting for part-of-speech (POS) tagging and natural language inference (NLI).
Why these tasks? First, while POS tagging is driven by lower-level syntactic features, NLI requires a higher-level understanding (Lauscher et al., 2020). Second, we can leverage _multi-parallel_ corpora for multilingual fine-tuning and zero-shot cross-lingual transfer in both tasks, which helps eliminate confounders.11 Footnote 11: One limitation of this selection is that we only consider classification but no generative tasks, which could be worth exploring in the future. For POS tagging, we use the Parallel Universal Dependencies (PUD) treebank from Universal Dependencies (UD) v2.8 (Nivre et al., 2020; Zeman et al., 2021), which contains 1000 sentences parallel across 15 languages. We train in 7 of these languages (fr, it, ja, pt, th, tr, zh),12 exclude English,13 and use the remaining 7 languages (ar, de, es, hi, id, ko, ru) for validation. This split ensures that (1) we both train and evaluate on typologically diverse language samples, (2) there exist additional UD v2.8 treebanks in our validation set languages that we can harness for testing, and (3) there exist parallel sentences in our training set languages that we can harness to evaluate multilingual compression. We use the test splits of the following treebanks for testing: Arabic-PADT, German-GSD, Spanish-GSD, Hindi-HDTB, Indonesian-GSD, Korean-Kaist, and Russian-SynTagRus. Appendix Table 4 lists the treebanks' sizes.14 Footnote 12: See Table 2 for language details. Footnote 13: We exclude English to keep the number of languages balanced and because the combined corpus is already biased towards Indo-European with Latin scripts (see Table 2). Footnote 14: Regardless of test split size, each language contributes equally to the mean accuracy reported in Figure 1. For NLI, we rely on the XNLI dataset (Conneau et al., 2018), which contains (premise, hypothesis, label)-triplets multi-parallel across 15 languages. We, again, train in 7 of these languages (bg, es, fr, hi, tr, vi, zh), exclude the original English data, and validate in the remaining 7 languages (ar, de, el, ru, sw, th, ur). We train and validate our models on the original XNLI validation data (7500 examples per language), and we test the models on the original test data (15000 examples per language) in the validation set languages. The idea to train and validate on the same sentences (in different languages) while testing on sentences from different treebanks (as we do for POS) or a different dataset split (as for XNLI) is to induce a slight distributional shift between validation and test data for the same language sample. This shift lets us evaluate the regularization strength of the gradient noise added by the DP-optimizer.

**Training** We employ the standard fine-tuning procedures for token classification (POS) and sequence classification (XNLI) proposed by Devlin et al. (2019). Similar to Li et al. (2022), we use DP-AdamW (i.e., the DP-SGD algorithm (Abadi et al., 2016) applied to the AdamW optimizer with default hyperparameters (Loshchilov and Hutter, 2019; Kingma and Ba, 2015)) to train with (\(\varepsilon\), \(\delta\))-DP. We evaluate 6 different privacy budgets with \(\varepsilon\in\{1,3,8,15,30,\infty\}\).15 We set \(\delta=\frac{1e{-}4}{|D_{train}|}\) for POS, where \(|D_{train}|=7000\) is the length of the training dataset, and \(\delta=1e{-}6\) for XNLI.16 The noise multiplier \(\sigma\) corresponding to a particular (\(\varepsilon\), \(\delta\))-budget is determined numerically before training through binary search.
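To make the private training step concrete, the following is a minimal sketch of the DP-SGD update (Abadi et al., 2016) that underlies DP-AdamW: each per-example gradient is clipped to a fixed norm, the clipped gradients are summed, and Gaussian noise with standard deviation \(\sigma\) times the clipping threshold is added before averaging. The sketch assumes per-sample gradients have already been materialized (as Opacus does); the Adam moment updates and the binary search for \(\sigma\) are omitted, so this is illustrative rather than the exact implementation used here:

```python
import torch

def dp_sgd_step(params, per_sample_grads, clip_norm=0.1,
                noise_multiplier=1.0, lr=5e-4):
    """One DP-SGD update. per_sample_grads[i] has shape
    (batch, *params[i].shape): one gradient per training example."""
    batch = per_sample_grads[0].shape[0]
    # Per-example gradient norm, taken jointly over all parameters.
    flat = torch.cat([g.reshape(batch, -1) for g in per_sample_grads], dim=1)
    factors = (clip_norm / (flat.norm(dim=1) + 1e-12)).clamp(max=1.0)
    with torch.no_grad():
        for p, g in zip(params, per_sample_grads):
            clipped = (g.reshape(batch, -1) * factors[:, None]).sum(dim=0)
            # Gaussian noise with std = noise_multiplier * clip_norm.
            noisy = clipped + torch.randn_like(clipped) * noise_multiplier * clip_norm
            p -= lr * noisy.reshape(p.shape) / batch
```

Larger noise multipliers correspond to smaller privacy budgets \(\varepsilon\), which is why the privacy levels evaluated in this study trade utility for privacy.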
Our implementation builds upon the optimized Opacus (Yousefpour et al., 2021) privacy engine by Li et al. (2022).17,18 We use the Renyi differential privacy (RDP; Mironov, 2017; Mironov et al., 2019) accountant with conversion to (\(\varepsilon\), \(\delta\))-DP (Canonne et al., 2020). Hyper-parameter tuning on private data--which the POS and XNLI data in our study simulate--has been shown to incur additional privacy leakage (Liu and Talwar, 2019; Papernot and Steinke, 2022). Therefore, we try to keep hyper-parameter tuning to a minimum and rely on sensible priors to select a suitable range of hyper-parameters. For POS, we find that the range of good hyper-parameters for non-private settings transfers well to private settings if we just use slightly higher learning rates. For XNLI, we select hyper-parameters such that the sampling rate matches that used by Li et al. (2022) for NLI tasks in the GLUE benchmark (Wang et al., 2018).19 Accordingly, we train with a maximum sequence length of 128 for 10 epochs with a total batch size of 96 for POS and 30 epochs with batch size 512 for XNLI.20 At each privacy budget, we train models (3 random initializations each) with 8 learning rates for POS (\(1e{-}4\), \(3e{-}4\), \(5e{-}4\), \(7e{-}4\), \(1e{-}5\), \(5e{-}5\), \(7e{-}5\), \(1e{-}6\)) and 3 learning rates for XNLI (\(3e{-}4\), \(4e{-}4\), \(5e{-}4\) for private models and \(9e{-}5\), \(1e{-}4\), \(2e{-}4\) for non-private models). Based on the validation accuracy we then select the 5 best settings for each privacy level and task, listed in Appendix C. The learning rate is linearly decayed after 50 warm-up steps for POS and without warm-up for XNLI. We perform gradient clipping (per-sample in private settings) with a threshold of 0.1. Weight decay is set to 0.01. Footnote 20: Note that using fixed-size batches technically breaks the privacy guarantees of RDP based on the Sampled Gaussian Mechanism (Mironov et al., 2019). We follow the convention of using fixed-size batches, avoiding potential out-of-memory GPU issues, as a proxy for the true privacy spending and performance (see (Li et al., 2022) and Appendix D.4 in (Tramer & Boneh, 2021)). Quantifying Multilingual CompressionWe present four metrics of multilingual compression: A common proxy task to measure the quality of cross-lingual representations is sentence retrieval (Artetxe & Schwenk, 2019; Dufter & Schutze, 2020; Libovicky et al., 2020; Ravishankar & Sogaard, 2021; Liu et al., 2021; Maronikolakis et al., 2021). Dufter & Schutze (2020) quantify the degree of multilingual compression using bidirectional sentence retrieval precision as follows:21 \[\mathrm{P}=\frac{1}{2m}\sum_{i=1}^{m}\mathds{1}_{\operatorname*{arg\,max}_{k }R_{ik}=i}+\mathds{1}_{\operatorname*{arg\,max}_{k}R_{ki}=i}.
\tag{3}\] Here, \(R\in\mathbb{R}^{m\times m}\) denotes the matrix of cosine similarities \(R_{ij}=\cos(e_{i}^{q},e_{j}^{r})\) between the \(m\) sub-word representations \(e_{i}^{q}\) and \(e_{j}^{r}\) from a LM at indices \(i\) and \(j\) for a set of parallel sentences in the languages \(q\) and \(r\).22 Footnote 22: The sub-word representations are taken from the LM’s layer \(l\) and mean-pooled over the sequence length (excluding special tokens). Kornblith et al. (2019) propose to use linear centered kernel alignment (CKA) as a similarity index for neural network representations. It is defined as \[\mathrm{CKA}(X,Y)=\frac{\|Y^{\mathrm{T}}X\|_{F}^{2}}{\|X^{\mathrm{T}}X\|_{F} \|Y^{\mathrm{T}}Y\|_{F}}. \tag{4}\] For LMs, the matrices \(X\) and \(Y\) are obtained by mean-pooling \(n\) sub-word representations at model layer \(l\)(Conneau et al., 2020; Glavas & Vulic, 2021). Typically, \(X\) and \(Y\) correspond to the representations from two different models for identical examples (Kornblith et al., 2019; Phang et al., 2021). We instead use the representations from a single model for a parallel sentence pair \((s_{q},s_{r})\) in languages \(q\) and \(r\) as \(X\) and \(Y\), respectively, to study the similarity of representations across languages, similar to Muller et al. (2021) and Conneau et al. (2020). Yang et al. (2022) also use CKA as a metric of compression. IsoScore (Rudman et al., 2022) is an isotropy metric, computed as outlined in Appendix D, that quantifies the degree to which a point cloud uniformly utilizes the vector space. In our context, this point cloud corresponds to the \(n\) sub-word representations of all examples in a corpus at layer \(l\). Prior work has shown that anisotropic representation spaces, such as the embedding spaces of large LMs (Ethayarajh, 2019), suffer from so-called _representation degeneration_(Gao et al., 2019), and that the isotropy of a model's representation space correlates with its task performance (Zhou et al., 2019; Wang et al., 2020; Zhou et al., 2021; Rajaee & Pilehvar, 2021, _inter alia_). High isotropy also means languages are not compartmentalized and should therefore correlate with high compression. Representational similarity analysis (RSA; Kriegeskorte et al., 2008; Edelman, 1998) was originally introduced in the field of cognitive neuroscience to analyze the similarity of fMRI activity patterns, but it is also applicable to neural network representations (Bouchacourt & Baroni, 2018; Chrupala, 2019; Chrupala & Alishahi, 2019; Lepori & McCoy, 2020; He et al., 2021, _inter alia_), e.g., to analyze their similarity across languages. RSA measures the similarity between the representational geometries (i.e., the arrangement in the vector space) of two sets of representations. The representational geometry is determined through pairwise (dis)similarity metrics, and similarity is typically measured using a rank-based correlation metric such as Spearman's \(\rho\)(Diedrichsen & Kriegeskorte, 2017). Quantifying Training Data InfluenceTraining data influence metrics can help us gain an understanding of the inner workings of a model (Koh & Liang, 2017; Yeh et al., 2018; Charpiat et al., 2019; Koh et al., 2019; Pruthi et al., 2020; Basu et al., 2020; K & Sogaard, 2021; Zhang et al., 2021; Kong & Chaudhuri, 2021, _inter alia_). Such metrics are approximations of leave-one-out-influence. Pruthi et al. 
(2020) proposed a method that is both effective and practical, called \(\mathrm{TracInCP}\),23 to compute the influence of a training example \(z\) on the model's prediction for another example \(z^{\prime}\), which could be a test example or \(z\) itself (called the self-influence). The influence is computed as follows: Footnote 23: “CP” stands for checkpoint; the method approximates \(\mathrm{TracInIdeal}\), which is impractical to compute, through model checkpoints taken during training (Pruthi et al., 2020). \[\mathrm{TracInCP}(z,z^{\prime})=\sum_{i=1}^{k}\eta_{i}\nabla\ell(\theta_{i},z) \cdot\nabla\ell(\theta_{i},z^{\prime}), \tag{5}\] where \(\eta_{i}\) is the learning rate and \(\nabla\ell(\theta_{i},z)\) is the gradient of the loss w.r.t. the model parameters \(\theta_{i}\) and inputs \(z\) for the \(i\)-th model checkpoint. We will use \(\mathrm{TracInCP}\) as an approximation of training data influence in our experiments. EvaluationWe evaluate our models both during and after fine-tuning. For POS, we evaluate every 100 steps, and for XNLI, every 200 steps. We measure zero-shot cross-lingual transfer performance on the validation and test data by accuracy (token-level for POS and sequence-level for XNLI). To account for randomness, we take the mean of the best 5 seeds for each privacy budget. The measures of multilingual compression (sentence retrieval precision, CKA, IsoScore, RSA) are computed using distinct evaluation corpora comprising parallel sentences for all language pairs in the respective training set language sample. For models trained on XNLI, we use 3000 sentence pairs per language pair from the TED 2020 corpus (Reimers and Gurevych, 2020) and 3500 pairs from the WikiMatrix dataset (Schwenk et al., 2021). For models trained for POS, we use 3500 pairs from TED 2020, 3500 pairs from WikiMatrix, and 900 pairs from Tatoeba,24,25,26 numbers chosen based on availability and memory usage. Footnote 24: [https://tatoeba.org](https://tatoeba.org) Following Dufter and Schutze (2020), we evaluate the models at layers 0 and 8, which complement each other well with regard to the properties they capture, e.g., multilinguality and task-specificity (Choenni and Shutova, 2020; de Vries et al., 2020; Muller et al., 2021). We compute the sentence retrieval precision between language pairs and take the mean.27 The IsoScore is computed for the contextualized representations of all examples in the respective corpus at once. In contrast, CKA and RSA scores are also computed per language pair, and then averaged across those.28 For RSA, we use \(\mathrm{D}=1-\mathrm{Spearman's}\,\rho\) and \(\mathrm{S}=\mathrm{Spearman's}\,\rho\) as the dissimilarity and similarity metrics, respectively.29 Finally, we average results for all four metrics across TED 2020, WikiMatrix, and Tatoeba, the two layers, and the 5 best seeds for each privacy budget. For comparison, we also compute all metrics for the original pretrained and a randomly initialized XLM-R model. Footnote 25: We extract sentence pairs from Tatoeba using the tatoebatools library ([https://github.com/LBeaudoux/tatoebatools](https://github.com/LBeaudoux/tatoebatools)). Footnote 26: We exclude th from the WikiMatrix and Tatoeba evaluation sets for POS as there are insufficiently many sentence pairs available between th and the remaining languages. Footnote 27: Sentence retrieval is bidirectional (see Eq. 3). Given \(|L|\) languages, we therefore average over the full \(\mathbb{R}^{|L|\times|L|}\) language pair matrix, only excluding the main diagonal.
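For concreteness, the two main compression metrics can be computed as in the following sketch, assuming mean-pooled sentence representations are available as NumPy arrays (following Kornblith et al. (2019), features are centered before computing CKA):

```python
import numpy as np

def retrieval_precision(E_q, E_r):
    """Bidirectional sentence retrieval precision (Eq. 3) between mean-pooled
    representations E_q, E_r of shape (m, d) for languages q and r."""
    # Cosine similarity matrix R with R[i, j] = cos(E_q[i], E_r[j]).
    Q = E_q / np.linalg.norm(E_q, axis=1, keepdims=True)
    R = Q @ (E_r / np.linalg.norm(E_r, axis=1, keepdims=True)).T
    m = len(R)
    hits = (R.argmax(axis=1) == np.arange(m)).sum() \
         + (R.argmax(axis=0) == np.arange(m)).sum()
    return hits / (2 * m)

def linear_cka(X, Y):
    """Linear CKA (Eq. 4) between two sets of representations of shape (n, d)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    return num / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))
```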
Footnote 28: CKA and RSA are symmetrical. Given \(|L|\) languages, we thus only use the upper triangle of the \(\mathbb{R}^{|L|\times|L|}\) language pair matrix, still excluding the main diagonal. ## 4 Results Privacy, Compression, PerformanceWe now empirically investigate the relationship between differential privacy, multilingual compression, and cross-lingual transfer performance. We present aggregated results in Figure 1 and non-aggregated results in Appendix G. We observe that the zero-shot accuracy _decreases_ as we fine-tune with stronger privacy guarantees (Figures 1(a) and 1(f)), which is expected due to the _privacy-utility tradeoff_(Geng et al., 2020). In particular, the relatively small sizes of our training datasets make private LM fine-tuning more challenging (Kerrigan et al., 2020; Habernal, 2021; Senge et al., 2022; Yu et al., 2022) because, for a fixed number of update steps, the gradient noise added per update step grows as the size of the training dataset decreases (Tramer and Boneh, 2021; McMahan et al., 2018). Note that although the private models tend to underperform the non-private models by a large margin on the validation set (\(>\)30% for XNLI, as shown in Appendix Table 6), the performance gap on the test set is noticeably smaller, showing that training with differential privacy, like other noise injection schemes (Bishop, 1995), is also a form of regularization. Figures 1(b) and 1(g) display sentence retrieval precision when fine-tuning with different privacy budgets. The highest compression is achieved by the non-private models. The second-highest compression is achieved for \(\varepsilon=1\), our most private models. Both suggest non-linear privacy-compression interactions, with POS showing lowest compression for \(\varepsilon=30\) (or higher) and XNLI showing lowest compression for \(\varepsilon=8\). The results are very similar for IsoScore (Figures 1(d), 1(i)) and also similar, albeit less pronounced, for CKA (Figures 1(c), 1(h)).30 RSA, in contrast, exhibits very low scores for highly private models; see Appendix E. Footnote 30: The randomly initialized XLM-R model shows high CKA scores. This is explained by the high dimensionality (\(d=768\)) of the contextualized representations, considering that CKA saturates with increasing network width (Kornblith et al., 2019), and the high centroid similarity of random activations. These results show that we can achieve _strong compression and strong performance at the cost of privacy_ (\(\varepsilon=\infty\)), _strong compression and strong privacy at the cost of performance_ (\(\varepsilon=1\)), or _trade-off performance and privacy at the cost of compression_ (e.g., \(\varepsilon=8\)). It may seem counterintuitive that multilingual compression and cross-lingual transfer performance are not strictly correlated. However, in the fine-tuning setting, we can sacrifice task-specific knowledge in favor of multilingual compression, which leads to poor performance. Vice-versa, a model may exploit spurious correlations in the data to make correct predictions without actually relying on cross-lingual signal. An example of the former case is the pretrained (but not fine-tuned) XLM-R, which scores highly in multilingual compression (as displayed in Figure 1) but has poor cross-lingual transfer performance in the downstream tasks. We also find that in some fine-tuning settings, e.g., \(\varepsilon=\infty\), the multilingual compression surpasses that of the pre-trained XLM-R. While Liu et al.
(2021c) have previously shown that sentence retrieval performance typically drops (i.e., compression worsens) over the course of fine-tuning (which we confirm in Appendix Fig. 5), this finding clearly shows that there are exceptions. Future work may investigate this further. Lastly, retrieval and CKA scores are always highest between typologically similar languages and languages over-represented in pretraining (see Table 2 for a comparison across languages) _across all levels of privacy_, as shown by the non-aggregated results in the Appendix Figures 6-13. This finding thus extends conclusions from prior work (Pires et al., 2019; Wu and Dredze, 2019; K et al., 2020; Lauscher et al., 2020) to private models. ## 5 More multilingual, less interpretable? MetricTo answer this question, we introduce \(\mathrm{InfU}\) (**Influence Uniformity**), a measure of uniformity based on \(\mathrm{TracInCP}\) influence scores for each training example in the multiparallel dataset \(D=\{\ldots,i_{1},\ldots,i_{|L|},\ldots\}\), with \(i_{j}\) and \(i_{k}\) translation equivalents. We compute \(\mathrm{InfU}\) for \(\mathcal{M}\) and the translation equivalents \(i=\{i_{1},\ldots,i_{|L|}\}\) as follows: \[\mathrm{InfU}(i)=\frac{1}{|L|}\sum_{k=1}^{|L|}\text{H}(\sigma(\mathrm{TracInCP} (i_{k},i))) \tag{6}\] where \(\mathrm{H}\) is the entropy with \(\log_{|L|}\) and \(\sigma\) is a softmax used to obtain a probability distribution over influence scores. \(\mathrm{InfU}\) is maximized (\(\mathrm{InfU}=1\)) for uniform influence scores, fulfilling \(\mathrm{TracInCP}(i_{j},i_{k})=\mathrm{TracInCP}(i_{q},i_{r})\), \(\forall j,k,q,r\in L\). This means a perfectly multilingual model that yields equivalent representations for translation equivalent examples obtains \(\mathrm{InfU}=1\). In this scenario of maximum uniformity, our model is also the least instance-interpretable because training data influence is minimally sparse, so we cannot easily identify influential examples for a prediction. We use \(\mathrm{InfU}\) to study to what extent influence sparsity aligns with the metrics of privacy and cross-lingual performance. Figure 1: Task performance, sentence retrieval, CKA, IsoScore, and RSA results when fine-tuning with different privacy guarantees (\(\infty\)=non-private). We add the original pretrained XLM-R and XLM-R with randomly initialized weights for comparison. The results show how non-private fine-tuning balances multilingual compression and task performance. Strongly private fine-tuning (\(\varepsilon=1\)) is compatible with high compression (retrieval, CKA, IsoScore), but not with task performance. For medium levels of privacy (e.g., \(\varepsilon=8\)), we see the result of balancing privacy and task performance at the expense of multilingual compression. SetupWe use 1000 training examples and compute \(\mathrm{TracInCP}\) scores from the last 3 model checkpoints, taken every 100 steps, with their corresponding learning rates.31 Footnote 31: Since the learning rate changes every training step, we use the learning rate from the end of each checkpointing interval. Results and AnalysisWe plot the mean \(\mathrm{InfU}\) against the mean sentence retrieval precision for our fine-tuned models and compute Pearson's \(\mathrm{R}\) in Figures 2(a) and 2(c). For both tasks, there is a significant (\(p<0.05\)) strong positive correlation between the \(\mathrm{InfU}\) score and multilingual compression as determined through sentence retrieval.
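A minimal sketch of how Eq. 5 and Eq. 6 can be computed, assuming flattened per-checkpoint loss gradients have been precomputed (all names are illustrative):

```python
import numpy as np

def tracin_cp(grads_z, grads_zp, lrs):
    """TracInCP (Eq. 5): sum over checkpoints i of lr_i * <grad_i(z), grad_i(z')>.

    grads_z, grads_zp : lists of flattened loss gradients, one per checkpoint
    lrs               : learning rate used at each checkpoint
    """
    return sum(lr * g1 @ g2 for lr, g1, g2 in zip(lrs, grads_z, grads_zp))

def influence_uniformity(influence):
    """InfU (Eq. 6) for one tuple of |L| translation equivalents.

    influence[k, j] = TracInCP(i_k, i_j); each row is softmax-normalized and
    scored by entropy with base |L|, so a uniform row contributes 1.
    """
    L = influence.shape[0]
    probs = np.exp(influence - influence.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    entropy = -(probs * (np.log(probs) / np.log(L))).sum(axis=1)
    return entropy.mean()
```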
This correlation supports the idea that _multilingual compression is at odds with training data influence_. See also how highly private and low-performing models score highly in \(\mathrm{InfU}\) (Figures 2(b), 2(d)); and non-private and high-performing models do the same. For medium levels of privacy we, however, see a trade-off characterized by lower \(\mathrm{InfU}\), i.e., better instance-interpretability, and medium performance. Strong _privacy_ guarantees, sparse training data influence estimates, and performance are incompatible, because the high-performing models are strictly low in privacy and training data influence sparsity, and the models high in privacy are strictly low in performance and training data influence sparsity. ## 6 Related Work While privacy, fairness, and interpretability _individually_ have enjoyed ample attention from the research community in recent years (Liu et al., 2021; Mehrabi et al., 2021; Sogaard, 2021), the interactions between these objectives have not been explored much (Ruder et al., 2022). Some prior work has focused on the interactions between group fairness and differential privacy, suggesting that the two objectives are at odds, although this relationship also depends on the selected notion of fairness (Bagdasaryan et al., 2019; Cummings et al., 2019; Chang and Shokri, 2021; Hansen et al., 2022). Somewhat in contrast to this work, we show that linguistic fairness (group fairness over linguistic communities) and differential privacy may align for multilingual language models. Furthermore, Naidu et al. (2021) and Shokri et al. (2021) have studied the interaction between privacy and feature attribution methods for model explainability. While the former show that privacy and feature attribution methods can align, the latter find that model explanations are at risk of membership inference attacks. Closest to our work is contemporaneous work by Strobel and Shokri (2022) who discuss the interactions of data privacy with fairness, explainability, and robustness. Our work differs from theirs in that we are particularly concerned with multilingual language models and we consider instance-based interpretability methods while they consider feature attribution methods. Strobel and Shokri (2022) also call for more research at the intersection of different objectives rather than working on one at a time. ## 7 Conclusion We presented a preliminary investigation of how multilingual compression, differential privacy, training data influence, and linguistic fairness interact in multilingual models. We found that privacy and influence are incompatible, while privacy and linguistic fairness, often said to be at odds, are theoretically compatible through multilingual compression. We also explored these interactions empirically. Our results support the idea that high multilingual compression can be achieved either while optimizing for performance or while optimizing for privacy, but that by trading off privacy and performance, we compromise compression. Finding practical trade-offs between _all_ these dimensions remains an open challenge. Finally, we introduced a new diagnostic metric, influence uniformity, which we used to validate that privacy and training data influence sparsity are incompatible, and that the interactions between privacy, training data influence sparsity, and multilingual compression are, therefore, also non-linear.

Figure 2: Linear fit and Pearson correlation between the influence uniformity \(\mathrm{InfU}\) and sentence retrieval precision (2(a), 2(c)) and \(\mathrm{InfU}\) versus downstream performance for different levels of privacy (2(b), 2(d)). We see significant positive correlations between retrieval precision and \(\mathrm{InfU}\), suggesting a negative correlation between multilingual compression and training data influence sparsity. For task performance, we see the trade-off between training data influence sparsity (\(\mathrm{InfU}\)) and privacy, which aligns with our theoretical expectations (§2).

## Ethical Aspects and Broader Impact It is crucial that NLP goes beyond performance and studies the interaction of objectives such as privacy, interpretability, and fairness, also in multilingual NLP (Ruder et al., 2022). Our work aims to provide a starting point for further research in this area. Our empirical investigation, including the models we train, fully relies on publicly available models and data. Moreover, we do not create any new datasets. Therefore, we foresee no misuse of the results of our work. ## Acknowledgements We thank the anonymous reviewers and members of the CoAStaL group for their helpful feedback and suggestions. Phillip Rust is funded by the Novo Nordisk Foundation (grant NNF 20SA0066568).
2302.12777
On the Misspecification of Linear Assumptions in Synthetic Control
The synthetic control (SC) method is a popular approach for estimating treatment effects from observational panel data. It rests on a crucial assumption that we can write the treated unit as a linear combination of the untreated units. This linearity assumption, however, can be unlikely to hold in practice and, when violated, the resulting SC estimates are incorrect. In this paper we examine two questions: (1) How large can the misspecification error be? (2) How can we limit it? First, we provide theoretical bounds to quantify the misspecification error. The bounds are comforting: small misspecifications induce small errors. With these bounds in hand, we then develop new SC estimators that are specially designed to minimize misspecification error. The estimators are based on additional data about each unit, which is used to produce the SC weights. (For example, if the units are countries then the additional data might be demographic information about each.) We study our estimators on synthetic data; we find they produce more accurate causal estimates than standard synthetic controls. We then re-analyze the California tobacco-program data of the original SC paper, now including additional data from the US census about per-state demographics. Our estimators show that the observations in the pre-treatment period lie within the bounds of misspecification error, and that the observations post-treatment lie outside of those bounds. This is evidence that our SC methods have uncovered a true effect.
Achille Nazaret, Claudia Shi, David M. Blei
2023-02-24T17:48:48Z
http://arxiv.org/abs/2302.12777v1
# On the Misspecification of Linear Assumptions in Synthetic Control ###### Abstract The synthetic control (SC) method is a popular approach for estimating treatment effects from observational panel data. It rests on a crucial assumption that we can write the treated unit as a linear combination of the untreated units. This linearity assumption, however, can be unlikely to hold in practice and, when violated, the resulting SC estimates are incorrect. In this paper we examine two questions: (1) How large can the misspecification error be? (2) How can we limit it? First, we provide theoretical bounds to quantify the misspecification error. The bounds are comforting: small misspecifications induce small errors. With these bounds in hand, we then develop new SC estimators that are specially designed to minimize misspecification error. The estimators are based on additional data about each unit, which is used to produce the SC weights. (For example, if the units are countries then the additional data might be demographic information about each.) We study our estimators on synthetic data; we find they produce more accurate causal estimates than standard synthetic controls. We then re-analyze the California tobacco-program data of the original SC paper, now including additional data from the US census about per-state demographics. Our estimators show that the observations in the pre-treatment period lie within the bounds of misspecification error, and that the observations post-treatment lie outside of those bounds. This is evidence that our SC methods have uncovered a true effect. ## 1 Introduction The synthetic control (SC) method is a popular approach for analyzing observational panel data to estimate causal effects [1]. SC has been widely used in science [11] and social science [12], as well as for evaluating public policies [1, 13, 14]. The typical SC setup involves measurements of an outcome variable over time. One unit, called the _target_, received an intervention at a certain time. The other units, called _donors_, never received an intervention. The goal of SC is to estimate the target's counterfactual outcomes. What would have happened had it not received the intervention? Example: The panel data in Fig. 1 (left) contains cigarette sales across states and time. In 1988 California implemented a program that increased the tobacco tax by 25 cents. After 1988, how much would Californians have smoked had the program not been implemented? Here, California is the target; the other states are the donors. The idea behind SC is to approximate the target's control outcomes--the smoking rate in California without its policy--with a weighted combination of the donors' control outcomes. In the example, SC uses data from the pre-policy periods to fit California's pre-policy smoking rates as a weighted combination of the other states' smoking rates. It then uses its fitted weights to estimate the smoking rate in California after 1988, had the policy not been introduced. These estimates, along with California's post-policy rates, help assess the causal effect of the policy. What justifies this procedure? In its original formulation, Abadie et al. [1] shows that SC is justified if the control outcomes follow a linear factor model, where a per-period factor linearly combines with a per-unit factor. Following this work, Shi et al.
[15] shows that the linear factor model itself can be justified through assumptions about the individuals within each unit (e.g., people within each state) and invariances around the causal structure of the individual-level outcomes (e.g., whether they smoke). But whether at the aggregate or individual level, these assumptions point to the same requirement: that the target needs to be expressed as a linear combination of the donors. What if this requirement is not satisfied? What if California is not a linear combination of the other states? This paper studies the practical situation where the synthetic control is _misspecified_. We study how to quantify this misspecification error and how to minimize it. In detail, we derive two bounds on the SC error, the M bound and the James bound. Both bounds build on the causal framework of Shi et al. [11]. It assumes a set of _invariant causes_, variables that govern the individual-level outcomes in the same way across units, and where the difference between the units' outcomes involves different distributions of those causes. For example, whether someone smokes might be caused by their age and education level, and the difference between California's and Nevada's smoking rates lies in their different population distributions of those demographic variables. Our theory shows how the similarity between the true target distribution of the causes and its synthetic distribution, induced by the SC weights, helps bound the error of the corresponding SC estimates. We then consider a situation where we additionally observe external data about the invariant causes, such as demographic information about each state. We show how to use such data to estimate the misspecification interval for a fixed set of SC weights, and we develop two new algorithms for estimating SC weights that explicitly minimize the width of this interval. (One algorithm assumes we observe all invariant causes; the other does not make that assumption, but provides wider misspecification intervals.) Thus this paper provides a new form of SC analysis, one where we analyze panel data and demographic data together to estimate the target counterfactual and assess its robustness to misspecification. Figure 2 illustrates this analysis on the California tobacco data, now also using additional data from the U.S. census about per-state demographics. Our estimators show that the observed outcome in the pre-policy period lies within the bounds of misspecification error, and that the observed outcome post-policy lies outside of those bounds. These results suggest that California's 1988 anti-tobacco program had a true effect, despite possible misspecification of the synthetic control. Related Work.This paper contributes to the literature on synthetic controls [1, 2]. The M-bound and James-bound estimators of Sections 3 and 4 contribute to research on novel SC estimators [1, 2, 3, 4]. In the panel data setting above, \(\widetilde{Y}_{jt}\) denotes the potential outcome of unit \(j\) at time \(t\) under the intervention at \(T_{0}\), while \(Y_{jt}\) is the potential outcome in a world with no intervention. For \(j=0\) and \(t\geq T_{0}\), \(y_{jt}=\widetilde{Y}_{jt}\), otherwise, \(y_{jt}=Y_{jt}\). Our causal question is, what would the target counterfactual be, had the intervention not occurred? We would like to estimate \(Y_{0t}\) for \(t\geq T_{0}\). ### Synthetic Controls and their Assumptions Synthetic control methods estimate the counterfactual outcomes of the target \(Y_{0t},t\geq T_{0}\) with a weighted combination of the outcomes of the donors: \(Y_{0t}=\sum_{j}w_{j}y_{jt}\).
The SC weights are fitted from the pre-intervention outcomes, \[w=\operatorname*{arg\,min}_{w\in\Delta^{J}}\sum_{t=0}^{T_{0}-1}\Bigl{(}y_{0t} -\sum_{j}w_{j}y_{jt}\Bigr{)}^{2}. \tag{1}\] The validity of SC relies on two conditions: (1) During the pre-intervention period, the target's outcomes can be written as a weighted combination of the control units' outcomes. (2) The weighted combination from the pre-intervention periods generalizes to the post-intervention periods. As a first step to obtain these two conditions, Abadie et al. [1] and most other works assume that the outcomes under no intervention are generated by a linear factor model [1, 2, 1, 10]. We call it assumption A1. **A1. (Linear Factor Model)** Under no intervention, the outcomes are generated from a linear factor model, \[Y_{jt}=\mu_{j}^{\top}\lambda_{t}+\epsilon_{jt}, \tag{2}\] where \(\mu_{j}\) is a unit-specific latent factor, \(\lambda_{t}\) is a time-dependent factor, and \(\epsilon_{jt}\) is independent random noise. Then, Abadie et al. [1] assumes that the target's outcomes can be written as a convex combination of the donors' outcomes. This implies that the target's latent factor is a convex combination of the donors' latent factors, which we call A2. **A2. (Convex Combination)** The target unit's latent factor is a convex combination of the donors' latent factors, \[\exists w\in\Delta^{J},\qquad\mu_{0}=\sum_{j}w_{j}\mu_{j},\] where \(\Delta^{J}\) is the simplex over \(J\) coordinates. **Remark 1**.: _To be precise, Abadie et al. [1] assumes that \(A:=\sum_{t<T_{0}}\lambda_{t}^{\top}\lambda_{t}\) is nonsingular. Consequently, once they assume \(Y_{0t}=\sum_{j}w_{j}Y_{jt}\) for \(t<T_{0}\), then the invertibility of \(A\) implies \(\mu_{0}=\sum_{j}w_{j}\mu_{j}\) (intuitively, we "invert" the factor model). The target is a convex combination of donors._ With assumptions A1 and A2 in hand, estimator (1) will identify the weights of A2. These weights will then estimate the untreated potential outcomes using the factor model A1. With assumptions A1 and A2, synthetic control is possible. ### A Fine-grained Model for SC When does the linear factor model assumption A1 hold in practice? Shi et al. [19] explores a justification. First, the authors notice that SC often considers large units composed of multiple individuals (states, countries) and aggregated outcomes that are averages of individual-level outcomes (per-capita cigarette consumption). Therefore, they propose a "fine-grained" model of synthetic controls, which introduces individual-level variables. The variable \(Y_{ijt}\) denotes individual \(i\)'s outcome in unit \(j\) at time \(t\) (their cigarette consumption at time \(t\)). The outcome of each unit \(Y_{jt}\) is the average of the individual outcomes in the unit.

Figure 2: Visualization of the observed California outcomes (solid lines), the SC estimates (dotted lines), and the misspecification intervals (shaded area), as calculated by the M-bound estimator (left) or the James-bound estimator (right). California's outcomes lie within the error bounds prior to the intervention but escape the error bounds after the intervention. This suggests that the tobacco program had a causal effect despite possible misspecification of SC.

Second, Shi et al. [19] posits the idea of _invariant causes_, which we denote as \(x_{ijt}\). The invariant causes \(x_{ijt}\) are individual-level variables that follow two invariance assumptions. (1) When conditioned
on the invariant causes, the individual-level outcomes do not depend on which unit the individual is from. This way, \(x\mapsto\mathbb{E}\left[Y_{ijt}|x_{ijt}=x\right]\) is the same function for all \(i\) and \(j\); we write it \(\mathbb{E}_{t}\left[Y|x\right]\). (2) The distribution of the invariant causes in each unit can change from unit to unit but remains the same across time. The distribution of invariant causes over the individuals in unit \(j\) is denoted \(x\mapsto p_{j}(x)\), with no dependence in \(t\). Finally, with these individual-level variables, Shi et al. [14] shows that the unit-level outcomes are \[\mathbb{E}\left[Y_{jt}\right]=\int_{x}\mathbb{E}_{t}\left[Y\mid x\right]p_{j} (x)\;\mathrm{d}x. \tag{3}\] Further, if the distributions of invariant causes \(p_{j}\) are _discrete_ and _finite_ then Eq. 3 becomes a finite sum, \[\mathbb{E}\left[Y_{jt}\right]=\sum_{x_{k}}\underbrace{\mathbb{E}_{t}\left[Y \mid x_{k}\right]}_{(\lambda_{t})_{k}}\underbrace{p_{j}(x_{k})}_{(\mu_{j})_{k }}=\lambda_{t}^{\top}\mu_{j}. \tag{4}\] Thus Equation 4 justifies the linear factor model A1 and provides context about what the latent factors might represent. Regarding A2, Shi et al. [14] still assumes it; it rewrites as \[\exists w\in\Delta^{J},\qquad p_{0}=\sum_{j}w_{j}p_{j},\] where the equality is in the space of probability distributions. ### Misspecification in SC Shi et al. [14] explains where the linear factor model A1 can come from. However, even with the fine-grained approach, the factor model arises from the strong assumption of _discrete_ and _finite_ causes, which might not hold in practice. In other words, the factor model A1 is not guaranteed. The convex combination A2 is also a key assumption that is unlikely to hold in practice. With a limited number of donors, the \(p_{j}\) (or \(\mu_{j}\)) can be linearly independent. And with continuous invariant causes, the \(p_{j}\) are densities, making it impossible to match \(p_{0}\) even with infinitely many donors. In this work, we relax A1 and A2. To relax A1, we use the fine-grained model of Shi et al. [14] but do not assume that causes are _discrete_ and _finite_. How to relax A2 is the focus of this paper. When A2 is violated, the target is not a convex combination of the donors. The synthetic control is _misspecified_, which leads to errors in the estimation of causal effects. Formally, we define the misspecification error as the absolute difference between the expected counterfactual outcome and the synthetic outcome (after intervention): \[\left|\mathbb{E}\left[Y_{0t}\right]-\sum_{j=1}^{J}w_{j}\mathbb{E}\left[Y_{jt} \right]\right|\text{ for }t\geq T_{0}.\] We turn to the problem of how to bound and minimize this error. We will show how to leverage external data about the invariant causes - such as demographic data from the census - to estimate a _misspecification interval_ for the synthetic outcomes. We then derive new ways to fit the SC weights that minimize the width of that interval. The result is a new type of estimate of the SC counterfactual, and an assessment of its sensitivity to the linearity misspecification of A2. ## 3 The M Bound and its Estimator In this section, we derive an exact bound that quantifies errors induced by violation of A2. Using this bound, we develop an estimator minimizing A2 misspecification errors. ### The M Bound Let \(\hat{p}_{0}\) be the synthetic distribution, defined as \(\hat{p}_{0}(w)=\sum_{j}w_{j}p_{j}\).
We examine the difference between the distribution of the target unit \(p_{0}\) and the synthetic distribution \(\hat{p}_{0}\). If \(p_{0}\neq\hat{p}_{0}\) but \(\hat{p}_{0}\) remains "close" to \(p_{0}\), we expect the synthetic control estimate to remain approximately correct, \[\mathbb{E}\left[Y_{0t}\right]\approx\sum_{j=1}^{J}w_{j}\mathbb{E}\left[Y_{jt} \right].\] We formalize this intuition by bounding the errors resulting from the misspecification of A2. **Bound 1** (M bound).: _For any \(t\), assume that \(x\mapsto\mathbb{E}_{t}\left[Y|x\right]\) is \(\ell\)-Lipschitz1. Then for any weights in the simplex \(w\), we have the Misspecification error bound (M-bound):_ Footnote 1: See Appendix A for details about Lipschitz functions. \[\left|\mathbb{E}\left[Y_{0t}\right]-\sum_{j=1}^{J}w_{j}\mathbb{E}\left[Y_{jt} \right]\right|\leq\ell\cdot W_{1}\left(p_{0},\hat{p}_{0}\right), \tag{5}\] _where \(\hat{p}_{0}=\sum_{j}w_{j}p_{j}\) and \(W_{1}\) is a \(\ell_{1}\)-Wasserstein distance._ The proof is in Appendix A. The Wasserstein distance \(W_{1}\) is a distance between probability distributions [20]. It quantifies the differences between the true population distribution \(p_{0}\) and the synthetic population distribution \(\hat{p}_{0}\). For any set of weights \(w\), the M bound (Bound 1) circumscribes the error of the SC estimate by a function of the weights, the population distributions of each unit (the \(p_{j}\)), and the sensitivity of the outcome variables to the variation of the causes (the Lipschitz constant \(\ell\)). If \(p_{0}=\hat{p}_{0}\), then the Wasserstein distance \(W_{1}(p_{0},\hat{p}_{0})\) between the true and the synthetic distribution is zero. The M bound recovers that the SC estimate is correct. When \(p_{0}\neq\hat{p}_{0}\), the M bound shows that the estimation error is proportional to the distance \(W_{1}(p_{0},\hat{p}_{0})\). The intuition behind Eq. 5 is that when a misspecification occurs, a portion of the population \(p_{0}\) is approximated with an incorrect portion of the synthetic population \(\hat{p}_{0}\). It is unpredictable how these populations will behave. In the worst case, their outcomes can differ by at most the distance between them (captured by \(W_{1}\)) and the maximum possible variation of the conditional outcome (captured by \(\ell\)). Hence, the M bound proves (theoretically) that a small misspecification induces a small estimation error. ### The M-bound Estimator We established the M bound, which quantifies the misspecification error for any set of weights \(w\). To find weights with minimal misspecification error, we develop the M-bound estimator. See Algorithm 1. ``` Input: Distributions \(p_{0},...,p_{J}\); learning rate \(\alpha\); number of epochs \(E\). Output:\((w_{j})\) minimizing the M bound. \((w_{1},...,w_{J})\leftarrow(\frac{1}{J},...,\frac{1}{J})\) for\(e=1\)to\(E\)do \(\hat{p}_{0}\leftarrow\sum w_{j}p_{j}\) \(\text{grad}\leftarrow\nabla_{w}W_{1}(p_{0},\hat{p}_{0})\) \(w\gets w-\alpha\cdot\text{grad}\) \(w\leftarrow\text{project\_simplex}(w)\) endfor return\(w\) ``` **Algorithm 1**Minimization of the M bound The M-bound estimator takes population distribution data \(p_{j}\) for each unit as input and returns a set of weights that minimizes the M bound. To obtain the weights, it uses projected gradient descent with the following objective, \[(w_{1},...,w_{J})\mapsto W_{1}\Big{(}p_{0},\sum_{j}w_{j}p_{j}\Big{)}.\] Notice it computes the SC weights using the population distribution of each unit. It does not use the outcomes data. 
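A minimal sketch of Algorithm 1 consistent with the implementation details given later in §5 (`ot.emd2` from the Python Optimal Transport library returns the exact \(W_{1}\) cost and is differentiable with respect to the input weights under the PyTorch backend); the shared set of atoms and the sort-based simplex projection are our illustrative assumptions:

```python
import torch
import ot  # Python Optimal Transport (POT)

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (sort-based)."""
    u, _ = torch.sort(v, descending=True)
    css = torch.cumsum(u, dim=0)
    k = torch.arange(1, len(v) + 1, dtype=v.dtype)
    rho = int((u - (css - 1.0) / k > 0).nonzero().max())
    tau = (css[rho] - 1.0) / (rho + 1)
    return torch.clamp(v - tau, min=0.0)

def m_bound_weights(atoms, p0, donors, lr=5e-6, epochs=200_000):
    """Projected gradient descent on w -> W1(p0, sum_j w_j p_j) (Algorithm 1).

    atoms  : (n, d) tensor of support points shared by all distributions
    p0     : (n,) probability vector of the target over the atoms
    donors : (J, n) probability vectors of the J donors over the atoms
    """
    C = ot.dist(atoms, atoms, metric="euclidean")  # ground cost between atoms
    w = torch.full((donors.shape[0],), 1.0 / donors.shape[0], requires_grad=True)
    for _ in range(epochs):
        p0_hat = w @ donors                  # synthetic distribution
        loss = ot.emd2(p0, p0_hat, C)        # exact W1, differentiable in w
        grad, = torch.autograd.grad(loss, w)
        with torch.no_grad():
            w -= lr * grad                   # gradient step
            w.copy_(project_simplex(w))      # project back onto the simplex
    return w.detach()
```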
After obtaining a set of weights from Algorithm 1, we can use Eq. 5 with an estimated constant \(\ell\) to create a misspecification interval around the synthetic control estimate, \[\mathbb{E}\left[Y_{0t}\right]\in[\hat{y}_{0t}-M,\hat{y}_{0t}+M]\qquad\forall t, \tag{6}\] where \(\hat{y}_{0t}:=\sum_{j=1}^{J}w_{j}y_{jt},\ M:=\ell\cdot W_{1}\left(p_{0},\hat{p} _{0}\right)\). Thus, the M bound, with its associated estimator and misspecification interval, can be used to discover causal effects. In Section 5.2, we revisit the California tobacco example. We use demographic data of each US state to form the invariant causes distributions \(p_{j}\) and fit the M-bound estimator with these \(p_{j}\). Like standard SC, the weights returned by the estimator are used to form the synthetic outcomes. In addition, the M bound provides misspecification intervals accounting for the A2 misspecification error. Fig. 2 illustrates the synthetic control estimate with its misspecification interval generated by the M-bound estimator. We see that California's observed outcomes lie within the interval before intervention and escape it after the intervention. This suggests that a causal effect is present, even in case of misspecification. ## 4 The James Bound and its Estimator In Section 3, we derived a theoretical bound on misspecification error and showed how to use the M-bound estimator to detect a causal effect. In theory, the true outcome is guaranteed to lie within the M bound. In practice, the misspecification interval produced by the M bound is only valid if we observe the distribution of all invariant causes \(p_{j}\). Observing all invariant causes is a strong assumption that may not hold. Here, we consider the setting where the invariant causes are only partially observed. We first derive a new error bound, the James bound, that accounts for misspecification on both the observed and unobserved causes. The James bound leverages the pre-intervention outcome data to estimate the influence of the unobserved causes on the outcome variable. To find the weights that minimize the James bound, we develop the James-bound estimator. Finally, we discuss when it is appropriate to use the M bound versus the James bound. ### The James Bound So far, we have used \(x\) to denote all the invariant causes. With a redefinition of notation, we now refer to the _observed causes_ as \(x\) and the _unobserved causes_ as \(z\), such that Eq. 3 becomes \(\mathbb{E}\left[Y_{jt}\right]=\int_{(x,z)}p_{j}(x,z)\mathbb{E}_{t}\left[Y|x,z \right]\text{d}x\text{d}z\). In general, we cannot bound the effect of unobserved variables without further assumptions. Here, we assume that the unobserved causes and observed causes are independent and that their respective effect on the outcome can be decomposed into two distinct terms; this is A3. We note that A1, which we relaxed, was more restrictive than A3. **A3. Independence of Observed and Unobserved Causes.** For each unit \(j\), the variable \(x\) and \(z\) are independent, \[p_{j}(x,z)=p_{j}(x)p_{j}(z),\] and for each time \(t\), there exist functions \(g\) and \(h\) such that: \[\mathbb{E}_{t}\left[Y|x,z\right]=g_{t}(x)+h_{t}(z).\] We note that the distributions of the observed causes \(x\mapsto p_{j}(x)\) and the unobserved causes \(z\mapsto p_{j}(z)\) remain arbitrary, and so are \(g_{t}\) and \(h_{t}\). With A3, we have "just another misspecification error" bound, the James bound.
**Bound 2** (James bound).: _For \(t\geq T_{0}\), assume that \(x\mapsto\mathbb{E}_{t}\left[Y|x\right]\) is \(\ell\)-Lipschitz. Then for any weights \(w\in\Delta^{J}\),_ \[\left|\mathbb{E}\left[Y_{0t}\right]-\sum_{j=1}^{J}w_{j}\mathbb{E}\left[Y_{jt} \right]\right|\!\leq\!\ell\cdot W_{1}(p_{0}(x),\hat{p}_{0}(x)) \tag{7}\] \[+\max\nolimits_{u<T_{0}}\left|\mathbb{E}\left[Y_{0u}\right]-\sum_{j=1}^{J}w_{ j}\mathbb{E}\left[Y_{ju}\right]\right| \tag{8}\] \[+\inf_{\alpha\in\Delta^{T_{0}}}\left|\int_{z}\left(p_{0}(z)-\hat{p}_{0}(z) \right)\left(\mathbb{E}_{t}\left[Y|z\right]-\sum_{u<T_{0}}\alpha_{u}\,\mathbb{E }_{u}\left[Y|z\right]\right)\mathrm{d}z\right|. \tag{9}\] Unlike the M bound, the James bound and its estimator assume that some invariant causes might be unobserved. If the post-intervention target outcomes fall outside the misspecification interval, we have discovered a causal effect robust to A2 misspecification (see Fig. 2). If the post-intervention misspecification interval is too wide to detect a causal effect, then it could be that there is no causal effect. But it could also be that there is too much misspecification to use SC or that the James bound is too loose. We cannot conclude in favor of a causal effect in the first two cases. To check if the James bound is too loose and find a tighter bound, we can use the M bound. The M bound is guaranteed valid if all invariant causes are observed. Since the M estimator does not use outcome data, the target's pre-intervention outcomes can be used as a validation set. If the observed pre-intervention outcomes fall outside the predicted misspecification interval, not all invariant causes were observed, and we cannot apply the M bound. Otherwise, we may use the M bound. ## 5 Empirical Studies We examine the M-bound and James-bound estimators using synthetic and tobacco consumption data. With synthetic data, we demonstrate that the M-bound and James-bound estimators produce better estimates in case of misspecification, and show that their misspecification intervals contain the counterfactual outcomes correctly. Using the tobacco consumption case study, we demonstrate how to collect external data and how to choose between M-bound and James-bound estimators. We find that the post-intervention California outcomes escape the misspecification error bound, suggesting that there is an actual causal effect. **Implementation Details.** To implement the algorithms we need to manipulate probability distributions and calculate Wasserstein distances with their gradients. Our implementation expects the input \(p_{j}\) to be non-parametric distributions represented by a collection of atoms and associated probabilities: \(p_{j}=\sum_{x\in X}\delta_{x}\cdot p_{j}(x)\), where \(X\) is the set of atoms and \(\delta_{x}\) is a point mass at \(x\).
If \(p_{j}\) is discrete, such as a histogram, then the atoms are the possible values of the causes, and \(p_{j}(x)\) their associated probabilities. If \(p_{j}\) is continuous, then the atoms are samples of \(p_{j}\), and \(p_{j}(x)\) is the normalized density at \(x\). For all experiments, we compute the gradients of \((w_{1},...,w_{J})\mapsto W_{1}(p_{0},\sum_{j}w_{j}p_{j})\), using the Python Optimal Transport library [10] coupled with PyTorch [12]. We use gradient descent with a learning rate \(\alpha=5\cdot 10^{-6}\) and \(200,000\) epochs. ### Experiments with Synthetic Data **Data Description.** We generate synthetic data by defining the conditional distribution \(\mathbb{E}_{t}\left[Y|x\right]=f(x,t)\) and the causes distributions \(p_{j}(x)\). We create six different units (called g20, g45, g50, g60, g65, g70), and consider that a single cause \(x\in\mathbb{R}\) impacts the outcome \(Y\). The units can be thought of as different groups of people (e.g. cities), and the cause \(x\) as the age of each individual in these groups. The six units have different distributions of age (group gX has an average age of X). The target group is g45, the panel duration is \(T=50\), and the intervention time is \(T_{0}=15\). The closed form equations of \((t,x)\mapsto\mathbb{E}_{t}\left[Y|x\right]\) and \((j,x)\mapsto p_{j}(x)\) are in Appendix B while Figure 3 shows the evolution of \(x\mapsto\mathbb{E}_{t}\left[Y|x\right]\) over time \(t\) as well as the distributions \(x\mapsto p_{j}(x)\) for each unit \(j\). The expected outcome \(\mathbb{E}_{t}\left[Y|x\right]\) varies over time, in different ways for each \(x\).

Figure 3: Visualization of the synthetic data generating process. (top) Each colored line represents the expected conditional outcome \(\mathbb{E}_{t}\left[Y|x\right]\) for a different time \(t\) (one line per time), as a function of the cause \(x\) (on the x-axis). As time progresses (from darker to lighter), the expected conditional outcomes increase for all values of \(x\). For different \(x\), the rate of increase over time is different. (bottom) Densities of the causes distributions \(x\mapsto p_{j}(x)\) for each unit \(j\). Different lines represent different units. The target unit is g45, which overlaps mostly with unit g50.

We input the distributions \(p_{j}\) to Algorithm 1 and obtain the weights that minimize the M bound. As a comparison, we calculate the weights obtained from the standard SC in Eq. 1. We report the weights and the induced synthetic outcomes in Fig. 4. Furthermore, we compute \(\ell=4.0\) from \(x\mapsto\mathbb{E}_{t}\left[Y\mid x\right]\) (valid for all \(t\)). This way, we obtain the exact value of the M-bound and we can form the misspecification interval of Eq. 6, shaded on Fig. 4. **Analysis.** As shown in Fig. 4, the standard SC places a large weight on donor g20, which is a unit whose individuals are very different from g45 but with similar pre-treatment outcomes. When time increases, the individuals in g20 and g45 evolve differently and the synthetic outcome of the standard SC weights deviates away from the true outcome. In contrast, the M-bound estimator places most of the weight mass on the donor g50, which contains individuals with similar \(x\) as the target g45. By doing so, the synthetic outcomes might not exactly match the g45 outcomes, but they generalize better over time. We also verify that the true outcome is always contained in the misspecification interval (Eq. 6).
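Given fitted weights, forming the misspecification interval of Eq. 6 is direct; a minimal sketch, with names illustrative (`C` is the ground cost between the shared atoms and `ell` the estimated Lipschitz constant):

```python
import ot  # Python Optimal Transport (POT)

def misspecification_interval(w, y_donors, p0, donors, C, ell):
    """Interval of Eq. 6 around the synthetic outcomes, for fitted weights w.

    w        : (J,) simplex weights           y_donors : (J, T) donor outcomes
    p0       : (n,) target distribution       donors   : (J, n) donor distributions
    C        : (n, n) ground cost over atoms  ell      : Lipschitz constant estimate
    """
    y_hat = w @ y_donors                     # synthetic outcome for every t
    M = ell * ot.emd2(p0, w @ donors, C)     # half-width: ell * W1(p0, p0_hat)
    return y_hat - M, y_hat + M
```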
We repeat the analysis with the James bound and obtain the same conclusions, reported in Appendix B. With external data, we estimated the misspecification error and limited it using the M-bound and James-bound estimators. Without external data, standard SC was incorrect.

Figure 4: Comparison of the M-bound estimator and the standard SC estimator on synthetic data. (left) Weights returned by each estimator. The M-bound estimator selects donors (g20 and g50) that are most similar to the target (g45). (right) Synthetic outcomes of each estimator, compared to the true outcome. Under misspecification, the M-bound estimator provides more accurate estimates than the standard SC, despite a poorer pre-intervention fit.

### A Case Study on Real Data We revisit the tobacco study from Abadie et al. [1] to illustrate how to collect external data, apply different estimators, and calculate the misspecification intervals. **Prop 99.** A tobacco control program was passed in California in 1988, which increased tobacco taxes by 25 cents. The tax revenue was used to fund anti-tobacco campaigns. Our goal is to estimate the causal effect of the tobacco control program on California's tobacco consumption. The tobacco panel dataset (Fig. 1) is from the Centers for Disease Control and Prevention [15], which provides the per capita tobacco consumption for 50 states from 1970 to 2019. The intervention of interest is the tobacco program, Prop 99. The observed outcomes for California after 1988 are under intervention. All the other observed outcomes in the dataset are assumed to be under no intervention. **External Data Collection.** First, we identify the potential causes of smoking. According to Turner et al. [13], smoking is heavily influenced by societal and cultural factors. While these factors are difficult to measure directly, they are often correlated with demographics. Several studies have found that cigarette consumption varies significantly by age, gender, race, and ethnicity [14, 15]. As a result, we use _age_, _sex_, and _ethnicity/race_ as proxies for the causes of smoking. We use the American Community Survey (ACS) to formulate a distribution of causes for each unit. The ACS is a demographics survey program conducted continuously by the U.S. Census Bureau [15]. It reports population demographics at different geographical scales, from city boroughs to states. We accessed the ACS data with the Census Reporter API [15]. For each state, the ACS provides the joint distribution of the variables _age, race, sex_. Each variable is discretized into multiple bins: _age_ into 14 bins (e.g. 15 to 17, 20 to 24 years old), _race_ takes 8 values (Asian, Black, Native American, Pacific Islander, White non-Hispanic, White Hispanic, Mix, and Other), and _sex_ takes 2 values (Male, Female). The joints \(x\mapsto p_{j}(x)\) over these variables are defined for each state on these \(14\times 8\times 2=224\) demographics combinations (atoms). We estimate \(\ell\) using additional survey data from the Tobacco Use Supplement to the Current Population Survey. This independent study collects individual demographic information along with tobacco consumption. We form the expected tobacco consumption given each invariant cause and compute the induced \(\ell\). More details about the computation of the Lipschitz constant can be found in Appendix B. **The M-bound Estimator.** The M-bound estimator uses our newly formed distributions \(p_{0},...,p_{J}\) to compute a set of SC weights.
We report the weights in Appendix B.4 and the SC outcomes with the misspecification interval in Fig. 2. Among the set of 50 potential donors, five obtained non-zero weights: New Mexico, Nevada, D.C., Hawaii and Texas. As expected, the M-bound estimator selected states that are similar to California. New Mexico and Nevada are geographically close and have similar demographics. Both D.C. and California have a relatively young active population. And, California is the number one destination for Hawaiians moving to the US mainland (from US census). In Fig. 2, the solid and dotted lines denote the observed and synthetic California outcomes. The shaded areas represent the misspecification intervals. California pre-intervention outcomes fall within the estimated M-bound interval, but synthetic California is not a perfect fit; there is misspecification. In spite of the misspecification, Fig. 2 shows that the post-intervention outcomes are outside of the bounds, suggesting a causal effect. **The James-bound Estimator.** As discussed in Section 4.3, the M-bound misspecification interval is only valid if we observe all the invariant causes. According to Fig. 2, California's pre-intervention outcomes fall within the M-bound intervals. We find, however, that when some other states are considered as the target unit, their observed outcomes before the intervention are not always within the interval. We perform _placebo tests_[1] where each donor is considered to be the target and a synthetic control is constructed using the other donors. Because the donors did not receive the intervention, we expect synthetic outcomes to match observed outcomes. In Fig. 5, we illustrate the comparisons for three states: Colorado, Massachusetts, and New Mexico. For comparisons on all states, see Appendix B. Fig. 5 (left) shows the synthetic outcome estimates by the M-bound estimator. Both Colorado's and Massachusetts's pre-intervention outcomes are outside of the misspecification interval. This suggests that not all invariant causes are observed. While New Mexico's pre-intervention outcomes lie within the misspecification interval of the synthetic New Mexico, the error bound is too large to use synthetic control. Fig. 5 (right) shows the synthetic outcome estimates using the James-bound estimator. We observe that the pre-intervention outcomes across states now all fall within the James-bound misspecification intervals, which are also wider than the M-bound intervals. After the intervention, the observed tobacco consumption in Colorado remains in the James-bound misspecification interval, suggesting the intervention had no effect. This is expected because Colorado did not implement an anti-tobacco program like California. For Massachusetts, the James-bound interval is narrow enough to detect a decrease in tobacco consumption that is not due to misspecification. In fact, this is consistent with the policies taken by this state in 1993 to raise taxes and fund its Massachusetts Tobacco Control Program. The placebo test provides further evidence that the tobacco control program in California indeed had a causal effect on tobacco consumption. In states without tobacco control programs, their outcomes fall within the misspecification interval, whereas California's outcome does not. ## 6 Discussion We address the problem of the misspecification of linear assumptions in synthetic controls. We relax assumptions commonly assumed in the literature (A1, A2), derive two misspecification bounds, and propose corresponding estimators.
Figure 5: Placebo study of the M-bound estimator (left) and the James-bound estimator (right) on Colorado, Massachusetts, and New Mexico. The M-bound synthetic outcomes are outside of the misspecification interval before the intervention. This suggests that not all invariant causes are observed and that the James bound should be used. The James-bound estimator accounts for the missing causes, with wider misspecification intervals.

## 6 Discussion

We address the problem of the misspecification of linear assumptions in synthetic controls. We relax assumptions commonly made in the literature (A1, A2), derive two misspecification bounds, and propose corresponding estimators. The key idea is to leverage external data to bound and minimize misspecification. Each bound comes with its own requirements: external data on all the causes must be available for the M bound, and we identify a modeling assumption (A3) for the James bound. As future research, we plan to explore other SC estimation procedures that these two requirements might enable.
2306.00684
Balanced Training of Energy-Based Models with Adaptive Flow Sampling
Energy-based models (EBMs) are versatile density estimation models that directly parameterize an unnormalized log density. Although very flexible, EBMs lack a specified normalization constant of the model, making the likelihood of the model computationally intractable. Several approximate samplers and variational inference techniques have been proposed to estimate the likelihood gradients for training. These techniques have shown promising results in generating samples, but little attention has been paid to the statistical accuracy of the estimated density, such as determining the relative importance of different classes in a dataset. In this work, we propose a new maximum likelihood training algorithm for EBMs that uses a different type of generative model, normalizing flows (NF), which have recently been proposed to facilitate sampling. Our method fits an NF to an EBM during training so that an NF-assisted sampling scheme provides an accurate gradient for the EBMs at all times, ultimately leading to a fast sampler for generating new data.
Louis Grenioux, Éric Moulines, Marylou Gabrié
2023-06-01T13:58:06Z
http://arxiv.org/abs/2306.00684v4
# Balanced Training of Energy-Based Models with Adaptive Flow Sampling

###### Abstract

Energy-based models (EBMs) are versatile density estimation models that directly parameterize an unnormalized log density. Although very flexible, EBMs lack a specified normalization constant of the model, making the likelihood of the model computationally intractable. Several approximate samplers and variational inference techniques have been proposed to estimate the likelihood gradients for training. These techniques have shown promising results in generating samples, but little attention has been paid to the statistical accuracy of the estimated density, such as determining the relative importance of different classes in a dataset. In this work, we propose a new maximum likelihood training algorithm for EBMs that uses a different type of generative model, normalizing flows (NF), which have recently been proposed to facilitate sampling. Our method fits an NF to an EBM during training so that an NF-assisted sampling scheme provides an accurate gradient for the EBMs at all times, ultimately leading to a fast sampler for generating new data.
Our training procedure builds on such NF-assisted samplers with independent proposals (Samsonov et al., 2022), recently shown to be particularly robust in the multimodal setting (Grenioux et al., 2023).

## 2 Background

**EBM maximum likelihood training.** Given a training data distribution \(p^{\star}\), the EBM log-likelihood can be written as \(\ell_{\mathrm{EBM}}(\theta)=\mathbb{E}_{p^{\star}}[\log p_{\theta}(X)]\). This quantity is intractable due to the unknown \(Z_{\theta}\) of Equation (1), which translates into an expectation over \(p_{\theta}\) in its gradient:

\[\nabla_{\theta}\ell_{\mathrm{EBM}}(\theta)=-\mathbb{E}_{p^{\star}}[\nabla_{\theta}E_{\theta}(X)]+\mathbb{E}_{p_{\theta}}[\nabla_{\theta}E_{\theta}(X)]. \tag{2}\]
A Monte-Carlo estimation of \(\nabla_{\theta}\ell_{\mathrm{EBM}}(\theta)\) requires training samples \(x_{i}^{(+)}\sim p^{\star}(x)\), commonly referred to as _positive samples_ in the EBM context, and samples from the current model \(x_{i}^{(-)}\sim p_{\theta}(x)\), respectively called _negative samples_. Collecting \(n\) samples of each kind yields the approximation of gradient (2):

\[\widehat{\nabla_{\theta}\ell_{\mathrm{EBM}}}(\theta,\{x_{i}^{(-)},x_{i}^{(+)}\}_{i=1}^{n})=-\frac{1}{n}\left(\sum_{i=1}^{n}\nabla_{\theta}E_{\theta}(x_{i}^{(+)})-\sum_{i=1}^{n}\nabla_{\theta}E_{\theta}(x_{i}^{(-)})\right). \tag{3}\]

Yet, obtaining exact samples from \(p_{\theta}\) requires converging an MCMC, which is a costly procedure to repeat at every update. As a result, approximate sampling procedures have been proposed: in contrastive divergence (CD) (Hinton, 2002), a fixed small number of MCMC steps is run starting from training samples at each gradient computation. In _persistent_ CD (PCD), this idea was further refined by propagating the MCMC chains of the negative samples across gradient updates (Tieleman, 2008). For real-valued EBMs, CD and PCD most commonly employ the _Unadjusted Langevin Algorithm_ (ULA) (Roberts and Tweedie, 1996), a local gradient-based sampler, which at step \(t\) updates \(x^{(t)}\) as

\[x^{(t+1)}=x^{(t)}-\eta\nabla E_{\theta}(x^{(t)})+\sqrt{2\eta}z^{(t)} \tag{4}\]

where \(\eta\) is the step size of the algorithm and \(z^{(t)}\sim\mathcal{N}(0,I)\). While ULA samples the target distribution \(p_{\theta}\) asymptotically in time, it typically cannot converge in a manageable number of iterations for distributions that are multimodal. While recent research suggests that using a non-convergent MCMC for drawing negative samples does not compromise sample quality if a consistent sampling scheme is employed during and after training (Nijkamp et al., 2019, 2020; An et al., 2021; Xie et al., 2022), it is not guaranteed that an EBM trained in this fashion captures the overall mass distribution between different modes (see the motivating example of Section 4).
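For reference, here is a minimal PyTorch sketch of one PCD update with ULA negatives, i.e., Equations (3) and (4) in code; `energy` is assumed to map a batch to per-sample energies, and the step size and chain length are placeholder values.

```python
import torch

def ula_step(x, energy, eta):
    """One Unadjusted Langevin step (Eq. 4) targeting p proportional to exp(-E)."""
    x = x.detach().requires_grad_(True)
    grad = torch.autograd.grad(energy(x).sum(), x)[0]
    return (x - eta * grad + (2.0 * eta) ** 0.5 * torch.randn_like(x)).detach()

def pcd_update(energy, optimizer, x_pos, x_neg, eta=1e-2, n_steps=10):
    """One persistent-CD update: refresh the persistent negative chains with a
    few (non-convergent) ULA steps, then ascend the likelihood via the
    contrastive estimate of Eq. (3)."""
    for _ in range(n_steps):
        x_neg = ula_step(x_neg, energy, eta)
    optimizer.zero_grad()
    # Descending E(pos) - E(neg) follows the gradient estimate of Eq. (3),
    # i.e. it is gradient ascent on the log-likelihood of Eq. (2).
    loss = energy(x_pos).mean() - energy(x_neg).mean()
    loss.backward()
    optimizer.step()
    return x_neg  # chains persist across gradient updates (Tieleman, 2008)
```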
**NF-assisted sampling.** _Normalizing flows_ (NF) combine a _base_ distribution \(\rho\) on \(\mathbb{R}^{d}\) and a bijective transport map \(T_{\alpha}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) with parameters \(\alpha\in\mathbb{A}\) to define a generative model with density

\[\lambda_{T_{\alpha}}^{\rho}(x)=\rho(T_{\alpha}^{-1}(x))\left|J_{T_{\alpha}^{-1}}(x)\right|, \tag{5}\]

from which samples are straightforwardly obtained as \(X=T_{\alpha}(Z)\) with \(Z\sim\rho\). NFs can be trained on data by maximizing the explicit likelihood; we point the reader to the reviews (Papamakarios et al., 2021; Kobyzev et al., 2021). Thanks to their tractable densities and direct sampling procedure, NFs have found applications in statistical inference either as a variational family (Rezende and Mohamed, 2015; Wu et al., 2019) or as helpers in sampling algorithms (Parno and Marzouk, 2018; Albergo et al., 2019; Noe et al., 2019; Muller et al., 2019; McNaughton et al., 2020; Hackett et al., 2021), among others. Given a target distribution \(\pi\), known up to a normalization constant, the general idea of NF-assisted inference is to train the map \(T_{\alpha}\) such that \(\lambda_{T_{\alpha}}^{\rho}\) approaches \(\pi\). In this context, since no training sample is available a priori, the flow is trained either by minimizing the reverse Kullback-Leibler (KL) divergence \(\mathrm{KL}(\lambda_{T_{\alpha}}^{\rho}||\pi)=\mathbb{E}_{\lambda_{T_{\alpha}}^{\rho}}[\log\lambda_{T_{\alpha}}^{\rho}(X)/\pi(X)]\) (Rezende and Mohamed, 2015) or through an adaptive MCMC procedure maximizing a proxy of the likelihood (Parno and Marzouk, 2018; McNaughton et al., 2020; Naesseth et al., 2021; Gabrie et al., 2022). When \(\pi\) is multimodal, the reverse KL is prone to mode collapse (Jerfel et al., 2021; Hackett et al., 2021), while adaptive MCMC training can leverage prior knowledge of the different modes' basins to seed the learning of a model covering all the regions of interest. Once trained, samples from the flow \(\lambda_{T_{\alpha}}^{\rho}\) approximating the target \(\pi\) can be debiased via importance sampling or various MCMC schemes. A recent comparative study (Grenioux et al., 2023) shows that methods based on independent proposals from the flow, such as independent Metropolis-Hastings (e.g., (Nicoli et al., 2020)), are more robust for sampling multimodal distributions than re-parametrization schemes such as neutra-MCMC (Hoffman et al., 2019). In the context of EBM training, these observations suggest using a flow-based adaptive MCMC with independent proposals, flowMC (Algorithm 5 in Appendix A), to provide accurate negative samples.

## 3 EBMs with Flow Sampling

We propose to train an NF so as to maintain a good overlap between the flow's \(\lambda_{T_{\alpha}}^{\rho}\) and the EBM's \(p_{\theta}\) throughout training, and to use it in an NF-assisted sampler that draws the negative samples (Algorithm 1). For this symbiosis to work in practice, that is, \(\lambda_{T_{\alpha_{t}}}^{\rho}\approx p_{\theta_{t}}^{\rho}\) at all times, we slightly modify the EBM definition of Equation (1) by using the flow base distribution \(\rho\) as a tilt (note that this change does not modify the gradient of \(\ell_{\mathrm{EBM}}(\theta)\) in Equation (2); see the proof in Appendix B):

\[p_{\theta}^{\rho}(x)=\frac{1}{Z_{\theta}}\exp\left(-E_{\theta}(x)\right)\rho(x)\,. \tag{6}\]

Choosing initially \(\theta_{0}\) such that \(E_{\theta_{0}}(\cdot)=0\) and \(\alpha_{0}\) such that \(T_{\alpha_{0}}(\cdot)=\mathrm{Id}(\cdot)\) leads to a perfect equality at initialization, \(p_{\theta_{0}}^{\rho}=\rho=\lambda_{T_{\alpha_{0}}}^{\rho}\). The learning rates \(\gamma_{\mathrm{EBM}}\) and \(\gamma_{\mathrm{flow}}\) in Algorithm 1 then need to be co-adjusted so that the match is approximately maintained throughout training. Unlike strategies using ULA to obtain negative samples, our proposition is statistically reliable, as it uses a calibrated MCMC sampler that handles multimodality. Thanks to the good agreement between \(\lambda_{T_{\alpha_{t}}}^{\rho}\) and \(p_{\theta_{t}}^{\rho}\), the non-local moves proposed by the NF in the flowMC sampler allow rapid mixing between modes. This coupled learning of two generative models takes the best of both: the constrained but tractable NF approximates an unconstrained but intractable EBM. Additionally, it provides a natural and efficient way to sample the resulting EBM through flowMC.
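To make the non-local move explicit, here is a minimal sketch of one independent Metropolis-Hastings step with the flow as proposal, targeting the tilted density of Equation (6); the `flow.sample`/`flow.log_prob` interface is an assumption of the sketch, not the paper's exact Algorithm 1 or 5.

```python
import torch

def imh_step(x, log_p_tilted, flow):
    """One independent Metropolis-Hastings step with an NF proposal.

    x            : (n, d) current states (flat samples assumed).
    log_p_tilted : callable returning log p_theta^rho up to a constant,
                   i.e. -E_theta(x) + log rho(x) for the tilted EBM (Eq. 6).
    flow         : model exposing .sample(n) and .log_prob(x)
                   (an assumed interface for lambda_{T_alpha}^rho).
    """
    n = x.shape[0]
    y = flow.sample(n)  # independent, non-local proposal from the flow
    # Log acceptance ratio of an independence sampler:
    # log [ p(y) q(x) / ( p(x) q(y) ) ]
    log_alpha = (log_p_tilted(y) - log_p_tilted(x)
                 + flow.log_prob(x) - flow.log_prob(y))
    accept = torch.log(torch.rand(n, device=x.device)) < log_alpha
    x_new = torch.where(accept.unsqueeze(-1), y, x)
    return x_new, accept.float().mean()  # mean acceptance, a mixing diagnostic
```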
**Related works.** Several directions have already been explored that combine energy-based models with push-forward generative models to leverage their complementary strengths. (Xie et al., 2018, 2021) suggested using a Variational Autoencoder (VAE), and (Xie et al., 2022) an NF (the CoopFlow algorithm), to provide initial negative samples before running short chains during maximum likelihood training, reporting good sample quality but no guarantee of statistical accuracy. Closer to this work in their concern for a calibrated and converged sampler, (Xiao et al., 2020; Nijkamp et al., 2022) (the NT-EBM algorithm) leverage an alternative type of NF-assisted sampler that uses the flow's bijective mapping as a preconditioner (Parno and Marzouk, 2018; Hoffman et al., 2019). Yet recent work suggests that a multimodal problem remains multimodal when re-parametrized by a flow, so that chain mixing is not guaranteed (Grenioux et al., 2023). A similar approach using a VAE was pursued in (Xiao et al., 2021). A tilted EBM with a push-forward model was also explored with Generative Adversarial Networks (Arbel et al., 2021), using a Langevin sampler in latent space, without assessing the statistical performance on multimodal datasets. Lastly, a set of works considered the simultaneous learning of an auxiliary sampling model along with the EBM using the Fenchel dual description of the intractable partition function (Dai et al., 2019; Grathwohl et al., 2021). Yet this strategy amounts to minimizing the reverse KL objective, which is prone to mode collapse, and the statistical robustness of these methods on multimodal targets was untested.

Figure 1: **Comparison of PCD and flowMC EBM training on a toy 2D mixture with 2:1 weight ratio. The EBM learned with flowMC (f) correctly captured the relative weights, unlike the EBM trained with ULA (b), as is also clear from the conditional densities along the axis going through the centroids (d,h). This statistical accuracy is promoted by the fast-mixing NF-assisted sampling (g). (Right) Estimated weight of the top-right mode during the training of ULA-EBM and flowMC-EBM (details in Appendix C.2).**

## 4 Numerical experiments

**Motivating example.** We first illustrate the difficulty of learning relative weights with persistent ULA-EBM training on a 2D mixture of Gaussians (Figure 1). The incapacity of ULA to mix between the modes ((c) versus (g)) introduces a bias in the estimation of the gradient (2), which leads to an over-correction of mismatched weights: ULA-EBM entirely erases a mode multiple times during training before recreating it, and the final weight at which the EBM training stops is not a robust estimate of the target weights (Figure 1, right). In flowMC-EBM training, on the other hand, calibrated negative samples lead to a stable estimation of the weights during learning and an accurate final density estimate ((f) and (h)). The companion flow (e), more constrained in its parametrization, does not achieve a fit as accurate as the EBM, yet its match with the EBM remains good enough to enable the fast-mixing MCMC key to success. A detailed comparison including more algorithms from the related works is presented in Appendix C.3.

**2D distributions with more modes and complicated geometries.** We benchmark approaches combining EBMs and NFs on the 2D distributions _8-Gaussians_ and _rings_. The different models share the same EBM/flow architecture and were trained for the same number of iterations. The final densities displayed in Figure 2 highlight that our algorithm outperforms competitors in weighting the different modes. This is quantitatively confirmed by the energy errors computed in Table 1; see Appendix C.4 for more details.
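The Table 1 metric is simple to compute once both log-densities can be evaluated; a minimal sketch, assuming both log-densities are normalized (in 2D, the EBM's constant can be estimated by quadrature on a grid, which is an assumption of this sketch) and that evaluation points `x_eval` are given.

```python
import numpy as np

def median_sq_log_density_error(log_p_model, log_p_star, x_eval):
    """Median over evaluation points of the squared log-density error,
    med_x (log p_theta(x) - log p*(x))^2, as reported in Table 1.
    Both callables must return comparable (normalized) log-densities."""
    err = (log_p_model(x_eval) - log_p_star(x_eval)) ** 2
    return float(np.median(err))
```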
**High-dimensional mixture.** We now consider an equally-weighted mixture of 4 Gaussians in dimensions 16, 32 and 64. We again compare flowMC-EBM with NT-EBM and CoopFlow, this time focusing on the mixing of the chains throughout learning. Using identical EBM/flow architectures trained for the same number of iterations, we report the \(\hat{R}\) metric of the negative chains for each model at the end of training in Table 2, and throughout training in Appendix C.5. Meant to compare the intra-chain variance and the inter-chain variance, reaching an \(\hat{R}\) close to 1 is a necessary criterion for the convergence of an MCMC (Vehtari et al., 2021). flowMC-EBM is the only algorithm allowing proper mixing. See Appendix C.5 for more details.

**CIFAR-10.** Given our computational budget, we were able to train a flowMC-EBM producing samples of medium quality (see Figure 8 in Appendix C.6, along with training details). Nonetheless, the negative chains mix between modes, as the companion NF's proposals are accepted around 5-10% of the time. Given the number of parameters reported in related work, we expect that a more expressive flow and energy parametrization would improve the outcome.

## 5 Conclusion

By combining an EBM and an NF, we manage to tackle the generative-model trilemma described in (Xiao et al., 2022). The trilemma states that among the desirable properties of (i) fast sampling, (ii) high-quality samples, and (iii) mode coverage/diversity of the produced samples, a generative model typically only features two out of three. Our numerical experiments show that the cost of training two models is compensated by a strategy without compromises on any of the three aspects. Going even further than mode coverage, we show that our algorithm enables a precise evaluation of the modes' relative weights, a topic rarely discussed in the literature.

## Acknowledgements

M.G. thanks Eric Vanden-Eijnden for insightful discussions. L.G. and M.G. acknowledge funding from Hi! Paris. The work was partly supported by ANR-19-CHIA-0002-01 "SCAI". Part of this research has been carried out under the auspices of the Lagrange Center for Mathematics and Computing.

\begin{table} \begin{tabular}{l c c} \hline \hline & 8 Gaussians & Rings \\ \hline ULA-EBM & 1.86 & 5.39 \\ NT-EBM & 0.97 & 0.62 \\ CoopFlow & 58.75 & 7.78 \\ flowMC-EBM & **0.94** & **0.40** \\ \hline \hline \end{tabular} \end{table} Table 1: Median squared error on log-density (\(\mathrm{med}_{x}(\log p_{\theta}(x)-\log p^{\star}(x))^{2}\)). Best metrics in bold.

\begin{table} \begin{tabular}{l c c c} \hline \hline & Dim. 16 & Dim. 32 & Dim. 64 \\ \hline CoopFlow & 20.52 & 44.12 & 51.66 \\ NT-EBM & 2.02 & 2.50 & 3.10 \\ ULA-EBM & 7.30 & 9.29 & 90.90 \\ flowMC-EBM & **1.01** & **1.01** & **1.05** \\ \hline \hline \end{tabular} \end{table} Table 2: Maximum \(\hat{R}\) across dimensions of the negative samples on the Gaussian mixture, computed on 128 independent chains started from the persistent state (or from the flow for CoopFlow); see Figure 6 in Appendix C.5.

Figure 2: Estimated energies using different algorithms on the toy 2D experiments. (**Top**) 8 Gaussians. (**Bottom**) Rings.